US20180314247A1 - Enhancing Autonomous Vehicle Perception with Off-Vehicle Collected Data - Google Patents
Enhancing Autonomous Vehicle Perception with Off-Vehicle Collected Data
- Publication number
- US20180314247A1 (application US 15/497,821)
- Authority
- US
- United States
- Prior art keywords
- data
- autonomous vehicle
- vehicle
- reported data
- location
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0088—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
-
- G06F17/30477—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06Q10/047—Optimisation of routes or paths, e.g. travelling salesman problem
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/161—Decentralised systems, e.g. inter-vehicle communication
- G08G1/163—Decentralised systems, e.g. inter-vehicle communication involving continuous checking
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/01—Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/13—Receivers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/01—Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/13—Receivers
- G01S19/14—Receivers specially adapted for specific applications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2637—Vehicle, car, auto, wheelchair
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/008—Registering or indicating the working of vehicles communicating information to a remotely located station
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
Definitions
- autonomous vehicle refers to a vehicle with autonomous functions, including semi-autonomous vehicles and fully-autonomous vehicles.
- a method includes receiving, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle.
- the data is collected by a collecting device external to the autonomous vehicle, and is relayed to the autonomous vehicle via a server.
- the reported data includes a current location of the object, a type of the object, or a predicted location of the object.
- the method further includes determining, at the autonomous vehicle, whether the reported data of the object correlates with a found object in an object list. If the determination finds the found object in the object list, the method adds the reported data of the object to data associated with the found object in the object list. Otherwise, the method adds the reported data of the object to an object list of objects detected by the on-board sensors of the autonomous vehicle.
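The correlation step described above can be sketched in Python. This is an illustrative assumption, not the patent's implementation: the function name, the dictionary layout, and the simple distance-gating heuristic are all hypothetical.

```python
import math

def correlate_reported_data(object_list, reported, max_distance_m=5.0):
    """Merge reported off-vehicle data into the vehicle's object list.

    object_list: list of dicts with 'location' (x, y) and 'reports' entries.
    reported: dict with at least a 'location' (x, y) key.
    The 5 m gate is an assumed association threshold, not from the patent.
    """
    for found in object_list:
        dx = found["location"][0] - reported["location"][0]
        dy = found["location"][1] - reported["location"][1]
        if math.hypot(dx, dy) <= max_distance_m:
            # Reported data correlates with a found object: attach it.
            found["reports"].append(reported)
            return object_list
    # No correlation: track the reported object alongside sensed objects.
    object_list.append({"location": reported["location"], "reports": [reported]})
    return object_list
```

A production system would likely use a probabilistic data-association method rather than a fixed distance gate.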
- the collecting device can be an off-vehicle collecting device.
- the type of the object is a pedestrian, bicycle, vehicle, or vehicle type.
- the reported data includes location data collected from the collecting device of the object.
- the location data provides the current location of the object.
- a history of the location data can be used to predict future locations of the object.
- the location data can be acquired using a global positioning system (GPS) receiver, cell-tower signals, WiFi signals, or other location sensing devices and methods.
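Using a history of location data to predict future locations, as described above, could be done with a constant-velocity extrapolation. This minimal sketch is an assumption; the patent does not specify a motion model, and a real system might use something richer (e.g., a Kalman filter).

```python
def predict_location(history, dt_ahead):
    """Extrapolate the next position from the last two timestamped fixes.

    history: list of (t, x, y) tuples, oldest first.
    dt_ahead: seconds into the future to predict.
    """
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # constant-velocity assumption
    return (x1 + vx * dt_ahead, y1 + vy * dt_ahead)
```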
- the type of the object is determined based on a vibration pattern of the collecting device, wherein the collecting device is a mobile device.
- the type of object is determined by a user of the mobile device self-identifying the type of object.
- the reported data further includes velocity and acceleration data of the object, a route map of the object, or a calendar of a user of the object.
- the predicted location of the object is determined by loading the route map of the object, or by loading a destination location from the calendar of the user and generating a route map from the current location to the destination location.
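The two prediction paths in that embodiment — load the object's own route map, or derive a route from a calendar destination — can be sketched as follows. The field names and the `generate_route` routing helper are hypothetical stand-ins, not from the patent.

```python
def predicted_route(reported, generate_route):
    """Return a route map for the object, if one can be determined.

    reported: dict of reported data; may contain 'route_map',
    'current_location', and 'calendar' entries (illustrative names).
    generate_route(current, destination): caller-supplied routing function.
    """
    # Preferred path: the object reported its own route map.
    if reported.get("route_map"):
        return reported["route_map"]
    # Fallback: derive a route from the user's calendar destination.
    destination = reported.get("calendar", {}).get("next_destination")
    if destination is not None:
        return generate_route(reported["current_location"], destination)
    return None  # not enough information to predict a route
```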
- the method includes modifying a sensor model based on discrepancies between the reported data from the collection device and data from the on-board sensors of the autonomous vehicle. In another embodiment, the method includes building a sensor model based on the reported data from the collection device and data from the on-board sensors of the autonomous vehicle.
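One plausible (assumed) reading of "modifying a sensor model based on discrepancies" is adjusting a per-sensor confidence weight when on-board measurements disagree with reported off-vehicle data. The update rule, tolerance, and step size below are illustrative choices, not taken from the patent.

```python
def update_sensor_model(weight, onboard_range_m, reported_range_m,
                        tolerance_m=1.0, step=0.1):
    """Lower a sensor's confidence weight when its measurement disagrees
    with the reported data; raise it (up to 1.0) when they agree."""
    if abs(onboard_range_m - reported_range_m) > tolerance_m:
        return max(0.0, weight - step)  # discrepancy: distrust the sensor more
    return min(1.0, weight + step)      # agreement: trust the sensor more
```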
- the method further includes generating the reported data of the object by analyzing at least one image taken by the collecting device.
- the method further includes identifying the reported data as an emergency vehicle signal.
- the method can further include authenticating the reported data as the emergency vehicle signal.
- the method further includes reporting, from the autonomous vehicle to the server, a location and direction of the vehicle, such that the data returned from the server is related to current and future locations of the autonomous vehicle.
- the data returned from the server is selected from a list of object data reported from multiple collecting devices stored at the server.
- the method further includes reporting, from the autonomous vehicle to the server, at least one of a location and direction of the vehicle, such that the responsively received data is related to the same location of the vehicle.
- one of the objects of the object list is determined by on-board sensors of the autonomous vehicle.
- in an embodiment, a system includes a processor and a memory with computer code instructions stored therein.
- the memory is operatively coupled to said processor such that the computer code instructions configure the processor to implement a machine interaction controller.
- the machine interaction controller is configured to receive, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle.
- the data is collected by a collecting device external to the autonomous vehicle and relayed to the autonomous vehicle via a server.
- the reported data includes a current location of the object, a type of the object, or a predicted location of the object.
- the memory and processor are further configured to implement a perception controller configured to determine, at the autonomous vehicle, whether the reported data of the object correlates with a found object in an object list, wherein at least one of the objects of the object list is determined by on-board sensors of the autonomous vehicle. If the determination finds the found object in the object list, the perception controller adds the reported data of the object to data associated with the found object in the object list. Otherwise, the perception controller adds the reported data of the object to an object list of objects detected by the on-board sensors of the autonomous vehicle.
- a non-transitory computer-readable medium is configured to store instructions for providing data to an autonomous vehicle.
- the instructions, when loaded and executed by a processor, cause the processor to receive, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle.
- the data is collected by a collecting device external to the autonomous vehicle and relayed to the autonomous vehicle via a server.
- the reported data includes a current location of the object, a type of the object, or a predicted location of the object.
- FIG. 1 is a diagram illustrating steps in an embodiment of an automated control system of the Observe, Orient, Decide, and Act (OODA) model.
- FIG. 2 is a block diagram of an embodiment of an autonomous vehicle high-level architecture.
- FIG. 3 is a block diagram illustrating an embodiment of the sensor interaction controller (SIC), perception controller (PC), and localization controller (LC).
- FIG. 4 is a block diagram illustrating an example embodiment of the automatic driving controller (ADC), vehicle controller (VC) and actuator controller.
- FIG. 5 is a diagram illustrating decision time scales of the ADC and VC.
- FIG. 6A is a block diagram illustrating an example embodiment of the present disclosure.
- FIG. 6B is a block diagram illustrating an example embodiment of the present disclosure.
- FIG. 6C is a block diagram illustrating an example embodiment of another aspect of the present disclosure.
- FIG. 6D is a block diagram illustrating an example embodiment of another aspect of the present disclosure.
- FIG. 7B is a diagram illustrating an example embodiment of a representation of a vision field identified by an autonomous or semi-autonomous vehicle.
- FIG. 8 is a diagram of an example mobile application running on a mobile device.
- FIG. 9A is a diagram of a sensor field of view employing the data received from the mobile application.
- FIG. 9B is a diagram of a sensor field of view employing the data received from the mobile application.
- FIG. 10 is a network diagram illustrating a mobile application communicating with a server and a representation of an autonomous vehicle.
- FIG. 12 is a flow diagram illustrating a process employed by an example embodiment of the present disclosure at a vehicle and server.
- FIG. 1 is a diagram illustrating steps in an embodiment of an automated control system of the Observe, Orient, Decide, and Act (OODA) model.
- Automated systems such as highly-automated driving systems, or, self-driving cars, or autonomous vehicles, employ an OODA model.
- the observe virtual layer 102 involves sensing features from the world using machine sensors, such as laser ranging, radar, infra-red, vision systems, or other systems.
- the orientation virtual layer 104 involves perceiving situational awareness based on the sensed information. Examples of orientation virtual layer activities are Kalman filtering, model based matching, machine or deep learning, and Bayesian predictions.
- the decide virtual layer 106 selects an action from among multiple options to reach a final decision.
- the act virtual layer 108 provides guidance and control for executing the decision.
- FIG. 2 is a block diagram 200 of an embodiment of an autonomous vehicle high-level architecture 206 .
- the architecture 206 is built using a top-down approach to enable fully automated driving. Further, the architecture 206 is preferably modular such that it can be adaptable with hardware from different vehicle manufacturers. The architecture 206 , therefore, has several modular elements functionally divided to maximize these properties.
- the modular architecture 206 described herein can interface with sensor systems 202 of any vehicle 204 . Further, the modular architecture 206 can receive vehicle information from and communicate with any vehicle 204 .
- Elements of the modular architecture 206 include sensors 202 , Sensor Interface Controller (SIC) 208 , localization controller (LC) 210 , perception controller (PC) 212 , automated driving controller 214 (ADC), vehicle controller 216 (VC), system controller 218 (SC), human interaction controller 220 (HC) and machine interaction controller 222 (MC).
- the observation layer of the model includes gathering sensor readings, for example, from vision sensors, Radar (Radio Detection And Ranging), LIDAR (Light Detection And Ranging), and Global Positioning Systems (GPS).
- the sensors 202 shown in FIG. 2 constitute such an observation layer.
- Examples of the orientation layer of the model can include determining where a car is relative to the world, relative to the road it is driving on, and relative to lane markings on the road, shown by Perception Controller (PC) 212 and Localization Controller (LC) 210 of FIG. 2 .
- Examples of the decision layer of the model include determining a corridor to automatically drive the car, and include elements such as the Automatic Driving Controller (ADC) 214 and Vehicle Controller (VC) 216 of FIG. 2 .
- Examples of the act layer include converting that corridor into commands to the vehicle's driving systems (e.g., steering sub-system, acceleration sub-system, and braking sub-system) that direct the car along the corridor, such as actuator control 410 of FIG. 4 .
- a person of ordinary skill in the art can recognize that the layers of the system are not strictly sequential, and as observations change, so do the results of the other layers.
- the modular architecture 206 receives measurements from sensors 202 . While different sensors may output different sets of information in different formats, the modular architecture 206 includes Sensor Interface Controller (SIC) 208 , sometimes also referred to as a Sensor Interface Server (SIS), configured to translate the sensor data into data having a vendor-neutral format that can be read by the modular architecture 206 . Therefore, the modular architecture 206 learns about the environment around the vehicle 204 from the vehicle's sensors, no matter the vendor, manufacturer, or configuration of the sensors. The SIS 208 can further tag each sensor's data with a metadata tag having its location and orientation in the car, which can be used by the perception controller to determine the unique angle, perspective, and blind spot of each sensor.
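The SIC/SIS translation step can be illustrated with a small sketch: a vendor-specific packet is parsed by a vendor-supplied function and re-wrapped in a vendor-neutral record tagged with the sensor's mounting location and orientation. The record layout and field names are assumptions for illustration only.

```python
def to_vendor_neutral(vendor_packet, parser, mount_location, orientation_deg):
    """Translate a vendor-specific sensor packet into a neutral record.

    parser: vendor-specific function returning (timestamp, objects);
    mount_location / orientation_deg: the metadata tag described above,
    letting the perception controller reason about each sensor's viewpoint.
    """
    timestamp, objects = parser(vendor_packet)
    return {
        "timestamp": timestamp,
        "objects": objects,
        "meta": {"mount": mount_location, "orientation_deg": orientation_deg},
    }
```

Swapping in a new sensor then only requires supplying a new `parser`, which mirrors the firmware-update point made later in the document.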
- the modular architecture 206 includes vehicle controller 216 (VC).
- the VC 216 is configured to send commands to the vehicle and receive status messages from the vehicle.
- the vehicle controller 216 receives status messages from the vehicle 204 indicating the vehicle's status, such as information regarding the vehicle's speed, attitude, steering position, braking status, and fuel level, or any other information about the vehicle's subsystems that is relevant for autonomous driving.
- the modular architecture 206 based on the information from the vehicle 204 and the sensors 202 , therefore can calculate commands to send from the VC 216 to the vehicle 204 to implement self-driving.
- the functions of the various modules within the modular architecture 206 are described in further detail below.
- viewed at a high level, the modular architecture 206 receives (a) sensor information from the sensors 202 and (b) vehicle status information from the vehicle 204 , and, in turn, provides vehicle instructions to the vehicle 204 .
- Such an architecture allows the modular architecture to be employed for any vehicle with any sensor configuration. Therefore, any vehicle platform that includes a sensor subsystem (e.g., sensors 202 ) and an actuation subsystem having the ability to provide vehicle status and accept driving commands (e.g., actuator control 410 of FIG. 4 ) can integrate with the modular architecture 206 .
- the sensors 202 and SIC 208 reside in the “observe” virtual layer. As described above, the SIC 208 receives measurements (e.g., sensor data) having various formats. The SIC 208 is configured to convert vendor-specific data directly from the sensors to vendor-neutral data. In this way, the set of sensors 202 can include any brand of Radar, LIDAR, image sensor, or other sensors, and the modular architecture 206 can use their perceptions of the environment effectively.
- the measurements output by the sensor interface server are then processed by perception controller (PC) 212 and localization controller (LC) 210 .
- the PC 212 and LC 210 both reside in the “orient” virtual layer of the OODA model.
- the LC 210 determines a robust world-location of the vehicle that can be more precise than a GPS signal, and can still determine the world-location of the vehicle when the GPS signal is unavailable or inaccurate.
- the LC 210 determines the location based on GPS data and sensor data.
- the PC 212 on the other hand, generates prediction models representing a state of the environment around the car, including objects around the car and state of the road.
- FIG. 3 provides further details regarding the SIC 208 , LC 210 and PC 212 .
- Automated driving controller 214 and vehicle controller 216 (VC) receive the outputs of the perception controller and localization controller.
- the ADC 214 and VC 216 reside in the “decide” virtual layer of the OODA model.
- the ADC 214 is responsible for destination selection, route and lane guidance, and high-level traffic surveillance.
- the ADC 214 is further responsible for lane selection within the route, and identification of safe harbor areas to divert the vehicle to in case of an emergency.
- the ADC 214 selects a route to reach the destination, and a corridor within the route to direct the vehicle.
- the ADC 214 passes this corridor onto the VC 216 .
- the VC 216 provides a trajectory and lower level driving functions to direct the vehicle through the corridor safely.
- the VC 216 first determines the best trajectory to maneuver through the corridor while providing comfort to the driver, an ability to reach safe harbor, emergency maneuverability, and ability to follow the vehicle's current trajectory. In emergency situations, the VC 216 overrides the corridor provided by the ADC 214 and immediately guides the car into a safe harbor corridor, returning to the corridor provided by the ADC 214 when it is safe to do so. The VC 216 , after determining how to maneuver the vehicle, including safety maneuvers, then provides actuation commands to the vehicle 204 , which executes the commands in its steering, throttle, and braking subsystems. This element of the VC 216 is therefore in the “act” virtual layer of the OODA model. FIG. 4 describes the ADC 214 and VC 216 in further detail.
- the MC 222 can coordinate messages with other machines or vehicles.
- other vehicles can electronically and wirelessly transmit route intentions, intended corridors of travel, and sensed objects that may be in other vehicle's blind spot to autonomous vehicles, and the MC 222 can receive such information, and relay it to the VC 216 and ADC 214 via the SC 218 .
- the MC 222 can send information to other vehicles wirelessly.
- the MC 222 can receive a notification that the vehicle intends to turn.
- the MC 222 receives this information via the VC 216 sending a status message to the SC 218 , which relays the status to the MC 222 .
- other examples of machine communication can also be implemented.
- FIG. 6 shows the HC 220 , MC 222 , and SC 218 in further detail.
- FIG. 3 is a block diagram 300 illustrating an embodiment of the sensor interaction controller 304 (SIC), perception controller (PC) 306 , and localization controller (LC) 308 .
- a sensor array 302 of the vehicle can include various types of sensors, such as a camera 302 a, radar 302 b, LIDAR 302 c, GPS 302 d, IMU 302 e, vehicle-to-everything (V2X) 302 f, or external collected data 302 g (e.g., from a mobile device). Each sensor sends individual vendor defined data types to the SIC 304 .
- the camera 302 a sends object lists and images
- the radar 302 b sends object lists, and in-phase/quadrature (IQ) data
- the LIDAR 302 c sends object lists and scan points
- the GPS 302 d sends position and velocity
- the IMU 302 e sends acceleration data
- the V2X 302 f controller sends tracks of other vehicles, turn signals, other sensor data, or traffic light data.
- the SIC 304 monitors and diagnoses faults at each of the sensors 302 a - f .
- the SIC 304 processes external collected data 302 g received from mobile devices associated with objects.
- the SIC 304 isolates the data from each sensor from its vendor specific package and sends vendor neutral data types to the perception controller (PC) 306 and localization controller 308 (LC).
- the SIC 304 forwards localization feature measurements and position and attitude measurements to the LC 308 , and forwards tracked object measurements, driving surface measurements, and position & attitude measurements to the PC 306 .
- the SIC 304 can further be updated with firmware so that new sensors having different formats can be used with the same modular architecture.
- the LC 308 fuses GPS and IMU data with Radar, Lidar, and Vision data to determine a vehicle location, velocity, and attitude with more precision than GPS can provide alone.
- the LC 308 reports that robustly determined location, velocity, and attitude to the PC 306 .
- the LC 308 further monitors measurements representing position, velocity, and attitude data for accuracy relative to each other, such that if one sensor measurement fails or becomes degraded, such as a GPS signal in a city, the LC 308 can correct for it.
- the PC 306 identifies and locates objects around the vehicle based on the sensed information.
- the PC 306 further estimates drivable surface regions surrounding the vehicle, and further estimates other surfaces such as road shoulders or drivable terrain in the case of an emergency.
- the PC 306 further provides a stochastic prediction of future locations of objects.
- the PC 306 further stores a history of objects and drivable surfaces.
- the PC 306 outputs two predictions, a strategic prediction, and a tactical prediction.
- the tactical prediction represents the world around 2-4 seconds into the future, which only predicts the nearest traffic and road to the vehicle. This prediction includes a free space harbor on the shoulder of the road or other location. This tactical prediction is based entirely on measurements from sensors on the vehicle of nearest traffic and road conditions.
- the strategic prediction is a long term prediction that predicts areas of the car's visible environment beyond the visible range of the sensors. This prediction is for greater than four seconds into the future, but has a higher uncertainty than the tactical prediction because objects (e.g., cars and people) may change their currently observed behavior in an unanticipated manner.
- Such a prediction can also be based on sensor measurements from external sources including other autonomous vehicles, manual vehicles with a sensor system and sensor communication network, sensors positioned near or on the roadway or received over a network from transponders on the objects, and traffic lights, signs, or other signals configured to communicate wirelessly with the autonomous vehicle.
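The two-prediction output of the PC described above can be sketched as follows. The horizons, dictionary layout, and the caller-supplied `extrapolate` function are illustrative assumptions; only the tactical/strategic split itself comes from the document.

```python
def make_predictions(tracked, external, extrapolate):
    """Emit the PC's two predictions.

    tracked: objects sensed by on-board sensors; external: objects known
    only from off-vehicle reports; extrapolate(obj, dt): caller-supplied
    motion model returning a predicted state dt seconds ahead.
    """
    # Tactical: 2-4 s ahead, nearest traffic only, on-board measurements only.
    tactical = {"horizon_s": 4.0,
                "objects": [extrapolate(o, 4.0) for o in tracked]}
    # Strategic: beyond 4 s and beyond sensor range, so it may also draw on
    # external sources; flagged with higher uncertainty, since observed
    # behavior (of cars, people) can change unexpectedly over that horizon.
    strategic = {"horizon_s": 10.0, "uncertainty": "high",
                 "objects": [extrapolate(o, 10.0) for o in tracked + external]}
    return tactical, strategic
```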
- FIG. 4 is a block diagram 400 illustrating an example embodiment of the automatic driving controller (ADC) 402 , vehicle controller (VC) 404 and actuator controller 410 .
- the ADC 402 and VC 404 execute the “decide” virtual layer of the OODA model.
- the ADC 402 based on destination input by the operator and current position, first creates an overall route from the current position to the destination including a list of roads and junctions between roads in order to reach the destination.
- This strategic route plan may be based on traffic conditions and can change as traffic conditions update; however, such changes are generally enforced only for large changes in estimated time of arrival (ETA).
- the ADC 402 plans a safe, collision-free, corridor for the autonomous vehicle to drive through based on the surrounding objects and permissible drivable surface—both supplied by the PC.
- This corridor is continuously sent as a request to the VC 404 and is updated as traffic and other conditions change.
- the VC 404 receives the updates to the corridor in real time.
- the ADC 402 receives back from the VC 404 the current actual trajectory of the vehicle, which is also used to modify the next planned update to the driving corridor request.
- the ADC 402 generates a strategic corridor for the vehicle to navigate.
- the ADC 402 generates the corridor based on predictions of the free space on the road in the strategic/tactical prediction.
- the ADC 402 further receives the vehicle position information and vehicle attitude information from the perception controller of FIG. 3 .
- the VC 404 further provides the ADC 402 with an actual trajectory of the vehicle from the vehicle's actuator control 410 . Based on this information, the ADC 402 calculates feasible corridors to drive the road, or any drivable surface. In the example of being on an empty road, the corridor may follow the lane ahead of the car.
- the ADC 402 can determine whether there is free space in a passing lane and in front of the car to safely execute the pass.
- the ADC 402 can automatically calculate based on (a) the current distance to the second car to be passed, (b) amount of drivable road space available in the passing lane, (c) amount of free space in front of the second car to be passed, (d) speed of the vehicle to be passed, (e) current speed of the autonomous vehicle, and (f) known acceleration of the autonomous vehicle, a corridor for the vehicle to travel through to execute the pass maneuver.
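The pass-feasibility calculation over factors (a)-(f) can be sketched as below; the constant-acceleration kinematics, car length, and safety margin are illustrative assumptions, not the disclosure's actual method:

```python
def pass_is_feasible(gap_to_lead_m, free_passing_lane_m, free_ahead_of_lead_m,
                     lead_speed_mps, ego_speed_mps, ego_accel_mps2,
                     car_length_m=5.0, margin_m=10.0):
    """Rough feasibility check for a pass maneuver using factors (a)-(f):
    (a) gap to the car to be passed, (b) free passing-lane space,
    (c) free space ahead of the lead car, (d) lead speed, (e) ego speed,
    (f) known ego acceleration. Assumes the lead car holds speed."""
    # Relative distance to gain: close the gap, clear the lead car,
    # and leave a safety margin before merging back.
    needed_m = gap_to_lead_m + car_length_m + margin_m
    rel_speed = ego_speed_mps - lead_speed_mps
    if rel_speed <= 0 and ego_accel_mps2 <= 0:
        return False  # cannot gain on the lead vehicle at all
    # Solve needed = rel_speed*t + 0.5*a*t^2 for the maneuver time t.
    a, b, c = 0.5 * ego_accel_mps2, rel_speed, -needed_m
    if a == 0:
        t = needed_m / rel_speed
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return False
        t = (-b + disc ** 0.5) / (2 * a)
    if t <= 0:
        return False
    # Ego travel during the maneuver must fit in the free passing-lane
    # space, and the merge-back gap must fit ahead of the lead car.
    ego_travel_m = ego_speed_mps * t + 0.5 * ego_accel_mps2 * t * t
    return (ego_travel_m <= free_passing_lane_m and
            free_ahead_of_lead_m >= car_length_m + margin_m)
```

The corridor the ADC 402 requests would then span the passing lane for roughly `ego_travel_m` ahead of the vehicle.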
- the ADC 402 can determine a corridor to switch lanes when approaching a highway exit. In addition to all of the above factors, the ADC 402 monitors the planned route to the destination and, upon approaching a junction, calculates the best corridor to safely and legally continue on the planned route.
- the ADC 402 then provides the requested corridor 406 to the VC 404 , which works in tandem with the ADC 402 to allow the vehicle to navigate the corridor.
- the requested corridor 406 places geometric and velocity constraints on any planned trajectories for a number of seconds into the future.
- the VC 404 determines a trajectory to maneuver within the corridor 406 .
- the VC 404 bases its maneuvering decisions on the tactical/maneuvering prediction received from the perception controller and on the position and attitude of the vehicle. As described previously, the tactical/maneuvering prediction covers a shorter time period, but has less uncertainty. Therefore, for lower-level maneuvering and safety calculations, the VC 404 effectively uses the tactical/maneuvering prediction to plan collision-free trajectories within the requested corridor 406 . As needed in emergency situations, the VC 404 plans trajectories outside the corridor 406 to avoid collisions with other objects.
- the VC 404 determines, based on the requested corridor 406 , the current velocity and acceleration of the car, and the nearest objects, how to drive the car through that corridor 406 while avoiding collisions with objects and remaining on the drivable surface.
- the VC 404 calculates a tactical trajectory within the corridor, which allows the vehicle to maintain a safe separation between objects.
- the tactical trajectory also includes a backup safe harbor trajectory in the case of an emergency, such as a vehicle unexpectedly decelerating or stopping, or another vehicle swerving in front of the autonomous vehicle.
- the VC 404 may be required to command a maneuver suddenly outside of the requested corridor from the ADC 402 .
- This emergency maneuver can be initiated entirely by the VC 404 as it has faster response times than the ADC 402 to imminent collision threats.
- This capability isolates the safety critical collision avoidance responsibility within the VC 404 .
- the VC 404 sends maneuvering commands to the actuators that control steering, throttling, and braking of the vehicle platform.
- the VC 404 executes its maneuvering strategy by sending a current vehicle trajectory 408 having driving commands (e.g., steering, throttle, braking) to the vehicle's actuator controls 410 .
- the vehicle's actuator controls 410 apply the commands to the car's respective steering, throttle, and braking systems.
- the VC 404 sending the trajectory 408 to the actuator controls represents the “Act” virtual layer of the OODA model.
- the VC is the only component that needs configuration to control a specific model of car (e.g., format of each command, acceleration performance, turning performance, and braking performance), whereas the ADC remains highly agnostic to the specific vehicle's capabilities.
- the VC 404 can be updated with firmware configured to allow interfacing with particular vehicle's actuator control systems, or a fleet-wide firmware update for all vehicles.
- FIG. 5 is a diagram 500 illustrating decision time scales of the ADC 402 and VC 404 .
- the ADC 402 implements higher-level, strategic 502 and tactical 504 decisions by generating the corridor.
- the ADC 402 therefore implements the decisions having a longer range or time scale.
- the estimate of world state used by the ADC 402 for planning strategic routes and tactical driving corridors for behaviors such as passing or making turns has higher uncertainty, but predicts longer into the future, which is necessary for planning these autonomous actions.
- the strategic predictions have high uncertainty because they extend beyond the sensors' visible range, relying solely on non-vision technologies, such as RADAR, to predict objects far from the car. In addition, events can change quickly, for example when a human suddenly changes his or her behavior, and some objects beyond the visible range of the sensors may not be detected at all.
- Many tactical decisions, such as passing a car at highway speed, require perception Beyond the Visible Range (BVR) of an autonomous vehicle (e.g., 100 m or greater), whereas all maneuverability 506 decisions are made based on locally perceived objects to avoid collisions.
- the VC 404 uses maneuverability predictions (or estimates) of the state of the environment immediately around the car for fast response planning of collision-free trajectories for the autonomous vehicle.
- the VC 404 issues actuation commands, on the lowest end of the time scale, representing execution of the already planned corridor and maneuvering through that corridor.
- FIG. 6A is a block diagram 600 illustrating an example embodiment of the present disclosure.
- an autonomous or semi-autonomous vehicle 602 includes a plurality of sensors, as described above. Using those sensors, the vehicle has a field of view including sensor acquired data 604 .
- the vehicle detects a cyclist 608 a, vehicle 608 b, and pedestrian 608 c.
- the vehicle may automatically categorize the detected objects 608 a - c as a respective cyclist, vehicle, and pedestrian, or it may lack the information to do so accurately.
- the vehicle's sensors may be obscured from the objects 608 a - c by lacking a line of sight, or by other interference, for example.
- each object 608 a - c includes or carries a mobile device external to the autonomous vehicle having a transmitter, or carries a separate dedicated transmitter, that transmits reported data 610 a - c , respectively, to a server 608 .
- the server therefore, maintains a list of reported data collected by multiple devices, and determines which data of the list to distribute to each autonomous vehicle.
- the server may maintain its list of objects by a variety of methods, including time-limiting data in the list, such that data beyond a given time threshold is automatically deleted.
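A minimal sketch of such a time-limited server-side list; the class and field names, and the five-second age limit, are illustrative assumptions:

```python
import time

class ReportedObjectStore:
    """Server-side list of reported data; entries older than max_age_s
    are pruned so stale object reports are never relayed to vehicles."""

    def __init__(self, max_age_s=5.0):
        self.max_age_s = max_age_s
        self.reports = {}  # object_id -> (timestamp_s, report)

    def add(self, object_id, report, now=None):
        """Record (or refresh) a report from a collecting device."""
        ts = now if now is not None else time.time()
        self.reports[object_id] = (ts, report)

    def active(self, now=None):
        """Drop reports past the time threshold, return the rest."""
        now = now if now is not None else time.time()
        self.reports = {oid: (ts, r) for oid, (ts, r) in self.reports.items()
                        if now - ts <= self.max_age_s}
        return {oid: r for oid, (ts, r) in self.reports.items()}
```

A device refreshing its report simply overwrites its earlier entry, resetting the timestamp.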
- the mobile device can also be a collecting device external to the autonomous vehicle.
- the reported data 610 a - c can include a current location of the object (e.g., GPS location), a type of the object (e.g., pedestrian, cyclist, motorist, or type of vehicle), a predicted location of the object (e.g., based on detected current velocity and direction of object and map data), and social media or personal information of the user.
- Social media information can employ calendar or events information to determine a destination of the user. From there, the vehicle can determine a best route from the current location to the destination, and predict a location or direction of the user. This is especially useful with pedestrians and cyclists, who may not be using a GPS application that can relay a GPS route to the server automatically, as a motorist might. Therefore, the reported data 610 a - c comes from the objects 608 a - c themselves, and does not originate from a sensor of the vehicle 602 .
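The reported-data fields described above might be represented as follows; all names are illustrative, not from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ReportedData:
    """One report 610a-c sent by an object's device (field names assumed)."""
    object_id: str
    object_type: str                 # "pedestrian", "cyclist", "motorist", or vehicle type
    location: Tuple[float, float]    # current GPS fix (lat, lon)
    predicted_location: Optional[Tuple[float, float]] = None
    destination: Optional[Tuple[float, float]] = None  # e.g., from calendar/events info
    velocity_mps: float = 0.0
    heading_deg: float = 0.0
```

The optional `destination` field carries the calendar-derived endpoint from which the server or vehicle can generate a route and predict the user's direction.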
- FIG. 6B is a block diagram 650 illustrating an example embodiment of the present disclosure.
- FIG. 6B is a logical continuation of FIG. 6A .
- the server relays the reported data 652 a - c .
- the server 608 can relay the data as received from the objects 608 a - c , or can aggregate or otherwise modify the data 610 a - c to create reported data 652 a - c .
- the server 608 can also predict effects of propagation delay of the Internet signal in the relayed reported data 652 a - c .
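One way to sketch compensating for propagation delay is to dead-reckon the reported position forward by the measured network delay; straight-line motion and local metric coordinates are simplifying assumptions, not the disclosure's method:

```python
import math

def compensate_latency(reported_pos, velocity_mps, heading_deg,
                       sent_at_s, now_s):
    """Shift a reported position forward by the network delay so the
    relayed data reflects where the object likely is now.
    reported_pos is (x, y) in local metric coordinates; heading is
    measured clockwise from the +y axis."""
    delay_s = max(0.0, now_s - sent_at_s)
    d = velocity_mps * delay_s
    x, y = reported_pos
    return (x + d * math.sin(math.radians(heading_deg)),
            y + d * math.cos(math.radians(heading_deg)))
```

For example, a cyclist moving east at 10 m/s whose report is 200 ms old would be shifted about 2 m east before being relayed.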
- the external collected data input 302 g of FIG. 3 can further assist the vehicle 602 in identifying object types, or even additional objects.
- the vehicle identifies the cyclist 608 a as identified cyclist 654 a, the vehicle 608 b as identified vehicle 654 b, and the pedestrian 608 c as identified pedestrian 654 c.
- the reported data 652 a - c can include a current location of the object, a type of the object, and a predicted location of the object. This information can be correlated with already detected objects and their locations relative to the car.
- the reported data 652 a - c can be reported to multiple vehicles, as long as the server has determined that the objects in the reported data 652 a - c are relevant to each vehicle. Therefore, each vehicle can receive reported data customized to its route and location, as illustrated in further detail below in relation to FIG. 6D .
- the reported data 652 a - c can include an indication and authentication of an emergency vehicle mode. Other vehicles in the area can then automatically know that an emergency vehicle is approaching, and react accordingly by decelerating and pulling to the side of the road until the emergency vehicle passes.
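The disclosure does not specify the authentication mechanism for an emergency-vehicle report; one simple sketch verifies an HMAC computed over a fleet-shared key (the scheme and names are assumptions):

```python
import hashlib
import hmac

def verify_emergency_signal(payload: bytes, signature: str,
                            shared_key: bytes) -> bool:
    """Authenticate an 'emergency vehicle mode' report before nearby
    vehicles yield. The sender attaches an HMAC-SHA256 of the payload;
    a receiver with the shared key recomputes and compares it in
    constant time."""
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A production scheme would more likely use asymmetric signatures with certificates, so that no shared secret has to be distributed to every vehicle.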
- Dedicated radio standards exist for vehicle-to-infrastructure (V2I), vehicle-to-anything (V2X), and vehicle-to-vehicle (V2V) communication.
- V2I, V2X, and V2V radios are built for a special purpose and only for cars. Therefore, a ubiquitous mobile device, such as an iPhone®, Android® smartphone, or Windows® Phone, cannot communicate with a car via those interfaces without additional hardware. Applicant's present disclosure overcomes these limitations by taking advantage of more common network standards.
- V2I, V2X, and V2V have limited range because of their peer-to-peer nature, being special radios made for special frequencies.
- broadcasting over a universal network, such as the Internet, means that packet transfers can be targeted to any vehicle in a certain location.
- the vehicle only needs to know about objects being reported in its vicinity, not in other countries, states/provinces, or even city blocks. Therefore, a packet exchange process can occur, where the vehicle notifies the server of its location and direction, and the server relays packets related to that location.
- the present method and system provide flexibility to deliver the correct data to the correct vehicles, whereas V2I, V2X, and V2V provide data to all receivers within range of the radios. Therefore, in some scenarios, the present system allows object detection at a further range, but can also exclude packets that are irrelevant to the recipient (e.g., objects that the car has already passed, etc.).
- The market penetration of V2I, V2X, and V2V is far less than that of LTE/Internet-connected smartphones.
- LTE has scaled better than V2I, V2X, and V2V, and therefore is a more effective technology to accomplish the goals of the present application, once certain challenges are overcome.
- Waze and Google Maps report the general level of traffic congestion on a road, but do not report the locations of individual cars or objects in real time.
- Waze® does allow user reporting of incidents, such as traffic backups, road construction, law enforcement locations, and accidents.
- these reports are limited in the sense that they are not dynamic once reported, and further are not reported in real time by a device associated with the object being reported.
- Waze's reports do not allow for the precise location accuracy of the present disclosure. Waze's reports, rather, indicate where along a route an incident is, but are agnostic to where exactly on the road the incident is.
- Waze's precision is one-dimensional with regards to the route, in that it reports incidents on a mile/foot marker on a given road.
- Waze lacks the two-dimensional capacity of embodiments of the present disclosure to detect the presence of objects not only at a certain range along the road, but across the road's width as well.
- Applicant's disclosure can report movement of individual objects in real time, instead of relying on a momentary report of a user that is not updated.
- FIG. 6C is a block diagram 670 illustrating an example embodiment of another aspect of the present disclosure.
- the server 608 calculates an updated sensor model 672 , which can be sent to new vehicles or as an update to existing vehicles.
- the differences (or delta) between the reported data 610 a - c and sensor result 674 can be used to adjust the updated sensor model 672 .
- the sensor model 672 can be sent back to the vehicle 602 (e.g., as a download, firmware update, or real-time update, etc.).
- FIG. 6D is a block diagram 680 illustrating an example embodiment of another aspect of the present disclosure.
- a car 682 is on a road 692 , and receives initial or additional information about objects 684 a - c that are within the zone of objects sent to the car 690 .
- the zone of objects sent to the car is an intersection of a route zone 688 that tracks the road in the direction of motion of the car and a requested detection radius 686 that is a particular distance from the car.
- the route zone 688 is a region of interest requested by the vehicle.
- the region of interest is a parametrically defined corridor that can be any shape based on predicted paths or corridors of the vehicle.
- the route zone 688 is a geo-filter boundary requested by the vehicle.
- an object 694 that is within the requested detection radius 686 but not in the route zone 688 falls outside the intersection and is not sent to the car
- an object 696 that is in the route zone 688 but not within the requested detection radius 686 is likewise not sent
- shapes other than circles can be used for the requested detection radius 686
- route zone 688 can track curved roads, routes with turns onto multiple roads, or an area slightly greater than the road to detect objects on sidewalks or parking lots.
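The zone-intersection filter of FIG. 6D can be sketched as a conjunction of two membership tests. Representing the route zone 688 as an arbitrary predicate lets it track curved roads, turns, or sidewalk margins; the shapes and thresholds below are illustrative assumptions:

```python
import math

def in_send_zone(obj_xy, car_xy, radius_m, route_zone):
    """An object is sent to the car 682 only if it lies in BOTH the
    requested detection radius 686 and the route zone 688 (their
    intersection, per FIG. 6D). route_zone is any predicate on (x, y),
    so the zone's shape is not limited to simple rectangles."""
    dist = math.hypot(obj_xy[0] - car_xy[0], obj_xy[1] - car_xy[1])
    return dist <= radius_m and route_zone(obj_xy)
```

For a straight road along the x axis, a route zone of "within 10 m of the centerline, ahead of the car" excludes both an object off to the side (like object 694) and an object on the route but beyond the radius (like object 696).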
- FIG. 7A is a diagram 700 illustrating an example embodiment of a representation of a vision field identified by an autonomous or semi-autonomous vehicle.
- the sensor field of view 712 can be derived from any of a vision system, LIDAR, RADAR system, other sensor system, or any combination of traditional sensing systems. These traditional systems detect objects within the sensor field of view 712 , such as vision identified SUV 702 , vision identified passenger car 704 , vision identified sports car 706 , and vision identified police car 708 .
- RADAR, LIDAR, and camera vision systems can combine their gathered information to determine the location of the relative vehicles, and the vehicle types.
- FIG. 7A shows an example in which unidentified trucks 710 a - b go undetected by the vehicle's traditional system because they are not completely in the field of vision or are obscured by other objects, such as the police car 708 obscuring the unidentified truck 710 b. Further, the sports car 706 obscures the unidentified car 712 .
- Other unidentified objects can be concealed from sensors in other ways, such as being out of range or being camouflaged with respect to the sensor type.
- FIG. 7B is a diagram 750 illustrating an example embodiment of a representation of a vision field identified by an autonomous or semi-autonomous vehicle.
- the pedestrians 752 a - f are identified by the vehicle.
- a cyclist is identified 754 , but the vehicle cannot identify its class (e.g., whether it is a pedestrian, cyclist, or other object).
- a person of ordinary skill in the art can recognize that cars and other objects are left unrecognized within the sensor field of view 760 .
- FIG. 8 is a diagram 800 of an example mobile application 802 running on a mobile device.
- the application identifies the type of traffic the user represents; the user self-identifies the type using the user control 806 .
- the types can include pedestrian, cyclist, and motorist.
- the user can select its type on the mobile application and submit it using user control 804 , or the mobile application 802 can determine the type automatically based on the user's speed and motion pattern.
- the mobile application 802 can determine from vibration whether motion of the mobile device in a user's pocket corresponds to walking, running, or cycling. Further, the mobile application 802 can determine that speeds over a threshold are those of a vehicle, and therefore register the user as a motorist.
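The automatic self-typing from speed and motion pattern can be sketched as below; the thresholds are illustrative guesses, not values from the disclosure:

```python
def classify_traffic_type(speed_mps: float, vibration: str = "none") -> str:
    """Infer the user's traffic type for the mobile application 802.
    Speeds over a vehicle threshold register the user as a motorist;
    a cycling vibration pattern or moderate speed suggests a cyclist;
    otherwise the user is treated as a pedestrian."""
    if speed_mps > 9.0:            # ~32 km/h: assumed vehicle threshold
        return "motorist"
    if vibration == "cycling" or speed_mps > 3.0:
        return "cyclist"
    return "pedestrian"            # walking/running vibration, low speed
```

A real implementation would classify the vibration pattern from accelerometer data rather than take it as a string label.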
- FIG. 9A is a diagram of a sensor field of view 902 employing the data received from the mobile application.
- the sensor identifies several objects/pedestrians 904 . However, it does not recognize a pedestrian near the sidewalk who is about to cross outside of the crosswalk.
- With the mobile application and its reported data, the mobile device sends a signal to a server, which is relayed to the vehicle. Even with no visual, RADAR, or LIDAR knowledge of the pedestrian, the reported object's location can be made known to the vehicle.
- FIG. 9B is a diagram of a sensor field of view 902 employing the data received from the mobile application. Within the sensor field of view 902 , the sensor identifies several objects/pedestrians 954 . Based on the mobile data, the vehicle recognizes the reported object 956 . However, in this embodiment, the vehicle also recognizes the reported acceleration measure 958 . In other embodiments, the acceleration measure can also be or include a velocity or direction measure.
- FIG. 10 is a network diagram 1000 illustrating a mobile application 1002 communicating with a server 1004 and a representation of an autonomous vehicle 1006 .
- Upon receiving reported data 1010 , the autonomous vehicle models the location of the reported objects 1012 within a perception of the autonomous vehicle. The columns shown within the perception 1008 model indicate reported objects, and thus the car can avoid them.
- FIG. 11 is a flow diagram 1100 illustrating a process employed by an example embodiment of the present disclosure.
- the process receives, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle ( 1103 ).
- the data can then optionally be filtered for quality, such as filtering for signs of inaccurate location reporting due to building structures or bad WiFi, or other quality metrics ( 1104 ).
- the data is relayed to the autonomous vehicle via a server.
- the reported data includes a current location of the object, a type of the object, or a predicted location of the object.
- the process determines whether the reported data of the object matches an object in an object list ( 1105 ).
- the process correlates the reported data of the object to a matching object in the object list ( 1108 ). Otherwise, the process adds the reported data of the object to an object list of objects detected by on-board sensors of the autonomous vehicle ( 1106 ).
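Steps 1105-1108 can be sketched as a correlate-or-add update to the object list; matching purely by spatial proximity is a simplification of whatever correlation the perception controller actually performs:

```python
import math

def integrate_report(object_list, report, match_radius_m=2.0):
    """Step 1105: test whether a report matches a tracked object by
    proximity. If so, attach the report to that object (step 1108);
    otherwise add it to the list as a new object (step 1106)."""
    for obj in object_list:
        dx = obj["pos"][0] - report["pos"][0]
        dy = obj["pos"][1] - report["pos"][1]
        if math.hypot(dx, dy) <= match_radius_m:
            obj.setdefault("reports", []).append(report)  # correlate
            return obj
    new_obj = {"pos": report["pos"], "reports": [report]}  # add new entry
    object_list.append(new_obj)
    return new_obj
```

A fuller matcher would also compare object type, velocity, and report timestamps before declaring a match.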
- FIG. 12 is a flow diagram 1200 illustrating a process employed by an example embodiment of the present disclosure at a vehicle 1202 and server 1252 .
- the server receives the vehicle location ( 1254 ).
- the server 1252 receives data collected by mobile device(s) ( 1256 ).
- the server then geo-filters data collected from the mobile device(s) to the vicinity of the vehicle location ( 1258 ).
- the server 1252 then sends the geo-filtered data to the vehicle as reported data to the vehicle 1202 ( 1260 ).
- the vehicle 1202 receives the reported data ( 1206 ), then determines whether the reported data matches an object of an object list ( 1208 ). If so, the vehicle 1202 correlates the reported data to a matching object in the object list ( 1212 ).
- the server 1252 can also create and modify sensor models based on reported data and data from on-board sensors of the autonomous vehicle ( 1262 ). The updated sensor model can then be sent back to the vehicle, as shown above in FIG. 6C . If the reported data does not match an object of the object list ( 1208 ), however, the vehicle 1202 adds the reported data to the object list ( 1210 ).
- the collected data can further be used to improve vehicle sensor models.
- Reported data can further include pictures taken from the device, and the vehicle can analyze the pictures to identify objects that may not be visible to the vehicle.
- mobile devices or stationary roadside cameras can report image collected data to vehicles.
- the image collected data can reflect objects that are at locations other than that of the collecting device.
- the image processing of the images can be performed at the collecting devices (e.g., mobile device, stationary roadside cameras) to save bandwidth and get the reported data to the vehicles faster.
- the image analysis can be performed in the vehicle or at a cloud server.
- the model can also be improved by automatically identifying differences between the “ground truth” reported by mobile devices and the vehicle's analysis of the sensors. Differences between the vehicle's original analysis and the mobile collected data can be automatically identified. Then, a machine learning process can associate the “ground truth” reported by these mobile devices to similar data sensed by the vehicle's other sensors, thereby improving the vehicle's future performance.
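A toy stand-in for the described learning step: measure the delta between the sensor estimate and the mobile-reported "ground truth," and nudge a per-sensor correction toward it. The gradient-style update and learning rate are illustrative assumptions, not the machine learning process the disclosure contemplates:

```python
def update_sensor_bias(bias_xy, sensed_xy, reported_xy, lr=0.1):
    """One update step: treat the mobile report as ground truth,
    compute the delta against the on-board sensor estimate, and move
    a per-sensor bias correction a fraction lr toward that delta."""
    dx = reported_xy[0] - sensed_xy[0]
    dy = reported_xy[1] - sensed_xy[1]
    return (bias_xy[0] + lr * dx, bias_xy[1] + lr * dy)
```

Accumulated over many report/sense pairs, the bias converges toward the sensor's systematic error, which is the intuition behind associating ground truth with similar sensed data to improve future performance.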
- image analysis can be performed on the vehicle, but in other embodiments the image analysis can be performed on the server or on the mobile device.
- FIG. 13 illustrates a computer network or similar digital processing environment in which embodiments of the present disclosure may be implemented.
- Client computer(s)/devices 50 and server computer(s) 60 provide processing, storage, and input/output devices executing application programs and the like.
- the client computer(s)/devices 50 can also be linked through communications network 70 to other computing devices, including other client devices/processes 50 and server computer(s) 60 .
- the communications network 70 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, local area or wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth®, etc.) to communicate with one another.
- Other electronic device/computer network architectures are suitable.
- FIG. 14 is a diagram of an example internal structure of a computer (e.g., client processor/device 50 or server computers 60 ) in the computer system of FIG. 13 .
- Each computer 50 , 60 contains a system bus 79 , where a bus is a set of hardware lines used for data transfer among the components of a computer or processing system.
- the system bus 79 is essentially a shared conduit that connects different elements of a computer system (e.g., processor, disk storage, memory, input/output ports, network ports, etc.) that enables the transfer of information between the elements.
- Attached to the system bus 79 is an I/O device interface 82 for connecting various input and output devices (e.g., keyboard, mouse, displays, printers, speakers, etc.) to the computer 50 , 60 .
- a network interface 86 allows the computer to connect to various other devices attached to a network (e.g., network 70 of FIG. 13 ).
- Memory 90 provides volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure (e.g., sensor interface controller, perception controller, localization controller, automated driving controller, vehicle controller, system controller, human interaction controller, and machine interaction controller detailed above).
- Disk storage 95 provides non-volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure.
- a central processor unit 84 is also attached to the system bus 79 and provides for the execution of computer instructions.
- the processor routines 92 and data 94 are a computer program product (generally referenced 92 ), including a non-transitory computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROM's, CD-ROM's, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the disclosure system.
- the computer program product 92 can be installed by any suitable software installation procedure, as is well known in the art.
- at least a portion of the software instructions may also be downloaded over a cable communication and/or wireless connection.
- the disclosure programs are a computer program propagated signal product embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)).
- Such carrier medium or signals may be employed to provide at least a portion of the software instructions for the present disclosure routines/program 92 .
Abstract
Description
- Recently, image and other sensor systems have been developed to detect objects, and different object types, such as types of cars, pedestrians, and cyclists. These systems can further detect direction of movements, speed, and accelerations of these objects as well. However, these systems, while sufficient for certain tasks, can be hindered by limitations of range, field of view, or other measuring errors.
- For the purpose of this disclosure the term “autonomous vehicle” refers to a vehicle with autonomous functions, including semi-autonomous vehicles and fully-autonomous vehicles.
- In an embodiment, a method includes receiving, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle. The data is collected by a collecting device external to the autonomous vehicle, and is relayed to the autonomous vehicle via a server. The reported data includes a current location of the object, a type of the object, or a predicted location of the object. The method further includes determining, at the autonomous vehicle, whether the reported data of the object correlates with a found object in an object list. If the determination finds the found object in the object list, the method adds the reported data of the object to data associated with the found object in the object list. Otherwise, the method adds the reported data of the object to an object list of objects detected by on-board sensors of the autonomous vehicle.
- In another embodiment, the collecting device can be an off-vehicle collecting device.
- In an embodiment, the type of the object is a pedestrian, bicycle, vehicle, or vehicle type.
- In an embodiment, the reported data includes location data collected from the collecting device of the object. The location data provides the current location of the object. A history of the location data can be used to predict future locations of the object. In an embodiment, the location data can be acquired using a global positioning system (GPS) receiver, cell-tower signals, WiFi signals, or other location sensing devices and methods.
- In an embodiment, the type of the object is determined based on a vibration pattern of the collecting device, wherein the collecting device is a mobile device.
- In an embodiment, the type of object is determined by a user of the mobile device self-identifying the type of object.
- In an embodiment, the reported data further includes velocity and acceleration data of the object, a route map of the object, or a calendar of a user of the object. In an embodiment, the predicted location of the object is determined by loading the route map of the object, or by loading a destination location from the calendar of the user and generating a route map from the current location to the destination location.
- In an embodiment, the method includes modifying a sensor model based on discrepancies between the reported data from the collection device and data from the on-board sensors of the autonomous vehicle. In another embodiment, the method includes building a sensor model based on the reported data from the collection device and data from the on-board sensors of the autonomous vehicle.
- In an embodiment, the method further includes generating the reported data of the object by analyzing at least one image taken by the collecting device.
- In an embodiment, the method further includes identifying the reported data as an emergency vehicle signal. The method can further include authenticating the reported data as the emergency vehicle signal.
- In an embodiment, the method further includes reporting, from the autonomous vehicle to the server, a location and direction of the vehicle, such that the data returned from the server is related to current and future locations of the autonomous vehicle.
- In an embodiment, the data returned from the server is selected from a list of object data reported from multiple collecting devices stored at the server.
- In an embodiment, the method further includes reporting, from the autonomous vehicle to the server, at least one of a location and direction of the vehicle, such that the responsively received data is related to the same location of the vehicle.
- In an embodiment, one of the objects of the object list is determined by on-board sensors of the autonomous vehicle.
- In an embodiment, a system includes a processor and a memory with computer code instructions stored therein. The memory is operatively coupled to said processor such that the computer code instructions configure the processor to implement a machine interaction controller. The machine interaction controller is configured to receive, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle. The data is collected by a collecting device external to the autonomous vehicle and relayed to the autonomous vehicle via a server. The reported data includes a current location of the object, a type of the object, or a predicted location of the object. The memory and processor are further configured to implement a perception controller configured to determine, at the autonomous vehicle, whether the reported data of the object correlates with a found object in an object list, wherein at least one of the objects of the object list is determined by on-board sensors of the autonomous vehicle. If the determination finds the found object in the object list, the perception controller adds the reported data of the object to data associated with the found object in the object list. Otherwise, the perception controller adds the reported data of the object to an object list of objects detected by on-board sensors of the autonomous vehicle.
- In an embodiment, a non-transitory computer-readable medium is configured to store instructions for providing data to an autonomous vehicle. The instructions, when loaded and executed by a processor, cause the processor to receive, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle. The data is collected by a collecting device external to the autonomous vehicle and relayed to the autonomous vehicle via a server. The reported data includes a current location of the object, a type of the object, or a predicted location of the object. The instructions are further configured to determine, at the autonomous vehicle, whether the reported data of the object correlates with a found object in an object list, and if the determination finds the found object in the object list, add the reported data of the object to data associated with the found object in the object list, and otherwise, add the reported data of the object to an object list of objects detected by the on-board sensors of the autonomous vehicle.
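The receive-correlate-add logic described in these embodiments can be sketched as follows. This is a minimal illustration, not the claimed implementation; the class name, field names, and the distance threshold are all assumptions made for the example.

```python
import math

class TrackedObject:
    """An entry in the vehicle's object list (class and field names are illustrative)."""
    def __init__(self, obj_type, location):
        self.obj_type = obj_type      # e.g., "pedestrian", "cyclist", "vehicle", or None
        self.location = location      # (x, y) position in a local frame, meters
        self.reports = []             # off-vehicle reports fused into this track

def correlate_report(object_list, report, max_distance_m=3.0):
    """Fuse one off-vehicle report into the object list: attach it to the
    nearest existing track within max_distance_m, or add a new track."""
    best, best_dist = None, max_distance_m
    for obj in object_list:
        dx = obj.location[0] - report["location"][0]
        dy = obj.location[1] - report["location"][1]
        dist = math.hypot(dx, dy)
        if dist < best_dist:
            best, best_dist = obj, dist
    if best is not None:
        best.reports.append(report)             # found: enrich the existing track
        if best.obj_type is None:
            best.obj_type = report.get("type")  # a reported type resolves an unknown
        return best
    new_obj = TrackedObject(report.get("type"), report["location"])
    new_obj.reports.append(report)
    object_list.append(new_obj)                 # not found: add as a new object
    return new_obj
```

A report near an existing track enriches that track; a report with no nearby track becomes a new entry in the object list, mirroring the two branches of the determination above.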
- The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
-
FIG. 1 is a diagram illustrating steps in an embodiment of an automated control system of the Observe, Orient, Decide, and Act (OODA) model. -
FIG. 2 is a block diagram of an embodiment of an autonomous vehicle high-level architecture. -
FIG. 3 is a block diagram illustrating an embodiment of the sensor interaction controller (SIC), perception controller (PC), and localization controller (LC). -
FIG. 4 is a block diagram illustrating an example embodiment of the automatic driving controller (ADC), vehicle controller (VC) and actuator controller. -
FIG. 5 is a diagram illustrating decision time scales of the ADC and VC. -
FIG. 6A is a block diagram illustrating an example embodiment of the present disclosure. -
FIG. 6B is a block diagram illustrating an example embodiment of the present disclosure. -
FIG. 6C is a block diagram illustrating an example embodiment of another aspect of the present disclosure. -
FIG. 6D is a block diagram illustrating an example embodiment of another aspect of the present disclosure. -
FIG. 7A is a diagram illustrating an example embodiment of a representation of a vision field identified by an autonomous or semi-autonomous vehicle. -
FIG. 7B is a diagram illustrating an example embodiment of a representation of a vision field identified by an autonomous or semi-autonomous vehicle. -
FIG. 8 is a diagram of an example mobile application running on a mobile device. -
FIG. 9A is a diagram of a sensor field of view employing the data received from the mobile application. -
FIG. 9B is a diagram of a sensor field of view employing the data received from the mobile application. -
FIG. 10 is a network diagram illustrating a mobile application communicating with a server and a representation of an autonomous vehicle. -
FIG. 11 is a flow diagram illustrating a process employed by an example embodiment of the present disclosure. -
FIG. 12 is a flow diagram illustrating a process employed by an example embodiment of the present disclosure at a vehicle and server. -
FIG. 13 illustrates a computer network or similar digital processing environment in which embodiments of the present disclosure may be implemented. -
FIG. 14 is a diagram of an example internal structure of a computer (e.g., client processor/device or server computers) in the computer system of FIG. 13. - A description of example embodiments of the disclosure follows.
-
FIG. 1 is a diagram illustrating steps in an embodiment of an automated control system of the Observe, Orient, Decide, and Act (OODA) model. Automated systems, such as highly-automated driving systems, self-driving cars, or autonomous vehicles, employ an OODA model. The observe virtual layer 102 involves sensing features from the world using machine sensors, such as laser ranging, radar, infra-red, vision systems, or other systems. The orientation virtual layer 104 involves perceiving situational awareness based on the sensed information. Examples of orientation virtual layer activities are Kalman filtering, model based matching, machine or deep learning, and Bayesian predictions. The decide virtual layer 106 narrows multiple candidate actions to a final decision. The act virtual layer 108 provides guidance and control for executing the decision. -
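The four layers of the OODA cycle described above can be sketched as a simple control loop. All component names and the example threshold below are illustrative assumptions, not part of the disclosure.

```python
class OODALoop:
    """Minimal Observe-Orient-Decide-Act cycle; components are illustrative stubs."""
    def __init__(self, sense, orient, decide, act):
        self.sense, self.orient, self.decide, self.act = sense, orient, decide, act

    def step(self):
        observation = self.sense()          # Observe: machine sensors (radar, vision, ...)
        state = self.orient(observation)    # Orient: perceive situational awareness
        action = self.decide(state)         # Decide: narrow options to a final decision
        return self.act(action)             # Act: guidance and control for the decision

# Example: a trivial loop that brakes when an obstacle is closer than 10 m.
loop = OODALoop(
    sense=lambda: {"obstacle_distance_m": 7.5},
    orient=lambda obs: {"too_close": obs["obstacle_distance_m"] < 10.0},
    decide=lambda state: "brake" if state["too_close"] else "cruise",
    act=lambda action: action,
)
```

As the description notes, in a real system the layers are not strictly sequential; each iteration re-observes, so changing conditions propagate into later decisions.
-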
FIG. 2 is a block diagram 200 of an embodiment of an autonomous vehicle high-level architecture 206. The architecture 206 is built using a top-down approach to enable fully automated driving. Further, the architecture 206 is preferably modular such that it can be adaptable with hardware from different vehicle manufacturers. The architecture 206, therefore, has several modular elements functionally divided to maximize these properties. In an embodiment, the modular architecture 206 described herein can interface with sensor systems 202 of any vehicle 204. Further, the modular architecture 206 can receive vehicle information from and communicate with any vehicle 204. - Elements of the
modular architecture 206 include sensors 202, Sensor Interface Controller (SIC) 208, localization controller (LC) 210, perception controller (PC) 212, automated driving controller 214 (ADC), vehicle controller 216 (VC), system controller 218 (SC), human interaction controller 220 (HC) and machine interaction controller 222 (MC). - Referring again to the OODA model of
FIG. 1, in terms of an autonomous vehicle, the observation layer of the model includes gathering sensor readings, for example, from vision sensors, Radar (Radio Detection And Ranging), LIDAR (Light Detection And Ranging), and Global Positioning Systems (GPS). The sensors 202 shown in FIG. 2 form such an observation layer. Examples of the orientation layer of the model can include determining where a car is relative to the world, relative to the road it is driving on, and relative to lane markings on the road, shown by Perception Controller (PC) 212 and Localization Controller (LC) 210 of FIG. 2. Examples of the decision layer of the model include determining a corridor to automatically drive the car, and include elements such as the Automatic Driving Controller (ADC) 214 and Vehicle Controller (VC) 216 of FIG. 2. Examples of the act layer include converting that corridor into commands to the vehicle's driving systems (e.g., steering sub-system, acceleration sub-system, and braking sub-system) that direct the car along the corridor, such as actuator control 410 of FIG. 4. A person of ordinary skill in the art can recognize that the layers of the system are not strictly sequential, and as observations change, so do the results of the other layers. For example, after the system chooses a corridor to drive in, changing conditions on the road, such as detection of another object, may direct the car to modify its corridor, or enact emergency procedures to prevent a collision. Further, the commands of the vehicle controller may need to be adjusted dynamically to compensate for drift, skidding, or other changes to expected vehicle behavior. - At a high level, the
modular architecture 206 receives measurements from sensors 202. While different sensors may output different sets of information in different formats, the modular architecture 206 includes Sensor Interface Controller (SIC) 208, sometimes also referred to as a Sensor Interface Server (SIS), configured to translate the sensor data into data having a vendor-neutral format that can be read by the modular architecture 206. Therefore, the modular architecture 206 learns about the environment around the vehicle 204 from the vehicle's sensors, no matter the vendor, manufacturer, or configuration of the sensors. The SIS 208 can further tag each sensor's data with a metadata tag having its location and orientation in the car, which can be used by the perception controller to determine the unique angle, perspective, and blind spot of each sensor. - Further, the
modular architecture 206 includes vehicle controller 216 (VC). The VC 216 is configured to send commands to the vehicle and receive status messages from the vehicle. The vehicle controller 216 receives status messages from the vehicle 204 indicating the vehicle's status, such as information regarding the vehicle's speed, attitude, steering position, braking status, and fuel level, or any other information about the vehicle's subsystems that is relevant for autonomous driving. The modular architecture 206, based on the information from the vehicle 204 and the sensors 202, therefore can calculate commands to send from the VC 216 to the vehicle 204 to implement self-driving. The functions of the various modules within the modular architecture 206 are described in further detail below. However, when viewing the modular architecture 206 at a high level, it receives (a) sensor information from the sensors 202 and (b) vehicle status information from the vehicle 204, and in turn, provides the vehicle instructions to the vehicle 204. Such an architecture allows the modular architecture to be employed for any vehicle with any sensor configuration. Therefore, any vehicle platform that includes a sensor subsystem (e.g., sensors 202) and an actuation subsystem having the ability to provide vehicle status and accept driving commands (e.g., actuator control 410 of FIG. 4) can integrate with the modular architecture 206. - Within the
modular architecture 206, various modules work together to implement automated driving according to the OODA model. The sensors 202 and SIC 208 reside in the "observe" virtual layer. As described above, the SIC 208 receives measurements (e.g., sensor data) having various formats. The SIC 208 is configured to convert vendor-specific data directly from the sensors to vendor-neutral data. In this way, the set of sensors 202 can include any brand of Radar, LIDAR, image sensor, or other sensors, and the modular architecture 206 can use their perceptions of the environment effectively. - The measurements output by the sensor interface server are then processed by perception controller (PC) 212 and localization controller (LC) 210. The
PC 212 and LC 210 both reside in the "orient" virtual layer of the OODA model. The LC 210 determines a robust world-location of the vehicle that can be more precise than a GPS signal, and can still determine the world-location of the vehicle when the GPS signal is unavailable or inaccurate. The LC 210 determines the location based on GPS data and sensor data. The PC 212, on the other hand, generates prediction models representing a state of the environment around the car, including objects around the car and state of the road. FIG. 3 provides further details regarding the SIC 208, LC 210 and PC 212. - Automated driving controller 214 (ADC) and vehicle controller 216 (VC) receive the outputs of the perception controller and localization controller. The
ADC 214 and VC 216 reside in the "decide" virtual layer of the OODA model. The ADC 214 is responsible for destination selection, route and lane guidance, and high-level traffic surveillance. The ADC 214 further is responsible for lane selection within the route, and identification of safe harbor areas to divert the vehicle to in case of an emergency. In other words, the ADC 214 selects a route to reach the destination, and a corridor within the route to direct the vehicle. The ADC 214 passes this corridor onto the VC 216. Given the corridor, the VC 216 provides a trajectory and lower level driving functions to direct the vehicle through the corridor safely. The VC 216 first determines the best trajectory to maneuver through the corridor while providing comfort to the driver, an ability to reach safe harbor, emergency maneuverability, and an ability to follow the vehicle's current trajectory. In emergency situations, the VC 216 overrides the corridor provided by the ADC 214 and immediately guides the car into a safe harbor corridor, returning to the corridor provided by the ADC 214 when it is safe to do so. The VC 216, after determining how to maneuver the vehicle, including safety maneuvers, then provides actuation commands to the vehicle 204, which executes the commands in its steering, throttle, and braking subsystems. This element of the VC 216 is therefore in the "act" virtual layer of the OODA model. FIG. 4 describes the ADC 214 and VC 216 in further detail. - The
modular architecture 206 further coordinates communication with various modules through system controller 218 (SC). By exchanging messages with the ADC 214 and VC 216, the SC 218 enables operation of human interaction controller 220 (HC) and machine interaction controller 222 (MC). The HC 220 provides information about the autonomous vehicle's operation in a human understandable format based on status messages coordinated by the system controller. The HC 220 further allows for human input to be factored into the car's decisions. For example, the HC 220 enables the operator of the vehicle to enter or modify the destination or route of the vehicle. The SC 218 interprets the operator's input and relays the information to the VC 216 or ADC 214 as necessary. - Further, the MC 222 can coordinate messages with other machines or vehicles. For example, other vehicles can electronically and wirelessly transmit route intentions, intended corridors of travel, and sensed objects that may be in another vehicle's blind spot to autonomous vehicles, and the MC 222 can receive such information, and relay it to the
VC 216 and ADC 214 via the SC 218. In addition, the MC 222 can send information to other vehicles wirelessly. In the example of a turn signal, the MC 222 can receive a notification that the vehicle intends to turn. The MC 222 receives this information via the VC 216 sending a status message to the SC 218, which relays the status to the MC 222. However, other examples of machine communication can also be implemented. For example, other vehicle sensor information or stationary sensors can wirelessly send data to the autonomous vehicle, giving the vehicle a more robust view of the environment. Other machines may be able to transmit information about objects in the vehicle's blind spot, for example. In further examples, other vehicles can send their vehicle track. In still further examples, traffic lights can send a digital signal of their status to aid in the case where the traffic light is not visible to the vehicle. A person of ordinary skill in the art can recognize that any information employed by the autonomous vehicle can also be transmitted to or received from other vehicles to aid in autonomous driving. FIG. 6 shows the HC 220, MC 222, and SC 218 in further detail. -
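The SIC's translation of vendor-specific sensor output into a vendor-neutral format, described above, can be sketched as follows. The adapter, message fields, and metadata tag names are illustrative assumptions, not the disclosure's actual data formats.

```python
def to_vendor_neutral(raw, adapter, mount_location, mount_orientation_deg):
    """Translate one vendor-specific sensor message into a vendor-neutral record.

    'adapter' is a per-vendor parsing function. The result is tagged with the
    sensor's mounting location and orientation so the perception controller can
    reason about each sensor's perspective and blind spot.
    """
    record = adapter(raw)                       # vendor-specific -> common fields
    record["meta"] = {
        "mount_location": mount_location,       # e.g., "front bumper"
        "mount_orientation_deg": mount_orientation_deg,
    }
    return record

# Example adapter for a hypothetical radar vendor's message format.
def acme_radar_adapter(raw):
    return {"kind": "radar",
            "objects": [{"range_m": t["r"], "bearing_deg": t["b"]}
                        for t in raw["targets"]]}
```

Supporting a new sensor then only requires supplying a new adapter function (or, as the description notes, a firmware update), leaving the rest of the modular architecture unchanged.
-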
FIG. 3 is a block diagram 300 illustrating an embodiment of the sensor interaction controller 304 (SIC), perception controller (PC) 306, and localization controller (LC) 308. A sensor array 302 of the vehicle can include various types of sensors, such as a camera 302 a, radar 302 b, LIDAR 302 c, GPS 302 d, IMU 302 e, vehicle-to-everything (V2X) 302 f, or external collected data 302 g (e.g., from a mobile device). Each sensor sends individual vendor defined data types to the SIC 304. For example, the camera 302 a sends object lists and images, the radar 302 b sends object lists and in-phase/quadrature (IQ) data, the LIDAR 302 c sends object lists and scan points, the GPS 302 d sends position and velocity, the IMU 302 e sends acceleration data, and the V2X 302 f controller sends tracks of other vehicles, turn signals, other sensor data, or traffic light data. A person of ordinary skill in the art can recognize that the sensor array 302 can employ other types of sensors, however. The SIC 304 monitors and diagnoses faults at each of the sensors 302 a-f. The SIC 304 processes external collected data 302 g received from mobile devices associated with objects. In addition, the SIC 304 isolates the data from each sensor from its vendor specific package and sends vendor neutral data types to the perception controller (PC) 306 and localization controller 308 (LC). The SIC 304 forwards localization feature measurements and position and attitude measurements to the LC 308, and forwards tracked object measurements, driving surface measurements, and position and attitude measurements to the PC 306. The SIC 304 can further be updated with firmware so that new sensors having different formats can be used with the same modular architecture. - The
LC 308 fuses GPS and IMU data with Radar, LIDAR, and vision data to determine a vehicle location, velocity, and attitude with more precision than GPS can provide alone. The LC 308 then reports that robustly determined location, velocity, and attitude to the PC 306. The LC 308 further monitors measurements representing position, velocity, and attitude data for accuracy relative to each other, such that if one sensor measurement fails or becomes degraded, such as a GPS signal in a city, the LC 308 can correct for it. The PC 306 identifies and locates objects around the vehicle based on the sensed information. The PC 306 further estimates drivable surface regions surrounding the vehicle, and further estimates other surfaces such as road shoulders or drivable terrain in the case of an emergency. The PC 306 further provides a stochastic prediction of future locations of objects. The PC 306 further stores a history of objects and drivable surfaces. - The PC 306 outputs two predictions, a strategic prediction and a tactical prediction. The tactical prediction represents the world around 2-4 seconds into the future, and only predicts the traffic and road nearest to the vehicle. This prediction includes a free space harbor on the shoulder of the road or other location. This tactical prediction is based entirely on measurements from the vehicle's sensors of the nearest traffic and road conditions.
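The LC's blending of a GPS fix with other position estimates, and its fallback when GPS degrades, can be sketched crudely as a weighted average. This is a stand-in for the Kalman-style fusion such a controller would actually perform; the quality weight and function names are illustrative assumptions.

```python
def fuse_position(gps_fix, dead_reckoned, gps_quality):
    """Blend a GPS fix with a dead-reckoned (e.g., IMU-propagated) estimate.

    gps_quality is in [0, 1]; 0 means GPS is unavailable or degraded (such as
    in a city canyon), in which case the dead-reckoned estimate is used alone.
    Positions are (x, y) tuples in a common frame.
    """
    w = max(0.0, min(1.0, gps_quality))
    return tuple(w * g + (1.0 - w) * d for g, d in zip(gps_fix, dead_reckoned))
```

A full localization controller would instead weight each source by its estimated covariance and update it continuously, but the degraded-GPS fallback behaves the same way: the weight on GPS goes to zero and the fused position follows the remaining sensors.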
- The strategic prediction is a long term prediction that predicts areas of the car's visible environment beyond the visible range of the sensors. This prediction is for greater than four seconds into the future, but has a higher uncertainty than the tactical prediction because objects (e.g., cars and people) may change their currently observed behavior in an unanticipated manner. Such a prediction can also be based on sensor measurements from external sources including other autonomous vehicles, manual vehicles with a sensor system and sensor communication network, sensors positioned near or on the roadway or received over a network from transponders on the objects, and traffic lights, signs, or other signals configured to communicate wirelessly with the autonomous vehicle.
-
FIG. 4 is a block diagram 400 illustrating an example embodiment of the automatic driving controller (ADC) 402, vehicle controller (VC) 404 and actuator controller 410. The ADC 402 and VC 404 execute the "decide" virtual layer of the OODA model. - The
ADC 402, based on destination input by the operator and current position, first creates an overall route from the current position to the destination, including a list of roads and junctions between roads in order to reach the destination. This strategic route plan may be based on traffic conditions, and can change based on updated traffic conditions; however, such changes are generally made only for large changes in estimated time of arrival (ETA). Next, the ADC 402 plans a safe, collision-free corridor for the autonomous vehicle to drive through based on the surrounding objects and permissible drivable surface, both supplied by the PC. This corridor is continuously sent as a request to the VC 404 and is updated as traffic and other conditions change. The VC 404 receives the updates to the corridor in real time. The ADC 402 receives back from the VC 404 the current actual trajectory of the vehicle, which is also used to modify the next planned update to the driving corridor request. - The
ADC 402 generates a strategic corridor for the vehicle to navigate. The ADC 402 generates the corridor based on predictions of the free space on the road in the strategic/tactical prediction. The ADC 402 further receives the vehicle position information and vehicle attitude information from the perception controller of FIG. 3. The VC 404 further provides the ADC 402 with an actual trajectory of the vehicle from the vehicle's actuator control 410. Based on this information, the ADC 402 calculates feasible corridors to drive the road, or any drivable surface. In the example of being on an empty road, the corridor may follow the lane ahead of the car. - In another example of the car attempting to pass a second car, the
ADC 402 can determine whether there is free space in a passing lane and in front of the car to safely execute the pass. The ADC 402 can automatically calculate a corridor for the vehicle to travel through to execute the pass maneuver based on (a) the current distance to the second car to be passed, (b) the amount of drivable road space available in the passing lane, (c) the amount of free space in front of the second car to be passed, (d) the speed of the vehicle to be passed, (e) the current speed of the autonomous vehicle, and (f) the known acceleration of the autonomous vehicle. - In another example, the
ADC 402 can determine a corridor to switch lanes when approaching a highway exit. In addition to all of the above factors, the ADC 402 monitors the planned route to the destination and, upon approaching a junction, calculates the best corridor to safely and legally continue on the planned route. - The
ADC 402 then provides the requested corridor 406 to the VC 404, which works in tandem with the ADC 402 to allow the vehicle to navigate the corridor. The requested corridor 406 places geometric and velocity constraints on any planned trajectories for a number of seconds into the future. The VC 404 determines a trajectory to maneuver within the corridor 406. The VC 404 bases its maneuvering decisions on the tactical/maneuvering prediction received from the perception controller and on the position and attitude of the vehicle. As described previously, the tactical/maneuvering prediction is for a shorter time period, but has less uncertainty. Therefore, for lower-level maneuvering and safety calculations, the VC 404 effectively uses the tactical/maneuvering prediction to plan collision-free trajectories within the requested corridor 406. As needed in emergency situations, the VC 404 plans trajectories outside the corridor 406 to avoid collisions with other objects. - The
VC 404 then determines, based on the requested corridor 406, the current velocity and acceleration of the car, and the nearest objects, how to drive the car through that corridor 406 while avoiding collisions with objects and remaining on the drivable surface. The VC 404 calculates a tactical trajectory within the corridor, which allows the vehicle to maintain a safe separation between objects. The tactical trajectory also includes a backup safe harbor trajectory in the case of an emergency, such as a vehicle unexpectedly decelerating or stopping, or another vehicle swerving in front of the autonomous vehicle. - As necessary to avoid collisions, the
VC 404 may be required to command a maneuver suddenly outside of the corridor requested by the ADC 402. This emergency maneuver can be initiated entirely by the VC 404, as it has faster response times than the ADC 402 to imminent collision threats. This capability isolates the safety critical collision avoidance responsibility within the VC 404. The VC 404 sends maneuvering commands to the actuators that control steering, throttling, and braking of the vehicle platform. - The
VC 404 executes its maneuvering strategy by sending a current vehicle trajectory 408 having driving commands (e.g., steering, throttle, braking) to the vehicle's actuator controls 410. The vehicle's actuator controls 410 apply the commands to the car's respective steering, throttle, and braking systems. The VC 404 sending the trajectory 408 to the actuator controls represents the "Act" virtual layer of the OODA model. By conceptualizing the autonomous vehicle architecture in this way, the VC is the only component needing configuration to control a specific model of car (e.g., format of each command, acceleration performance, turning performance, and braking performance), whereas the ADC remains highly agnostic to the specific vehicle's capabilities. In an example, the VC 404 can be updated with firmware configured to allow interfacing with a particular vehicle's actuator control systems, or via a fleet-wide firmware update for all vehicles. -
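The passing-maneuver factors (a)-(f) listed earlier can be sketched as a rough feasibility check. The kinematics are deliberately simplified: the known acceleration (f) is folded into an assumed passing speed, and the 10 m merge margin is an illustrative assumption, not the ADC's actual corridor computation.

```python
def pass_is_feasible(gap_to_lead_m, passing_lane_free_m, gap_ahead_of_lead_m,
                     lead_speed_mps, own_pass_speed_mps, margin_m=10.0):
    """Rough check that a passing corridor exists, using simplified versions of
    factors (a)-(f): distance to the lead car, free passing-lane space, free
    space ahead of the lead car, the lead car's speed, and our passing speed."""
    closing_mps = own_pass_speed_mps - lead_speed_mps
    if closing_mps <= 0:
        return False  # no speed advantage, so the pass can never complete
    # Time to gain the lead car's position plus a merge margin:
    t_pass_s = (gap_to_lead_m + margin_m) / closing_mps
    # Distance covered in the passing lane during that time:
    distance_in_lane_m = own_pass_speed_mps * t_pass_s
    return (passing_lane_free_m >= distance_in_lane_m
            and gap_ahead_of_lead_m >= margin_m)
```

If the check fails, the ADC would simply not issue a passing corridor and the vehicle would continue following its lane.
-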
FIG. 5 is a diagram 500 illustrating decision time scales of the ADC 402 and VC 404. The ADC 402 implements higher-level, strategic 502 and tactical 504 decisions by generating the corridor. The ADC 402 therefore implements the decisions having a longer range or time scale. The estimate of world state used by the ADC 402 for planning strategic routes and tactical driving corridors for behaviors such as passing or making turns has higher uncertainty, but predicts longer into the future, which is necessary for planning these autonomous actions. The strategic predictions have high uncertainty because they predict beyond the sensors' visible range, relying solely on non-vision technologies, such as Radar, for predictions of objects far away from the car, and because events can change quickly due to, for example, a human suddenly changing his or her behavior, or due to the lack of visibility of objects beyond the visible range of the sensors. Many tactical decisions, such as passing a car at highway speed, require perception Beyond the Visible Range (BVR) of an autonomous vehicle (e.g., 100 m or greater), whereas all maneuverability 506 decisions are made based on locally perceived objects to avoid collisions. - The
VC 404, on the other hand, generates maneuverability decisions 506 using maneuverability predictions that are short time frame/range predictions of object behaviors and the driving surface. These maneuverability predictions have a lower uncertainty because of the shorter time scale of the predictions; however, they rely solely on measurements taken within visible range of the sensors on the autonomous vehicle. Therefore, the VC 404 uses these maneuverability predictions (or estimates) of the state of the environment immediately around the car for fast response planning of collision-free trajectories for the autonomous vehicle. The VC 404 issues actuation commands, on the lowest end of the time scale, representing the execution of the already planned corridor and maneuvering through the corridor. -
FIG. 6A is a block diagram 600 illustrating an example embodiment of the present disclosure. An autonomous or semi-autonomous vehicle 602 includes a plurality of sensors, as described above. Using those sensors, the vehicle has a field of view including sensor acquired data 604. In the example of FIG. 6A, the vehicle detects a cyclist 608 a, vehicle 608 b, and pedestrian 608 c. The vehicle may automatically categorize the detected objects 608 a-c as a respective cyclist, vehicle, and pedestrian, or it may lack the information to do so accurately. The vehicle's sensors may be obscured from the objects 608 a-c by lacking a line of sight, or by other interference, for example. In addition, other objects may be in the path of the vehicle 602, but not in the sensor acquired data 604. Therefore, each object 608 a-c includes or carries a mobile device external to the autonomous vehicle having a transmitter, or carries a separate dedicated transmitter, that transmits reported data 610 a-c, respectively, to a server 608. The server, therefore, maintains a list of reported data collected by multiple devices, and determines which data of the list to distribute to each autonomous vehicle. The server may maintain its list of objects by a variety of methods, including time-limiting data in the list, such that data beyond a given time threshold is automatically deleted. The mobile device can also be a collecting device external to the autonomous vehicle. The reported data 610 a-c can include a current location of the object (e.g., GPS location), a type of the object (e.g., pedestrian, cyclist, motorist, or type of vehicle), a predicted location of the object (e.g., based on detected current velocity and direction of the object and map data), and social media or personal information of the user. Social media information can employ calendar or events information to determine a destination of the user.
From there, the vehicle can determine a best route from the current location to the destination, and predict a location or direction of the user. This is especially useful with pedestrians and cyclists, who may not be using a GPS application that can relay its GPS route to the server automatically like a motorist may do. Therefore, the reported data 610 a-c comes from the objects 608 a-c themselves, and does not originate from a sensor of the vehicle 602. -
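The reported data fields and the server's time-limited object list described above can be sketched as follows. The record schema, field names, and the five-second threshold are illustrative assumptions, not the disclosure's actual format.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ReportedObject:
    """One off-vehicle report relayed through the server."""
    object_type: str                  # e.g., "pedestrian", "cyclist", "vehicle"
    location: tuple                   # current (latitude, longitude)
    predicted_location: tuple = None  # from velocity, direction, and map data
    reported_at: float = field(default_factory=time.time)

def prune_stale(reports, max_age_s=5.0, now=None):
    """Server-side time limiting: drop reports older than a given threshold,
    one of the methods described above for maintaining the object list."""
    now = time.time() if now is None else now
    return [r for r in reports if now - r.reported_at <= max_age_s]
```

Pruning keeps the server's list bounded and prevents a vehicle from acting on a pedestrian or cyclist position that is no longer current.
-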
FIG. 6B is a block diagram 650 illustrating an example embodiment of the present disclosure. FIG. 6B is a logical continuation of FIG. 6A. After the reported data 610 a-c is sent from the objects 608 a-c to the server 608, the server relays the reported data 652 a-c. In embodiments, the server 608 can relay the data as received from the objects 608 a-c, or can aggregate or otherwise modify the data 610 a-c to create the reported data 652 a-c. The server 608 can also predict effects of propagation delay of the Internet signal in the relayed reported data 652 a-c. The vehicle 602 receives the reported data 652 a-c. In an embodiment, the vehicle 602 sends the reported data to its perception controller module, such as perception controller 212 of FIG. 2. In relation to FIG. 2, the reported data 652 a-c can be conceptually considered as an input of the sensors 202. FIG. 3 illustrates this concept further, by showing external collected data 302 g as an input of the sensor array, sending reported data to the sensor interaction controller 304, which relays vendor neutral data types to the perception controller 306. - Therefore, the external collected
data input 302 g of FIG. 3 can further assist the vehicle 602 in identifying object types, or even additional objects. For example, after receiving the reported data 652 a-c, the vehicle identifies the cyclist 608 a as identified cyclist 654 a, the vehicle 608 b as identified vehicle 654 b, and the pedestrian 608 c as identified pedestrian 654 c. The reported data 654 a-c can include a current location of the object, a type of the object, and a predicted location of the object. This information can be correlated with already detected objects and their location relative to the car. - Therefore, the reported data 654 a-c can provide the following advantages to the
vehicle 602. First, the vehicle 602 can use the reported data 654 a-c to verify that a class of object already detected by the car is correct. Second, the vehicle 602 can use the reported data 654 a-c to identify a class of object that was previously detected, but of an undetermined type. Third, the vehicle 602 can use the reported data 654 a-c to correct an incorrectly determined class of object. Fourth, the vehicle 602 can use the reported data 654 a-c to detect a previously undetected object. Fifth, the vehicle 602 can use the reported data 654 a-c to determine that a previously detected object is, in reality, two or more separate objects that were difficult for other sensor types to differentiate. Sixth, the vehicle 602 can use the reported data 654 a-c to detect an object's movement or intentions. - A person of ordinary skill in the art can further recognize that the reported data 652 a-c can be reported to multiple vehicles, as long as the server has determined the objects in the reported data 652 a-c are relevant to each vehicle. Therefore, each vehicle can receive reported data custom to its route and location, as illustrated in further detail below in relation to
FIG. 6D. - In an embodiment, the reported data 654 a-c can include an indication and authentication of an emergency vehicle mode. Other vehicles in the area can then automatically know that an emergency vehicle is approaching, and react accordingly by decelerating and pulling to the side of the road until the emergency vehicle passes.
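The correlate-or-add uses of the reported data described above can be sketched as a simple merge into the vehicle's object list. The `TrackedObject` record and the `match_radius` threshold below are illustrative assumptions, not structures from the disclosure:

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrackedObject:
    x: float                         # position relative to the vehicle, meters (assumed frame)
    y: float
    obj_type: Optional[str] = None   # None = detected but of undetermined type

def merge_reported(object_list: List[TrackedObject],
                   reported: List[TrackedObject],
                   match_radius: float = 2.0) -> List[TrackedObject]:
    """Fold reported data into the on-board object list.

    For each reported object: if it matches a tracked object by location,
    the report verifies, fills in, or corrects that object's class;
    otherwise it reveals a previously undetected object.
    """
    for rep in reported:
        match = next((o for o in object_list
                      if math.hypot(o.x - rep.x, o.y - rep.y) <= match_radius),
                     None)
        if match is None:
            # previously undetected object: add it to the list
            object_list.append(TrackedObject(rep.x, rep.y, rep.obj_type))
        elif match.obj_type is None or match.obj_type != rep.obj_type:
            # classify an undetermined object, or correct a misclassification
            match.obj_type = rep.obj_type
    return object_list
```

A production tracker would also reconcile the predicted locations and handle the split-object case; this sketch covers only the per-object correlate-or-add step.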
- Most autonomous vehicles rely solely on their on-board sensors. However, some systems accept data from outside the vehicle itself through vehicle-to-infrastructure (V2I), vehicle-to-everything (V2X), or vehicle-to-vehicle (V2V) radios. In V2I, V2X, and V2V, these radios are built for a special purpose and only for cars. Therefore, a ubiquitous mobile device, such as an iPhone®, Android® smartphone, or Windows® Phone, cannot communicate with a car via those interfaces without additional hardware. Applicant's present disclosure overcomes these limitations by taking advantage of more common network standards.
- The commonly known V2I, V2X, and V2V networks have limited range because of their peer-to-peer nature, relying on special-purpose radios operating on dedicated frequencies. Broadcasting over a universal network, such as the Internet, however, means that packet transfers can be targeted to any vehicle in a certain location. For example, the vehicle only needs to know about objects being reported in its vicinity, not in other countries, states/provinces, or even other city blocks. Therefore, a packet exchange process can occur, where the vehicle notifies the server of its location and direction, and the server relays packets related to that location. In such a manner, the present method and system provide the flexibility to deliver the correct data to the correct vehicles, whereas V2I, V2X, and V2V provide data to all receivers within range of the radios. Therefore, in some scenarios, the present system allows object detection at a greater range, but can also exclude packets that are irrelevant to the recipient (e.g., objects that the car has already passed, etc.).
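The packet exchange process above can be sketched as a server-side filter that relays only the reports near the subscribing vehicle. The haversine distance and the packet field names (`lat`, `lon`) are assumptions for illustration, not details from the disclosure:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def relay_packets(packets, vehicle_lat, vehicle_lon, radius_m=500.0):
    """Relay only the object reports within radius_m of the vehicle's
    last-reported location; everything else is irrelevant to this vehicle."""
    return [p for p in packets
            if haversine_m(vehicle_lat, vehicle_lon, p["lat"], p["lon"]) <= radius_m]
```

The vehicle's direction of travel, described above, would further prune this set (e.g., dropping objects behind the vehicle).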
- The market penetration of V2I, V2X, and V2V is far less than that of LTE/Internet-connected smartphones. In short, LTE has scaled better than V2I, V2X, and V2V, and is therefore a more effective technology to accomplish the goals of the present application, once certain challenges are overcome.
- Internet services today (e.g., Waze and Google Maps) report the general level of traffic congestion on a road, but do not report the locations of individual cars or objects in real time. For example, the application Waze® does allow user reporting of incidents, such as traffic backups, road construction, law enforcement locations, and accidents. However, these reports are limited in the sense that they are not dynamic once reported, and further are not reported in real time by a device associated with the object being reported. Waze's reports do not allow for the precise location accuracy of the present disclosure. Rather, Waze's reports indicate where on a route an incident is, but are agnostic to where exactly on the road the incident is. In other words, Waze's precision is one-dimensional with regard to the route, in that it reports incidents at a mile/foot marker on a given road. Waze lacks the two-dimensional capacity of embodiments of the present disclosure to detect the presence of objects not only at a certain range along the road, but across the road's width as well. Further still, Applicant's disclosure can report the movement of individual objects in real time, instead of relying on a momentary report by a user that is not updated.
-
FIG. 6C is a block diagram 670 illustrating an example embodiment of another aspect of the present disclosure. In response to the reported data 610 a-c and a sensor result 674 being sent to the server 608, the server 608 calculates an updated sensor model 672, which can be sent to new vehicles or as an update to existing vehicles. As one example, the differences (or delta) between the reported data 610 a-c and the sensor result 674 can be used to adjust the updated sensor model 672. Optionally, the sensor model 672 can be sent back to the vehicle 602 (e.g., as a download, firmware update, or real-time update, etc.). -
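One way to picture the delta computation is as a tally of disagreements between the reported "ground truth" and the sensor result, keyed by object. The dict-based records and the idea of feeding the tallies into a model update are illustrative assumptions:

```python
from collections import Counter

def classification_delta(reported_types, sensed_types):
    """Count (sensed, reported) disagreements between reported ground truth
    and the vehicle's sensor classifications. A None sensed value marks a
    missed detection. The tallies could drive the server's sensor-model update."""
    delta = Counter()
    for obj_id, true_type in reported_types.items():
        sensed = sensed_types.get(obj_id)   # None if the sensors missed it
        if sensed != true_type:
            delta[(sensed, true_type)] += 1
    return delta
```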
FIG. 6D is a block diagram 680 illustrating an example embodiment of another aspect of the present disclosure. A car 682 is on a road 692, and receives initial or additional information about objects 684 a-c that are within the zone of objects sent to the car 690. The zone of objects sent to the car is the intersection of a route zone 688 that tracks the road in the car's direction of motion and a requested detection radius 686 that extends a particular distance from the car. The route zone 688 is a region of interest requested by the vehicle. The region of interest is a parametrically defined corridor that can be any shape based on predicted paths or corridors of the vehicle. In other words, the route zone 688 is a geo-filter boundary requested by the vehicle. Therefore, information on object 694, which is in the requested detection radius 686 but not in the route zone 688, and on object 696, which is in the route zone 688 but not in the requested detection radius 686, is filtered from being sent to the car 682. A person of ordinary skill in the art can recognize that shapes other than circles can be used for the requested detection radius 686. Further, a person of ordinary skill in the art can recognize that the route zone 688 can track curved roads, routes with turns onto multiple roads, or an area slightly larger than the road to detect objects on sidewalks or in parking lots. -
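The FIG. 6D filter can be sketched as a boolean intersection test: an object is sent only if it lies both within the requested detection radius and within the route-zone corridor. Representing the corridor as a polyline with a half-width is an assumed parameterization of the "parametrically defined corridor":

```python
def in_send_zone(obj_x, obj_y, car_x, car_y, radius, route_polyline, corridor_halfwidth):
    """Return True only if the object is inside BOTH the requested detection
    radius around the car and the route-zone corridor along the polyline."""
    within_radius = (obj_x - car_x) ** 2 + (obj_y - car_y) ** 2 <= radius ** 2

    def dist_to_segment(px, py, ax, ay, bx, by):
        # Distance from point P to line segment AB.
        abx, aby = bx - ax, by - ay
        t = ((px - ax) * abx + (py - ay) * aby) / max(abx * abx + aby * aby, 1e-12)
        t = min(1.0, max(0.0, t))
        cx, cy = ax + t * abx, ay + t * aby
        return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

    within_route = any(
        dist_to_segment(obj_x, obj_y, x1, y1, x2, y2) <= corridor_halfwidth
        for (x1, y1), (x2, y2) in zip(route_polyline, route_polyline[1:]))
    return within_radius and within_route
```

Because the corridor is a polyline, the same test handles curved roads and routes with turns simply by adding vertices; a wider half-width covers sidewalks and parking lots.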
FIG. 7A is a diagram 700 illustrating an example embodiment of a representation of a vision field identified by an autonomous or semi-autonomous vehicle. In relation to FIGS. 7A-7B, the sensor field of view 712 discussed can be derived from any of a vision system, a LIDAR system, a RADAR system, another sensor system, or any combination of traditional sensing systems. These traditional systems detect objects within the sensor field of view 712, such as vision-identified SUV 702, vision-identified passenger car 704, vision-identified sports car 706, and vision-identified police car 708. RADAR, LIDAR, and camera vision systems can combine their gathered information to determine the locations of the surrounding vehicles and the vehicle types. However, while these systems have a high degree of accuracy, they can sometimes have incomplete information and fail to detect various objects. FIG. 7A shows an example in which unidentified trucks 710 a-b go undetected by the vehicle's traditional system by not being completely in the field of vision, or by being obscured by other objects, such as the police car 708 obscuring the unidentified truck 710 b. Further, the sports car 706 obscures the unidentified car 712. Other unidentified objects can be concealed from sensors in other ways, such as being out of range or being camouflaged with respect to the sensor type. -
FIG. 7B is a diagram 750 illustrating an example embodiment of a representation of a vision field identified by an autonomous or semi-autonomous vehicle. In the sensor field of view 760, the pedestrians 752 a-f are identified by the vehicle. Further, a cyclist 754 is identified, but the vehicle cannot identify its class (e.g., whether it is a pedestrian, cyclist, or other object). Further, a person of ordinary skill in the art can recognize that cars and other objects are left unrecognized within the sensor field of view 760. -
FIG. 8 is a diagram 800 of an example mobile application 802 running on a mobile device. The application identifies the type of traffic the user represents, either by the user self-identifying the type using user control 806, or automatically. The types can include pedestrian, cyclist, and motorist. The user can select a type in the mobile application and submit it using user control 804, or the mobile application 802 can determine the type automatically based on the user's speed and motion pattern. For example, the mobile application 802 can determine whether the motion of the mobile device in a user's pocket corresponds to walking, running, or cycling based on its vibration. Further, the mobile application 802 can determine that speeds over a threshold are those of a vehicle, and therefore register the user as a motorist. -
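A minimal sketch of the speed-based part of the automatic classification follows; the thresholds are illustrative assumptions (the disclosure does not specify values), and a real implementation would also use the vibration/motion pattern described above:

```python
def classify_traffic_type(speed_mps: float) -> str:
    """Classify the reporting user from device speed alone.
    Thresholds are illustrative assumptions."""
    if speed_mps > 11.0:   # ~40 km/h: above typical cycling speeds
        return "motorist"
    if speed_mps > 3.0:    # above walking/running pace
        return "cyclist"
    return "pedestrian"
```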
FIG. 9A is a diagram of a sensor field of view 902 employing the data received from the mobile application. Within the sensor field of view, the sensor identifies several objects/pedestrians 904. However, it does not recognize a pedestrian near the sidewalk who is about to cross outside of the crosswalk. With the mobile application and its reported data, the mobile device sends a signal to a server, which is relayed to the vehicle. Even with no visual, RADAR, or LIDAR knowledge of the pedestrian, the reported object's location can be made known to the vehicle. Likewise, FIG. 9B is a diagram of a sensor field of view 902 employing the data received from the mobile application. Within the sensor field of view 902, the sensor identifies several objects/pedestrians 954. Based on the mobile data, the vehicle recognizes the reported object 956. However, in this embodiment, the vehicle also recognizes the reported acceleration measure 958. In other embodiments, the acceleration measure can also be or include a velocity or direction measure. -
FIG. 10 is a network diagram 1000 illustrating a mobile application 1002 communicating with a server 1004 and a representation of an autonomous vehicle 1006. Upon receiving reported data 1010, the autonomous vehicle models the locations of the reported objects 1012 within a perception of the autonomous vehicle. The columns shown within the perception model 1008 indicate reported objects, and thus the car can avoid them. -
FIG. 11 is a flow diagram 1100 illustrating a process employed by an example embodiment of the present disclosure. After data is collected at a mobile device (1102), the process receives, at an autonomous vehicle, reported data regarding an object in proximity to the autonomous vehicle (1103). The data can then optionally be filtered for quality, such as filtering for signs of inaccurate location reporting due to building structures or bad WiFi, or for other quality metrics (1104). The data is relayed to the autonomous vehicle via a server. The reported data includes a current location of the object, a type of the object, or a predicted location of the object. Then, the process determines whether the reported data of the object matches an object in an object list (1105). - If the determination finds a matching object in the object list, the process correlates the reported data of the object to the matching object in the object list (1108). Otherwise, the process adds the reported data of the object to an object list of objects detected by the on-board sensors of the autonomous vehicle (1106).
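The optional quality filter (1104) might be sketched as dropping reports whose location accuracy or freshness is poor; the report field names and thresholds below are assumptions for illustration:

```python
def quality_filter(reports, max_error_m=10.0, max_age_s=2.0, now_s=0.0):
    """Keep only reports with a tight horizontal-accuracy estimate and a
    recent timestamp, screening out e.g. multipath errors near buildings
    or stale WiFi position fixes."""
    return [r for r in reports
            if r["accuracy_m"] <= max_error_m
            and (now_s - r["timestamp_s"]) <= max_age_s]
```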
-
FIG. 12 is a flow diagram 1200 illustrating a process employed by an example embodiment of the present disclosure at a vehicle 1202 and a server 1252. First, the vehicle 1202 reports a vehicle location and vehicle route to the server 1252 (1204). In turn, the server receives the vehicle location (1254). The server 1252 then receives data collected by mobile device(s) (1256). The server then geo-filters the data collected from the mobile device(s) to the vicinity of the vehicle location (1258). The server 1252 then sends the geo-filtered data to the vehicle 1202 as reported data (1260). - The
vehicle 1202 receives the reported data (1206), then determines whether the reported data matches an object of an object list (1208). If so, the vehicle 1202 correlates the reported data to a matching object in the object list (1212). Separately, the server can also create and modify sensor models based on the reported data and data from on-board sensors of the autonomous vehicle (1262). The updated sensor model can then be sent back to the vehicle, as shown above in FIG. 6C. If the reported data does not match an object of the object list (1208), however, the vehicle 1202 adds the reported data to the object list (1210). - While vehicles are the primary recipients of the data, the collected data can further be used to improve vehicle sensor models. Reported data can further include pictures taken from the device, and the vehicle can analyze the pictures to identify objects that may not be visible to the vehicle. For example, mobile devices or stationary roadside cameras can report image collected data to vehicles. The image collected data can reflect objects that are at locations other than that of the collecting device. The image processing can be performed at the collecting devices (e.g., mobile devices, stationary roadside cameras) to save bandwidth and get the reported data to the vehicles faster. In other embodiments, however, the image analysis can be performed in the vehicle or at a cloud server.
- However, in addition to the immediate vehicle benefit, learning systems such as Amazon® Mechanical Turk can use the collected data to train automated systems more effectively. Users of Mechanical Turk can verify or correct cars, pedestrians, cyclists, and other objects detected in traditional image systems, such that the model can be improved in the future.
- The model can also be improved by automatically identifying differences between the “ground truth” reported by mobile devices and the vehicle's analysis of its sensors. Differences between the vehicle's original analysis and the mobile collected data can be automatically identified. Then, a machine learning process can associate the “ground truth” reported by these mobile devices with similar data sensed by the vehicle's other sensors, thereby improving the vehicle's future performance.
- In one embodiment, image analysis can be performed on the vehicle, but in other embodiments the image analysis can be performed on the server or on the mobile device.
-
FIG. 13 illustrates a computer network or similar digital processing environment in which embodiments of the present disclosure may be implemented. - Client computer(s)/
devices 50 and server computer(s) 60 provide processing, storage, and input/output devices executing application programs and the like. The client computer(s)/devices 50 can also be linked through communications network 70 to other computing devices, including other client devices/processes 50 and server computer(s) 60. The communications network 70 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, local area or wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth®, etc.) to communicate with one another. Other electronic device/computer network architectures are suitable. -
FIG. 14 is a diagram of an example internal structure of a computer (e.g., client processor/device 50 or server computers 60) in the computer system of FIG. 13. Each computer contains a system bus 79, where a bus is a set of hardware lines used for data transfer among the components of a computer or processing system. The system bus 79 is essentially a shared conduit that connects different elements of a computer system (e.g., processor, disk storage, memory, input/output ports, network ports, etc.) and enables the transfer of information between the elements. Attached to the system bus 79 is an I/O device interface 82 for connecting various input and output devices (e.g., keyboard, mouse, displays, printers, speakers, etc.) to the computer. A network interface 86 allows the computer to connect to various other devices attached to a network (e.g., network 70 of FIG. 13). Memory 90 provides volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure (e.g., the sensor interface controller, perception controller, localization controller, automated driving controller, vehicle controller, system controller, human interaction controller, and machine interaction controller detailed above). Disk storage 95 provides non-volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure. A central processor unit 84 is also attached to the system bus 79 and provides for the execution of computer instructions. - In one embodiment, the processor routines 92 and
data 94 are a computer program product (generally referenced 92), including a non-transitory computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROMs, CD-ROMs, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the disclosed system. The computer program product 92 can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable communication and/or wireless connection. In other embodiments, the disclosure programs are a computer program propagated signal product embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)). Such carrier medium or signals may be employed to provide at least a portion of the software instructions for the present disclosure routines/program 92. - While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
Claims (31)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/497,821 US10101745B1 (en) | 2017-04-26 | 2017-04-26 | Enhancing autonomous vehicle perception with off-vehicle collected data |
US16/160,156 US10963462B2 (en) | 2017-04-26 | 2018-10-15 | Enhancing autonomous vehicle perception with off-vehicle collected data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/497,821 US10101745B1 (en) | 2017-04-26 | 2017-04-26 | Enhancing autonomous vehicle perception with off-vehicle collected data |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/160,156 Continuation US10963462B2 (en) | 2017-04-26 | 2018-10-15 | Enhancing autonomous vehicle perception with off-vehicle collected data |
Publications (2)
Publication Number | Publication Date |
---|---|
US10101745B1 US10101745B1 (en) | 2018-10-16 |
US20180314247A1 true US20180314247A1 (en) | 2018-11-01 |
Family
ID=63761654
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/497,821 Active US10101745B1 (en) | 2017-04-26 | 2017-04-26 | Enhancing autonomous vehicle perception with off-vehicle collected data |
US16/160,156 Active 2037-11-05 US10963462B2 (en) | 2017-04-26 | 2018-10-15 | Enhancing autonomous vehicle perception with off-vehicle collected data |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/160,156 Active 2037-11-05 US10963462B2 (en) | 2017-04-26 | 2018-10-15 | Enhancing autonomous vehicle perception with off-vehicle collected data |
Country Status (1)
Country | Link |
---|---|
US (2) | US10101745B1 (en) |
Family Cites Families (63)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7295925B2 (en) * | 1997-10-22 | 2007-11-13 | Intelligent Technologies International, Inc. | Accident avoidance systems and methods |
US6405132B1 (en) * | 1997-10-22 | 2002-06-11 | Intelligent Technologies International, Inc. | Accident avoidance system |
US7202776B2 (en) * | 1997-10-22 | 2007-04-10 | Intelligent Technologies International, Inc. | Method and system for detecting objects external to a vehicle |
US5917920A (en) | 1997-05-29 | 1999-06-29 | Humphries; Alan | Safety vehicle communication system |
DE10036276A1 (en) | 2000-07-26 | 2002-02-07 | Daimler Chrysler Ag | Automatic braking and steering system for a vehicle |
US6581005B2 (en) | 2000-11-30 | 2003-06-17 | Nissan Motor Co., Ltd. | Vehicle position calculation apparatus and method |
US9428186B2 (en) * | 2002-04-09 | 2016-08-30 | Intelligent Technologies International, Inc. | Exterior monitoring for vehicles |
US7135992B2 (en) | 2002-12-17 | 2006-11-14 | Evolution Robotics, Inc. | Systems and methods for using multiple hypotheses in a visual simultaneous localization and mapping system |
US7752548B2 (en) | 2004-10-29 | 2010-07-06 | Microsoft Corporation | Features such as titles, transitions, and/or effects which vary according to positions |
JP4392389B2 (en) | 2005-06-27 | 2009-12-24 | 本田技研工業株式会社 | Vehicle and lane recognition device |
JP2007303841A (en) | 2006-05-08 | 2007-11-22 | Toyota Central Res & Dev Lab Inc | Vehicle position estimation device |
US7587260B2 (en) * | 2006-07-05 | 2009-09-08 | Battelle Energy Alliance, Llc | Autonomous navigation system and method |
JP4344869B2 (en) | 2007-02-16 | 2009-10-14 | 三菱電機株式会社 | Measuring device |
US20080243378A1 (en) | 2007-02-21 | 2008-10-02 | Tele Atlas North America, Inc. | System and method for vehicle navigation and piloting including absolute and relative coordinates |
US20090292468A1 (en) | 2008-03-25 | 2009-11-26 | Shunguang Wu | Collision avoidance method and system using stereo vision and radar sensor fusion |
US20100062652A1 (en) | 2008-09-05 | 2010-03-11 | Wen-Chi Liu | Power adapter with changeable plug |
US20100164789A1 (en) | 2008-12-30 | 2010-07-01 | Gm Global Technology Operations, Inc. | Measurement Level Integration of GPS and Other Range and Bearing Measurement-Capable Sensors for Ubiquitous Positioning Capability |
JP2010221099A (en) | 2009-03-23 | 2010-10-07 | Roodo Tekku:Kk | Fermentation apparatus and system for producing organic fermented fertilizer |
US8352112B2 (en) | 2009-04-06 | 2013-01-08 | GM Global Technology Operations LLC | Autonomous vehicle management |
US8315756B2 (en) | 2009-08-24 | 2012-11-20 | Toyota Motor Engineering and Manufacturing N.A. (TEMA) | Systems and methods of vehicular path prediction for cooperative driving applications through digital map and dynamic vehicle model fusion |
US8570168B2 (en) * | 2009-10-08 | 2013-10-29 | Bringrr Systems, Llc | System, method and device to interrogate for the presence of objects |
US20110190972A1 (en) | 2010-02-02 | 2011-08-04 | Gm Global Technology Operations, Inc. | Grid unlock |
JP4977218B2 (en) | 2010-02-12 | 2012-07-18 | トヨタ自動車株式会社 | Self-vehicle position measurement device |
US8829404B1 (en) | 2010-03-26 | 2014-09-09 | Raytheon Company | Multi-mode seekers including focal plane array assemblies operable in semi-active laser and image guidance modes |
JP5466576B2 (en) | 2010-05-24 | 2014-04-09 | 株式会社神戸製鋼所 | High strength cold-rolled steel sheet with excellent bending workability |
TW201207237A (en) | 2010-08-03 | 2012-02-16 | Shu-Mu Wu | Air pump head |
US9472097B2 (en) | 2010-11-15 | 2016-10-18 | Image Sensing Systems, Inc. | Roadway sensing systems |
US9189280B2 (en) * | 2010-11-18 | 2015-11-17 | Oracle International Corporation | Tracking large numbers of moving objects in an event processing system |
JP5821275B2 (en) | 2011-05-20 | 2015-11-24 | マツダ株式会社 | Moving body position detection device |
US9140792B2 (en) | 2011-06-01 | 2015-09-22 | GM Global Technology Operations LLC | System and method for sensor based environmental model construction |
EP2728563A4 (en) | 2011-06-13 | 2015-03-04 | Toyota Motor Co Ltd | Driving assistance device and driving assistance method |
US9381916B1 (en) | 2012-02-06 | 2016-07-05 | Google Inc. | System and method for predicting behaviors of detected objects through environment representation |
US8493198B1 (en) * | 2012-07-11 | 2013-07-23 | Google Inc. | Vehicle and mobile device traffic hazard warning techniques |
US9120485B1 (en) | 2012-09-14 | 2015-09-01 | Google Inc. | Methods and systems for smooth trajectory generation for a self-driving vehicle |
JP5915480B2 (en) | 2012-09-26 | 2016-05-11 | トヨタ自動車株式会社 | Own vehicle position calibration apparatus and own vehicle position calibration method |
DE102012220337A1 (en) | 2012-11-08 | 2014-05-08 | Robert Bosch Gmbh | A system, mobile device, server, and method for providing a local service to each of the mobile devices carried by a road user |
US9404754B2 (en) | 2013-03-25 | 2016-08-02 | Raytheon Company | Autonomous range-only terrain aided navigation |
US20140309836A1 (en) | 2013-04-16 | 2014-10-16 | Neya Systems, Llc | Position Estimation and Vehicle Control in Autonomous Multi-Vehicle Convoys |
DE102013212710A1 (en) | 2013-05-16 | 2014-11-20 | Siemens Aktiengesellschaft | Sensor product, simulator and method for simulating sensor measurements, merging sensor measurements, validating a sensor model and designing a driver assistance system |
WO2014200984A1 (en) * | 2013-06-12 | 2014-12-18 | Zimmer, Inc. | Femoral explant device |
WO2015008290A2 (en) | 2013-07-18 | 2015-01-22 | Secure4Drive Communication Ltd. | Method and device for assisting in safe driving of a vehicle |
US9547989B2 (en) | 2014-03-04 | 2017-01-17 | Google Inc. | Reporting road event data and sharing with other vehicles |
US9151626B1 (en) | 2014-04-11 | 2015-10-06 | Nissan North America, Inc. | Vehicle position estimation system |
US10012504B2 (en) | 2014-06-19 | 2018-07-03 | Regents Of The University Of Minnesota | Efficient vision-aided inertial navigation using a rolling-shutter camera with inaccurate timestamps |
KR20160002178A (en) | 2014-06-30 | 2016-01-07 | 현대자동차주식회사 | Apparatus and method for self-localization of vehicle |
JP6489632B2 (en) | 2014-09-30 | 2019-03-27 | 株式会社Subaru | Vehicle travel support device |
JP6354561B2 (en) | 2014-12-15 | 2018-07-11 | 株式会社デンソー | Orbit determination method, orbit setting device, automatic driving system |
WO2016110728A1 (en) | 2015-01-05 | 2016-07-14 | 日産自動車株式会社 | Target path generation device and travel control device |
JP6573769B2 (en) | 2015-02-10 | 2019-09-11 | 国立大学法人金沢大学 | Vehicle travel control device |
JP6511283B2 (en) * | 2015-02-12 | 2019-05-15 | 日立オートモティブシステムズ株式会社 | Object detection device |
DE102015001971A1 (en) | 2015-02-19 | 2016-08-25 | Iav Gmbh Ingenieurgesellschaft Auto Und Verkehr | Method and monitoring device for monitoring driver assistance systems |
TWI737592B (en) * | 2015-03-23 | 2021-09-01 | 日商新力股份有限公司 | Image sensor, image processing method and electronic machine |
US10397019B2 (en) * | 2015-11-16 | 2019-08-27 | Polysync Technologies, Inc. | Autonomous vehicle platform and safety architecture |
US9460616B1 (en) | 2015-12-16 | 2016-10-04 | International Business Machines Corporation | Management of mobile objects and service platform for mobile objects |
JP6752024B2 (en) * | 2016-02-12 | 2020-09-09 | 日立オートモティブシステムズ株式会社 | Image processing device |
US9645577B1 (en) * | 2016-03-23 | 2017-05-09 | nuTonomy Inc. | Facilitating vehicle driving and self-driving |
US10829116B2 (en) | 2016-07-01 | 2020-11-10 | nuTonomy Inc. | Affecting functions of a vehicle based on function-related information about its environment |
US20180087907A1 (en) | 2016-09-29 | 2018-03-29 | The Charles Stark Draper Laboratory, Inc. | Autonomous vehicle: vehicle localization |
US10377375B2 (en) | 2016-09-29 | 2019-08-13 | The Charles Stark Draper Laboratory, Inc. | Autonomous vehicle: modular architecture |
US10599150B2 (en) | 2016-09-29 | 2020-03-24 | The Charles Stark Draper Laboratory, Inc. | Autonomous vehicle: object-level fusion |
US10101745B1 (en) | 2017-04-26 | 2018-10-16 | The Charles Stark Draper Laboratory, Inc. | Enhancing autonomous vehicle perception with off-vehicle collected data |
WO2018199941A1 (en) | 2017-04-26 | 2018-11-01 | The Charles Stark Draper Laboratory, Inc. | Enhancing autonomous vehicle perception with off-vehicle collected data |
US11249184B2 (en) | 2019-05-07 | 2022-02-15 | The Charles Stark Draper Laboratory, Inc. | Autonomous collision avoidance through physical layer tracking |
Application Events
- 2017-04-26 US US15/497,821 patent/US10101745B1/en active Active
- 2018-10-15 US US16/160,156 patent/US10963462B2/en active Active
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10599150B2 (en) | 2016-09-29 | 2020-03-24 | The Charles Stark Draper Laboratory, Inc. | Autonomous vehicle: object-level fusion |
US10963462B2 (en) | 2017-04-26 | 2021-03-30 | The Charles Stark Draper Laboratory, Inc. | Enhancing autonomous vehicle perception with off-vehicle collected data |
US11550334B2 (en) | 2017-06-06 | 2023-01-10 | Plusai, Inc. | Method and system for integrated global and distributed learning in autonomous driving vehicles |
US11042155B2 (en) * | 2017-06-06 | 2021-06-22 | Plusai Limited | Method and system for closed loop perception in autonomous driving vehicles |
US11435750B2 (en) | 2017-06-06 | 2022-09-06 | Plusai, Inc. | Method and system for object centric stereo via cross modality validation in autonomous driving vehicles |
US20210294326A1 (en) * | 2017-06-06 | 2021-09-23 | Plusai Limited | Method and system for closed loop perception in autonomous driving vehicles |
US11537126B2 (en) * | 2017-06-06 | 2022-12-27 | Plusai, Inc. | Method and system for on-the-fly object labeling via cross modality validation in autonomous driving vehicles |
US20180349785A1 (en) * | 2017-06-06 | 2018-12-06 | PlusAI Corp | Method and system for on-the-fly object labeling via cross temporal validation in autonomous driving vehicles |
US20180349784A1 (en) * | 2017-06-06 | 2018-12-06 | PlusAI Corp | Method and system for on-the-fly object labeling via cross modality validation in autonomous driving vehicles |
US11790551B2 (en) | 2017-06-06 | 2023-10-17 | Plusai, Inc. | Method and system for object centric stereo in autonomous driving vehicles |
US20230140540A1 (en) * | 2017-06-06 | 2023-05-04 | Plusai, Inc. | Method and system for distributed learning and adaptation in autonomous driving vehicles |
US11573573B2 (en) | 2017-06-06 | 2023-02-07 | Plusai, Inc. | Method and system for distributed learning and adaptation in autonomous driving vehicles |
US11392133B2 (en) | 2017-06-06 | 2022-07-19 | Plusai, Inc. | Method and system for object centric stereo in autonomous driving vehicles |
US10613547B2 (en) * | 2017-08-14 | 2020-04-07 | GM Global Technology Operations LLC | System and method for improved obstacle awareness in using a V2X communications system |
US20190049992A1 (en) * | 2017-08-14 | 2019-02-14 | GM Global Technology Operations LLC | System and Method for Improved Obstacle Awareness in Using a V2X Communications System |
US10937314B2 (en) * | 2017-09-13 | 2021-03-02 | Lg Electronics Inc. | Driving assistance apparatus for vehicle and control method thereof |
US20190077402A1 (en) * | 2017-09-13 | 2019-03-14 | Lg Electronics Inc. | Driving assistance apparatus for vehicle and control method thereof |
US20200012286A1 (en) * | 2018-07-06 | 2020-01-09 | Toyota Research Institute, Inc. | System, method, and computer-readable medium for an autonomous vehicle to pass a bicycle |
US11940798B2 (en) * | 2018-07-06 | 2024-03-26 | Toyota Research Institute, Inc. | System, method, and computer-readable medium for an autonomous vehicle to pass a bicycle |
US20200217948A1 (en) * | 2019-01-07 | 2020-07-09 | Ainstein AI, Inc | Radar-camera detection system and methods |
US20200310416A1 (en) * | 2019-03-29 | 2020-10-01 | Honda Motor Co., Ltd. | Control apparatus, control method, and storage medium |
US11733694B2 (en) * | 2019-03-29 | 2023-08-22 | Honda Motor Co., Ltd. | Control apparatus, control method, and storage medium |
JP2020167551A (en) * | 2019-03-29 | 2020-10-08 | 本田技研工業株式会社 | Control device, control method, and program |
JP7256668B2 (en) | 2019-03-29 | 2023-04-12 | 本田技研工業株式会社 | Control device, control method and program |
CN110134124A (en) * | 2019-04-29 | 2019-08-16 | 北京小马慧行科技有限公司 | Control method, device, storage medium and processor for vehicle driving |
US11249184B2 (en) | 2019-05-07 | 2022-02-15 | The Charles Stark Draper Laboratory, Inc. | Autonomous collision avoidance through physical layer tracking |
EP3761285A1 (en) * | 2019-07-01 | 2021-01-06 | Fujitsu Limited | Smart object knowledge sharing |
US20210027628A1 (en) * | 2019-07-26 | 2021-01-28 | Volkswagen Aktiengesellschaft | Methods, computer programs, apparatuses, a vehicle, and a traffic entity for updating an environmental model of a vehicle |
US11545034B2 (en) * | 2019-07-26 | 2023-01-03 | Volkswagen Aktiengesellschaft | Methods, computer programs, apparatuses, a vehicle, and a traffic entity for updating an environmental model of a vehicle |
CN114616609A (en) * | 2019-10-30 | 2022-06-10 | 古河电气工业株式会社 | Driving assistance system |
EP4044143A4 (en) * | 2019-10-30 | 2022-11-23 | Furukawa Electric Co., Ltd. | Driving assistance system |
US11354913B1 (en) * | 2019-11-27 | 2022-06-07 | Woven Planet North America, Inc. | Systems and methods for improving vehicle predictions using point representations of scene |
US20210206392A1 (en) * | 2020-01-08 | 2021-07-08 | Robert Bosch Gmbh | Method and device for operating an automated vehicle |
US11919544B2 (en) * | 2020-01-08 | 2024-03-05 | Robert Bosch Gmbh | Method and device for operating an automated vehicle |
CN115862183A (en) * | 2023-02-28 | 2023-03-28 | 禾多科技(北京)有限公司 | Sensor characteristic engineering information construction method, device, equipment and computer medium |
Also Published As
Publication number | Publication date |
---|---|
US10101745B1 (en) | 2018-10-16 |
US10963462B2 (en) | 2021-03-30 |
US20190220462A1 (en) | 2019-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10963462B2 (en) | Enhancing autonomous vehicle perception with off-vehicle collected data | |
US10599150B2 (en) | Autonomous vehicle: object-level fusion | |
US10377375B2 (en) | Autonomous vehicle: modular architecture | |
US20180087907A1 (en) | Autonomous vehicle: vehicle localization | |
US10379533B2 (en) | System and method for autonomous vehicle fleet routing | |
WO2018063245A1 (en) | Autonomous vehicle localization | |
CN113423627B (en) | Operating an automated vehicle according to road user reaction modeling under occlusion | |
US9494935B2 (en) | Remote operation of autonomous vehicle in unexpected environment | |
US20190354104A1 (en) | Providing user assistance in a vehicle based on traffic behavior models | |
GB2524384A (en) | Autonomous control in a dense vehicle environment | |
WO2018199941A1 (en) | Enhancing autonomous vehicle perception with off-vehicle collected data | |
CN110329250A (en) | Method for exchanging information between at least two automobiles | |
US20210389133A1 (en) | Systems and methods for deriving path-prior data using collected trajectories | |
WO2018063241A1 (en) | Autonomous vehicle: object-level fusion | |
US20230020040A1 (en) | Batch control for autonomous vehicles | |
KR101439019B1 (en) | Vehicle control apparatus and autonomous driving method thereof | |
US20200269864A1 (en) | System and apparatus for a connected vehicle | |
WO2018063250A1 (en) | Autonomous vehicle with modular architecture | |
CN112464229A (en) | Method and apparatus for detecting spoofing attacks against autonomous driving systems | |
KR20220054534A (en) | Vehicle operation using behavioral rule checks | |
WO2021261228A1 (en) | Obstacle information management device, obstacle information management method, and device for vehicle | |
WO2022165498A1 (en) | Methods and system for generating a lane-level map for an area of interest for navigation of an autonomous vehicle | |
KR20220107881A (en) | Surface guided vehicle behavior | |
CN115265537A (en) | Navigation system with traffic state detection mechanism and method of operation thereof | |
CN115220439A (en) | System and method for a vehicle and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: THE CHARLES STARK DRAPER LABORATORY, INC., MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SUN, FEI;JONES, TROY;LENNOX, SCOTT;SIGNING DATES FROM 20171006 TO 20180110;REEL/FRAME:044605/0392 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |