CN112526997A - Automatic driving perception system and method and vehicle

Automatic driving perception system and method and vehicle

Info

Publication number
CN112526997A
CN112526997A
Authority
CN
China
Prior art keywords
information
recognition module
image
identification module
distance
Legal status
Pending
Application number
CN202011439340.6A
Other languages
Chinese (zh)
Inventor
王栋梁
陈博
孙建蕾
王秋
Current Assignee
FAW Group Corp
Original Assignee
FAW Group Corp
Application filed by FAW Group Corp
Priority to CN202011439340.6A
Publication of CN112526997A

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an automatic driving perception system, an automatic driving perception method, and a vehicle. The system comprises an object recognition module, an image recognition module, a time synchronization box, and a central computing platform. The time synchronization box is connected to the object recognition module, the image recognition module, and the central computing platform, respectively. The object recognition module determines object information in the environment where the vehicle is located and sends it to the central computing platform through the time synchronization box; the image recognition module determines image information around the vehicle and sends it to the central computing platform through the time synchronization box; and the central computing platform determines a target recognition module according to the object information and the image information, so that automatic driving perception is realized through the target recognition module. Together, the object recognition module and the image recognition module ensure that the vehicle can perceive the surrounding scene in all directions, and the central computing platform can automatically select the target recognition module according to the scene to achieve the best perception effect.

Description

Automatic driving perception system and method and vehicle
Technical Field
The embodiment of the invention relates to the technical field of automation, in particular to an automatic driving perception system, an automatic driving perception method and a vehicle.
Background
With the development of algorithms, computing power, and big data, artificial intelligence has entered a period of rapid development. As an extension and application of artificial intelligence technology in the automobile industry and the transportation field, automatic driving has in recent years received close attention and heavy investment from all parties around the world. Automatic driving relies on the cooperative work of artificial intelligence, visual computing, radar, and global positioning systems, so that the vehicle's central processing unit can operate the motor vehicle automatically and safely without any active human operation. Automatic driving technology has become a brand-new development direction for future automobiles.
The design of the automatic driving perception system is one of the key technologies that urgently needs to be overcome in automatic driving, and it determines whether automatic driving vehicles can ultimately be mass-produced. In the prior art, the automatic driving perception system is retrofitted onto the roof after the vehicle leaves the factory; it is not integrated with the vehicle, cannot be mass-produced, and has the following defects: on the one hand, the sensors of the perception system are mounted only on the roof, so the perception range is limited; on the other hand, the sensors are heterogeneous, polymorphic, incomplete, and so on, and each kind of sensor is disturbed by different factors and may exhibit distortion or even failure, causing the entire automatic driving system to fail.
Therefore, how to improve the perception effect of the automatic driving perception system and reduce its failures is a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of the invention provide an automatic driving perception system, an automatic driving perception method, and a vehicle, which use a central computing platform to automatically determine a target recognition module according to the scene, so as to achieve the best perception effect.
In a first aspect, an embodiment of the present invention provides an automatic driving perception system, comprising: an object recognition module, an image recognition module, a time synchronization box, and a central computing platform;
the time synchronization box is connected to the object recognition module, the image recognition module, and the central computing platform, respectively;
the object recognition module is used for determining object information in the environment where the vehicle is located and sending the object information to the central computing platform through the time synchronization box;
the image recognition module is used for determining image information around the vehicle and sending the image information to the central computing platform through the time synchronization box;
the central computing platform is used for determining a target recognition module according to the object information and the image information, so as to perform automatic driving perception through the target recognition module, where the target recognition module is the object recognition module or the image recognition module.
In a second aspect, an embodiment of the present invention further provides an automatic driving perception method, executed by the central computing platform in the automatic driving perception system according to any embodiment of the present invention, the method comprising:
acquiring object information and image information;
determining a target recognition module according to the object information and the image information;
performing automatic driving perception through the target recognition module, where the target recognition module is the object recognition module and/or the image recognition module;
correspondingly, the method further comprises:
performing fault detection and monitoring on the target recognition module, and re-determining the target recognition module according to the current scene information when a fault of the target recognition module is detected.
In a third aspect, an embodiment of the present invention further provides a vehicle, including the automatic driving perception system of any embodiment of the present invention, so as to implement automatic driving.
The embodiments of the invention provide an automatic driving perception system, method, and vehicle. Object information in the environment where the vehicle is located is determined by the object recognition module and sent to the central computing platform through the time synchronization box; image information around the vehicle is determined by the image recognition module and sent to the central computing platform through the time synchronization box; finally, the central computing platform determines a target recognition module according to the object information and the image information, so that automatic driving perception is realized through the target recognition module. Together, the object recognition module and the image recognition module ensure that the vehicle can perceive the surrounding scene in all directions, and the central computing platform can automatically select the target recognition module according to the scene to achieve the best perception effect.
Drawings
Fig. 1 is a schematic structural diagram of an automatic driving perception system according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of an automatic driving perception system according to a second embodiment of the present invention;
fig. 3 is a schematic flow chart of an automatic driving perception method according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle according to a fourth embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the present invention are illustrative only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present invention are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in the present invention are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present invention are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Example one
Fig. 1 is a schematic structural diagram of an automatic driving perception system according to the first embodiment of the present invention. The system is applicable to automatic driving of a vehicle, may be implemented by software and/or hardware, and may include an object recognition module and an image recognition module for automatic driving perception.
As shown in fig. 1, an automatic driving perception system according to the first embodiment of the present invention includes:
an object recognition module 110, an image recognition module 120, a time synchronization box 130, and a central computing platform 140;
the time synchronization box 130 is connected to the object recognition module 110, the image recognition module 120, and the central computing platform 140, respectively;
the object identification module 110 is used for determining object information in the environment where the vehicle is located and sending the object information to the central computing platform 140 through the time synchronization box 130;
the image recognition module 120 is used for determining image information around the vehicle and sending the image information to the central computing platform 140 through the time synchronization box 130;
the central computing platform 140 is configured to determine a target recognition module according to the object information and the image information, so as to perform automatic driving perception through the target recognition module, where the target recognition module is the object recognition module 110 or the image recognition module 120.
In the present embodiment, the object recognition module 110 may be a module for recognizing objects around the vehicle. The object recognition module 110 may include various radar sensors, and the types of radar sensors are not specifically limited here; for example, the object recognition module 110 may include solid-state lidar, ultrasonic radar, and millimeter-wave radar. The radar sensors in the object recognition module 110 may be disposed at different positions around the vehicle body to recognize and perceive objects at different distances around the vehicle.
The solid-state lidar is a type of laser radar, that is, a radar system that detects characteristic quantities such as the position and speed of a target by emitting a laser beam. Illustratively, the solid-state lidar may be a semiconductor lidar; the ultrasonic radar may be a radar that uses ultrasonic waves for positioning; and the millimeter-wave radar may be a radar operating in the millimeter-wave band.
The object recognition module 110 may be configured to recognize object information in the environment where the vehicle is located. Specifically, the various radar sensors in the object recognition module 110 may recognize objects around the vehicle, and the object information may be understood as the size and shape of those objects; for example, the objects may be obstacles, pedestrians, traffic lights, and the like. Object recognition by radar sensors is prior art and is not described here again.
The object recognition module 110 may be hard-wired to the time synchronization box 130, where a hard wire is a direct connecting wire between modules.
Specifically, the object recognition module 110 may be connected to the time synchronization box 130 and configured to send the recognized object information to the central computing platform 140, so that the central computing platform 140 can determine the current scene according to the object information.
The object recognition module 110, being connected to the time synchronization box 130, may further obtain the time information provided by the time synchronization box 130, so as to ensure time synchronization between the object recognition module 110 and the time synchronization box 130, where the time information may be understood as the current time.
In this embodiment, the image recognition module 120 may be a module for recognizing images around the vehicle. The image recognition module 120 may include a plurality of camera sensors, and the types and models of the camera sensors are not specifically limited here; for example, the image recognition module 120 may include ordinary cameras and fisheye cameras. The camera sensors in the image recognition module 120 may be disposed at different positions around the vehicle body to recognize and perceive images at different distances around the vehicle.
An ordinary camera may be a camera of any type and model with basic image-capturing functionality. A fisheye camera is a panoramic camera that can independently achieve wide-range monitoring without blind spots and can cover a 360-degree field of view around the vehicle body.
The image recognition module 120 may be configured to recognize image information around the vehicle. Specifically, the various camera sensors in the image recognition module 120 may recognize images around the vehicle, and the image information may be understood as the information contained in the images captured by the cameras; for example, it may describe the weather conditions in the current environment and the pedestrians, obstacles, and the like around the vehicle. Image recognition by cameras is prior art and is not described here again.
The image recognition module 120 may be hard-wired to the time synchronization box 130.
Specifically, the image recognition module 120 may be connected to the time synchronization box 130, and configured to send the recognized image information to the central computing platform 140, so that the central computing platform 140 may determine the current scene according to the image information.
The image recognition module 120, being connected to the time synchronization box 130, can further obtain the time information provided by the time synchronization box 130, so as to ensure time synchronization between the image recognition module 120 and the time synchronization box 130.
In this embodiment, the time synchronization box 130 may be a device capable of providing time information, and the time synchronization box 130 may serve as a central node of all sensors and manage the object recognition module 110 and the image recognition module 120.
The time synchronization box 130 may be connected to the object recognition module 110 and the image recognition module 120 through hard wires, and connected to the central computing platform 140 through a high-speed serial computer expansion bus (PCIe) line.
Specifically, the time synchronization box 130 may be connected to the object recognition module 110 to obtain the object information around the vehicle determined by the object recognition module 110, and connected to the image recognition module 120 to obtain the image information around the vehicle determined by the image recognition module 120. Connected to both modules, the time synchronization box 130 maintains time synchronization between the object recognition module 110 and the image recognition module 120, that is, it ensures that the two modules operate at the same time when obtaining the object information and the image information.
The time synchronization box 130 may be connected to the central computing platform 140 and configured to send the acquired object information and image information to the central computing platform 140 at the same time, so that the central computing platform 140 can determine a target recognition module according to object information and image information from the same moment and implement automatic driving perception through the target recognition module.
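As an illustration of this simultaneous forwarding, the following minimal Python sketch pairs an object frame and an image frame that carry the same timestamp before handing them to the central computing platform. All names here (SensorFrame, TimeSyncBox, receive) are hypothetical; the patent does not prescribe an implementation.

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        timestamp: float  # stamped against the synchronization box's common clock
        payload: dict     # recognized object information or image information

    class TimeSyncBox:
        # Pair object and image frames that share a timestamp and forward them
        # to the central computing platform together.
        def __init__(self, platform):
            self.platform = platform  # assumed to expose receive(object_frame, image_frame)
            self.pending = {}         # timestamp -> {"object": frame, "image": frame}

        def on_frame(self, source: str, frame: SensorFrame):
            slot = self.pending.setdefault(frame.timestamp, {})
            slot[source] = frame      # source is "object" or "image"
            if "object" in slot and "image" in slot:
                self.platform.receive(slot["object"], slot["image"])
                del self.pending[frame.timestamp]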
The target recognition module may be the object recognition module or the image recognition module; which one is selected is determined according to the object information and the image information.
In the present embodiment, the central computing platform 140 may be an information recognition device; for example, the central computing platform 140 may recognize object information and image information.
The central computing platform 140 may be connected to the time synchronization box 130 through a PCIe line.
The central computing platform 140 may be connected to the time synchronization box 130 and configured to obtain and analyze the object information and the image information, determine which recognition module offers the best perception effect, and designate that module as the target recognition module.
Specifically, the central computing platform 140 may determine the target recognition module according to the following three cases:
Case one: if the central computing platform 140 determines from the object information and the image information that the current environment of the vehicle is daytime or fine weather with good light, the target recognition module may be determined to be either the object recognition module 110 or the image recognition module 120, and automatic driving perception is performed by the selected module.
Case two: if the central computing platform 140 determines from the object information and the image information that the current environment of the vehicle is severe weather, the target recognition module may be determined to be the image recognition module 120, and automatic driving perception is performed by the image recognition module 120. Severe weather may include strong wind, heavy snow, haze, and the like.
Case three: if the central computing platform 140 determines from the object information and the image information that the current environment of the vehicle is night, weak light, or backlight, the target recognition module may be determined to be the object recognition module 110, and automatic driving perception is performed by the object recognition module 110.
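The three cases can be summarized as a simple selection function. The Python sketch below only illustrates the decision logic described above; the scene labels and function name are hypothetical, and the patent leaves the actual scene classification open.

    def choose_target_module(scene: str) -> list[str]:
        # Map a scene label to the module(s) eligible to serve as the target
        # recognition module, following the three cases above.
        if scene == "daytime_good_light":         # case one: either module works
            return ["object_recognition", "image_recognition"]
        if scene == "severe_weather":             # case two: wind, snow, haze
            return ["image_recognition"]
        if scene == "night_low_light_backlight":  # case three
            return ["object_recognition"]
        raise ValueError(f"unrecognized scene: {scene}")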
In the automatic driving perception system provided by this embodiment, the object recognition module first determines object information in the environment where the vehicle is located and sends it to the central computing platform through the time synchronization box; the image recognition module then determines image information around the vehicle and sends it to the central computing platform through the time synchronization box; finally, the central computing platform determines a target recognition module according to the object information and the image information, so that automatic driving perception is realized through the target recognition module. Together, the object recognition module and the image recognition module ensure that the vehicle can perceive the surrounding scene in all directions, and the central computing platform can automatically select the target recognition module according to the scene to achieve the best perception effect.
Example two
Fig. 2 is a schematic structural diagram of an automatic driving perception system according to a second embodiment of the present invention, which is optimized on the basis of the first embodiment. Details already explained in the first embodiment are not repeated here.
In the present embodiment, the object recognition module 210 includes a long-range radar 211, at least one medium-range radar 212, and at least one short-range radar 213, and the object information includes long-distance object information, medium-distance object information, and short-distance object information. The long-range radar 211 is arranged at the front center of the vehicle body and used for acquiring long-distance object information; the at least one medium-range radar 212 is arranged around the roof and used for acquiring medium-distance object information; and the at least one short-range radar 213 is arranged on the front, rear, left, and right sides of the vehicle and used for acquiring short-distance object information.
The long-range radar 211 may be a radar that recognizes long-range objects; it may serve as the primary recognition radar and, for example, may be a solid-state lidar. The long-range radar 211 may be arranged at the front center of the vehicle body to cover the field of view directly ahead of the vehicle, for recognizing long-distance object information in front of the vehicle.
The long-range radar 211 may be connected to the time synchronization box 230 through Ethernet to transmit the acquired long-distance object information to the time synchronization box 230, where long-distance object information is object information far from the vehicle. The specific value of this distance may be preset and is not limited here.
The at least one medium-range radar 212 may be a radar that recognizes medium-range objects; it may serve as a medium-range auxiliary radar and, for example, may be a solid-state lidar. The number of medium-range radars 212 may be any number and is not limited here; for example, there may be eight. The medium-range radars 212 may be placed around the roof to achieve 360-degree field-of-view coverage at medium distance around the vehicle, for recognizing object information at medium distance around the vehicle.
The at least one medium-range radar 212 may be connected to the time synchronization box 230 through Ethernet to transmit the acquired medium-distance object information to the time synchronization box 230, where medium-distance object information is object information at a moderate distance from the vehicle. The specific value of this distance may be preset and is not limited here.
The at least one short-range radar 213 may be a radar that recognizes short-range objects; it may serve as a short-range auxiliary radar and, for example, may be a blind-spot-filling solid-state lidar. The number of short-range radars 213 may be any number and is not limited here; for example, there may be four. The short-range radars 213 may be built into the front, rear, left, and right sides of the vehicle to achieve 360-degree field-of-view coverage around the vehicle body, for recognizing object information within 360 degrees at short range.
The at least one short-range radar 213 may be connected to the time synchronization box 230 through Ethernet to transmit the acquired short-distance object information to the time synchronization box 230, where short-distance object information is object information close to the vehicle. The specific value of this distance may be preset and is not limited here.
Further, the image recognition module 220 includes at least one long-range camera 221, at least one medium-range camera 222, and at least one short-range camera 223, and the image information includes long-distance image information, medium-distance image information, and short-distance image information. The at least one long-range camera 221 is arranged at the front center of the vehicle body and used for acquiring long-distance image information; the at least one medium-range camera 222 is arranged on the front, rear, left, and right sides of the vehicle and used for acquiring medium-distance image information; and the at least one short-range camera 223 is arranged on the front, rear, left, and right sides of the vehicle and used for acquiring short-distance image information.
The at least one long-range camera 221 may be a camera that recognizes long-distance images; it may serve as the main recognition camera and, for example, may be an ordinary camera whose horizontal and vertical fields of view are 61° and 34°, respectively. The number of long-range cameras 221 may be any number and is not limited here; for example, there may be three. The long-range cameras 221 may be built into the front center of the vehicle body to cover the field of view directly ahead of the vehicle, for recognizing long-distance image information in front of the vehicle.
The at least one long-range camera 221 may be connected to the time synchronization box 230 through a coaxial line to transmit the acquired long-distance image information to the time synchronization box 230, where long-distance image information is image information far from the vehicle. The specific value of this distance may be preset and is not limited here.
The at least one medium-range camera 222 may be a camera that recognizes medium-distance images; it may serve as a medium-range auxiliary recognition camera and may be an ordinary camera. The number of medium-range cameras 222 may be any number and is not limited here; for example, there may be three. The medium-range cameras 222 may be built into the rear, left, and right sides of the vehicle to cover the fields of view behind and to the sides of the vehicle body, for recognizing medium-distance image information to the left, right, and rear of the vehicle.
The at least one medium-range camera 222 may be connected to the time synchronization box 230 through a coaxial line to transmit the acquired medium-distance image information to the time synchronization box 230, where medium-distance image information is image information at a moderate distance from the vehicle. The specific value of this distance may be preset and is not limited here.
The at least one short-range camera 223 may be a camera that recognizes short-distance images; it may serve as a short-range auxiliary recognition camera and, for example, may be a fisheye camera whose horizontal and vertical fields of view are 186° and 105°, respectively. The number of short-range cameras 223 may be any number and is not limited here; for example, there may be four. The short-range cameras 223 may be arranged on the front, rear, left, and right sides of the vehicle to achieve short-range 360-degree field-of-view coverage around the vehicle body, for recognizing image information within 360 degrees at short range.
The at least one short-range camera 223 may be connected to the time synchronization box 230 through a coaxial line to transmit the acquired short-distance image information to the time synchronization box 230, where short-distance image information is image information close to the vehicle. The specific value of this distance may be preset and is not limited here.
Further, the object recognition module 210 further includes at least one long-range alternative radar 214 and at least one short-range alternative radar 215, and the object information further includes first alternative information and second alternative information. The at least one long-range alternative radar 214 is arranged at the front center and the rear center of the vehicle body to acquire the first alternative information; the at least one short-range alternative radar 215 is arranged on the front, rear, left, and right sides of the vehicle to acquire the second alternative information.
The at least one long-range alternative radar 214 may be an alternative radar for recognizing long-range objects; for example, it may be a millimeter-wave radar. The number of long-range alternative radars 214 may be any number and is not specifically limited here; for example, two long-range alternative radars 214 may be embedded in the front and rear of the vehicle to cover the long-distance fields of view ahead of and behind the vehicle body. When the long-range radar 211 fails, object information at long distance in front of and behind the vehicle can be recognized by the long-range alternative radars 214.
The at least one long-range alternative radar 214 may be connected to the time synchronization box 230 through a CAN line to transmit the acquired first alternative information to the time synchronization box 230, where the first alternative information may be object information far from the vehicle.
The at least one short-range alternative radar 215 may be an alternative radar for recognizing short-range objects; for example, it may be an ultrasonic radar. The number of short-range alternative radars 215 may be any number and is not specifically limited here; for example, sixteen short-range alternative radars 215 may be arranged on the front, rear, left, and right sides of the vehicle to achieve short-range 360-degree field-of-view coverage around the vehicle body. When the short-range radar 213 fails, object information within 360 degrees at short range can be recognized by the short-range alternative radars 215.
The at least one short-range alternative radar 215 may be connected to the time synchronization box 230 through a CAN line to transmit the acquired second alternative information to the time synchronization box 230, where the second alternative information may be object information close to the vehicle.
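For reference, the sensor layout described in this embodiment can be collected into one structure. The sketch below is illustrative only: the Sensor type is invented for this summary, and the counts are the example values given in the text rather than fixed requirements.

    from dataclasses import dataclass

    @dataclass
    class Sensor:
        kind: str      # sensor technology
        role: str      # "primary" or "alternative"
        coverage: str  # long / medium / short range
        mounting: str  # position on the vehicle body
        link: str      # connection to the time synchronization box
        count: int     # example count from this embodiment

    SENSOR_SUITE = [
        Sensor("solid_state_lidar", "primary", "long", "front center", "Ethernet", 1),
        Sensor("solid_state_lidar", "primary", "medium", "around the roof", "Ethernet", 8),
        Sensor("solid_state_lidar", "primary", "short", "front/rear/left/right", "Ethernet", 4),
        Sensor("camera", "primary", "long", "front center", "coaxial", 3),
        Sensor("camera", "primary", "medium", "rear/left/right", "coaxial", 3),
        Sensor("fisheye_camera", "primary", "short", "front/rear/left/right", "coaxial", 4),
        Sensor("millimeter_wave_radar", "alternative", "long", "front/rear center", "CAN", 2),
        Sensor("ultrasonic_radar", "alternative", "short", "front/rear/left/right", "CAN", 16),
    ]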
Further, the time synchronization box 230 is configured to: acquiring object information sent by the object recognition module 210 and image information sent by the image recognition module 220; the object information and the image information are sent to a central computing platform 240.
The time synchronization box 230 may send the object information and the image information to the central computing platform 240 at the same time, so that the central computing platform 240 may determine the target recognition module according to the image information and the object information, and perform automatic driving perception through the target recognition module.
Further, the time synchronization box 230 is configured to: the time information is acquired and sent to the object recognition module 210 and the image recognition module 220 to synchronize the object recognition module 210 and the image recognition module 220 in time.
Here, time synchronization may be understood as keeping the clocks of the object recognition module 210 and the image recognition module 220 at the same time value. Time synchronization ensures that there is no time difference between the object recognition module 210 and the image recognition module 220, so the object information and the image information are obtained at the same moment; the time synchronization box 230 can then obtain the object information and the image information at the same moment and send them to the central computing platform 240 simultaneously, so that the central computing platform 240 receives both at the same time.
The time synchronization box 230 can obtain the time information in the following two ways: in the first way, the time synchronization box 230 may be connected to the GPS of the vehicle through an antenna and use the time provided by the GPS as the time information; in the second way, the time synchronization box 230 may use the time provided by its own clock as the time information.
It should be noted that when the GPS works normally, the time synchronization box 230 preferentially uses the time information provided by the GPS. When the vehicle travels in an area with poor signal reception, for example, through a tunnel, the GPS cannot work normally and provide the correct time; in that case, the time synchronization box 230 may send the time provided by its own clock as the time information to the object recognition module 210 and the image recognition module 220 to ensure time synchronization.
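A minimal sketch of this two-way time-source policy follows; the gps object with has_fix() and utc_time() methods is an assumed interface, not part of the patent.

    import time

    def current_time_info(gps=None) -> float:
        # Prefer GPS time when a fix is available; otherwise fall back to the
        # synchronization box's own clock (e.g. while passing through a tunnel).
        if gps is not None and gps.has_fix():
            return gps.utc_time()
        return time.time()  # local clock of the time synchronization box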
Further, the central computing platform 240 is specifically configured to: determine scene information according to the object information and the image information, where the scene information includes first scene information, second scene information, and third scene information, each corresponding to a different scene; if the scene information is the first scene information, determine the target recognition module to be the image recognition module 220 or the object recognition module 210; if the scene information is the second scene information, determine the target recognition module to be the image recognition module 220; and if the scene information is the third scene information, determine the target recognition module to be the object recognition module 210.
the scene information may be information of an environment where the vehicle is currently located, and current weather, light intensity, whether obstacles exist around the vehicle, whether pedestrians exist, and the like may be determined according to the scene information. Different target recognition modules can be determined according to different scene information.
The first scene information may include clear weather, daytime, good light, and the like; the second scene information may include strong wind, heavy snow, haze, and the like; and the third scene information may include night, poor light, backlight, and the like.
For example, if the current scene matches the first scene information, the target recognition module may be determined to be either the object recognition module 210 or the image recognition module 220, and automatic driving perception is performed by the selected module.
For example, if the current scene matches the second scene information, the target recognition module may be determined to be the image recognition module 220, and automatic driving perception is performed by the image recognition module 220.
For example, if the current scene matches the third scene information, the target recognition module may be determined to be the object recognition module 210, and automatic driving perception is performed by the object recognition module 210.
Further, the central computing platform 240 is further configured to: and carrying out fault detection and monitoring on the target identification module, and determining the target identification module again according to the current scene information when the target identification module is monitored to have a fault.
A fault of the target recognition module may be caused by a change in the current scene. For example, the radar sensors in the object recognition module 210 may fail in rain or snow, and the camera sensors in the image recognition module 220 may fail at night.
For example, after the target recognition module is determined to be the object recognition module 210 according to the first scene information, the current scene may change to the second scene information as the vehicle drives on; at this time, the radar sensors in the object recognition module 210 may fail due to the strong wind, and the image recognition module 220 then needs to be re-determined as the target recognition module according to the current second scene information.
Similarly, if the target recognition module is determined to be the image recognition module 220 according to the first scene information and the current scene later changes to the third scene information as time passes, the camera sensors in the image recognition module 220 may fail at night, and the object recognition module 210 then needs to be re-determined as the target recognition module according to the current third scene information.
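Combining the selection logic with fault monitoring gives a loop of the following shape. It reuses the hypothetical choose_target_module sketch shown earlier and assumes a platform object with detect_fault and activate methods; none of these interfaces are specified by the patent.

    import time

    def monitor_and_reselect(platform, scene_source, target, period_s=0.5):
        # Watch the active target recognition module; when a fault is detected,
        # re-determine the target from the current scene information.
        while True:
            if platform.detect_fault(target):
                scene = scene_source.current_scene()
                target = choose_target_module(scene)[0]  # first eligible module
                platform.activate(target)
            time.sleep(period_s)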
Further, the central computing platform 240 is used for controlling the object recognition module 210 or the image recognition module 220 to perform recognition individually, so that the target recognition module performs recognition on its own.
The object recognition module 210 and the image recognition module 220 can work independently, and the target recognition module is either the object recognition module 210 or the image recognition module 220; that is, the central computing platform 240 performs automatic driving perception according to one recognition module. Therefore, after determining the target recognition module, the central computing platform 240 may control that one module to operate and perform automatic driving according to the information it recognizes.
Example three
Fig. 3 is a schematic flow chart of an automatic driving perception method according to the third embodiment of the present invention. The method is applicable to automatic driving of a vehicle and may be executed by the central computing platform in the automatic driving perception system provided in the first or second embodiment, where the central computing platform may be implemented by hardware or software. The method specifically includes the following steps:
and S110, acquiring object information and image information.
In this step, the central computing platform may obtain the object information from the object recognition module through the time synchronization box, and may obtain the image information from the image recognition module through the time synchronization box.
Specifically, the time synchronization box sends the time information to the object recognition module and the image recognition module for time synchronization; the object recognition module and the image recognition module then send the object information and the image information, respectively, to the time synchronization box at the same moment, and the time synchronization box sends the object information and the image information to the central computing platform simultaneously.
And S120, determining a target identification module according to the object information and the image information.
In this step, the object recognition module or the image recognition module may be determined as the target recognition module according to the object information and the image information.
For example, if it is determined from the object information and the image information that the vehicle is currently in daytime or in fine weather with good light, either the object recognition module or the image recognition module may be determined as the target recognition module.
For example, if it is determined from the object information and the image information that the current scene of the vehicle is backlit, the camera sensors in the image recognition module cannot work in a backlit environment, and only the object recognition module can be determined as the target recognition module.
For example, if it is determined from the object information and the image information that the current scene of the vehicle is snowy weather, the radar sensors in the object recognition module cannot work in snowy weather, and only the image recognition module can be determined as the target recognition module.
S130, automatic driving perception is conducted according to the target recognition module, and the target recognition module is the object recognition module or the image recognition module.
In this embodiment, if the target recognition module is an object recognition module, the central computing platform may control the object recognition module to perform automatic driving perception.
Specifically, the object recognition module acquires the object information and can send it to the central computing platform through the time synchronization box, and the central computing platform can realize automatic driving according to the object information.
In this embodiment, if the target recognition module is an image recognition module, the central computing platform may control the image recognition module to perform automatic driving perception.
Specifically, the image recognition module acquires the image information and can send it to the central computing platform through the time synchronization box, and the central computing platform can realize automatic driving according to the image information.
S140, fault detection and monitoring are carried out on the target recognition module, and when a fault of the target recognition module is detected, the target recognition module is determined again according to the current scene information.
Illustratively, if the target recognition module has been determined to be the object recognition module and the current scene of the vehicle is haze, a fault of the object recognition module can be detected, since the radar sensors cannot work normally in haze; after the fault is detected, the image recognition module can be re-determined as the target recognition module according to the current scene information.
For example, if the target recognition module is the image recognition module and the current scene of the vehicle is night, a fault of the image recognition module can be detected, since it cannot recognize the scene at night; when the fault is detected, the object recognition module can be re-determined as the target recognition module according to the current scene information.
The third embodiment of the invention provides an automatic driving perception method that embodies the perception method of the automatic driving perception system. The object recognition module and the image recognition module ensure that the vehicle can perceive the surrounding scene in all directions; the central computing platform can automatically determine the target recognition module according to the scene to achieve the best perception effect; and when the target recognition module fails, it can be re-determined, so that the automatic driving system can still continue automatic driving perception normally.
Example four
Fig. 4 is a schematic structural diagram of a vehicle according to the fourth embodiment of the present invention. As shown in fig. 4, a vehicle according to the fourth embodiment of the present invention includes: one or more central computing platforms 41 and a storage device 42; there may be one or more central computing platforms 41 in the vehicle, and one central computing platform 41 is taken as an example in fig. 4; the storage device 42 is used to store one or more programs; and the one or more programs are executed by the one or more central computing platforms 41, so that the one or more central computing platforms 41 implement the automatic driving perception method described in any embodiment of the present invention.
The vehicle may further include: an input device 43 and an output device 44.
The central computing platform 41, the storage device 42, the input device 43 and the output device 44 in the vehicle may be connected by a bus or other means, as exemplified by the bus connection in fig. 4.
The storage device 42 in the vehicle may be used as a computer readable storage medium for storing one or more programs, which may be software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the automatic driving perception method provided in one or two embodiments of the present invention (for example, the central computing platform 140 in the automatic driving perception system shown in fig. 1). The central computing platform 41 executes various functional applications and data processing of the vehicle by running software programs, instructions and modules stored in the storage device 42, that is, implements the automatic driving perception method in the above-described method embodiment.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the vehicle, and the like. Further, the storage 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 42 may further include memory located remotely from central computing platform 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 43 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the vehicle. The output device 44 may include a display device such as a display screen.
And, when the one or more programs comprised by the aforementioned vehicle are executed by the one or more central computing platforms 41, the programs perform the following operations:
acquiring object information and image information;
determining a target identification module according to the object information and the image information;
carrying out automatic driving perception according to the target recognition module, wherein the target recognition module is the object recognition module and/or the image recognition module;
correspondingly, fault detection and monitoring are carried out on the target identification module, and when the target identification module is monitored to have faults, the target identification module is determined again according to the current scene information.
Example five
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, is configured to perform an automatic driving perception method, where the method includes:
acquiring object information and image information;
determining a target identification module according to the object information and the image information;
carrying out automatic driving perception according to the target recognition module, wherein the target recognition module is the object recognition module or the image recognition module;
correspondingly, the method further comprises the following steps:
and carrying out fault detection and monitoring on the target identification module, and determining the target identification module again according to the current scene information when the target identification module is monitored to have a fault.
Optionally, the program may be further configured to perform an automatic driving perception method provided by any of the embodiments of the present invention when executed by the processor.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An automatic driving perception system, comprising: an object recognition module, an image recognition module, a time synchronization box and a central computing platform;
the time synchronization box is respectively connected with the object recognition module, the image recognition module and the central computing platform;
the object recognition module is used for determining object information in the environment where the vehicle is located and sending the object information to the central computing platform through the time synchronization box;
the image recognition module is used for determining image information around the vehicle and sending the image information to the central computing platform through the time synchronization box;
the central computing platform is used for determining a target recognition module according to the object information and the image information so as to perform automatic driving perception through the target recognition module, wherein the target recognition module is the object recognition module or the image recognition module.
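Purely for orientation, the composition recited in claim 1 can be modeled as a handful of typed interfaces; every name below (ObjectInfo, ImageInfo, PerceptionModule, AutoDrivingPerceptionSystem) is invented for this sketch and is not part of the claimed system:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class ObjectInfo:
    """Radar-derived object information (hypothetical shape)."""
    targets: list

@dataclass
class ImageInfo:
    """Camera-derived image information (hypothetical shape)."""
    frames: list

class PerceptionModule(Protocol):
    """Common interface shared by the object and image recognition modules."""
    def perceive(self) -> dict: ...

@dataclass
class AutoDrivingPerceptionSystem:
    """Claim 1 topology: both modules feed the platform through the sync box."""
    object_module: PerceptionModule
    image_module: PerceptionModule
    time_sync_box: object      # relays both information streams on a common clock
    central_platform: object   # selects the target recognition module
```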
2. The system of claim 1, wherein the object recognition module comprises: a long-distance radar, at least one middle-distance radar and at least one short-distance radar; and the object information includes: long-distance object information, middle-distance object information and short-distance object information;
the long-distance radar is arranged at the front center of the vehicle body and used for acquiring the long-distance object information;
the at least one middle-distance radar is arranged around the vehicle roof and used for acquiring the middle-distance object information;
the at least one short-distance radar is respectively arranged on the front side, the rear side, the left side and the right side of the vehicle and used for acquiring the short-distance object information.
3. The system of claim 1, wherein the image recognition module comprises: at least one long-distance camera, at least one middle-distance camera and at least one short-distance camera; and the image information includes: long-distance image information, middle-distance image information and short-distance image information;
the at least one long-distance camera is arranged at the front center of the vehicle body and used for acquiring the long-distance image information;
the at least one middle-distance camera is respectively arranged on the front side, the rear side, the left side and the right side of the vehicle and used for acquiring the middle-distance image information;
the at least one short-distance camera is respectively arranged on the front side, the rear side, the left side and the right side of the vehicle and used for acquiring the short-distance image information.
4. The system of claim 1, wherein the object recognition module further comprises: at least one short-distance alternative radar and at least one long-distance alternative radar; and the object information further includes: first alternative information and second alternative information;
the at least one long-distance alternative radar is respectively arranged at the center of the front side and the center of the rear side of the vehicle body and used for acquiring the first alternative information;
the at least one short-distance alternative radar is respectively arranged on the front side, the rear side, the left side and the right side of the vehicle and used for acquiring the second alternative information.
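Read together, claims 2-4 amount to a mounting-position table for the radar and camera suites. A configuration sketch of that layout follows; the position labels and key names are invented here and carry no claim significance:

```python
# Hypothetical mounting table implied by claims 2-4 (labels invented for this sketch).
SENSOR_LAYOUT = {
    "long_distance_radar":      ["front_center"],
    "middle_distance_radar":    ["roof_perimeter"],
    "short_distance_radar":     ["front", "rear", "left", "right"],
    "long_distance_alt_radar":  ["front_center", "rear_center"],    # first alternative information
    "short_distance_alt_radar": ["front", "rear", "left", "right"], # second alternative information
    "long_distance_camera":     ["front_center"],
    "middle_distance_camera":   ["front", "rear", "left", "right"],
    "short_distance_camera":    ["front", "rear", "left", "right"],
}

def sensors_at(position: str) -> list:
    """List the sensor types mounted at a given position label."""
    return [name for name, spots in SENSOR_LAYOUT.items() if position in spots]

# sensors_at("front_center") ->
# ['long_distance_radar', 'long_distance_alt_radar', 'long_distance_camera']
```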
5. The system of claim 1, wherein the time synchronization box is configured to:
acquire the object information sent by the object recognition module and the image information sent by the image recognition module; and
send the object information and the image information to the central computing platform.
6. The system of claim 1, wherein the time synchronization box is configured to: acquire time information, and send the time information to the object recognition module and the image recognition module so as to synchronize the time of the object recognition module with the time of the image recognition module.
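One way to picture claims 5 and 6 together is a relay that both forwards the two information streams and broadcasts a shared clock; the sketch below is a toy model with an invented API, not the box's actual interface:

```python
import time

class TimeSyncBoxModel:
    """Toy model of the time synchronization box of claims 5 and 6 (invented API)."""

    def __init__(self, object_module, image_module, platform):
        self.object_module = object_module
        self.image_module = image_module
        self.platform = platform

    def synchronize(self):
        # Claim 6: acquire time information and send it to both modules so
        # that their clocks, and therefore their outputs, stay aligned.
        now = time.monotonic()
        self.object_module.set_time(now)
        self.image_module.set_time(now)

    def relay(self):
        # Claim 5: acquire the object and image information and forward both
        # to the central computing platform as one time-aligned pair.
        self.platform.receive(self.object_module.get_object_info(),
                              self.image_module.get_image_info())
```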
7. The system of claim 1, wherein the central computing platform is specifically configured to:
determine scene information according to the object information and the image information, wherein the scene information comprises first scene information, second scene information and third scene information, and the first scene information, the second scene information and the third scene information respectively correspond to different scenes;
if the scene information is the first scene information, determine the target recognition module to be the image recognition module or the object recognition module;
if the scene information is the second scene information, determine the target recognition module to be the image recognition module;
if the scene information is the third scene information, determine the target recognition module to be the object recognition module;
correspondingly, the central computing platform is further configured to:
perform fault detection and monitoring on the target recognition module, and re-determine the target recognition module according to the current scene information when a fault of the target recognition module is detected.
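The scene-to-module mapping of claim 7 is effectively a three-way switch. A minimal sketch follows, with an invented Scene enumeration standing in for the first, second and third scene information:

```python
from enum import Enum, auto

class Scene(Enum):
    FIRST = auto()   # claim 7: either module may serve as the target
    SECOND = auto()  # claim 7: the image recognition module is selected
    THIRD = auto()   # claim 7: the object recognition module is selected

def determine_target_module(scene, object_module, image_module):
    """Map claim 7's scene information to a target recognition module."""
    if scene is Scene.FIRST:
        # The claim permits either module here; preferring the image module
        # is an arbitrary tie-break made only for this sketch.
        return image_module
    if scene is Scene.SECOND:
        return image_module
    return object_module
```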
8. The system of claim 1, wherein the central computing platform is configured to control the object recognition module or the image recognition module to perform recognition individually, such that the target recognition module alone performs recognition.
9. An automatic driving perception method, performed by the central computing platform in the system of any one of claims 1-8, the method comprising:
acquiring object information and image information;
determining a target recognition module according to the object information and the image information; and
performing automatic driving perception through the target recognition module, wherein the target recognition module is the object recognition module or the image recognition module;
correspondingly, the method further comprises:
performing fault detection and monitoring on the target recognition module, and re-determining the target recognition module according to the current scene information when a fault of the target recognition module is detected.
10. A vehicle, characterized by comprising:
one or more central computing platforms;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more central computing platforms, cause the one or more central computing platforms to implement the automatic driving perception method as recited in claim 9.
CN202011439340.6A 2020-12-07 2020-12-07 Automatic driving perception system and method and vehicle Pending CN112526997A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011439340.6A CN112526997A (en) 2020-12-07 2020-12-07 Automatic driving perception system and method and vehicle

Publications (1)

Publication Number Publication Date
CN112526997A (en) 2021-03-19

Family

ID=75000156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011439340.6A Pending CN112526997A (en) 2020-12-07 2020-12-07 Automatic driving perception system and method and vehicle

Country Status (1)

Country Link
CN (1) CN112526997A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104678912A (en) * 2013-11-28 2015-06-03 Mettler-Toledo (Changzhou) Precision Instruments Co., Ltd. Measurement system consisting of a plurality of sensors
CN109690345A (en) * 2016-09-08 2019-04-26 Knorr-Bremse Systeme für Nutzfahrzeuge GmbH Device for sensing the vehicle environment when mounted on a vehicle
CN110537109A (en) * 2017-04-28 2019-12-03 SZ DJI Technology Co., Ltd. Sensing assembly for autonomous driving
CN109375635A (en) * 2018-12-20 2019-02-22 Anhui Jianghuai Automobile Group Co., Ltd. Road environment perception system and method for an autonomous vehicle
CN109696172A (en) * 2019-01-17 2019-04-30 Freetech Intelligent Systems Co., Ltd. Multi-sensor track fusion method and apparatus, and vehicle
CN110329273A (en) * 2019-06-18 2019-10-15 Zhejiang University Method and device for synchronizing unmanned-driving acquisition data
CN210465683U (en) * 2019-07-23 2020-05-05 Beijing Jiuzhou Huahai Technology Co., Ltd. Synchronous acquisition system for camera and radar data of an intelligent connected vehicle
CN110412564A (en) * 2019-07-29 2019-11-05 Harbin Institute of Technology Train carriage identification and distance measurement method based on multi-sensor fusion
CN210882093U (en) * 2019-09-16 2020-06-30 Zhengzhou Yutong Bus Co., Ltd. Automatic driving vehicle environment perception system and automatic driving vehicle
CN111258318A (en) * 2020-01-22 2020-06-09 Dongfeng Commercial Vehicle Co., Ltd. Automatic driving system of sanitation vehicle and control method thereof

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Li Weizhang: "Urban Rail Transit Communication", China Railway Publishing House, 31 October 2008 *
Duan Bianxia: "Fundamentals of Television Videography", Henan University Press, 30 September 2012 *
Gan Rongbing: "Radar Active Jamming Technology and System Design", National Defense University Press, 3 December 2019 *
Editorial Committee of Railway Staff Job Training Textbooks: "Railway Communication Worker", China Railway Publishing House, 30 June 2014 *

Similar Documents

Publication Publication Date Title
CN112562314B (en) Road end sensing method and device based on deep fusion, road end equipment and system
CN109389832B (en) System and method for improving obstacle awareness using a V2X communication system
US10490079B2 (en) Method and device for selecting and transmitting sensor data from a first motor vehicle to a second motor vehicle
US20180120842A1 (en) Radar multipath processing
WO2020147311A1 (en) Vehicle driving guarantee method and apparatus, device, and readable storage medium
CN113721621B (en) Vehicle control method, device, electronic equipment and storage medium
CN112330915B (en) Unmanned aerial vehicle forest fire prevention early warning method and system, electronic equipment and storage medium
CN110083099B (en) Automatic driving architecture system meeting automobile function safety standard and working method
CN105899911A (en) System and method for augmented reality support
US20190389486A1 (en) System and method for detecting objects in an autonomous vehicle
CN112356846A (en) Automatic driving control system and method and vehicle
CN113619599B (en) Remote driving method, system, device and storage medium
CN110789515B (en) System and method for hardware validation in a motor vehicle
CN114627409A (en) Method and device for detecting abnormal lane change of vehicle
CN112526997A (en) Automatic driving perception system and method and vehicle
CN114845267B (en) Sensor data sharing method and device based on Internet of vehicles
CN115743093A (en) Vehicle control method and device, automatic parking auxiliary controller, terminal and system
US20220138889A1 (en) Parking seeker detection system and method for updating parking spot database using same
WO2021125076A1 (en) Information processing device, information processing method, program, image capturing device, and image capturing system
US20200394906A1 (en) Apparatus for assisting driving of a host vehicle based on augmented reality and method thereof
CN209454662U (en) Vehicle environmental sensing device and vehicle
CN113920782B (en) Multi-sensor fusion method applied to parking space detection
CN117496483B (en) Night image recognition method and system
WO2023087248A1 (en) Information processing method and apparatus
US20240020982A1 (en) Road observation apparatus and method for observing road

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210319