CN115619871A - Vehicle positioning method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115619871A
Authority
CN
China
Prior art keywords
point cloud data, target, matching, objects
Legal status: Pending (assumed; Google has not performed a legal analysis)
Application number
CN202211080423.XA
Other languages
Chinese (zh)
Inventor
马鑫军
唐培培
张振林
Current Assignee
China Automotive Innovation Corp
Original Assignee
China Automotive Innovation Corp
Application filed by China Automotive Innovation Corp
Priority to CN202211080423.XA
Publication of CN115619871A

Classifications

    • G Physics → G06 Computing; Calculating or Counting → G06T Image data processing or generation, in general → G06T7/00 Image analysis → G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G Physics → G06 Computing; Calculating or Counting → G06T Image data processing or generation, in general → G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/10 Image acquisition modality → G06T2207/10028 Range image; Depth image; 3D point clouds


Abstract

The application discloses a vehicle positioning method, apparatus, device, and storage medium. The method comprises: acquiring first point cloud data corresponding to each of a plurality of first objects of a target vehicle within a first detection range, the first detection range being the actual detection range of the target vehicle at its current position; acquiring second point cloud data corresponding to each of a plurality of second objects within a second detection range, the second detection range being a virtual detection range around the current position in a preset map; filtering the first objects based on the first point cloud data, the second point cloud data, and the second objects to obtain target objects, where a target object is any first object other than a matching difference object; and determining point cloud positioning information for the target vehicle from the first point cloud data corresponding to the target objects. The first objects and the second objects are road identification objects, and more accurate positioning information is obtained from the target objects, which have a higher matching degree.

Description

Vehicle positioning method, device, equipment and storage medium
Technical Field
The present application relates to the field of vehicle positioning, and in particular, to a vehicle positioning method, apparatus, device, and storage medium.
Background
Intelligent driving provides travel convenience, and intelligently driven vehicles have gradually entered production and application in recent years. While an intelligent driving vehicle is moving, vehicle positioning (for example, the position and orientation of the vehicle) plays an important role in the perception, decision, control, and other modules of automatic driving, and incorrect positioning information can bring unpredictable results.
To solve the vehicle positioning problem, various solutions have been proposed; for example, a high-precision map can be combined with a forward-looking camera to realize lateral positioning against lane lines, but this approach has low positioning accuracy and high cost.
Disclosure of Invention
In order to solve this technical problem, the application discloses a vehicle positioning method in which the first objects in the actual detection range are filtered by combining the first point cloud data from the actual detection range with the second point cloud data from a virtual detection range. Objects with a low matching degree can thereby be filtered out, leaving target objects with a high matching degree, and the point cloud positioning information of the target vehicle is determined from the filtered first point cloud data corresponding to the target objects, yielding more accurate positioning information.
In order to achieve the above object of the invention, the present application provides a vehicle positioning method, including:
acquiring first point cloud data corresponding to a plurality of first objects of a target vehicle in a first detection range, wherein the first detection range is an actual detection range of the target vehicle at the current position;
acquiring second point cloud data corresponding to a plurality of second objects in a second detection range; the second detection range is a virtual detection range of the current position in a preset map;
filtering the first objects based on the first point cloud data, the second point cloud data, and the second objects to obtain a target object, wherein the target object is a first object among the plurality of first objects other than a matching difference object, and a matching difference object is a first object whose first point cloud data matches the second point cloud data while the first object corresponding to that first point cloud data does not match the second object corresponding to the second point cloud data;
determining point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object;
wherein the first object and the second object are road sign objects.
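The four claimed steps can be illustrated end to end with a toy sketch. All function names, the (label, points) object representation, and the overlap tolerance below are illustrative assumptions, not details from the patent:

```python
# Minimal sketch of the claimed flow: objects are (label, points) pairs,
# and a "matching difference object" is a first object whose points overlap
# a second object's points while the semantic labels disagree.

def points_overlap(pts_a, pts_b, tol=0.5):
    """True if any point in pts_a lies within `tol` of a point in pts_b."""
    return any(
        abs(ax - bx) <= tol and abs(ay - by) <= tol
        for ax, ay in pts_a for bx, by in pts_b
    )

def filter_first_objects(first_objs, second_objs):
    """Drop every first object that overlaps a second object of a different label."""
    kept = []
    for label_1, pts_1 in first_objs:
        mismatch = any(
            points_overlap(pts_1, pts_2) and label_1 != label_2
            for label_2, pts_2 in second_objs
        )
        if not mismatch:
            kept.append((label_1, pts_1))
    return kept

# Perceived objects (first detection range) vs. map objects (second range).
first_objects = [
    ("lane_line",   [(0.0, 0.0), (1.0, 0.0)]),
    ("guide_arrow", [(5.0, 2.0)]),            # overlaps a map pole -> filtered
]
second_objects = [
    ("lane_line",    [(0.1, 0.1)]),
    ("utility_pole", [(5.2, 2.1)]),
]

targets = filter_first_objects(first_objects, second_objects)
print([label for label, _ in targets])  # ['lane_line']
```

Here the `guide_arrow` is a matching difference object: its points overlap a map object's points while the semantic labels disagree, so it is removed before positioning.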
In some embodiments, the filtering to obtain the target object includes: acquiring a target point cloud map corresponding to the second point cloud data;
matching the first object and the second object based on the target point cloud map, the first point cloud data and the second point cloud data to obtain an initial matching result;
and screening the target object from the plurality of first objects based on the initial matching result.
In some embodiments, the screening the target object from the plurality of first objects based on the initial matching result comprises:
screening target matching point cloud data of which the first point cloud data are matched with the second point cloud data from the target point cloud map based on the initial matching result;
acquiring a first matching object and a second matching object corresponding to the target matching point cloud data;
under the condition that a first matching object corresponding to the target matching point cloud data is not matched with a second matching object, determining the first matching object corresponding to the target matching point cloud data as the matching difference object;
and screening the target object from the plurality of first objects based on the matched difference object.
In some embodiments, before the filtering the first object based on the first point cloud data, the second point cloud data, and the second object to obtain a target object, the method further comprises:
acquiring a preset matching model and current pose information corresponding to the target vehicle;
obtaining first target pose information and a first target point cloud matching reference value corresponding to the first target pose information based on the preset matching model, the current pose information, the first point cloud data and the second point cloud data;
and under the condition that the first target point cloud matching reference value does not meet a preset reference threshold value, filtering the first object based on the first point cloud data, the second point cloud data and the second object to obtain a target object.
In some embodiments, the determining point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object includes:
acquiring a preset matching model and initial pose information;
positioning the target vehicle based on the preset matching model, the initial pose information, the first point cloud data corresponding to the target object and the second point cloud data to obtain second target pose information of the target vehicle and a second target point cloud matching reference value corresponding to the second target pose information;
and determining the second target pose information as the point cloud positioning information of the target vehicle under the condition that the second target point cloud matching reference value meets a preset reference threshold value.
In some embodiments, the obtaining first point cloud data corresponding to each of a plurality of first objects of the target vehicle within the first detection range includes:
acquiring image information of the target vehicle in the first detection range and a plurality of initial point cloud data;
analyzing the image information to obtain a plurality of first objects in the image information and coordinate information corresponding to each first object;
and performing point cloud conversion processing on a plurality of pieces of coordinate information based on the plurality of pieces of initial point cloud data and the plurality of first objects to obtain first point cloud data corresponding to the plurality of first objects.
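One plausible reading of this conversion step is to select, for each first object detected in the image, the lidar points whose projections fall inside that object's detection box. The pinhole model, the intrinsic matrix, and all names below are assumptions for illustration:

```python
import numpy as np

# Hedged sketch: given a detected object's bounding box in the image and raw
# lidar points already expressed in the camera frame, "point cloud conversion"
# is read here as selecting the lidar points whose pinhole projections fall
# inside the box. The intrinsic matrix K is an illustrative assumption.

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project(points_cam):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def points_in_box(points_cam, box):
    """Return the points whose projections land inside box=(u0, v0, u1, v1)."""
    uv = project(points_cam)
    u0, v0, u1, v1 = box
    mask = (uv[:, 0] >= u0) & (uv[:, 0] <= u1) & (uv[:, 1] >= v0) & (uv[:, 1] <= v1)
    return points_cam[mask]

points = np.array([[0.0, 0.0, 10.0],   # projects to the image centre (640, 360)
                   [5.0, 0.0, 10.0]])  # projects to (1040, 360), outside the box
first_pc = points_in_box(points, box=(600, 320, 680, 400))
print(len(first_pc))  # 1
```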
In some embodiments, the obtaining second point cloud data corresponding to each of a plurality of second objects in a second detection range includes:
acquiring a second detection range of the current position in a preset map and preset conversion configuration information;
obtaining map coordinate information corresponding to a plurality of second objects of the target vehicle in the second detection range;
and performing point cloud conversion processing on the map coordinate information based on preset conversion configuration information to obtain second point cloud data corresponding to the second objects.
In some embodiments, the method further comprises:
acquiring current pose information corresponding to the target vehicle;
and determining target positioning information of the target vehicle based on the point cloud positioning information and the current pose information.
The present application further provides a vehicle positioning device, said device comprising:
the first acquisition module is used for acquiring first point cloud data corresponding to each of a plurality of first objects of a target vehicle within a first detection range, wherein the first detection range is the actual detection range of the target vehicle at the current position;
the second acquisition module is used for acquiring second point cloud data corresponding to a plurality of second objects in a second detection range; the second detection range is a virtual detection range of the current position in a preset map;
the first processing module is used for filtering the first objects based on the first point cloud data, the second point cloud data, and the second objects to obtain a target object; the target object is a first object among the plurality of first objects other than a matching difference object, and a matching difference object is a first object whose first point cloud data matches the second point cloud data while the first object corresponding to that first point cloud data does not match the second object corresponding to the second point cloud data;
the information determining module is used for determining point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object;
wherein the first object and the second object are road sign objects.
The present application further provides a vehicle positioning apparatus comprising a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the vehicle positioning method as described above.
The present application further provides a computer-readable storage medium having at least one instruction or at least one program stored therein, the at least one instruction or the at least one program being loaded by a processor and executing the vehicle localization method as described above.
The embodiment of the application has the following beneficial effects:
according to the vehicle positioning method, the first point cloud data in the actual detection range and the second point cloud data in the virtual detection range are combined to filter the first object in the actual range, so that the object with low matching degree can be filtered out, the target object with high matching degree is obtained, and the point cloud positioning information of the target vehicle is determined according to the filtered first point cloud data corresponding to the target object; more accurate positioning information can be obtained.
Drawings
To more clearly illustrate the vehicle positioning method, apparatus, device, and storage medium described in the present application, the drawings required for the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic view of an implementation environment of a vehicle positioning method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a vehicle positioning method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for determining a target object according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a method for screening a target object according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a method for determining point cloud positioning information according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of a first point cloud data obtaining method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a method for acquiring second point cloud data according to an embodiment of the present disclosure;
FIG. 8 is a schematic flow chart diagram illustrating another vehicle locating method according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a vehicle positioning device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown, where the implementation environment may include:
at least one terminal 01 and at least one server 02. The at least one terminal 01 and the at least one server 02 may perform data communication through a network.
In an alternative embodiment, the terminal 01 may be the performer of the vehicle positioning method. Terminal 01 may include, but is not limited to, vehicle terminals, smart phones, desktop computers, tablet computers, laptop computers, smart speakers, digital assistants, Augmented Reality (AR)/Virtual Reality (VR) devices, smart wearable devices, and other types of electronic devices. The operating system running on terminal 01 may include, but is not limited to, an Android system, an iOS system, Linux, Windows, Unix, and the like.
The server 02 may provide the terminal 01 with the associated address information of the target object and the first address information. Optionally, the server 02 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), a big data and artificial intelligence platform, and the like.
Referring to fig. 2, a schematic flow chart of a vehicle positioning method according to an embodiment of the present application is shown. This specification provides the method steps as described in the embodiments or flow charts, but more or fewer steps may be included based on conventional or non-inventive effort. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only order; in practice, the vehicle positioning method may be executed in the order shown in the embodiments or drawings. Specifically, as shown in fig. 2, the method includes:
s201, first point cloud data corresponding to a plurality of first objects of the target vehicle in the first detection range are acquired.
In the embodiment of the present application, the first detection range may be an actual detection range of the target vehicle at the current position. The current location may be the location of the target vehicle at the current time. The first object may refer to a road identification object; for example, the first object may include a lane line, a speed limit sign, a utility pole, a guide arrow, a traffic sign, a drainage line, or the like.
In one example, the current position of the target vehicle may be acquired based on an information sensing device, where the information sensing device may include an IMU (Inertial Measurement Unit).
In one exemplary embodiment, the current position of the target vehicle may be obtained, and the first detection range is then determined based on the current position and a first preset detection distance. The first preset detection distance may be a preset detection distance of a sensing device on the target vehicle. The number of sensing devices on the target vehicle may be one or more. When there is one sensing device, its detection distance is taken as the first preset detection distance; when there are a plurality of sensing devices, the smallest detection distance among them is taken as the first preset detection distance.
In one example, the first detection range may be determined with the current position as a center and the first preset detection distance as a radius.
In another example, the first detection range may be determined with the current position as a starting point, the heading direction along the head of the target vehicle as a detection length direction, the side of the target vehicle as a detection width direction, and the first preset detection distance as a detection length and a detection width.
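The two range constructions above can be written as simple membership tests. The choice of a half-width of d/2 per side for the rectangular variant is an assumption about how "detection width" is meant, and both formalisations are illustrative:

```python
import math

# Sketch of the two range constructions described above: a circle of radius d
# around the current position, and a rectangle extending distance d ahead of
# the vehicle with total width d (d/2 to each side).

def in_circular_range(point, position, d):
    px, py = point
    cx, cy = position
    return math.hypot(px - cx, py - cy) <= d

def in_rect_range(point, position, heading_rad, d):
    """Rotate into the vehicle frame: x forward along the heading, y lateral."""
    dx = point[0] - position[0]
    dy = point[1] - position[1]
    fwd = dx * math.cos(heading_rad) + dy * math.sin(heading_rad)
    lat = -dx * math.sin(heading_rad) + dy * math.cos(heading_rad)
    return 0.0 <= fwd <= d and abs(lat) <= d / 2.0

pos, heading = (0.0, 0.0), math.pi / 2  # vehicle at the origin, facing +y
print(in_circular_range((3.0, 4.0), pos, 5.0))        # True (distance exactly 5)
print(in_rect_range((0.0, 8.0), pos, heading, 10.0))  # True: 8 m ahead
print(in_rect_range((0.0, -1.0), pos, heading, 10.0)) # False: behind the vehicle
```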
Optionally, taking two sensing devices as an example, the sensing devices may include a first target sensing device and a second target sensing device.
A plurality of first objects of the target vehicle within the first detection range can be acquired based on the first target perception device, and the first point cloud data corresponding to each of the first objects can be acquired based on the second target perception device and the first objects. The first target sensing device can be a camera device, and the second target sensing device can be a radar device; for example, the camera device may include a forward-looking camera or similar, and the radar device may include a lidar or the like.
In one possible exemplary embodiment, before the first target sensing device and the second target sensing device are used for information acquisition, calibration processing may be performed on them based on the entire vehicle coordinate system, so that the first target sensing device and the second target sensing device are located in the same coordinate system.
Specifically, external parameters of the first target sensing device and the second target sensing device may be calibrated based on the entire vehicle coordinate system.
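Calibrating both sensors against the whole-vehicle frame can be pictured as giving each sensor a rigid extrinsic transform into that frame. The 4x4 matrices and mounting positions below are made-up values for illustration, not calibration results from the patent:

```python
import numpy as np

# Sketch of expressing both sensors in the whole-vehicle frame: each sensor
# gets a rigid 4x4 extrinsic T_vehicle<-sensor (illustrative values only).

def rigid(yaw, tx, ty, tz):
    """Build a 4x4 rigid transform: yaw rotation about z plus a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

def to_vehicle(T, pts):
    """Apply a 4x4 extrinsic to Nx3 points."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ T.T)[:, :3]

T_lidar = rigid(yaw=0.0, tx=1.5, ty=0.0, tz=2.0)  # lidar on the roof
T_cam = rigid(yaw=0.0, tx=2.0, ty=0.0, tz=1.2)    # camera behind the windshield

# The same physical point seen by both sensors should coincide after calibration.
pt_in_lidar = np.array([[8.5, 0.0, -2.0]])
pt_in_cam = np.array([[8.0, 0.0, -1.2]])
p1 = to_vehicle(T_lidar, pt_in_lidar)
p2 = to_vehicle(T_cam, pt_in_cam)
print(np.allclose(p1, p2))  # True: both map to (10, 0, 0)
```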
S203, second point cloud data corresponding to the plurality of second objects in the second detection range are obtained.
In this embodiment, the second detection range is a virtual detection range of the current position in a preset map. The preset map may refer to a preset high-precision map. The second object may refer to a road marking object, for example, the second object may include a lane line, a speed limit sign, a power pole, a guide arrow, a traffic sign, a drainage line, or the like.
In one exemplary embodiment, the current position of the target vehicle may be obtained, and the second detection range is determined based on the current position and a second preset detection distance, where the second preset detection distance may be a preset detection distance.
In one example, the second detection range may be determined with the current position as a center and the second preset detection distance as a radius.
In another example, the second detection range may be determined with the current position as a starting point, the heading direction along the head of the target vehicle as a detection length direction, the side of the target vehicle as a detection width direction, and the second preset detection distance as the detection length and the detection width.
Optionally, the preset map may be entirely converted into a point cloud map, and second point cloud data corresponding to each of the plurality of second objects in the second detection range is obtained from the point cloud map.
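Once the preset map has been converted to a labelled point cloud, obtaining the second point cloud data reduces to a spatial query around the current position. The labelled-array layout, the map contents, and the radius-style range below are assumptions:

```python
import numpy as np

# Sketch: the preset map, already converted to a labelled point cloud map, is
# queried for all points within the second (virtual) detection range.

map_points = np.array([[10.0, 2.0], [11.0, 2.5], [80.0, 5.0], [12.0, -3.0]])
map_labels = np.array(["lane_line", "lane_line", "traffic_sign", "guide_arrow"])

def second_point_cloud(current_pos, d):
    """All map points (and their labels) within radius d of the current position."""
    dist = np.linalg.norm(map_points - np.asarray(current_pos), axis=1)
    mask = dist <= d
    return map_points[mask], map_labels[mask]

pts, labels = second_point_cloud(current_pos=(10.0, 0.0), d=6.0)
print(sorted(set(labels)))  # ['guide_arrow', 'lane_line']
```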
S205, filtering the first object based on the first point cloud data, the second point cloud data and the second object to obtain a target object.
In the embodiment of the application, the target object is a first object except for a matching difference object in the plurality of first objects, the matching difference object is an object in which the first point cloud data is matched with the second point cloud data, and the first object corresponding to the first point cloud data is not matched with the second object corresponding to the second point cloud data. The matching of the first point cloud data and the second point cloud data can represent that the first point cloud data and the second point cloud data are partially or completely overlapped. A mismatch between a first object corresponding to the first point cloud data and a second object corresponding to the second point cloud data may indicate that the first object and the second object are not identical, e.g., different in type and/or different in semantic label.
In some possible embodiments, after first point cloud data corresponding to each of the plurality of first objects and second point cloud data corresponding to each of the plurality of second objects are obtained, the first object may be directly filtered according to the first point cloud data, the second point cloud data, and the second object, so as to obtain the target object.
In other possible embodiments, a preset matching model and current pose information corresponding to the target vehicle can be obtained; first pose information and a first point cloud matching reference value corresponding to the first pose information are obtained based on the preset matching model, the current pose information, the first point cloud data, and the second point cloud data; and when the first point cloud matching reference value does not meet a preset reference threshold, the first objects are filtered based on the first point cloud data, the second point cloud data, and the second objects to obtain the target object. The preset matching model may refer to a preset algorithm model that matches the first point cloud data with the second point cloud data and calculates a pose; for example, the preset matching model may be an NDT (Normal Distributions Transform) algorithm model. The current pose information may characterize the current position and the orientation of the target vehicle. The first point cloud matching reference value may characterize the matching degree between the first point cloud data and the second point cloud data; correspondingly, the lower the first point cloud matching reference value, the higher the matching degree between the first point cloud data and the second point cloud data.
Specifically, the current pose information, the first point cloud data, and the second point cloud data may be input into a preset matching model, so that the first pose information and a first point cloud matching reference value corresponding to the first pose information may be obtained.
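A full NDT implementation fits per-voxel Gaussians to the second point cloud; as a hedged stand-in that keeps the same convention (a lower reference value means a higher matching degree), the sketch below scores an alignment by the mean squared distance from each first-cloud point to its nearest second-cloud point. The threshold value is illustrative:

```python
import numpy as np

# NDT itself fits per-voxel Gaussian distributions; as a simplified stand-in,
# this reference value is the mean squared distance from each first-cloud
# point to its nearest second-cloud point. As in the text, LOWER is BETTER.

def matching_reference_value(first_pc, second_pc):
    diffs = first_pc[:, None, :] - second_pc[None, :, :]
    sq_dists = (diffs ** 2).sum(axis=2)
    return sq_dists.min(axis=1).mean()

second_pc = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
aligned = np.array([[0.1, 0.0], [1.1, 0.0]])
misaligned = aligned + np.array([3.0, 3.0])

good = matching_reference_value(aligned, second_pc)
bad = matching_reference_value(misaligned, second_pc)

THRESHOLD = 0.5  # illustrative preset reference threshold
print(good <= THRESHOLD, bad <= THRESHOLD)  # True False
```

When the reference value fails the threshold, the patent's flow falls back to filtering out matching difference objects and re-matching on the remaining target objects.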
Optionally, acquiring a preset conversion matrix; matching the first object and the second object based on a preset conversion matrix, the first point cloud data and the second point cloud data to obtain an object matching result; and screening out the target object from the plurality of first objects based on the object matching result.
In one example, the first point cloud data may be converted based on a preset conversion matrix to obtain first target point cloud data; under the condition that the first target point cloud data is matched with the second point cloud data and a first object corresponding to the first target point cloud data is not matched with a second object corresponding to the second point cloud data, determining the first object corresponding to the first target point cloud data as a matching difference object; and filtering the matched difference objects in the plurality of first objects to obtain the target object.
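The conversion step in this example can be sketched as applying a homogeneous transform to the first point cloud before the overlap test. The particular 3x3 matrix below (a pure translation) is an illustrative assumption:

```python
import numpy as np

# Sketch of the conversion step: a preset 3x3 homogeneous transform (here an
# assumed pure translation correcting pose drift) maps the first point cloud
# data into the map frame before the overlap test.

T = np.array([[1.0, 0.0, 0.4],   # illustrative preset conversion matrix:
              [0.0, 1.0, -0.2],  # shift by (+0.4, -0.2)
              [0.0, 0.0, 1.0]])

def convert(points_2d):
    """Apply the homogeneous transform T to Nx2 points."""
    homo = np.hstack([points_2d, np.ones((len(points_2d), 1))])
    return (homo @ T.T)[:, :2]

first_pc = np.array([[9.6, 0.2], [9.7, 0.3]])     # perceived lane line
second_pc = np.array([[10.0, 0.0], [10.1, 0.1]])  # map lane line

converted = convert(first_pc)
nearest = np.linalg.norm(converted[:, None] - second_pc[None], axis=2).min()
print(round(float(nearest), 3))  # 0.0 -> the clouds overlap after conversion
```

When the converted first point cloud overlaps the second point cloud in this way but the two objects' semantic labels differ, the first object would be marked as a matching difference object and filtered out.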
And S207, determining point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object.
In this embodiment of the application, the point cloud positioning information may be positioning information generated according to the first point cloud data.
In some exemplary embodiments, a pose generation model may be acquired; and determining point cloud positioning information corresponding to the target vehicle according to the pose generation model and the first point cloud data corresponding to the target object.
In some exemplary embodiments, the point cloud location information may be determined as target location information for the target vehicle; the target location information may refer to location information that the target vehicle finally outputs.
In the embodiment, the first point cloud data in the actual detection range and the second point cloud data in the virtual detection range are combined to filter the first object in the actual range, so that an object with a low matching degree can be filtered out, a target object with a high matching degree can be obtained, and the point cloud positioning information of the target vehicle is determined according to the filtered first point cloud data corresponding to the target object; more accurate positioning information can be obtained.
In some exemplary embodiments, as shown in fig. 3, it is a schematic flowchart of a method for determining a target object provided in the embodiment of the present application; the details are as follows.
S301, acquiring a target point cloud map corresponding to the second point cloud data.
In the embodiment of the application, the target point cloud map can display a plurality of second point cloud data; it may be a map corresponding to the second detection range.
In some example embodiments, a target point cloud map may be generated based on the plurality of second point cloud data.
S303, matching the first object and the second object based on the target point cloud map, the first point cloud data, and the second point cloud data to obtain an initial matching result.
In this embodiment of the application, the initial matching result may refer to the matching result of each point cloud datum and each object in the target point cloud map. The initial matching result may include a first matching result between the first point cloud data and the second point cloud data in the target point cloud map and a second matching result between the first object and the second object in the target point cloud map.
Optionally, the plurality of first point cloud data may be respectively projected to a target point cloud map to obtain a plurality of target point cloud data; respectively matching the plurality of target point cloud data with a plurality of second point cloud data in a target point cloud map to obtain a first matching result; matching a first object corresponding to each of the target point cloud data with a second object corresponding to each of the second point cloud data to obtain a second matching result; an initial matching result is determined based on the first matching result and the second matching result.
S305, screening out a target object from the plurality of first objects based on the initial matching result.
In some exemplary embodiments, target objects that meet a preset condition may be screened out from the plurality of first objects based on the initial matching result. The preset condition may be either of the following: the first point cloud data matches the second point cloud data and the first object corresponding to the matched first point cloud data matches the second object corresponding to the second point cloud data; or the first point cloud data has no matching second point cloud data.
In this embodiment, the first point cloud data acquired within the actual detection range of the target vehicle is matched with the second point cloud data in the target point cloud map, and the matching difference objects are filtered out, so that more accurate first point cloud data can be obtained and the accuracy of the point cloud positioning information is improved.
In some exemplary embodiments, as shown in fig. 4, a schematic flow chart of a method for screening a target object according to an embodiment of the present application is shown; the details are as follows.
S401, based on the initial matching result, target matching point cloud data of the first point cloud data and the second point cloud data are screened out from the target point cloud map;
in this embodiment, the target matching point cloud data may refer to point cloud data obtained by partially or completely overlapping the first point cloud data and the second point cloud data in the target point cloud map.
S403, acquiring a first matching object and a second matching object corresponding to the target matching point cloud data;
in the embodiment of the present application, the first matching object may be any one of the first objects, and the second matching object may be any one of the second objects.
The first matching object and the second matching object corresponding to the target matching point cloud data may refer to a first object corresponding to the first point cloud data and a second object corresponding to the second point cloud data, which are overlapped to obtain the target matching point cloud data.
S405, under the condition that the first matching object corresponding to the target matching point cloud data is not matched with the second matching object, the first matching object corresponding to the target matching point cloud data is determined to be a matching difference object.
S407, screening out a target object from the plurality of first objects based on the matching difference objects.
In the embodiment of the application, matching difference objects can be screened out from the plurality of first objects, and the matching difference objects in the plurality of first objects are filtered out to obtain the target object.
In this embodiment, the target object can be obtained by finding the unmatched matching difference objects in the initial matching result, so that the target object can be screened out quickly.
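The screening of S405 to S407 can be sketched as follows; representing the second matching result as one flag per first object (True = labels matched, False = matching difference object, None = no matching point cloud data) is an assumption of the sketch.

```python
def screen_target_objects(first_objs, match_flags):
    """Sketch of S405-S407: a first object whose point cloud data matched some
    second point cloud data (flag is not None) but whose label disagreed
    (flag is False) is a matching difference object and is filtered out;
    every other first object is kept as a target object."""
    targets = []
    for obj, flag in zip(first_objs, match_flags):
        is_difference = (flag is False)  # point clouds matched, objects did not
        if not is_difference:
            targets.append(obj)
    return targets
```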
In some exemplary embodiments, as shown in fig. 5, a schematic flow chart of a method for determining point cloud positioning information provided in the embodiments of the present application is shown; the details are as follows.
S501, acquiring a preset matching model and initial pose information;
in the embodiment of the present application, the initial pose information may refer to current pose information corresponding to the current position of the target vehicle or pose information generated based on the current pose information in the point cloud data matching process.
S503, positioning the target vehicle based on the preset matching model, the initial pose information, the first point cloud data corresponding to the target object and the second point cloud data to obtain second target pose information of the target vehicle and a second target point cloud matching reference value corresponding to the second target pose information.
In the embodiment of the present application, the initial pose information, the first point cloud data corresponding to the target object, and the second point cloud data can be input into the preset matching model to obtain the second target pose information and the second target point cloud matching reference value corresponding to the second target pose information. The point cloud matching reference value can represent the matching degree between the first point cloud data corresponding to the target object and the second point cloud data; correspondingly, the lower the second target point cloud matching reference value is, the higher the matching degree between the first point cloud data and the second point cloud data is.
And S505, determining the second target pose information as point cloud positioning information of the target vehicle under the condition that the second target point cloud matching reference value meets a preset reference threshold value.
In the embodiment, the vehicle is positioned by the preset matching model and the first point cloud data and the second point cloud data corresponding to the target object, and the accuracy of the positioning information is evaluated according to the target point cloud matching reference value, so that more accurate point cloud positioning information can be obtained.
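A minimal stand-in for the scoring side of the preset matching model in S503 is sketched below; scoring a candidate pose by the mean nearest-neighbour residual (lower value = better match, as stated above) and the 2-D (x, y, yaw) pose parameterisation are illustrative assumptions, not the claimed model.

```python
import numpy as np

def match_reference_value(pose, first_pts, second_pts):
    """Sketch: apply a candidate 2-D pose (x, y, yaw) to the first point cloud
    data of the target object and score it by the mean nearest-neighbour
    distance to the second point cloud data. A lower reference value means a
    higher matching degree, consistent with the text."""
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    moved = first_pts @ R.T + np.array([x, y])   # transform into the map frame
    # pairwise distances, then nearest second point for each transformed point
    dists = np.linalg.norm(moved[:, None, :] - second_pts[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```

A pose optimizer (e.g. an ICP- or NDT-style iteration) would minimize this value; when the minimum falls below the preset reference threshold, the corresponding pose plays the role of the second target pose information.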
In some exemplary embodiments, as shown in fig. 6, a schematic flow chart of a first point cloud data obtaining method provided in the embodiment of the present application is shown, which is as follows.
S601, acquiring image information and a plurality of initial point cloud data of the target vehicle in a first detection range.
In embodiments of the present application, the initial point cloud data may refer to a set of vectors in a three-dimensional coordinate system.
In some exemplary embodiments, image information of the target vehicle within the first detection range may be acquired based on the first target perception device; and acquiring initial point cloud data of the target vehicle in the first detection range based on the second target perception device.
S603, analyzing the image information to obtain a plurality of first objects in the image information and coordinate information corresponding to each first object;
in the embodiment of the present application, semantic analysis can be performed on the image information to obtain the plurality of first objects contained in the image information and the semantic labels corresponding to the first objects; coordinate recognition is then performed on the plurality of first objects in the image information to obtain the coordinate information corresponding to each of the plurality of first objects. The semantic label may characterize the type of the corresponding first object.
And S605, performing point cloud conversion processing on the coordinate information based on the initial point cloud data and the first objects to obtain first point cloud data corresponding to the first objects.
In the embodiment of the application, the coordinate information corresponding to each of the plurality of first objects can be converted through a preset point cloud conversion matrix, so that the plurality of first objects correspond to the plurality of initial point cloud data; and further obtaining first point cloud data corresponding to the first objects.
In one example, the determination of the first point cloud data may be made using Model 1:

Model 1: P1 = R21 · P2 + T21

wherein R21 represents the rotation matrix from the coordinate system corresponding to the second target perception device to the coordinate system corresponding to the first target perception device; T21 represents the translation matrix from the coordinate system corresponding to the second target perception device to the coordinate system corresponding to the first target perception device; P2 represents the point cloud data in the coordinate system corresponding to the second target perception device; and P1 represents the coordinate information of the first object in the coordinate system corresponding to the first target perception device.
In this embodiment, the camera device and the radar device are used in combination, and the first point cloud data corresponding to each of the plurality of first objects within the first detection range is acquired by combining image recognition with the radar point cloud, so that the first point cloud data can be acquired more accurately.
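Model 1 amounts to a rigid transform between the two perception devices' coordinate systems and can be sketched as follows; treating the second target perception device as a lidar and the first as a camera, and the names below, are assumptions for illustration.

```python
import numpy as np

def to_first_device_frame(points_second, R, t):
    """Sketch of Model 1: transform point cloud data from the second target
    perception device's (e.g. lidar) coordinate system into the first target
    perception device's (e.g. camera) coordinate system using the extrinsic
    rotation R (3x3) and translation t (3,). points_second has shape (n, 3)."""
    return points_second @ R.T + t
```

Once the radar points are expressed in the camera frame, they can be associated with the recognized first objects, yielding the first point cloud data corresponding to each first object.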
In some exemplary embodiments, as shown in fig. 7, a schematic flow chart of a method for acquiring second point cloud data provided in the embodiments of the present application is shown, which is as follows.
S701, acquiring a second detection range of the current position in a preset map and preset conversion configuration information.
In this embodiment, the preset conversion configuration information may refer to a conversion matrix for converting map coordinates into point cloud data. The transformation matrix may transform the map coordinates into coordinate values in a world coordinate system.
And S703, acquiring the map coordinate information corresponding to each of a plurality of second objects of the target vehicle in the second detection range.
In the embodiment of the present application, the map coordinate information may be coordinate information of a plurality of second objects in a preset map.
S705, performing point cloud conversion processing on the map coordinate information based on preset conversion configuration information to obtain second point cloud data corresponding to the second objects.
In the embodiment of the application, the map coordinate information corresponding to each of the second objects can be converted by the conversion matrix corresponding to the preset conversion configuration information to obtain the target coordinates corresponding to each of the second objects; and determining the target coordinates corresponding to the second objects as second point cloud data corresponding to the second objects. The target coordinate may be a three-dimensional coordinate in a world coordinate system.
In this embodiment, the coordinates of the identification objects within the second detection range of the preset map are converted into point cloud data, which avoids both the excessively high cost of directly adopting a point cloud map and the excessively large data processing amount of converting the entire preset map into a point cloud map. In this way, the second point cloud data corresponding to each of the plurality of second objects within the second detection range can be acquired quickly and at low cost.
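The conversion of S705 can be sketched as follows; representing the preset conversion configuration information as a single 4x4 homogeneous matrix T and lifting 2-D map coordinates with z = 0 are assumptions of the sketch.

```python
import numpy as np

def map_coords_to_point_cloud(map_coords, T):
    """Sketch of S705: convert map coordinate information of the second objects
    into target coordinates (3-D points in the world coordinate system) using a
    4x4 homogeneous conversion matrix T. map_coords has shape (n, 2)."""
    n = map_coords.shape[0]
    # lift (x, y) map coordinates to homogeneous (x, y, 0, 1)
    homo = np.hstack([map_coords, np.zeros((n, 1)), np.ones((n, 1))])
    world = homo @ T.T
    return world[:, :3]  # the second point cloud data
```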
In some exemplary embodiments, fig. 8 is a schematic flow chart of another vehicle positioning method provided in the embodiments of the present application, which is described in detail below.
S801, acquiring first point cloud data corresponding to a plurality of first objects of the target vehicle in a first detection range, wherein the first detection range is an actual detection range of the target vehicle at the current position.
S803, acquiring second point cloud data corresponding to a plurality of second objects in a second detection range; the second detection range is a virtual detection range of the current position in the preset map.
S805, filtering the first object based on the first point cloud data, the second point cloud data and the second object to obtain a target object; the target object is a first object except for a matching difference object in the plurality of first objects, the matching difference object is an object in which the first point cloud data is matched with the second point cloud data, and the first object corresponding to the first point cloud data is not matched with the second object corresponding to the second point cloud data.
S807, point cloud positioning information corresponding to the target vehicle is determined according to the first point cloud data corresponding to the target object.
S809, acquiring current pose information corresponding to the target vehicle;
in the embodiment of the application, the current pose information can be acquired based on the information sensing equipment; the information sensing device may include an IMU (Inertial Measurement Unit), among others.
The current pose information can also be acquired based on vehicle body CAN (Controller Area Network) bus data.
S811, determining target positioning information of the target vehicle based on the point cloud positioning information and the current pose information.
Optionally, preset fusion configuration information may be obtained; and determining target positioning information of the target vehicle based on the preset fusion configuration information, the point cloud positioning information and the current pose information. The preset fusion configuration information can refer to a filtering denoising model; the filtering denoising model may be an EKF (Extended Kalman Filter) model or an ESKF (Error-State Kalman Filter).
In one example, the point cloud positioning information and the current pose information may be input into a filtering and denoising model to obtain target positioning information of the target vehicle.
Optionally, a first positioning accuracy reference value corresponding to the point cloud positioning information and a second positioning accuracy reference value corresponding to the current pose information may be obtained; and selecting a target positioning accuracy reference value meeting a preset condition from the first positioning accuracy reference value and the second positioning accuracy reference value, and determining positioning information corresponding to the target positioning accuracy reference value as target positioning information. The target positioning accuracy reference value satisfying the preset condition may indicate a positioning accuracy reference value having a larger value.
In the embodiment, the acquired more accurate point cloud positioning information and the current pose information are subjected to fusion filtering and other processing again, so that more accurate positioning information is obtained, and the robustness of the positioning information is improved.
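The optional accuracy-based selection described above can be sketched as follows; the argument names are assumptions, while the "larger reference value wins" comparison mirrors the preset condition stated in the text.

```python
def select_target_positioning(pc_positioning, pc_accuracy, pose_positioning, pose_accuracy):
    """Sketch of the optional branch after S811: select the positioning result
    whose positioning accuracy reference value is larger as the target
    positioning information of the target vehicle."""
    return pc_positioning if pc_accuracy >= pose_accuracy else pose_positioning
```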
An embodiment of the present application further provides a vehicle positioning device, as shown in fig. 9, which is a schematic structural diagram of the vehicle positioning device provided in the embodiment of the present application; specifically, the device comprises:
a first obtaining module 901, configured to obtain first point cloud data corresponding to each of a plurality of first objects of a target vehicle in a first detection range, where the first detection range is an actual detection range of the target vehicle at a current position;
a second obtaining module 902, configured to obtain second point cloud data corresponding to each of a plurality of second objects in a second detection range; the second detection range is a virtual detection range of the current position in a preset map;
a first processing module 903, configured to filter the first object based on the first point cloud data, the second point cloud data, and the second object to obtain a target object; the target object is a first object except for a matching difference object in the plurality of first objects, the matching difference object is an object of which the first point cloud data is matched with the second point cloud data and the first object corresponding to the first point cloud data is not matched with the second object corresponding to the second point cloud data;
an information determining module 904, configured to determine point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object;
wherein the first object and the second object are road identification objects.
In this embodiment, the first processing module 903 includes:
the first acquisition unit is used for acquiring a target point cloud map corresponding to the second point cloud data;
the first processing unit is used for matching the first object and the second object based on the target point cloud map, the first point cloud data and the second point cloud data to obtain an initial matching result;
a screening unit, configured to screen the target object from the plurality of first objects based on the initial matching result;
in an embodiment of the present application, the screening unit includes:
the first screening subunit is used for screening target matching point cloud data matched with the first point cloud data and the second point cloud data from the target point cloud map based on the initial matching result;
the acquisition subunit is used for acquiring a first matching object and a second matching object corresponding to the target matching point cloud data;
a determining subunit, configured to determine, when a first matching object corresponding to the target matching point cloud data does not match a second matching object, the first matching object corresponding to the target matching point cloud data as the matching difference object;
a second screening subunit, configured to screen the target object from the plurality of first objects based on the matching difference object.
In the embodiment of the present application, the method further includes:
the third acquisition module is used for acquiring a preset matching model and current pose information corresponding to the target vehicle;
the first determining module is used for determining first target point cloud matching reference values corresponding to first target position and pose information based on the preset matching model, the current position and pose information, the first point cloud data and the second point cloud data;
and the execution module is used for filtering the first object based on the first point cloud data, the second point cloud data and the second object to obtain a target object under the condition that the first target point cloud matching reference value does not meet a preset reference threshold value.
In this embodiment, the information determining module 904 includes:
the second acquisition unit is used for acquiring a preset matching model and initial pose information;
the second processing unit is used for positioning the target vehicle based on the preset matching model, the initial pose information, the first point cloud data corresponding to the target object and the second point cloud data to obtain second target pose information of the target vehicle and a second target point cloud matching reference value corresponding to the second target pose information;
and the first determining unit is used for determining the second target pose information as the point cloud positioning information of the target vehicle under the condition that the second target point cloud matching reference value meets a preset reference threshold value.
In this embodiment, the first obtaining module 901 includes:
the third acquisition unit is used for acquiring image information of the target vehicle in the first detection range and a plurality of initial point cloud data;
the third processing unit is used for analyzing and processing the image information to obtain a plurality of first objects in the image information and coordinate information corresponding to each first object;
and the fourth processing unit is used for carrying out point cloud conversion processing on the coordinate information based on the initial point cloud data and the first objects to obtain first point cloud data corresponding to the first objects.
In this embodiment, the second obtaining module 902 includes:
the fourth acquisition unit is used for acquiring a second detection range of the current position in a preset map and preset conversion configuration information;
a fifth acquisition unit, configured to acquire map coordinate information corresponding to each of a plurality of second objects of the target vehicle within the second detection range;
and the fifth processing unit is used for carrying out point cloud conversion processing on the map coordinate information based on preset conversion configuration information to obtain second point cloud data corresponding to the second objects.
In the embodiment of the present application, the method further includes:
the fourth acquisition module is used for acquiring the current pose information corresponding to the target vehicle;
and the second determining module is used for determining the target positioning information of the target vehicle based on the point cloud positioning information and the current pose information.
It should be noted that the device and method embodiments in the device embodiment are based on the same inventive concept.
The present application provides a vehicle positioning apparatus, which includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executed to implement the vehicle positioning method according to the foregoing method embodiments.
Further, fig. 10 shows a hardware structure diagram of an electronic device for implementing the vehicle positioning method provided in the embodiment of the present application, where the electronic device may participate in forming or include the vehicle positioning apparatus provided in the embodiment of the present application. As shown in fig. 10, the electronic device 100 may include one or more processors 1002 (shown as 1002a, 1002b, …, 1002n; the processors 1002 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 1004 for storing data, and a transmission device 1006 for communication functions. In addition, the electronic device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 10 is merely illustrative and is not intended to limit the structure of the electronic device. For example, the electronic device 100 may also include more or fewer components than shown in fig. 10, or have a different configuration than shown in fig. 10.
It should be noted that the one or more processors 1002 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuitry may be a single, stand-alone processing module, or incorporated in whole or in part into any of the other elements in the electronic device 100 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of variable resistance termination paths connected to the interface).
The memory 1004 can be used for storing software programs and modules of application software, such as program instructions/data storage devices corresponding to the vehicle positioning method described in the embodiment of the present application, and the processor 1002 executes various functional applications and data processing by running the software programs and modules stored in the memory 1004, so as to implement one of the vehicle positioning methods described above. The memory 1004 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1004 may further include memory located remotely from the processor 1002, which may be connected to the electronic device 100 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 1006 is used for receiving or sending data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the electronic device 100. In one example, the transmission device 1006 includes a network adapter (NIC) that can be connected to other network devices through a base station so as to communicate with the internet. In one embodiment, the transmission device 1006 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the electronic device 100 (or mobile device).
Embodiments of the present application further provide a computer-readable storage medium, which may be disposed in an electronic device to store at least one instruction or at least one program for implementing a vehicle positioning method in the method embodiments, where the at least one instruction or the at least one program is loaded and executed by the processor to implement the vehicle positioning method provided in the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on differences from other embodiments. In particular, as for the apparatus and electronic device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A vehicle positioning method, characterized in that the method comprises:
acquiring first point cloud data corresponding to a plurality of first objects of a target vehicle in a first detection range, wherein the first detection range is an actual detection range of the target vehicle at the current position;
acquiring second point cloud data corresponding to a plurality of second objects in a second detection range; the second detection range is a virtual detection range of the current position in a preset map;
filtering the first object based on the first point cloud data, the second point cloud data and the second object to obtain a target object; the target object is a first object except for a matching difference object in the plurality of first objects, the matching difference object is an object of which the first point cloud data is matched with the second point cloud data and the first object corresponding to the first point cloud data is not matched with the second object corresponding to the second point cloud data;
determining point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object;
wherein the first object and the second object are road sign objects.
2. The vehicle positioning method according to claim 1, wherein the filtering the first object based on the first point cloud data, the second point cloud data, and the second object to obtain a target object comprises:
acquiring a target point cloud map corresponding to the second point cloud data;
matching the first object and the second object based on the target point cloud map, the first point cloud data and the second point cloud data to obtain an initial matching result;
and screening the target object from the plurality of first objects based on the initial matching result.
3. The vehicle localization method of claim 2, wherein the screening the target object from the plurality of first objects based on the initial matching result comprises:
screening out target matching point cloud data in which the first point cloud data is matched with the second point cloud data from the target point cloud map based on the initial matching result;
acquiring a first matching object and a second matching object corresponding to the target matching point cloud data;
under the condition that a first matching object corresponding to the target matching point cloud data is not matched with a second matching object, determining the first matching object corresponding to the target matching point cloud data as the matching difference object;
and screening the target object from the plurality of first objects based on the matched difference object.
4. The vehicle localization method according to claim 1, wherein before the filtering the first object based on the first point cloud data, the second point cloud data, and the second object to obtain a target object, the method further comprises:
acquiring a preset matching model and current pose information corresponding to the target vehicle;
determining first target point cloud matching reference values corresponding to first target position and position information based on the preset matching model, the current position and position information, the first point cloud data and the second point cloud data;
and under the condition that the first target point cloud matching reference value does not meet a preset reference threshold value, filtering the first object based on the first point cloud data, the second point cloud data and the second object to obtain a target object.
5. The vehicle positioning method according to claim 1, wherein the determining point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object comprises:
acquiring a preset matching model and initial pose information;
based on the preset matching model, the initial pose information, the first point cloud data corresponding to the target object and the second point cloud data, positioning the target vehicle to obtain second target pose information of the target vehicle and a second target point cloud matching reference value corresponding to the second target pose information;
and determining the second target pose information as the point cloud positioning information of the target vehicle under the condition that the second target point cloud matching reference value meets a preset reference threshold value.
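A minimal sketch of the positioning step, assuming the "preset matching model" is a translation-only grid search around the initial pose (a real system would use ICP, NDT or similar); all names and parameters are illustrative. It returns the refined pose together with its matching reference value, mirroring the second target pose information and second target point cloud matching reference value above.

```python
import math


def locate_vehicle(first_cloud, map_cloud, initial_pose,
                   offsets=(-0.5, -0.25, 0.0, 0.25, 0.5), inlier_dist=0.2):
    """Refine an initial (x, y) pose by grid search over small translations,
    scoring each candidate by the fraction of vehicle points that land near
    a map point. Returns (best_pose, best_score)."""
    x0, y0 = initial_pose
    best_pose, best_score = initial_pose, -1.0
    for dx in offsets:
        for dy in offsets:
            inliers = sum(
                min(math.dist((px + x0 + dx, py + y0 + dy), q)
                    for q in map_cloud) <= inlier_dist
                for px, py in first_cloud)
            score = inliers / len(first_cloud)
            if score > best_score:
                best_pose, best_score = (x0 + dx, y0 + dy), score
    return best_pose, best_score
```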
6. The vehicle positioning method according to claim 1, wherein the acquiring first point cloud data corresponding to each of a plurality of first objects of the target vehicle within the first detection range comprises:
acquiring image information of the target vehicle in the first detection range and a plurality of initial point cloud data;
analyzing the image information to obtain a plurality of first objects in the image information and coordinate information corresponding to each first object;
and performing point cloud conversion processing on a plurality of pieces of coordinate information based on the plurality of pieces of initial point cloud data and the plurality of first objects to obtain first point cloud data corresponding to the plurality of first objects.
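One way the point cloud conversion above could work, sketched under strong simplifying assumptions (camera and lidar share one frame; the intrinsics are invented), is to project each raw lidar point through a pinhole camera and keep those landing inside a detected object's image bounding box:

```python
def points_in_box(initial_points, bbox,
                  fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Assign raw lidar points (x, y, z) to a detected image object by
    projecting them through an assumed pinhole camera and keeping those
    that fall inside the object's (u0, v0, u1, v1) bounding box."""
    u0, v0, u1, v1 = bbox
    kept = []
    for x, y, z in initial_points:
        if z <= 0:
            continue  # point is behind the camera
        u = fx * x / z + cx
        v = fy * y / z + cy
        if u0 <= u <= u1 and v0 <= v <= v1:
            kept.append((x, y, z))
    return kept
```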
7. The vehicle positioning method according to claim 1, wherein the obtaining second point cloud data corresponding to each of a plurality of second objects in a second detection range comprises:
acquiring a second detection range of the current position in a preset map and preset conversion configuration information;
obtaining map coordinate information corresponding to a plurality of second objects of the target vehicle in the second detection range;
and performing point cloud conversion processing on the map coordinate information based on the preset conversion configuration information to obtain second point cloud data corresponding to each of the second objects.
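The map-side conversion can be sketched as below, assuming (this is not specified by the claim) that the "preset conversion configuration information" holds a height and vertical sampling step so a pole-like landmark's map coordinate expands into a synthetic column of points:

```python
def map_object_to_cloud(xy, config):
    """Turn a landmark's 2-D map coordinate into a synthetic point cloud.
    `config` is an assumed stand-in for the preset conversion configuration:
    total height and vertical step, suitable for pole-like objects."""
    x, y = xy
    height, step = config["height"], config["step"]
    n = int(height / step) + 1
    return [(x, y, i * step) for i in range(n)]
```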
8. The vehicle positioning method according to claim 1, further comprising:
acquiring current pose information corresponding to the target vehicle;
and determining target positioning information of the target vehicle based on the point cloud positioning information and the current pose information.
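The claim names no fusion rule for combining the point cloud positioning information with the current pose; a fixed-weight blend, sketched below with invented names, stands in for what a real system would likely do with a Kalman-style update:

```python
def fuse(point_cloud_pose, current_pose, w=0.7):
    """Blend the point cloud localization result with the vehicle's current
    pose estimate, component-wise, with weight w on the point cloud result.
    Illustrative only; the patent does not specify the fusion method."""
    return tuple(w * p + (1 - w) * c
                 for p, c in zip(point_cloud_pose, current_pose))
```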
9. A vehicle positioning apparatus, characterized in that the apparatus comprises:
a first acquisition module, configured to acquire first point cloud data corresponding to each of a plurality of first objects of a target vehicle in a first detection range, wherein the first detection range is an actual detection range of the target vehicle at the current position;
the second acquisition module is used for acquiring second point cloud data corresponding to a plurality of second objects in a second detection range; the second detection range is a virtual detection range of the current position in a preset map;
the first processing module is used for filtering the first object based on the first point cloud data, the second point cloud data and the second object to obtain a target object; the target object is a first object, among the plurality of first objects, other than a matching difference object, and the matching difference object is a first object whose first point cloud data matches the second point cloud data but which does not match the second object corresponding to that second point cloud data;
the information determining module is used for determining point cloud positioning information corresponding to the target vehicle according to the first point cloud data corresponding to the target object;
wherein the first object and the second object are road sign objects.
10. A vehicle positioning device, characterized in that the device comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the vehicle positioning method according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the vehicle positioning method according to any one of claims 1 to 8.
CN202211080423.XA 2022-09-05 2022-09-05 Vehicle positioning method, device, equipment and storage medium Pending CN115619871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211080423.XA CN115619871A (en) 2022-09-05 2022-09-05 Vehicle positioning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211080423.XA CN115619871A (en) 2022-09-05 2022-09-05 Vehicle positioning method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115619871A true CN115619871A (en) 2023-01-17

Family

ID=84859691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211080423.XA Pending CN115619871A (en) 2022-09-05 2022-09-05 Vehicle positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115619871A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116170779A (en) * 2023-04-18 2023-05-26 西安深信科创信息技术有限公司 Collaborative awareness data transmission method, device and system
CN116170779B (en) * 2023-04-18 2023-07-25 西安深信科创信息技术有限公司 Collaborative awareness data transmission method, device and system
CN116931583A (en) * 2023-09-19 2023-10-24 深圳市普渡科技有限公司 Method, device, equipment and storage medium for determining and avoiding moving object
CN116931583B (en) * 2023-09-19 2023-12-19 深圳市普渡科技有限公司 Method, device, equipment and storage medium for determining and avoiding moving object

Similar Documents

Publication Publication Date Title
CN115619871A (en) Vehicle positioning method, device, equipment and storage medium
US10867189B2 (en) Systems and methods for lane-marker detection
CN111815754B (en) Three-dimensional information determining method, three-dimensional information determining device and terminal equipment
CN110954114B (en) Method and device for generating electronic map, terminal and storage medium
CN111739283B (en) Road condition calculation method, device, equipment and medium based on clustering
CN113592015B (en) Method and device for positioning and training feature matching network
CN114693836A (en) Method and system for generating road element vector
CN111982132A (en) Data processing method, device and storage medium
CN111368860B (en) Repositioning method and terminal equipment
CN113610745A (en) Calibration evaluation parameter acquisition method and device, storage medium and electronic equipment
CN114882115B (en) Vehicle pose prediction method and device, electronic equipment and storage medium
CN114743395B (en) Signal lamp detection method, device, equipment and medium
CN116430404A (en) Method and device for determining relative position, storage medium and electronic device
CN115236645A (en) Laser radar attitude determination method and attitude determination device
CN114674328A (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN111383337B (en) Method and device for identifying objects
CN109816709A (en) Monocular camera-based depth estimation method, device and equipment
CN114088103A (en) Method and device for determining vehicle positioning information
CN111061878A (en) Page clustering method, device, medium and equipment
CN111223139A (en) Target positioning method and terminal equipment
CN113923774B (en) Target terminal position determining method and device, storage medium and electronic equipment
CN114323020B (en) Vehicle positioning method, system, equipment and computer readable storage medium
CN112184813B (en) Vehicle positioning method, device, equipment and storage medium
CN112785645B (en) Terminal positioning method and device and electronic equipment
CN112711965B (en) Drawing recognition method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination