CN115170630B - Map generation method, map generation device, electronic equipment, vehicle and storage medium


Info

Publication number
CN115170630B
CN115170630B (application CN202210778725.8A)
Authority
CN
China
Prior art keywords
point cloud
target
frame
obstacle
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210778725.8A
Other languages
Chinese (zh)
Other versions
CN115170630A (en)
Inventor
袁鹏飞 (Yuan Pengfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202210778725.8A
Publication of CN115170630A
Application granted
Publication of CN115170630B

Classifications

    • G06T 7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/70: Denoising; Smoothing
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/762: Image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06T 2200/04: Indexing scheme for image data processing or generation involving 3D image data
    • G06T 2200/32: Indexing scheme for image data processing or generation involving image mosaicing
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to the technical field of automatic driving and provides a map generation method, a map generation device, an electronic device, a vehicle and a storage medium. The map generation method comprises: acquiring multi-frame image information and multi-frame point cloud information of the surrounding environment of a vehicle; determining, in each environment image, an obstacle region corresponding to a moving obstacle; for each frame of point cloud information, determining a target obstacle point cloud set corresponding to that frame according to the obstacle region and the point cloud data; determining a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data; and generating a point cloud map according to the target point cloud set. Because the target obstacle point cloud set corresponding to the moving obstacle is determined from the obstacle region in the environment image, and the point cloud map is generated from the target point cloud set obtained by removing that obstacle set from the point cloud data, the resulting map contains no moving obstacles, noise in the point cloud map is reduced, and the accuracy of the point cloud map is ensured.

Description

Map generation method, map generation device, electronic equipment, vehicle and storage medium
Technical Field
The disclosure relates to the technical field of automatic driving, and in particular relates to a map generation method, a map generation device, electronic equipment, a vehicle and a storage medium.
Background
With the rapid development of computer technology, high-precision positioning is used ever more widely; in automatic driving, for example, it plays a central role. Currently, pose information of higher precision is mainly provided through SLAM (Simultaneous Localization and Mapping): a point cloud map is generated for subsequent laser positioning, and a vector map is derived from the point cloud map for visual positioning. However, in urban, indoor or campus environments there are many moving obstacles, and these obstacles and their motion trails remain in the point cloud map as noise, which degrades the accuracy of subsequent positioning. How to obtain a high-precision point cloud map that does not include moving obstacles is therefore an important problem to be solved.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a map generation method, apparatus, electronic device, vehicle, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a map generation method, the method including:
acquiring multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle, wherein the image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the plurality of acquisition areas, and the image information corresponds one-to-one with the point cloud information;
determining, in each environment image, an obstacle region corresponding to a moving obstacle in the surrounding environment of the vehicle;
for each frame of point cloud information, determining a target obstacle point cloud set corresponding to that frame of point cloud information according to the obstacle region and the point cloud data;
determining a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data; and
generating a point cloud map according to the target point cloud set.
Optionally, the determining, in each environment image, an obstacle region corresponding to the moving obstacle in the surrounding environment of the vehicle includes:
for each frame of image information, stitching all the environment images included in that frame of image information to obtain a stitched environment image corresponding to that frame, and determining, from the stitched environment image through a pre-trained obstacle detection model, the obstacle region corresponding to the moving obstacle in each environment image included in that frame of image information.
Optionally, the determining, according to the obstacle region and the point cloud data, the target obstacle point cloud set corresponding to the frame of point cloud information includes:
for each acquisition area, projecting each candidate point in the point cloud data corresponding to the acquisition area included in the frame of point cloud information into a target environment image corresponding to the acquisition area to obtain a projection point corresponding to the candidate point, and taking the candidate points whose projection points match the obstacle region as candidate obstacle points, so as to obtain a candidate obstacle point cloud set corresponding to the acquisition area, wherein the target environment image is the environment image, corresponding to the acquisition area, included in the image information corresponding to the frame of point cloud information; and
clustering the candidate obstacle points in the candidate obstacle point cloud set corresponding to each acquisition area to obtain at least one cluster point cloud set, and taking the largest of the at least one cluster point cloud set as the target obstacle point cloud set.
Optionally, the determining, according to the target obstacle point cloud set and the point cloud data, the target point cloud set corresponding to each frame of point cloud information includes:
for each frame of point cloud information, removing the target obstacle point cloud set corresponding to that frame from the point cloud data included in that frame, so as to obtain the target point cloud set corresponding to that frame of point cloud information.
Optionally, the generating a point cloud map according to the target point cloud set includes:
determining, according to the target point cloud set corresponding to each frame of point cloud information, a target odometer pose of the vehicle corresponding to that frame; and
for each frame of point cloud information, stitching the target point cloud set corresponding to that frame according to the target odometer pose corresponding to that frame, so as to obtain the point cloud map.
Optionally, the determining, according to the target point cloud set corresponding to each frame of point cloud information, the target odometer pose of the vehicle corresponding to that frame includes:
extracting feature points from the target points in each target point cloud set to obtain target feature points corresponding to each target point;
determining, according to the target feature points, a candidate odometer pose of the vehicle corresponding to each frame of point cloud information; and
optimizing the candidate odometer pose with a preset optimization algorithm to obtain the target odometer pose.
According to a second aspect of embodiments of the present disclosure, there is provided a map generation apparatus, the apparatus comprising:
the acquisition module is configured to acquire multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle, wherein the image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the plurality of acquisition areas, and the image information corresponds one-to-one with the point cloud information;
a determining module configured to determine, in each environment image, an obstacle region corresponding to a moving obstacle in the surrounding environment of the vehicle;
the determining module is further configured to determine, for each frame of the point cloud information, a target obstacle point cloud set corresponding to the point cloud information of the frame according to the obstacle region and the point cloud data;
the determining module is further configured to determine a target point cloud set corresponding to the point cloud information of each frame according to the target obstacle point cloud set and the point cloud data;
and the generation module is configured to generate a point cloud map according to the target point cloud set.
Optionally, the determining module is configured to:
for each frame of image information, stitch all the environment images included in that frame of image information to obtain a stitched environment image corresponding to that frame, and determine, from the stitched environment image through a pre-trained obstacle detection model, the obstacle region corresponding to the moving obstacle in each environment image included in that frame of image information.
Optionally, the determining module includes:
the first determining submodule is configured to, for each acquisition area, project each candidate point in the point cloud data corresponding to the acquisition area included in the frame of point cloud information into a target environment image corresponding to the acquisition area to obtain a projection point corresponding to the candidate point, and take the candidate points whose projection points match the obstacle region as candidate obstacle points, so as to obtain a candidate obstacle point cloud set corresponding to the acquisition area, wherein the target environment image is the environment image, corresponding to the acquisition area, included in the image information corresponding to the frame of point cloud information;
the second determining submodule is configured to cluster the candidate obstacle points in the candidate obstacle point cloud set corresponding to each acquisition area to obtain at least one cluster point cloud set, and take the largest of the at least one cluster point cloud set as the target obstacle point cloud set.
Optionally, the determining module is configured to:
for each frame of point cloud information, remove the target obstacle point cloud set corresponding to that frame from the point cloud data included in that frame, so as to obtain the target point cloud set corresponding to that frame of point cloud information.
Optionally, the generating module includes:
a third determining submodule configured to determine, according to the target point cloud set corresponding to each frame of point cloud information, a target odometer pose of the vehicle corresponding to that frame;
a stitching submodule configured to stitch, for each frame of point cloud information, the target point cloud set corresponding to that frame according to the target odometer pose corresponding to that frame, so as to obtain the point cloud map.
Optionally, the third determining submodule is configured to:
extract feature points from the target points in each target point cloud set to obtain target feature points corresponding to each target point;
determine, according to the target feature points, a candidate odometer pose of the vehicle corresponding to each frame of point cloud information; and
optimize the candidate odometer pose with a preset optimization algorithm to obtain the target odometer pose.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to perform the steps of the map generation method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a vehicle comprising:
a second processor;
a second memory for storing second processor-executable instructions;
wherein the second processor is configured to perform the steps of the map generation method provided by the first aspect of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the map generation method provided by the first aspect of the present disclosure.
The technical solution provided by the embodiments of the disclosure can have the following beneficial effects:
multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle are first acquired, wherein the image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the plurality of acquisition areas, and the image information corresponds one-to-one with the point cloud information. An obstacle region corresponding to a moving obstacle in the surrounding environment of the vehicle is then determined in each environment image; for each frame of point cloud information, a target obstacle point cloud set corresponding to that frame is determined according to the obstacle region and the point cloud data; a target point cloud set corresponding to each frame of point cloud information is determined according to the target obstacle point cloud set and the point cloud data; and finally a point cloud map is generated according to the target point cloud set. Because the target obstacle point cloud set corresponding to the moving obstacle is determined from the obstacle region in the environment image, and the point cloud map is generated from the target point cloud set obtained by removing that obstacle set from the point cloud data, the resulting map contains no moving obstacles, noise in the point cloud map is reduced, and the accuracy of the point cloud map is ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a map generation method according to an exemplary embodiment.
Fig. 2 is a flowchart of step 103 in the embodiment shown in Fig. 1.
Fig. 3 is a flowchart of step 105 in the embodiment shown in Fig. 1.
Fig. 4 is a block diagram of a map generating apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram of the determining module in the embodiment shown in Fig. 4.
Fig. 6 is a block diagram of the generating module in the embodiment shown in Fig. 4.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment.
FIG. 8 is a functional block diagram of a vehicle, shown in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed in compliance with the applicable data protection laws and policies of the relevant jurisdiction and with the authorization of the owner of the corresponding device.
Before introducing the map generation method, the map generation device, the electronic equipment, the vehicle and the storage medium provided by the disclosure, the application scenario involved in the various embodiments of the disclosure is first described. The application scenario may include a vehicle provided with a plurality of image acquisition devices and a lidar. Each image acquisition device corresponds to one acquisition area in the surrounding environment of the vehicle, namely the area within that device's field of view (so the field of view of each image acquisition device corresponds to one acquisition area), and each image acquisition device acquires image information of its corresponding acquisition area. The lidar emits laser beams at a certain acquisition frequency to acquire point cloud information of the surrounding environment of the vehicle. The visible range of the lidar (360° in total) is divided into a plurality of fields of view, and each lidar field of view corresponds to the field of view of one image acquisition device; in this way, the correspondence between the acquisition areas and the lidar fields of view is established. The image acquisition device may be any device with an image acquisition function, such as a surround-view camera, a camera or an image sensor. For example, four surround-view cameras may be employed, acquiring the acquisition areas in front of, to the left of, to the right of and behind the vehicle, respectively. The lidar may be a multi-line lidar, and the vehicle may be an automobile, which is not limited to a conventional automobile, a pure electric automobile or a hybrid automobile; other types of motor vehicles or non-motor vehicles are also applicable.
Fig. 1 is a flowchart illustrating a map generation method according to an exemplary embodiment. As shown in fig. 1, the method may include the steps of:
In step 101, multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle are acquired. The image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the plurality of acquisition areas, and the image information corresponds one-to-one with the point cloud information.
For example, the moving obstacle in the image information of the surrounding environment of the vehicle can be detected, the detection result mapped into the point cloud information, the moving obstacle removed from the point cloud information, and the cleaned point cloud information then used to generate a high-precision point cloud map free of moving obstacles. Specifically, within a preset time range, each image acquisition device periodically acquires an environment image of its corresponding acquisition area at a specified period, and all environment images acquired by the image acquisition devices within the same period constitute one frame of image information. Meanwhile, within the preset time range, the lidar periodically acquires, at the specified period and according to its different fields of view, the point cloud data corresponding to each acquisition area, and all point cloud data acquired by the lidar within one period constitute one frame of point cloud information. The image information corresponds one-to-one with the point cloud information (that is, the two are associated), and each frame of image information is aligned in time with its corresponding frame of point cloud information.
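As an illustration of this one-to-one association, frames can be paired by nearest timestamp. The following Python snippet is a minimal sketch, not taken from the patent: it assumes each stream carries sorted timestamps and that a pairing is accepted only within a tolerance of about half an acquisition period.

```python
# Minimal sketch of one-to-one frame association by nearest timestamp.
# The tolerance max_dt (roughly half an acquisition period) is an assumption.
from bisect import bisect_left

def associate_frames(image_stamps, cloud_stamps, max_dt=0.05):
    """Pair each image frame with the closest-in-time point cloud frame."""
    pairs = []
    for i, t_img in enumerate(image_stamps):
        j = bisect_left(cloud_stamps, t_img)
        # Candidates are the cloud frames just before and just after t_img.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(cloud_stamps)]
        k = min(candidates, key=lambda k: abs(cloud_stamps[k] - t_img))
        if abs(cloud_stamps[k] - t_img) <= max_dt:
            pairs.append((i, k))
    return pairs

# Example: two 10 Hz streams with a 10 ms offset.
print(associate_frames([0.00, 0.10, 0.20], [0.01, 0.11, 0.21]))
# [(0, 0), (1, 1), (2, 2)]
```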
Furthermore, in order to associate the image information with the point cloud information, the image acquisition devices and the lidar need to be jointly calibrated before the multi-frame image information and the multi-frame point cloud information are acquired. This comprises calibrating the intrinsic parameters of each image acquisition device and the extrinsic parameters between each image acquisition device and the lidar. Intrinsic calibration yields, for each image acquisition device, the projection coefficients and distortion coefficients K_i, together with the conversion relation T_cam→pixel between the camera coordinate system and the pixel coordinate system; extrinsic calibration yields the conversion relation T_lidar→cam between the lidar coordinate system and the camera coordinate system. In addition, when the vehicle is moving, the point cloud data collected by the lidar exhibits motion distortion; to ensure the accuracy of the point cloud information, motion compensation can be applied to the collected point cloud information.
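The patent only states that motion compensation is applied, not how. A minimal sketch under a constant-velocity, constant-yaw-rate assumption, with per-point relative timestamps, might look as follows; the motion model and all names here are illustrative assumptions.

```python
# Minimal sketch of lidar motion compensation (de-skewing) over one sweep,
# assuming constant ego velocity and yaw rate during the sweep.
import numpy as np

def deskew(points, rel_times, v_xyz, yaw_rate, dt_sweep):
    """Warp every point to the sensor pose at the end of the sweep.

    points:    (N, 3) points in the sensor frame
    rel_times: (N,) fraction of the sweep at which each point was measured
    v_xyz:     (3,) ego velocity in the sensor frame [m/s]
    yaw_rate:  ego yaw rate [rad/s]
    dt_sweep:  sweep duration [s]
    """
    out = np.empty_like(points)
    for i, (p, s) in enumerate(zip(points, rel_times)):
        dt = (1.0 - s) * dt_sweep          # remaining time until sweep end
        a = yaw_rate * dt
        c, sn = np.cos(a), np.sin(a)
        R = np.array([[c, -sn, 0.0], [sn, c, 0.0], [0.0, 0.0, 1.0]])
        out[i] = R @ p + v_xyz * dt        # rotate about z, then translate
    return out
```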
In step 102, an obstacle region corresponding to a moving obstacle in the surroundings of the vehicle is determined in each environment image.
In step 103, for each frame of point cloud information, a target obstacle point cloud set corresponding to that frame is determined according to the obstacle region and the point cloud data.
Specifically, after the multi-frame image information and the multi-frame point cloud information are acquired, target detection may be performed, through a target detection algorithm, on the environment images included in each frame of image information, so as to obtain the obstacle region corresponding to the moving obstacle in each environment image included in that frame. The moving obstacle may be, for example, a pedestrian, an animal, a motor vehicle or a non-motor vehicle. Then, for each frame of point cloud information, the obstacle regions corresponding to the environment images included in the image information corresponding to that frame are mapped into the point cloud data included in that frame, so as to obtain the target obstacle point cloud set corresponding to that frame of point cloud information.
In step 104, a target point cloud set corresponding to each frame of point cloud information is determined according to the target obstacle point cloud set and the point cloud data.
In step 105, a point cloud map is generated from the target point cloud set.
For example, for each frame of point cloud information, the target obstacle point cloud set corresponding to that frame may be removed from the point cloud data included in that frame, so as to obtain the target point cloud set corresponding to that frame. Then, feature points are extracted from the target points in the target point cloud set corresponding to each frame of point cloud information to obtain the target feature points corresponding to that frame, and the target odometer pose of the vehicle corresponding to each frame is determined from those target feature points. Finally, according to the target point cloud sets, a SLAM algorithm is used to generate a three-dimensional high-precision point cloud map that contains neither the moving obstacles nor the trails they leave behind, and is therefore free of such noise points.
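The removal amounts to a per-frame set difference. A minimal sketch, assuming points are identified by their array index (an illustrative convention, not specified by the patent):

```python
# Minimal sketch of step 104: subtract the target obstacle point cloud set
# from a frame's point cloud data to obtain the target point cloud set.
import numpy as np

def remove_obstacle_points(cloud, obstacle_indices):
    """Keep every point of `cloud` that is not flagged as an obstacle point."""
    mask = np.ones(len(cloud), dtype=bool)
    mask[list(obstacle_indices)] = False
    return cloud[mask]

frame = np.random.rand(100, 3)      # one frame of point cloud data
target_set = remove_obstacle_points(frame, {3, 17, 42})
print(target_set.shape)             # (97, 3)
```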
It should be noted that, considering the real-time constraints of the target detection algorithm and the SLAM algorithm, two CPUs (Central Processing Units) may be used to ensure that the overall pipeline meets the real-time requirement. For example, the target detection algorithm may run on CPU1, the detected obstacle regions may be sent to CPU2 in real time over a TCP/IP (Transmission Control Protocol/Internet Protocol) connection, and the target obstacle point cloud removal and the SLAM algorithm may run on CPU2.
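As an illustration of this two-process split, the detector side could serialise each frame's obstacle regions and push them over a TCP socket. The host, port and JSON message layout below are assumptions; the patent only names the TCP/IP transport.

```python
# Minimal sketch: CPU1 (detector) streams obstacle boxes to CPU2 (SLAM) over TCP.
import json
import socket

def send_obstacle_regions(sock, frame_id, boxes):
    """boxes: list of (x_min, y_min, x_max, y_max) pixel rectangles."""
    msg = json.dumps({"frame": frame_id, "boxes": boxes}).encode() + b"\n"
    sock.sendall(msg)

# On the detector process (connection details are placeholders):
# sock = socket.create_connection(("slam-host", 9000))
# send_obstacle_regions(sock, 42, [(100, 80, 220, 310)])
```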
In summary, the disclosure first acquires multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle, the image information comprising environment images corresponding to a plurality of acquisition areas, the point cloud information comprising point cloud data corresponding to the plurality of acquisition areas, and the two corresponding one-to-one. It then determines, in each environment image, the obstacle region corresponding to a moving obstacle; determines, for each frame of point cloud information, the target obstacle point cloud set corresponding to that frame according to the obstacle region and the point cloud data; determines the target point cloud set corresponding to each frame according to the target obstacle point cloud set and the point cloud data; and finally generates a point cloud map according to the target point cloud sets. Because the map is built from point cloud data with the moving-obstacle points removed, noise in the point cloud map is reduced and its accuracy is ensured.
Alternatively, step 102 may be implemented as follows:
for each frame of image information, all the environment images included in that frame are stitched to obtain a stitched environment image corresponding to that frame, and the obstacle region corresponding to the moving obstacle in each environment image included in that frame is determined from the stitched environment image through a pre-trained obstacle detection model.
For example, after the multi-frame image information and the multi-frame point cloud information are acquired, all the environment images included in each frame of image information may be stitched to obtain the stitched environment image corresponding to that frame. Further, to ensure that the environment images describe the environment accurately, each environment image may be undistorted, before stitching, using the calibrated distortion coefficients K_i of its image acquisition device. Then, for each frame of image information, the stitched environment image corresponding to that frame is fed to the obstacle detection model, which outputs the obstacle region corresponding to the moving obstacle in each environment image included in that frame.
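A minimal sketch of this preprocessing with OpenCV: undistort each camera image with its calibrated intrinsic matrix and distortion coefficients, then stitch. The side-by-side concatenation (which assumes equal image heights) and the detector stub are illustrative assumptions; the patent fixes neither the stitching layout nor the detection model.

```python
# Minimal sketch: undistort each surround-view image, then stitch the views.
import cv2
import numpy as np

def preprocess_frame(images, intrinsics, dist_coeffs):
    """images, intrinsics (3x3 K_i), dist_coeffs: one entry per camera."""
    undistorted = [
        cv2.undistort(img, K, d)
        for img, K, d in zip(images, intrinsics, dist_coeffs)
    ]
    return np.hstack(undistorted)   # stitched environment image

# stitched = preprocess_frame(images, intrinsics, dist_coeffs)
# obstacle_regions = detector(stitched)   # pre-trained obstacle detection model
```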
Fig. 2 is a flowchart of step 103 in the embodiment shown in Fig. 1. As shown in Fig. 2, step 103 may include the following steps.
In step 1031, for each acquisition area, each candidate point in the point cloud data corresponding to the acquisition area included in the frame of point cloud information is projected into the target environment image corresponding to the acquisition area, so as to obtain the projection point corresponding to the candidate point, and the candidate points whose projection points match the obstacle region are taken as candidate obstacle points, yielding the candidate obstacle point cloud set corresponding to the acquisition area. The target environment image is the environment image, corresponding to the acquisition area, included in the image information corresponding to the frame of point cloud information.
For example, for each acquisition area, each candidate point in the point cloud data corresponding to that area included in the frame of point cloud information can be transferred into the camera coordinate system using the calibrated lidar-to-camera conversion relation T_lidar→cam, and then projected from the camera coordinate system into the pixel coordinate system using the calibrated camera-to-pixel conversion relation T_cam→pixel, so that each candidate point corresponding to the acquisition area is projected into the target environment image corresponding to that area, yielding the projection point corresponding to the candidate point. The coordinates of each projection point in the target environment image can then be compared with the obstacle region, and the candidate points whose projection points lie inside the obstacle region (that is, the projection points matching the obstacle region) are taken as candidate obstacle points, yielding the candidate obstacle point cloud set corresponding to the acquisition area.
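A minimal sketch of step 1031, assuming the calibrated 4x4 lidar-to-camera extrinsic T_cam_lidar and a 3x3 intrinsic matrix K from the joint calibration described earlier; obstacle regions are taken to be axis-aligned pixel rectangles.

```python
# Minimal sketch of step 1031: project lidar candidate points into the target
# environment image and keep those falling inside an obstacle region.
import numpy as np

def select_candidate_obstacle_points(cloud, T_cam_lidar, K, boxes):
    """Return indices of points whose projection lies in any obstacle box."""
    pts_h = np.hstack([cloud, np.ones((len(cloud), 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]             # lidar -> camera
    in_front = pts_cam[:, 2] > 0                           # only points ahead of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                            # perspective divide
    hits = np.zeros(len(cloud), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        hits |= (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
                (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return np.nonzero(hits & in_front)[0]
```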
In step 1032, for each acquisition area, the candidate obstacle points in the candidate obstacle point cloud set corresponding to the acquisition area are clustered to obtain at least one cluster point cloud set, and the largest of the at least one cluster point cloud set is taken as the target obstacle point cloud set.
For example, the obstacle region determined in step 102 is a quadrilateral region, whereas in practice most moving obstacles are irregular. That is, the determined obstacle region may contain objects other than the moving obstacle, but the moving obstacle occupies the major portion of the region. Therefore, for each acquisition area, the candidate obstacle points in the candidate obstacle point cloud set corresponding to that area can be clustered (for example, by Euclidean-distance clustering or a K-means clustering algorithm) to obtain at least one cluster point cloud set, and the largest of these cluster point cloud sets is taken as the target obstacle point cloud set.
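A minimal sketch of step 1032. DBSCAN is used here as a stand-in for the Euclidean-distance clustering the patent mentions; the eps and min_samples values are illustrative assumptions.

```python
# Minimal sketch of step 1032: cluster candidate obstacle points and keep the
# largest cluster as the target obstacle point cloud set.
import numpy as np
from sklearn.cluster import DBSCAN

def largest_cluster(candidate_points, eps=0.5, min_samples=5):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidate_points)
    valid = labels[labels >= 0]              # label -1 marks unclustered noise
    if valid.size == 0:
        return np.empty((0, candidate_points.shape[1]))
    best = np.bincount(valid).argmax()       # label of the largest cluster
    return candidate_points[labels == best]
```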
Fig. 3 is a flowchart of step 105 in the embodiment shown in Fig. 1. As shown in Fig. 3, step 105 may include the following steps.
In step 1051, the target odometer pose of the vehicle corresponding to each frame of point cloud information is determined according to the target point cloud set corresponding to that frame.
In step 1052, for each frame of point cloud information, the target point cloud set corresponding to that frame is stitched according to the target odometer pose corresponding to that frame, so as to obtain the point cloud map.
For example, feature points may be extracted from the target points in each target point cloud set to obtain the target feature points corresponding to each target point; the target feature points may include ground points, pillar points, surface points and the like. Next, the candidate odometer pose of the vehicle corresponding to each frame of point cloud information is determined from the target feature points. One way to do this is as follows: depending on the environment, different weights are assigned to the different kinds of target feature points (for example, when the vehicle travels on a plane, the weight of ground points can be set higher and the weights of pillar points and planar-object points lower); inter-frame matching is then performed, using the target feature points of every two adjacent frames of point cloud information and their weights, to obtain the pose change between every two frames; and the candidate odometer pose corresponding to each frame is determined from these pose changes.
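Once every pair of adjacent frames has yielded a pose change, the candidate odometer pose of each frame is simply the running composition of those changes. A minimal sketch; the weighted inter-frame matching itself is abstracted away, and all names are illustrative.

```python
# Minimal sketch: compose per-pair pose changes into candidate odometer poses.
import numpy as np

def accumulate_odometry(pose_increments):
    """pose_increments: list of 4x4 transforms, one per adjacent frame pair."""
    poses = [np.eye(4)]            # the first frame defines the odometry origin
    for dT in pose_increments:
        poses.append(poses[-1] @ dT)
    return poses                   # candidate odometer pose per frame
```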
The candidate odometer poses inevitably accumulate error, so after the candidate odometer pose corresponding to each frame of point cloud information has been determined, a preset optimization algorithm can be used to optimize the candidate odometer poses, yielding the target odometer pose corresponding to each frame. For example, an optimization problem can be constructed from the historical frames of point cloud information, and the candidate odometer pose of each frame can be refined with a nonlinear optimization method to obtain the target odometer pose corresponding to that frame.
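The structure of such an optimization can be shown on a toy one-dimensional pose graph: odometry constraints between consecutive frames plus one loop-closure constraint, solved by nonlinear least squares. The real system optimises full 6-DoF poses; everything below is an illustrative reduction.

```python
# Minimal sketch of pose optimisation on a toy 1-D pose graph.
import numpy as np
from scipy.optimize import least_squares

odom = [1.0, 1.0, 1.0]       # measured frame-to-frame displacements
loop = (0, 3, 2.9)           # frame 3 re-observes frame 0 at distance 2.9

def residuals(x_rest):
    x = np.concatenate([[0.0], x_rest])   # fix frame 0 at the origin (gauge)
    r = [x[i + 1] - x[i] - d for i, d in enumerate(odom)]
    i, j, d = loop
    r.append(x[j] - x[i] - d)
    return r

x0 = np.cumsum(odom)                      # candidate odometer poses of frames 1..3
result = least_squares(residuals, x0)
print(result.x)                           # optimised (target) odometer poses
```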
Finally, for each frame of point cloud information, the target point cloud set corresponding to that frame is stitched according to the target odometer pose corresponding to that frame using the SLAM algorithm, generating the point cloud map.
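The final stitching step transforms each frame's target point cloud set with its target odometer pose and concatenates the results. A minimal sketch:

```python
# Minimal sketch: stitch per-frame target point cloud sets into one map.
import numpy as np

def stitch_map(target_sets, target_poses):
    """target_sets: list of (N_k, 3) arrays; target_poses: list of 4x4 poses."""
    chunks = []
    for pts, T in zip(target_sets, target_poses):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        chunks.append((T @ pts_h.T).T[:, :3])   # frame coords -> map coords
    return np.vstack(chunks)                    # the point cloud map
```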
Fig. 4 is a block diagram of a map generating apparatus according to an exemplary embodiment. Referring to fig. 4, the map generating apparatus 200 includes an acquisition module 201, a determination module 202, and a generation module 203.
An acquisition module 201 is configured to acquire multi-frame image information and multi-frame point cloud information of the surrounding environment of the vehicle; the image information comprises environment images corresponding to a plurality of acquisition areas, the point cloud information comprises point cloud data corresponding to the plurality of acquisition areas, and the image information corresponds one-to-one with the point cloud information.
The determining module 202 is configured to determine, in each environment image, an obstacle region corresponding to a moving obstacle in the surrounding environment of the vehicle.
The determining module 202 is further configured to determine, for each frame of point cloud information, a target obstacle point cloud set corresponding to the frame of point cloud information according to the obstacle region and the point cloud data.
The determining module 202 is further configured to determine a target point cloud set corresponding to each frame of point cloud information according to the target obstacle point cloud set and the point cloud data.
The generating module 203 is configured to generate a point cloud map according to the target point cloud set.
Optionally, the determining module 202 is configured to:
for each frame of image information, stitch all the environment images included in that frame to obtain a stitched environment image corresponding to that frame, and determine, from the stitched environment image through a pre-trained obstacle detection model, the obstacle region corresponding to the moving obstacle in each environment image included in that frame of image information.
Fig. 5 is a block diagram of the determining module in the embodiment shown in Fig. 4. As shown in Fig. 5, the determining module 202 includes:
The first determining submodule 2021 is configured to, for each acquisition area, project each candidate point in the point cloud data corresponding to the acquisition area included in the frame of point cloud information into a target environment image corresponding to the acquisition area to obtain a projection point corresponding to the candidate point, and take the candidate points whose projection points match the obstacle region as candidate obstacle points, so as to obtain a candidate obstacle point cloud set corresponding to the acquisition area. The target environment image is the environment image, corresponding to the acquisition area, included in the image information corresponding to the frame of point cloud information.
The second determining submodule 2022 is configured to, for each acquisition area, cluster the candidate obstacle points in the candidate obstacle point cloud set corresponding to the acquisition area to obtain at least one cluster point cloud set, and take the largest of the at least one cluster point cloud set as the target obstacle point cloud set.
Optionally, the determining module 202 is configured to:
for each frame of point cloud information, remove the target obstacle point cloud set corresponding to that frame from the point cloud data included in that frame, so as to obtain the target point cloud set corresponding to that frame of point cloud information.
Fig. 6 is a block diagram of the generating module in the embodiment shown in Fig. 4. As shown in Fig. 6, the generating module 203 includes:
The third determining submodule 2031 is configured to determine, according to the target point cloud set corresponding to each frame of point cloud information, the target odometer pose of the vehicle corresponding to that frame.
The stitching submodule 2032 is configured to stitch, for each frame of point cloud information, the target point cloud set corresponding to that frame according to the target odometer pose corresponding to that frame, so as to obtain the point cloud map.
Optionally, the third determining submodule 2031 is configured to:
extract feature points from the target points in each target point cloud set to obtain target feature points corresponding to each target point;
determine, according to the target feature points, a candidate odometer pose of the vehicle corresponding to each frame of point cloud information; and
optimize the candidate odometer pose with a preset optimization algorithm to obtain the target odometer pose.
The specific manner in which the various modules perform their operations has been described in detail in the embodiments of the method and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the map generation method provided by the present disclosure.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 7, an electronic device 800 may include one or more of the following components: a processing component 802, a first memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more first processors 820 to execute instructions to perform all or part of the steps of the map generation method described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The first memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The first memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the first memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
Input/output interface 812 provides an interface between processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the on/off state of the electronic device 800 and the relative positioning of components such as its display and keypad; it may also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the map generation methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as first memory 804 including instructions executable by first processor 820 of electronic device 800 to perform the map generation method described above. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
FIG. 8 is a functional block diagram of a vehicle, shown in an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous mode. For example, the vehicle 600 may obtain environmental information of its surroundings through the perception system 620 and derive an automatic driving strategy based on analysis of the surrounding environmental information to achieve full automatic driving, or present the analysis results to the user to achieve partial automatic driving.
The vehicle 600 may include various subsystems, such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the subsystems and components of vehicle 600 may be interconnected via wires or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. The wireless communication system may also use other wireless protocols, such as various vehicle communication systems; for example, it may include one or more dedicated short-range communications (DSRC) devices for public and/or private data communication between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and a speaker, through which a user may listen to broadcasts or play music in the vehicle. A mobile phone may also communicate with the vehicle to mirror its screen onto the display device; the display device may be a touch screen, which the user may operate by touch.
In some cases, the user's voice signal may be acquired through the microphone, and certain controls of the vehicle 600, such as adjusting the temperature inside the vehicle, may be implemented based on analysis of that signal. In other cases, music may be played to the user through the speaker.
The navigation system 613 may include a map service provided by a map provider, which supplies a navigated travel route for the vehicle 600; the navigation system 613 may be used together with the vehicle's global positioning system 621 and inertial measurement unit 622. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, the perception system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 622, a lidar 623, a millimeter-wave radar 624, an ultrasonic radar 625, and a camera 626. The perception system 620 may also include sensors that monitor the internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (location, shape, direction, speed, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 600.
The global positioning system 621 is used to estimate the geographic location of the vehicle 600.
The inertial measurement unit 622 is configured to sense a change in the pose of the vehicle 600 based on inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of an accelerometer and a gyroscope.
The lidar 623 uses a laser to sense objects in the environment in which the vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, millimeter-wave radar 624 may be used to sense the speed and/or heading of an object in addition to sensing the object.
The ultrasonic radar 625 may utilize ultrasonic signals to sense objects around the vehicle 600.
The camera 626 is used to capture image information of the surrounding environment of the vehicle 600. The camera 626 may be a monocular camera, a binocular camera, a structured-light camera, or a panoramic camera, and the image information it acquires may include still images or video streams.
The decision control system 630 includes a computing system 631 that makes analysis decisions based on information acquired by the perception system 620. The decision control system 630 also includes a vehicle controller 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, a throttle 634, and a braking system 635 for controlling the vehicle 600.
The computing system 631 may be operable to process and analyze the various information acquired by the perception system 620 in order to identify targets, objects, and/or features in the environment surrounding the vehicle 600. The targets may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, structure-from-motion (SfM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the acquired information and derive a control strategy for the vehicle.
The vehicle controller 632 may be configured to coordinate control of the power battery and the engine 641 of the vehicle to enhance the power performance of the vehicle 600.
The steering system 633 is operable to adjust the direction of travel of the vehicle 600. For example, in one embodiment it may be a steering wheel system.
Throttle 634 is used to control the operating speed of engine 641 and thereby the speed of vehicle 600.
The braking system 635 is used to control deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheels 644. In some embodiments, the braking system 635 may convert kinetic energy of the wheels 644 into electrical current. The braking system 635 may take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered movement of the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, or a hybrid engine of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transfer mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other devices, such as a clutch. The drive shaft may include one or more axles that may be coupled to one or more of the wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 may include at least one second processor 651, and the second processor 651 may execute instructions 653 stored in a non-transitory computer-readable medium, such as a second memory 652. In some embodiments, computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed manner.
The second processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the second processor 651 may include a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof. Although FIG. 8 functionally illustrates the processor, memory, and other elements of a computer in the same block, those of ordinary skill in the art will understand that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that may or may not be housed within the same physical enclosure. For example, the memory may be a hard disk drive or other storage medium located in a housing different from that of the computer. Thus, a reference to a processor or computer will be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only the calculations related to that component's function.
In the present disclosure, the second processor 651 may perform the map generation method described above.
In various aspects described herein, the second processor 651 may be located remotely from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the second memory 652 may contain instructions 653 (e.g., program logic), the instructions 653 being executable by the second processor 651 to perform various functions of the vehicle 600. The second memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, the drive system 640.
In addition to instructions 653, the second memory 652 may also store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control the functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, computing platform 650 is operable to provide control over many aspects of vehicle 600 and its subsystems.
Alternatively, one or more of these components may be mounted separately from or associated with vehicle 600. For example, the second memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
The above components are only an example; in practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 8 should not be construed as limiting the embodiments of the present disclosure.
An autonomous car traveling on a road, such as the vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. Such an object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and its respective characteristics, such as its current speed, acceleration, and spacing from the vehicle, may be used to determine the speed to which the autonomous car should adjust.
Alternatively, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., the computing system 631 or the computing platform 650) may predict the behavior of an identified object based on the object's characteristics and the state of the surrounding environment (e.g., traffic, rain, ice on the road). Since the identified objects' behaviors depend on one another, all of the identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 600 can adjust its speed based on the predicted behavior of the identified objects; in other words, the autonomous car can determine what steady state it needs to adjust to (e.g., accelerate, decelerate, or stop) based on that predicted behavior. Other factors may also be considered in determining the speed of the vehicle 600, such as its lateral position in the road on which it is traveling, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on a roadway).
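To make the speed-adjustment logic above concrete, here is a purely illustrative toy policy; the thresholds and the reduction of the predicted behavior to a single "smallest predicted gap" are invented for this example and are not part of the disclosure.

```python
def target_speed(ego_speed, predicted_gaps, min_gap=10.0, a_max=2.0, a_min=-4.0, dt=0.5):
    """Choose accelerate / hold / decelerate from the smallest predicted gap (m)
    to surrounding objects, then integrate over one control step of dt seconds."""
    gap = min(predicted_gaps)
    if gap < min_gap:
        accel = a_min          # too close: brake
    elif gap > 3 * min_gap:
        accel = a_max          # clear road: speed up
    else:
        accel = 0.0            # hold the current steady state
    return max(0.0, ego_speed + accel * dt)
```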
The vehicle 600 may be any of various types of transportation, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, or a train; embodiments of the present disclosure are not particularly limited in this regard.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described map generation method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A map generation method, the method comprising:
acquiring multi-frame image information and multi-frame point cloud information of the surrounding environment of a vehicle, wherein each frame of the image information comprises environment images corresponding to a plurality of acquisition regions, each frame of the point cloud information comprises point cloud data corresponding to the acquisition regions, and the image information corresponds one-to-one to the point cloud information;
determining an obstacle region corresponding to a moving obstacle in the surrounding environment of the vehicle in each environment image;
for each frame of the point cloud information, determining a target obstacle point cloud set corresponding to that frame of point cloud information according to the obstacle region and the point cloud data;
determining a target point cloud set corresponding to each frame of the point cloud information according to the target obstacle point cloud set and the point cloud data; and
generating a point cloud map according to the target point cloud set;
wherein the determining the target obstacle point cloud set corresponding to the frame of point cloud information according to the obstacle region and the point cloud data comprises:
for each acquisition region, projecting each candidate point in the point cloud data corresponding to the acquisition region included in the frame of point cloud information into a target environment image corresponding to the acquisition region to obtain a projection point corresponding to the candidate point, and taking each candidate point whose projection point matches the obstacle region as a candidate obstacle point to obtain a candidate obstacle point cloud set corresponding to the acquisition region, wherein the target environment image is the environment image corresponding to the acquisition region that is included in the image information corresponding to the frame of point cloud information; and
clustering the candidate obstacle points in the candidate obstacle point cloud set corresponding to each acquisition region to obtain at least one cluster point cloud set, and taking the largest cluster point cloud set among the at least one cluster point cloud set as the target obstacle point cloud set;
wherein the generating a point cloud map according to the target point cloud set comprises:
determining a target odometer pose of the vehicle corresponding to each frame of the point cloud information according to the target point cloud set corresponding to that frame of point cloud information; and
for each frame of the point cloud information, stitching the target odometer pose corresponding to that frame of point cloud information with the target point cloud set corresponding to that frame of point cloud information to obtain the point cloud map.
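For concreteness, the projection-and-clustering step of claim 1 can be sketched as follows. This is a minimal illustration, not the claimed implementation: it assumes a pinhole camera with known intrinsic matrix K and lidar-to-camera extrinsics, a binary obstacle mask for the target environment image, and DBSCAN as the clustering algorithm, none of which the claim mandates; all function and parameter names are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_obstacle_points(points, T_cam_lidar, K, obstacle_mask):
    """Project one acquisition region's lidar points into its target environment
    image and keep those whose projection lands on the obstacle mask."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coordinates
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]                   # lidar frame -> camera frame
    in_front = cam[:, 2] > 0                                 # discard points behind the camera
    uv = (K @ cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)                # perspective division -> pixel coords
    h, w = obstacle_mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hits = np.zeros(len(points), dtype=bool)
    idx = np.where(in_front)[0][ok]
    hits[idx] = obstacle_mask[uv[ok, 1], uv[ok, 0]]          # projection point matches obstacle region
    return points[hits], np.where(hits)[0]

def target_obstacle_set(candidates, eps=0.5, min_samples=5):
    """Cluster the candidate obstacle points and keep the largest cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    kept = labels[labels >= 0]                               # -1 is DBSCAN's noise label
    if kept.size == 0:
        return candidates[:0]
    biggest = np.bincount(kept).argmax()
    return candidates[labels == biggest]
```

Returning the candidate indices alongside the points is a convenience for the removal step of claim 3, sketched below.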
2. The method of claim 1, wherein the determining an obstacle region corresponding to a moving obstacle in the surrounding environment of the vehicle in each environment image comprises:
for each frame of the image information, stitching all the environment images included in that frame of image information to obtain a stitched environment image corresponding to that frame of image information, and determining, by a pre-trained obstacle detection model and according to the stitched environment image, the obstacle region corresponding to the moving obstacle in each environment image included in that frame of image information.
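A hedged sketch of this stitch-then-detect flow, assuming equally sized images concatenated horizontally and an arbitrary pre-trained detector returning [x0, y0, x1, y1] boxes; the claim fixes neither the stitching method nor the detector, and `detector` here is a hypothetical stand-in.

```python
import numpy as np

def obstacle_regions_per_image(images, detector):
    """Stitch one frame's environment images side by side, detect obstacles once
    on the stitched image, then map each box back to the sub-image it falls in."""
    offsets = np.cumsum([0] + [im.shape[1] for im in images])
    stitched = np.hstack(images)                 # naive stitching, for illustration only
    regions = [[] for _ in images]
    for x0, y0, x1, y1 in detector(stitched):
        i = int(np.searchsorted(offsets, (x0 + x1) / 2, side="right")) - 1
        i = min(max(i, 0), len(images) - 1)      # assign the box by its center column
        # shift the box into the i-th sub-image's own pixel coordinates
        regions[i].append((x0 - offsets[i], y0, x1 - offsets[i], y1))
    return regions
```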
3. The method of claim 1, wherein the determining the target point cloud set corresponding to each frame of the point cloud information according to the target obstacle point cloud set and the point cloud data comprises:
for each frame of the point cloud information, removing the target obstacle point cloud set corresponding to that frame of point cloud information from the point cloud data included in that frame of point cloud information to obtain the target point cloud set corresponding to that frame of point cloud information.
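The removal in claim 3 is a set difference. A minimal sketch, assuming the obstacle points are identified by their row indices in the frame's point cloud (as returned by the projection sketch after claim 1); the function name is hypothetical.

```python
import numpy as np

def remove_obstacle_points(frame_points, obstacle_indices):
    """Target point cloud set = frame point cloud minus the target obstacle set."""
    keep = np.ones(len(frame_points), dtype=bool)
    keep[obstacle_indices] = False               # drop every point in the obstacle set
    return frame_points[keep]
```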
4. The method of claim 1, wherein the determining the target odometer pose of the vehicle corresponding to each frame of the point cloud information according to the target point cloud set corresponding to that frame of point cloud information comprises:
extracting feature points from the target points in each target point cloud set to obtain target feature points corresponding to each target point;
determining a candidate odometer pose of the vehicle corresponding to each frame of the point cloud information according to the target feature points; and
optimizing the candidate odometer pose using a preset optimization algorithm to obtain the target odometer pose.
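Claim 4 leaves both the feature extractor and the optimization algorithm open. As one plausible reading only, the sketch below uses a LOAM-style curvature measure to pick feature points and a closed-form SVD (Kabsch) alignment of corresponding features as the core of a scan-to-scan candidate pose; a real pipeline would refine the resulting poses with, e.g., a pose-graph or sliding-window least-squares optimizer. All names and thresholds are assumptions.

```python
import numpy as np

def extract_feature_points(scan, k=5, edge_thresh=0.2, plane_thresh=0.02):
    """Split an (N, 3) scan (ordered along the scan line) into high-curvature
    edge points and low-curvature planar points, LOAM-style."""
    n = len(scan)
    curv = np.full(n, np.nan)                    # NaN at the borders excludes them below
    for i in range(k, n - k):
        # curvature proxy: deviation of point i from the sum of its 2k neighbours
        diff = 2 * k * scan[i] - scan[i - k:i + k + 1].sum(axis=0) + scan[i]
        curv[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]) + 1e-9)
    return scan[curv > edge_thresh], scan[curv < plane_thresh]

def rigid_align(src, dst):
    """Closed-form rigid alignment of corresponding feature points (Kabsch);
    returns R, t such that dst is approximately R @ src + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```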
5. A map generation apparatus, the apparatus comprising:
an acquisition module configured to acquire multi-frame image information and multi-frame point cloud information of the surrounding environment of a vehicle, wherein each frame of the image information comprises environment images corresponding to a plurality of acquisition regions, each frame of the point cloud information comprises point cloud data corresponding to the acquisition regions, and the image information corresponds one-to-one to the point cloud information;
a determining module configured to determine an obstacle region corresponding to a moving obstacle in the surrounding environment of the vehicle in each environment image;
the determining module being further configured to determine, for each frame of the point cloud information, a target obstacle point cloud set corresponding to that frame of point cloud information according to the obstacle region and the point cloud data;
the determining module being further configured to determine a target point cloud set corresponding to each frame of the point cloud information according to the target obstacle point cloud set and the point cloud data; and
a generation module configured to generate a point cloud map from the target point cloud set;
wherein the determining module comprises:
a first determining submodule configured to project each candidate point in the point cloud data corresponding to each acquisition region included in the frame of point cloud information into a target environment image corresponding to the acquisition region to obtain a projection point corresponding to the candidate point, and to take each candidate point whose projection point matches the obstacle region as a candidate obstacle point to obtain a candidate obstacle point cloud set corresponding to the acquisition region, wherein the target environment image is the environment image corresponding to the acquisition region that is included in the image information corresponding to the frame of point cloud information; and
a second determining submodule configured to cluster the candidate obstacle points in the candidate obstacle point cloud set corresponding to each acquisition region to obtain at least one cluster point cloud set, and to take the largest cluster point cloud set among the at least one cluster point cloud set as the target obstacle point cloud set;
wherein the generation module comprises:
a third determining submodule configured to determine a target odometer pose of the vehicle corresponding to each frame of point cloud information according to the target point cloud set corresponding to that frame of point cloud information; and
a stitching submodule configured to stitch the target odometer pose corresponding to each frame of point cloud information with the target point cloud set corresponding to that frame of point cloud information to obtain the point cloud map.
6. An electronic device, comprising:
a first processor;
a first memory for storing first processor-executable instructions;
wherein the first processor is configured to perform the steps of the method of any one of claims 1 to 4.
7. A vehicle, characterized by comprising:
a second processor;
a second memory for storing second processor-executable instructions;
wherein the second processor is configured to perform the steps of the method of any one of claims 1 to 4.
8. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1 to 4.
CN202210778725.8A 2022-06-30 2022-06-30 Map generation method, map generation device, electronic equipment, vehicle and storage medium Active CN115170630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210778725.8A CN115170630B (en) 2022-06-30 2022-06-30 Map generation method, map generation device, electronic equipment, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN115170630A (en) 2022-10-11
CN115170630B (en) 2023-11-21

Family

ID=83492064

Country Status (1)

Country Link
CN (1) CN115170630B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883496B (en) * 2023-06-26 2024-03-12 小米汽车科技有限公司 Coordinate reconstruction method and device for traffic element, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113639745A (en) * 2021-08-03 2021-11-12 北京航空航天大学 Point cloud map construction method and device and storage medium
CN114353799A (en) * 2021-12-30 2022-04-15 武汉大学 Indoor rapid global positioning method for unmanned platform carrying multi-line laser radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cheng Zou et al.; "Static map reconstruction and dynamic object tracking for a camera and laser scanner system"; https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/iet-cvi.2017.0308; pp. 384-392 *

Similar Documents

Publication Publication Date Title
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
CN115100377B (en) Map construction method, device, vehicle, readable storage medium and chip
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN114863717B (en) Parking stall recommendation method and device, storage medium and vehicle
CN114756700B (en) Scene library establishing method and device, vehicle, storage medium and chip
CN114937351B (en) Motorcade control method and device, storage medium, chip, electronic equipment and vehicle
CN114880408A (en) Scene construction method, device, medium and chip
CN114973178A (en) Model training method, object recognition method, device, vehicle and storage medium
CN114862931A (en) Depth distance determination method and device, vehicle, storage medium and chip
CN115221260B (en) Data processing method, device, vehicle and storage medium
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN115214629B (en) Automatic parking method, device, storage medium, vehicle and chip
CN114821511B (en) Rod body detection method and device, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN114771514B (en) Vehicle running control method, device, equipment, medium, chip and vehicle
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN114789723B (en) Vehicle running control method and device, vehicle, storage medium and chip
CN115042813B (en) Vehicle control method and device, storage medium and vehicle
CN115219151B (en) Vehicle testing method, system, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant