CN116086432A - Pose correction method and device, electronic equipment, storage medium and vehicle - Google Patents

Pose correction method and device, electronic equipment, storage medium and vehicle

Info

Publication number
CN116086432A
CN116086432A (application CN202211686341.XA)
Authority
CN
China
Prior art keywords
pose
score
candidate
determining
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211686341.XA
Other languages
Chinese (zh)
Inventor
吴家征
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd filed Critical Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202211686341.XA priority Critical patent/CN116086432A/en
Publication of CN116086432A publication Critical patent/CN116086432A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The disclosure provides a pose correction method, a pose correction device, electronic equipment, a storage medium and a vehicle, and relates to the technical field of artificial intelligence, in particular to the fields of autonomous driving, computer vision and the like. The specific implementation scheme is as follows: in response to vehicle start, determining a plurality of candidate poses, and determining a first pose of the plurality of candidate poses as an initial pose for navigation; determining a score for each of the plurality of candidate poses based on received sensing information; and determining a second pose of the plurality of candidate poses as the initial pose for navigation when a comparison result between the score of the second pose and the score of the first pose meets a preset condition. According to the embodiments of the disclosure, the pose is dynamically corrected during navigation, and the correction process reuses the candidate poses generated during initialization when the vehicle is started, which reduces the computation and memory usage of real-time correction.

Description

Pose correction method and device, electronic equipment, storage medium and vehicle
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly to the fields of autonomous driving, computer vision, and the like.
Background
After an autonomous vehicle is started, an optimal pose is usually determined by an algorithm and used as the initial pose. Thereafter, the vehicle continues to be positioned based on this initial pose (also known as trajectory tracking). In practical applications, if the initial pose has a large error relative to the true pose of the vehicle, the trajectory tracking algorithm cannot correct it, and the error persists as an initial error. When scenes are similar, for example in a long straight corridor, the subsequent trajectory tracking does not fail outright; the positioning module continuously outputs an erroneous pose, and when this erroneous pose is passed to the downstream planning and control modules, situations such as hitting a wall or parking in the wrong garage space may occur. Based on this, it is necessary to consider how to correct the initial pose currently in use.
Disclosure of Invention
The present disclosure provides a pose correction method, a pose correction device, an electronic device, a storage medium and a vehicle.
According to an aspect of the present disclosure, there is provided a pose correction method, including:
in response to vehicle launch, determining a plurality of candidate poses and determining a first pose of the plurality of candidate poses as an initial pose for navigation;
determining a score for each of the plurality of candidate poses based on the received sensory information;
and determining a second pose as an initial pose for navigation under the condition that a comparison result between the score of the second pose and the score of the first pose in the plurality of candidate poses meets a preset condition.
According to another aspect of the present disclosure, there is provided a pose correction apparatus including:
an initialization module for determining a plurality of candidate poses in response to vehicle start, and determining a first pose of the plurality of candidate poses as an initial pose for navigation;
a score determination module for determining a score for each of the plurality of candidate poses based on the received sensory information;
and the correction module is used for determining the second pose as the initial pose for navigation under the condition that the comparison result between the scores of the second pose and the scores of the first pose in the plurality of candidate poses meets the preset condition.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided an autonomous vehicle comprising an electronic device according to any one of the embodiments of the present disclosure.
According to the technical solution of the embodiments of the present disclosure, after the vehicle is started and the first pose is determined as the initial pose, the plurality of candidate poses are retained, the score of each candidate pose is determined according to subsequently received sensing information, and if a second pose whose score satisfies the preset condition exists, the second pose replaces the first pose as the initial pose for navigation. The method thus dynamically corrects the pose during navigation, and the correction process reuses the candidate poses generated during initialization when the vehicle is started, which reduces the computation and memory usage of real-time correction.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flow chart of a pose correction method according to an embodiment of the present disclosure;
fig. 2 is a flow chart of a method for correcting a pose according to another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a candidate pose tracking trajectory in one example application of an embodiment of the present disclosure;
FIG. 4 is a flow chart of a pose correction method in an application example of an embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of a pose correction apparatus provided by an embodiment of the present disclosure;
FIG. 6 is a schematic block diagram of a pose correction apparatus provided by another embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device for implementing a pose correction method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
To facilitate understanding of the pose correction method provided by the embodiments of the present disclosure, related technologies are described below. These related technologies may optionally be combined with the technical solutions of the embodiments of the present disclosure, and such combinations all fall within the protection scope of the embodiments of the present disclosure.
In the related art, when an initial pose has already been obtained through a series of complex algorithms, global relocalization can be performed based on a bag-of-words or scene recognition algorithm to obtain the initial pose again. However, relocalization based solely on bag-of-words or scene recognition disregards the previously obtained initial pose, which amounts to another fast initialization; the resulting solution is quite unstable and jumps when the scene changes significantly, which is unfavorable for the downstream planning and control modules. Moreover, re-running a precise initialization while the vehicle is driving typically incurs a large amount of computation, which is unfavorable for real-time operation.
In addition, a particle filtering method may be used. However, when the particles have not converged, particle filtering generates particles randomly over the entire map; in a garage scene the high scene similarity makes convergence difficult, and because the particle positions are random, many particles are worthless yet consume a large amount of computation and memory. The particle filtering method therefore has significant limitations.
Embodiments of the present disclosure can solve at least one of the above problems. Fig. 1 shows a flow chart of a pose correction method according to an embodiment of the present disclosure. The method may be applied to an electronic device that may be deployed on an autonomous vehicle. In some alternative implementations, the electronic device may implement the pose correction method of the embodiments of the present disclosure by way of a processor invoking computer readable instructions stored in a memory. As shown in fig. 1, the method may include:
step S110, responding to the starting of the vehicle, determining a plurality of candidate poses, and determining a first pose in the plurality of candidate poses as an initial pose for navigation;
step S120, determining the score of each candidate pose in the plurality of candidate poses based on the received sensing information;
step S130, determining the second pose as the initial pose for navigation under the condition that the comparison result between the scores of the second pose and the scores of the first pose in the plurality of candidate poses meets the preset condition.
Illustratively, in step S110 above, the plurality of candidate poses may be determined using a preset algorithm, for example a bag-of-words algorithm or a scene recognition algorithm. Alternatively, the first pose may be determined among the plurality of candidate poses by scoring; for example, a score is given to each candidate pose while the plurality of candidate poses are determined by the preset algorithm, the pose with the highest score is taken as the first pose, and this pose is determined as the initial pose for navigation.
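For illustration only, the selection of the first pose from the scored candidates might look like the following minimal sketch. The (x, y, yaw) planar representation, the CandidatePose structure and the select_first_pose helper are assumptions introduced here; the candidate generation itself (bag-of-words or scene recognition) is outside the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class CandidatePose:
    pose: tuple                    # (x, y, yaw), an assumed planar representation
    score: float = 0.0             # score given by the initialization algorithm
    trajectory: list = field(default_factory=list)  # hypothesis-tracking trajectory
    points: list = field(default_factory=list)      # accumulated local point cloud

def select_first_pose(candidates):
    """Pick the highest-scoring candidate as the initial pose for navigation,
    while all candidates are kept for subsequent hypothesis tracking."""
    if not candidates:
        raise ValueError("initialization produced no candidate poses")
    return max(candidates, key=lambda c: c.score)

# Usage: candidates would come from a bag-of-words / scene-recognition step.
candidates = [CandidatePose(pose=(1.0, 2.0, 0.1), score=0.8),
              CandidatePose(pose=(5.0, 2.1, 0.0), score=0.6)]
first_pose = select_first_pose(candidates)   # becomes the navigation initial pose
```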
In the embodiment of the disclosure, after determining an initial pose among a plurality of candidate poses, each candidate pose is reserved, and a score of each candidate pose is determined based on subsequently received sensing information. The sensing information may be from sensors in the vehicle, including cameras, lidar, etc. By way of example, the sensed information may include images acquired by a camera, point clouds acquired by a lidar, and the like.
Alternatively, the electronic device may periodically receive the sensory information, determine a score for each candidate pose each time the sensory information is received, and determine whether to replace the pose for navigation. For example, each time an observed image of a camera is received, a score for each candidate pose is determined.
According to embodiments of the present disclosure, a second pose of the plurality of candidate poses is compared to a first pose currently used as a navigation initial pose. The second pose may be any other candidate pose, or may be a specific pose, for example, a pose with the highest score among multiple candidate poses. And under the condition that the comparison result meets the preset condition, using the second pose to replace the first pose as the initial pose of navigation. The preset condition is, for example, that the score of the second pose is greater than the score of the first pose, or that the difference between the score of the second pose and the score of the first pose is greater than a preset threshold, or the like.
According to the above method, after the vehicle is started and the first pose is determined as the initial pose, the plurality of candidate poses are retained, the score of each candidate pose is determined according to subsequently received sensing information, and if a second pose whose score satisfies the preset condition exists, the second pose replaces the first pose as the initial pose for navigation. The method thus dynamically corrects the pose during navigation, and the correction process reuses the candidate poses generated during initialization when the vehicle is started, which reduces the computation and memory usage of real-time correction.
It should be noted that the above processing may be performed iteratively. For example, after the second pose is determined as the new initial pose and its trajectory is tracked for navigation, the second pose is regarded as the new first pose; when new sensing information is received, each candidate pose can again be scored with that information, and whether to replace the first pose with another pose is decided according to the scores.
In an exemplary embodiment, the step S120, determining the score of each candidate pose of the plurality of candidate poses based on the received sensing information, includes: determining a tracking track of each candidate pose of the plurality of candidate poses based on the received sensing information; based on the local point cloud corresponding to the tracking track of each candidate pose, the global point cloud is obtained by splicing; and determining the score of each candidate pose based on the local point cloud and the global point cloud corresponding to the tracking track of each candidate pose.
The tracking track of the candidate pose is a track obtained by tracking the track by taking the candidate pose as an initial pose. Since each candidate pose is not used for navigation, the above-described trajectory tracking may also be referred to as hypothesis tracking. In order to facilitate understanding of this processing manner, fig. 2 is a schematic flow chart of a pose correction method according to another embodiment of the present disclosure. As shown in fig. 2, the method may include:
step S201, starting a vehicle;
step S202, generating a plurality of candidate poses by using algorithms such as word bags or scene recognition and the like;
step S203, selecting the first pose with the highest score as the initial pose of the vehicle;
step S204, the vehicle enters a navigation mode according to the initial pose;
step S205, all candidate poses (including the optimal pose) are put into a hypothesis tracking state.
That is, as the vehicle moves, a respective trajectory and local point cloud are generated for each candidate pose, and the score for each candidate pose is determined based on the comparison of the local point cloud corresponding to the generated trajectory and the global point cloud.
By adopting this embodiment, sensing information is collected dynamically and the tracking trajectory and local point cloud of each candidate pose are generated; as the vehicle drives, the correction accuracy for each candidate pose gradually improves, so the initial positioning error can be gradually eliminated while the vehicle is running, greatly reducing the failure rate of autonomous driving during navigation.
In an exemplary embodiment, determining a tracking trajectory for each of a plurality of candidate poses based on received sensory information includes: based on the received observation image and the observation point cloud, obtaining the real pose of the vehicle; determining an incremental pose of the vehicle based on the real pose; and determining the tracking track of each candidate pose based on the increment pose and the observation point cloud.
The observation image may refer to an image acquired in real time by a camera of the vehicle. The observation point cloud may refer to a point cloud acquired in real time by a lidar of the vehicle. For example, PnP (Perspective-n-Point) matching and solving can be performed based on the pixel points in the observation image and the corresponding points in the observation point cloud, to obtain the real pose of the vehicle, that is, the pose of the vehicle in the global coordinate system. The current real pose is compared with the real pose determined when the previous frame of image was received, to obtain the incremental pose of the vehicle. Meanwhile, the current observation point cloud is triangulated to form a local point cloud, and the tracking trajectory of each candidate pose and its corresponding point cloud are updated according to the incremental pose and the local point cloud.
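A minimal sketch of this incremental-pose update for planar (x, y, yaw) poses is given below. The real pose would come from the PnP matching described above and is simply taken as an input here; the SE(2) composition used to propagate each candidate trajectory is an assumption for illustration, since the patent does not fix a particular parameterization.

```python
import numpy as np

def se2_matrix(x, y, yaw):
    """Homogeneous transform for a planar pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def incremental_pose(prev_global_pose, current_global_pose):
    """Incremental pose (delta-p) between two consecutive real poses from PnP."""
    T_prev = se2_matrix(*prev_global_pose)
    T_curr = se2_matrix(*current_global_pose)
    T_delta = np.linalg.inv(T_prev) @ T_curr
    yaw = np.arctan2(T_delta[1, 0], T_delta[0, 0])
    return T_delta[0, 2], T_delta[1, 2], yaw

def update_candidate(candidate_pose, delta_pose, local_points_body):
    """Propagate one candidate hypothesis by delta-p and transform the newly
    triangulated local points (body frame) into that candidate's map frame."""
    T_new = se2_matrix(*candidate_pose) @ se2_matrix(*delta_pose)
    new_pose = (T_new[0, 2], T_new[1, 2], np.arctan2(T_new[1, 0], T_new[0, 0]))
    pts_h = np.hstack([local_points_body, np.ones((len(local_points_body), 1))])
    pts_map = (T_new @ pts_h.T).T[:, :2]
    return new_pose, pts_map
```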
By adopting the embodiment, the accuracy of tracking the track can be improved, so that the score of each candidate pose can be more accurately determined, the initial pose for navigation can be replaced at a proper time, excessive jump of the pose is avoided, and the stability of a positioning system is improved.
In an exemplary embodiment, determining the score of each candidate pose based on the local point cloud and the global point cloud corresponding to the tracking trajectory of each candidate pose includes: matching a plurality of points in the local point cloud corresponding to the tracking track of each pose with a plurality of points in the global point cloud to obtain a plurality of matching point pairs; and obtaining the score of each candidate pose based on the distance of each matching point pair of the plurality of matching point pairs.
Specifically, points closest to each point in each local point cloud may be found in the global point cloud, thereby forming a plurality of matching point pairs. Then, the Score of each candidate pose is determined based on the following formula:
(Scoring formula, shown as an image in the original publication: the score is computed from the distances of the n matching point pairs between the local point cloud and the global point cloud.)
wherein P_local is a point in the local point cloud corresponding to the candidate pose, P_global is the matched point in the global point cloud, and n is the number of matching point pairs between the local point cloud corresponding to the candidate pose and the global point cloud. According to the formula, the score is high when the degree of coincidence between the local point cloud and the global point cloud is high (the distances are short). Therefore, the trajectory tracking effect corresponding to each pose can be accurately evaluated through the score, which further ensures that the initial pose for navigation is replaced at an appropriate time and improves the stability of the positioning system.
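Because the scoring formula itself is only available as an image in the publication, the sketch below uses one plausible instantiation: a score derived from the nearest-neighbour distances between each local point and the global cloud, averaged over the n matching pairs. It matches the stated behaviour (high score when the clouds coincide) but is not necessarily the patent's exact expression.

```python
import numpy as np

def score_candidate(local_points, global_points):
    """Score one candidate pose from the local point cloud of its trajectory.

    Each local point (P_local) is matched to its nearest global point (P_global);
    n is the number of matching point pairs.  The exponential averaging below is
    an illustrative choice, not the patent's exact formula: it is close to 1 when
    the matched distances are small and falls toward 0 as they grow.
    """
    local_points = np.asarray(local_points, dtype=float)
    global_points = np.asarray(global_points, dtype=float)
    if len(local_points) == 0 or len(global_points) == 0:
        return 0.0
    # Brute-force nearest-neighbour search; a KD-tree would be used in practice.
    d2 = ((local_points[:, None, :] - global_points[None, :, :]) ** 2).sum(axis=-1)
    distances = np.sqrt(d2.min(axis=1))           # one matching pair per local point
    n = len(distances)
    return float(np.sum(np.exp(-distances)) / n)  # high when the clouds coincide
```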
In an exemplary embodiment, the pose correction method may further include: deleting, from the plurality of candidate poses, any pose whose tracking trajectory hits a wall. Illustratively, when current sensing information is received, if the tracking trajectory of a certain candidate pose is determined to hit a wall based on that sensing information, the candidate pose is deleted, so it no longer needs to be scored; the next time sensing information is received, that candidate pose no longer requires trajectory updating or scoring.
FIG. 3 shows a schematic diagram of a tracking trajectory for each candidate pose in one example application. As shown in fig. 3, in a garage scene, the trajectories of the plurality of candidate poses include a trajectory 301, a trajectory 302, and a trajectory 303. It can be seen that track 302 and track 303 hit the wall, and candidate poses corresponding to those tracks that have hit the wall can be deleted. By adopting the embodiment, the candidate pose which does not meet the requirement can be deleted in time, so that the calculated amount in the subsequent processing is reduced.
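The patent does not specify how a wall hit is detected. The following sketch assumes, purely for illustration, that a 2-D occupancy grid of the garage map is available and checks whether any trajectory point falls into an occupied cell.

```python
def hits_wall(trajectory_xy, occupancy_grid, resolution, origin_xy):
    """Return True if any point of a tracking trajectory falls into an occupied
    cell of the map (i.e. the hypothetical trajectory has driven into a wall).

    occupancy_grid: 2-D bool array (e.g. numpy), True where the map is occupied.
    resolution: metres per cell; origin_xy: world coordinates of cell (0, 0).
    """
    for x, y in trajectory_xy:
        col = int((x - origin_xy[0]) / resolution)
        row = int((y - origin_xy[1]) / resolution)
        if 0 <= row < occupancy_grid.shape[0] and 0 <= col < occupancy_grid.shape[1]:
            if occupancy_grid[row, col]:
                return True
    return False

# Pruning wall-hitting candidates (tracks 302 and 303 in the FIG. 3 example):
# surviving = {k: traj for k, traj in trajectories.items()
#              if not hits_wall(traj, grid, resolution, origin)}
```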
For example, in embodiments of the present disclosure, the second pose may be the highest scoring pose of the plurality of candidate poses. Accordingly, in step S130, when the comparison result between the score of the second pose and the score of the first pose among the plurality of candidate poses meets the preset condition, determining the second pose as the initial pose for navigation may include: in the event that the difference between the score of the second pose and the score of the first pose is greater than a first threshold, the second pose is determined to be an initial pose for navigation.
That is, the preset condition is that the difference between the score of the second pose and the score of the first pose is greater than the first threshold. With this embodiment, when the score of the second pose exceeds the score of the first pose by the first threshold, the second pose is used to replace the first pose as the initial pose for navigation, which avoids the degraded navigation experience caused by excessive pose jumps and improves the stability of the positioning system.
In an exemplary embodiment, where the difference between the score of the second pose and the score of the first pose is greater than a first threshold, determining the second pose as an initial pose for navigation comprises: and if the distance between the second pose and the first pose is smaller than or equal to a second threshold value under the condition that the difference value between the score of the second pose and the score of the first pose is larger than the first threshold value, determining the second pose as an initial pose for navigation.
The distance between the second pose and the first pose may be a Euclidean distance. With this embodiment, the second pose replaces the first pose as the initial pose for navigation only when the distance between the second pose and the first pose is small, which further avoids the degraded navigation experience caused by excessive pose jumps and improves the stability of the positioning system.
In an exemplary embodiment, the pose correction method further includes: and under the condition that the difference value between the score of the second pose and the score of the first pose is larger than a first threshold value, if the distance between the second pose and the first pose is larger than a second threshold value, sending error reporting information to indicate that the candidate poses are redetermined.
For example, if the distance between the second pose and the first pose is greater than the second threshold, error information may be sent to the system, indicating that the initial pose may have a large error and that the vehicle needs to stop for relocalization. At this time, the vehicle may be restarted and a plurality of candidate poses determined again, for example using a bag-of-words or scene recognition algorithm, and a third pose among the plurality of candidate poses is determined as the initial pose for navigation; that is, initialization is performed once more.
By adopting this embodiment, the first pose currently in use is taken into account in the process of correcting the vehicle pose, and if the first pose is found to deviate too far from the highest-scoring pose, relocalization is performed in time to prevent a more serious failure.
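A sketch of this correction decision with the two thresholds described above (the score-difference threshold, called delta below, and the distance threshold epsilon) follows. Poses are represented as (x, y, yaw) tuples and all names are illustrative only.

```python
import math

def correction_decision(best_pose, best_score, current_pose, current_score,
                        delta, epsilon):
    """Decide whether to replace the navigation initial pose.

    Returns "keep", "replace", or "error" (stop and re-initialize).
    """
    if best_score - current_score <= delta:          # not convincingly better
        return "keep"
    distance = math.hypot(best_pose[0] - current_pose[0],
                          best_pose[1] - current_pose[1])  # Euclidean distance
    if distance <= epsilon:
        return "replace"        # adopt the highest-scoring pose, keep navigating
    return "error"              # large mismatch: report error, stop, re-initialize
```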
In an exemplary embodiment, the pose correction method may further include: among the plurality of candidate poses, poses having scores below a third threshold are deleted.
For example, when the score of each candidate pose is determined, poses with scores below the third threshold may be deleted, so that no score comparison needs to be performed for them; when the next sensing information is received, these poses no longer require trajectory updating or scoring. By adopting this embodiment, candidate poses that do not meet the requirement can be deleted in time, reducing the computation in subsequent processing.
To facilitate understanding of the pose correction method provided by the embodiments of the present disclosure, a specific application example is provided below. Fig. 4 shows a flow diagram of a pose correction method in an application example of an embodiment of the present disclosure. As shown in fig. 4, the method includes:
step S401, a frame of camera observation image is received.
Step S402, performing PnP matching on the point cloud around the current pose to obtain the pose in the global coordinate system (namely, the real pose of the vehicle).
Step S403, calculating the incremental pose Δp between the current frame and the previous frame.
Step S404, triangulating the current observation point cloud to form a local point cloud Z.
Step S405, updating the trajectories and point clouds of all candidate poses according to Δp and Z.
Step S406, filtering out the poses whose trajectories hit a wall.
Step S407, comparing the local point cloud and the global point cloud of each track for scoring.
Step S408, screening out candidate poses with low scores.
Step S409, judging whether the candidate pose with the highest current score is the currently used pose. If it is not the currently used pose, the process proceeds to step S410. If it is the currently used pose, no further processing is performed and the current flow ends.
Step S410, judging whether the score of the current highest-scoring pose exceeds the score of the currently used pose by a first threshold δ. If yes, go to step S411; if not, no further processing is performed and the current flow ends.
Step S411, judging whether the Euclidean distance between the current highest-scoring pose and the currently used pose is greater than a second threshold ε. If yes, go to step S412; if not, go to step S413.
Step S412, reporting an error to the system, stopping for re-initialization, and then ending.
Step S413, replacing the currently used pose with the highest-scoring pose, continuing navigation, and ending.
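Putting steps S401-S413 together, one per-frame iteration of the correction loop could be organized as in the sketch below. All helpers (pnp_global_pose, triangulate_local_cloud, incremental_pose, update_candidate, score_candidate, hits_wall, correction_decision) and the fields of state are either the illustrative sketches given earlier or placeholders for the real system's own modules; none of them are prescribed by the patent text.

```python
import numpy as np

def process_frame(image, observation_cloud, state):
    """One iteration of the pose-correction loop (steps S401-S413), sketch only.

    state is assumed to hold: candidates (CandidatePose list), current (the pose
    in use), prev_real_pose, global_cloud, map_args, min_score, delta, epsilon.
    """
    # S402: real pose in the global frame from PnP matching.
    real_pose = pnp_global_pose(image, observation_cloud)
    # S403: incremental pose delta-p between the current and previous frame.
    delta_p = incremental_pose(state.prev_real_pose, real_pose)
    state.prev_real_pose = real_pose
    # S404: triangulate the current observation into a local point cloud Z.
    local_cloud = triangulate_local_cloud(image, observation_cloud)

    # S405: update trajectory and point cloud of every candidate with delta-p and Z.
    for c in state.candidates:
        c.pose, pts_map = update_candidate(c.pose, delta_p, local_cloud)
        c.trajectory.append(c.pose)
        c.points.extend(pts_map)

    # S406: drop candidates whose trajectory has hit a wall.
    state.candidates = [c for c in state.candidates
                        if not hits_wall([(x, y) for x, y, _ in c.trajectory],
                                         *state.map_args)]
    # S407: score each surviving candidate against the global point cloud.
    for c in state.candidates:
        c.score = score_candidate(np.asarray(c.points), state.global_cloud)
    # S408: drop low-scoring candidates (the third threshold).
    state.candidates = [c for c in state.candidates if c.score >= state.min_score]

    if not state.candidates:
        return "error"      # every hypothesis eliminated: stop and re-initialize
    # S409-S413: compare the best candidate with the pose currently in use.
    best = max(state.candidates, key=lambda c: c.score)
    if best is state.current:
        return "keep"       # already using the highest-scoring pose
    return correction_decision(best.pose, best.score,
                               state.current.pose, state.current.score,
                               state.delta, state.epsilon)
```

On a "replace" result the caller would set state.current to the best candidate and continue navigating; on "error" it would stop the vehicle and run initialization again.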
This application example realizes dynamic correction of the pose during navigation; the correction process takes the pose currently in use into account, so the correction does not cause the degraded navigation experience associated with excessive pose jumps, and if an excessively wrong match is identified, the vehicle can stop in time to prevent a more serious failure.
According to an embodiment of the present disclosure, a pose correction apparatus is also provided. Fig. 5 shows a schematic block diagram of a pose correction apparatus provided by an embodiment of the present disclosure. As shown in fig. 5, the apparatus includes:
an initialization module 510 for determining a plurality of candidate poses in response to a vehicle start, and determining a first pose of the plurality of candidate poses as an initial pose for navigation;
a score determination module 520 for determining a score for each candidate pose of the plurality of candidate poses based on the received sensory information;
a correction module 530, configured to determine a second pose as an initial pose for navigation if a comparison result between the score of the second pose and the score of the first pose among the plurality of candidate poses meets a preset condition.
Illustratively, fig. 6 shows a schematic block diagram of a pose correction apparatus provided by another embodiment of the present disclosure. As shown in fig. 6, the score determination module 520 may include:
a track tracking unit 621 for determining a tracking track of each of the plurality of candidate poses based on the received sensing information;
a point cloud stitching unit 622, configured to stitch the local point clouds corresponding to the tracking tracks of each candidate pose to obtain a global point cloud;
the score determining unit 623 is configured to determine a score of each candidate pose based on the local point cloud corresponding to the tracking track of each candidate pose and the global point cloud.
Optionally, the track tracking unit 621 is specifically configured to:
based on the received observation image and the observation point cloud, obtaining the real pose of the vehicle;
determining an incremental pose of the vehicle based on the real pose;
and determining the tracking track of each candidate pose based on the increment pose and the observation point cloud.
Alternatively, the score determining unit 623 is specifically configured to:
matching a plurality of points in the local point cloud corresponding to the tracking track of each pose with a plurality of points in the global point cloud to obtain a plurality of matching point pairs;
and obtaining the score of each candidate pose based on the distance of each matching point pair in the plurality of matching point pairs.
Optionally, as shown in fig. 6, the pose correction device may further include:
the first deleting module 610 is configured to delete a pose of the tracking track against the wall among the plurality of candidate poses.
Optionally, the second pose is a highest scoring pose among the plurality of candidate poses; the correction module 530 is specifically configured to:
in the event that the difference between the score of the second pose and the score of the first pose is greater than a first threshold, the second pose is determined to be an initial pose for navigation.
Optionally, the correction module 530 is configured to:
and if the distance between the second pose and the first pose is smaller than or equal to a second threshold value under the condition that the difference value between the score of the second pose and the score of the first pose is larger than a first threshold value, determining the second pose as an initial pose for navigation.
Optionally, the correction module 530 is further configured to:
and under the condition that the difference value between the score of the second pose and the score of the first pose is larger than a first threshold value, sending error reporting information to indicate that the candidate poses are redetermined if the distance between the second pose and the first pose is larger than a second threshold value.
Optionally, as shown in fig. 6, the pose correction device further includes:
a second deleting module 620, configured to delete poses with scores below a third threshold among the candidate poses.
For descriptions of specific functions and examples of each module and sub-module of the apparatus in the embodiments of the present disclosure, reference may be made to the related descriptions of corresponding steps in the foregoing method embodiments, which are not repeated herein.
In the technical solution of the present disclosure, the acquisition, storage, and application of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM702, and the RAM703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, the pose correction method. For example, in some embodiments, the pose correction method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM702 and/or communication unit 709. When the computer program is loaded into the RAM703 and executed by the computing unit 701, one or more steps of the above-described pose correction method may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the pose correction method by any other suitable means (e.g. by means of firmware).
The embodiment of the disclosure also provides an autonomous driving vehicle, which includes the above electronic device. The autonomous driving vehicle implements the pose correction method of the embodiments of the present disclosure by way of a processor in the electronic device invoking computer readable instructions stored in a memory.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. that are within the principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (22)

1. A method of pose correction comprising:
in response to vehicle launch, determining a plurality of candidate poses and determining a first pose of the plurality of candidate poses as an initial pose for navigation;
determining a score for each of the plurality of candidate poses based on the received sensory information;
and determining a second pose as an initial pose for navigation under the condition that a comparison result between the score of the second pose and the score of the first pose in the plurality of candidate poses meets a preset condition.
2. The method of claim 1, wherein the determining a score for each candidate pose of the plurality of candidate poses based on the received sensory information comprises:
determining a tracking track of each candidate pose of the plurality of candidate poses based on the received sensing information;
based on the local point cloud corresponding to the tracking track of each candidate pose, splicing to obtain a global point cloud;
and determining the score of each candidate pose based on the local point cloud corresponding to the tracking track of each candidate pose and the global point cloud.
3. The method of claim 2, wherein the determining a tracking trajectory for each of the plurality of candidate poses based on the received sensory information comprises:
based on the received observation image and the observation point cloud, obtaining the real pose of the vehicle;
determining an incremental pose of the vehicle based on the real pose;
and determining the tracking track of each candidate pose based on the increment pose and the observation point cloud.
4. A method according to claim 2 or 3, wherein said determining the score of each candidate pose based on the local point cloud corresponding to the tracking trajectory of each candidate pose and the global point cloud comprises:
matching a plurality of points in the local point cloud corresponding to the tracking track of each pose with a plurality of points in the global point cloud to obtain a plurality of matching point pairs;
and obtaining the score of each candidate pose based on the distance of each matching point pair in the plurality of matching point pairs.
5. The method of any of claims 2-4, further comprising:
and deleting the pose of the tracking track hitting the wall from the plurality of candidate poses.
6. The method of any of claims 1-5, wherein the second pose is a highest scoring pose of the plurality of candidate poses;
and determining the second pose as an initial pose for navigation when the comparison result between the score of the second pose and the score of the first pose meets a preset condition, wherein the method comprises the following steps:
in the event that the difference between the score of the second pose and the score of the first pose is greater than a first threshold, the second pose is determined to be an initial pose for navigation.
7. The method of claim 6, wherein the determining the second pose as an initial pose for navigation if a difference between the score of the second pose and the score of the first pose is greater than a first threshold comprises:
and if the distance between the second pose and the first pose is smaller than or equal to a second threshold value under the condition that the difference value between the score of the second pose and the score of the first pose is larger than a first threshold value, determining the second pose as an initial pose for navigation.
8. The method of claim 7, further comprising:
and under the condition that the difference value between the score of the second pose and the score of the first pose is larger than a first threshold value, sending error reporting information to indicate that the candidate poses are redetermined if the distance between the second pose and the first pose is larger than a second threshold value.
9. The method of any of claims 1-8, further comprising:
among the plurality of candidate poses, poses having scores below a third threshold are deleted.
10. A posture correction apparatus comprising:
an initialization module for determining a plurality of candidate poses in response to vehicle start, and determining a first pose of the plurality of candidate poses as an initial pose for navigation;
a score determination module for determining a score for each of the plurality of candidate poses based on the received sensory information;
and the correction module is used for determining the second pose as the initial pose for navigation under the condition that the comparison result between the scores of the second pose and the scores of the first pose in the plurality of candidate poses meets the preset condition.
11. The apparatus of claim 10, wherein the score determination module comprises:
a track tracking unit, configured to determine a tracking track of each candidate pose of the plurality of candidate poses based on the received sensing information;
the point cloud splicing unit is used for splicing local point clouds corresponding to the tracking tracks of each candidate pose to obtain global point clouds;
and the scoring determining unit is used for determining the scoring of each candidate pose based on the local point cloud corresponding to the tracking track of each candidate pose and the global point cloud.
12. The apparatus of claim 11, wherein the trajectory tracking unit is to:
based on the received observation image and the observation point cloud, obtaining the real pose of the vehicle;
determining an incremental pose of the vehicle based on the real pose;
and determining the tracking track of each candidate pose based on the increment pose and the observation point cloud.
13. The apparatus according to claim 11 or 12, wherein the score determination unit is configured to:
matching a plurality of points in the local point cloud corresponding to the tracking track of each pose with a plurality of points in the global point cloud to obtain a plurality of matching point pairs;
and obtaining the score of each candidate pose based on the distance of each matching point pair in the plurality of matching point pairs.
14. The apparatus of any of claims 11-13, further comprising:
and the first deleting module is used for deleting the pose of the tracking track striking the wall among the plurality of candidate poses.
15. The apparatus of any of claims 10-14, wherein the second pose is a highest scoring pose of the plurality of candidate poses;
the correction module is used for:
in the event that the difference between the score of the second pose and the score of the first pose is greater than a first threshold, the second pose is determined to be an initial pose for navigation.
16. The apparatus of claim 15, wherein the correction module is to:
and if the distance between the second pose and the first pose is smaller than or equal to a second threshold value under the condition that the difference value between the score of the second pose and the score of the first pose is larger than a first threshold value, determining the second pose as an initial pose for navigation.
17. The apparatus of claim 16, wherein the correction module is further to:
and under the condition that the difference value between the score of the second pose and the score of the first pose is larger than a first threshold value, sending error reporting information to indicate that the candidate poses are redetermined if the distance between the second pose and the first pose is larger than a second threshold value.
18. The apparatus of any of claims 10-17, further comprising:
and the second deleting module is used for deleting the pose with the score lower than a third threshold value from the candidate poses.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-9.
22. An autonomous vehicle comprising the electronic device of claim 19.
CN202211686341.XA 2022-12-27 2022-12-27 Pose correction method and device, electronic equipment, storage medium and vehicle Withdrawn CN116086432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211686341.XA CN116086432A (en) 2022-12-27 2022-12-27 Pose correction method and device, electronic equipment, storage medium and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211686341.XA CN116086432A (en) 2022-12-27 2022-12-27 Pose correction method and device, electronic equipment, storage medium and vehicle

Publications (1)

Publication Number Publication Date
CN116086432A true CN116086432A (en) 2023-05-09

Family

ID=86186129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211686341.XA Withdrawn CN116086432A (en) 2022-12-27 2022-12-27 Pose correction method and device, electronic equipment, storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN116086432A (en)

Similar Documents

Publication Publication Date Title
CN110827325B (en) Target tracking method and device, electronic equipment and storage medium
CN110979346B (en) Method, device and equipment for determining lane where vehicle is located
CN110246182B (en) Vision-based global map positioning method and device, storage medium and equipment
CN112560680A (en) Lane line processing method and device, electronic device and storage medium
CN113361710B (en) Student model training method, picture processing device and electronic equipment
CN113029129B (en) Method and device for determining positioning information of vehicle and storage medium
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN112509126B (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN113205041A (en) Structured information extraction method, device, equipment and storage medium
CN111191619A (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN113932796A (en) High-precision map lane line generation method and device and electronic equipment
CN116086432A (en) Pose correction method and device, electronic equipment, storage medium and vehicle
CN114299192B (en) Method, device, equipment and medium for positioning and mapping
CN115773759A (en) Indoor positioning method, device and equipment of autonomous mobile robot and storage medium
CN113984072A (en) Vehicle positioning method, device, equipment, storage medium and automatic driving vehicle
CN116295466A (en) Map generation method, map generation device, electronic device, storage medium and vehicle
CN114049615B (en) Traffic object fusion association method and device in driving environment and edge computing equipment
CN114694138B (en) Road surface detection method, device and equipment applied to intelligent driving
CN115096328B (en) Positioning method and device of vehicle, electronic equipment and storage medium
CN116012624B (en) Positioning method, positioning device, electronic equipment, medium and automatic driving equipment
CN112710305B (en) Vehicle positioning method and device
CN116894894B (en) Method, apparatus, device and storage medium for determining motion of avatar
CN116448105B (en) Pose updating method and device, electronic equipment and storage medium
CN114115640B (en) Icon determination method, device, equipment and storage medium
CN113963326A (en) Traffic sign detection method, device, equipment, medium and automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20230509)