CN110207714A - Method for determining vehicle pose, onboard system, and vehicle - Google Patents

Method for determining vehicle pose, onboard system, and vehicle Download PDF

Info

Publication number
CN110207714A
CN110207714A (Application CN201910576519.7A)
Authority
CN
China
Prior art keywords
pose
vehicle
conversion
monocular camera
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910576519.7A
Other languages
Chinese (zh)
Other versions
CN110207714B (en)
Inventor
柴文楠
刘中元
蒋少峰
李良
周建
潘力澜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Motors Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Motors Technology Co Ltd
Priority to CN201910576519.7A
Publication of CN110207714A
Application granted
Publication of CN110207714B
Active legal status
Anticipated expiration legal status

Links

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention disclose a method for determining vehicle pose, used in an onboard system and a vehicle, in which the pose outputs of multiple monocular systems are fused to obtain a fused pose of the vehicle, thereby improving the accuracy of the vehicle's pose estimate. The method comprises the steps of: obtaining first inertial navigation information and first vehicle odometer information of the vehicle; determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information; obtaining extrinsic parameters of N monocular cameras of the vehicle, N being an integer greater than 1; fusing an inertial-odometry algorithm with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; performing coordinate conversion on the current poses of the N monocular cameras respectively to obtain N converted poses of the vehicle; and obtaining the fused pose of the vehicle according to the N converted poses.

Description

Method for determining vehicle pose, onboard system, and vehicle
Technical field
The present invention relates to the technical field of intelligent vehicles, and in particular to a method for determining vehicle pose, an onboard system, and a vehicle.
Background technique
In the field of automatic driving, monocular visual simultaneous localization and mapping (SLAM) refers to an autonomous vehicle using a single visual sensor (such as a monocular camera) to create a map consistent with the real environment while simultaneously determining its own position within that map.
Fusing monocular visual SLAM with an inertial navigation system can localize the vehicle. However, the field of view of a monocular camera is usually limited, and the detected image features or parallax are sometimes too sparse, which degrades the accuracy and continuity of monocular SLAM localization.
Summary of the invention
The embodiments of the invention provide a method for determining vehicle pose, an onboard system, and a vehicle, which can obtain converted poses according to the extrinsic parameters of multiple monocular cameras of the vehicle and fuse them to obtain a fused pose of the vehicle, thereby improving the accuracy of the vehicle's pose estimate.
In view of this, a first aspect of the present invention provides a method for determining vehicle pose, which may include:
obtaining first inertial navigation information and first vehicle odometer information of a vehicle;
determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information;
obtaining extrinsic parameters of N monocular cameras of the vehicle, N being an integer greater than 1;
fusing an inertial-odometry algorithm with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras;
performing coordinate conversion on the current poses of the N monocular cameras respectively, to obtain N converted poses of the vehicle;
obtaining a fused pose of the vehicle according to the N converted poses.
Optionally, in some embodiments of the invention, obtaining the fused pose of the vehicle according to the N converted poses comprises:
determining a weight for each converted pose according to the N converted poses and the first current estimated pose of the vehicle;
obtaining the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
Optionally, in some embodiments of the invention, obtaining the fused pose of the vehicle according to the N converted poses comprises:
determining a weight for each converted pose according to a preset algorithm and the N converted poses;
obtaining the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
Optionally, in some embodiments of the invention, the method further comprises:
constructing N monocular-camera visual feature maps by fusing the inertial-odometry output information with the SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras.
Optionally, in some embodiments of the invention, the method further comprises:
fusing the N monocular-camera visual feature maps into a reference visual feature map according to the vehicle trajectory common to the N monocular-camera visual feature maps.
Optionally, in some embodiments of the invention, obtaining the extrinsic parameters of the N monocular cameras of the vehicle comprises:
obtaining visual odometry information;
determining the extrinsic parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Optionally, in some embodiments of the invention, the method further comprises:
obtaining second inertial navigation information and second vehicle odometer information of the vehicle;
determining a second current estimated pose of the vehicle according to the fused pose of the vehicle, the second inertial navigation information, and the second vehicle odometer information.
A second aspect of the present invention provides an onboard system, which may include:
an obtaining module, configured to obtain first inertial navigation information and first vehicle odometer information of a vehicle, and to obtain extrinsic parameters of N monocular cameras of the vehicle, N being an integer greater than 1;
a processing module, configured to determine a first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information; fuse an inertial-odometry algorithm with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; perform coordinate conversion on the current poses of the N monocular cameras respectively to obtain N converted poses of the vehicle; and obtain a fused pose of the vehicle according to the N converted poses.
Optionally, in some embodiments of the invention, the processing module is specifically configured to determine a weight for each converted pose according to the N converted poses and the first current estimated pose of the vehicle, and to obtain the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
Optionally, in some embodiments of the invention, the processing module is specifically configured to determine the weight of each converted pose according to a preset algorithm and the N converted poses, and to obtain the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
Optionally, in some embodiments of the invention, the processing module is further configured to construct N monocular-camera visual feature maps by fusing the inertial-odometry output information with the SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras.
Optionally, in some embodiments of the invention, the processing module is further configured to fuse the N monocular-camera visual feature maps into a reference visual feature map according to the vehicle trajectory common to the N monocular-camera visual feature maps.
Optionally, in some embodiments of the invention, the obtaining module is specifically configured to obtain visual odometry information, and to determine the extrinsic parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Optionally, in some embodiments of the invention, the obtaining module is further configured to obtain second inertial navigation information and second vehicle odometer information of the vehicle; and
the processing module is further configured to determine a second current estimated pose of the vehicle according to the fused pose of the vehicle, the second inertial navigation information, and the second vehicle odometer information.
A third aspect of the present invention provides a vehicle, which may include the onboard system described in the second aspect of the present invention or any optional implementation thereof.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the method for determining vehicle pose disclosed in the first aspect of the embodiments of the present invention.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, first inertial navigation information and first vehicle odometer information of the vehicle are obtained; the first current estimated pose of the vehicle is determined according to the first inertial navigation information and the first vehicle odometer information; the extrinsic parameters of the N monocular cameras of the vehicle are obtained, N being an integer greater than 1; the inertial-odometry algorithm is fused with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; coordinate conversion is performed on the current poses of the N monocular cameras respectively to obtain N converted poses of the vehicle; and the fused pose of the vehicle is obtained according to the N converted poses. Converted poses can thus be obtained from the extrinsic parameters of the multiple monocular cameras of the vehicle and fused into the fused pose of the vehicle, thereby improving the accuracy of the vehicle's pose estimate.
Detailed description of the invention
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments and the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and other drawings may also be derived from them.
Fig. 1 is a schematic diagram of an embodiment of the method for determining vehicle pose in an embodiment of the present invention;
Fig. 2 is a schematic diagram of a workflow to which an embodiment of the present invention is applied;
Fig. 3 is a schematic diagram of an embodiment of the onboard system in an embodiment of the present invention;
Fig. 4 is a schematic diagram of an embodiment of the vehicle in an embodiment of the present invention;
Fig. 5 is a schematic diagram of another embodiment of the onboard system in an embodiment of the present invention.
Specific embodiment
The embodiments of the invention provide a method for determining vehicle pose, an onboard system, and a vehicle, which can obtain converted poses according to the extrinsic parameters of multiple monocular cameras of the vehicle and fuse them to obtain a fused pose of the vehicle, thereby improving the accuracy of the vehicle's pose estimate.
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present invention. All other embodiments obtained based on the embodiments of the present invention shall fall within the protection scope of the present invention.
The technical terms involved in the embodiments of the present invention are briefly explained below:
An inertial navigation system (INS, hereinafter referred to as inertial navigation) is an autonomous navigation system that neither depends on external information nor radiates energy to the outside, and it includes an inertial measurement unit. Its working environment includes not only the air and the ground but also underwater. The basic working principle of inertial navigation is based on Newton's laws of mechanics: by measuring the acceleration of the carrier in an inertial reference frame, integrating it over time, and transforming the result into the navigation coordinate system, information such as the velocity, yaw angle, and position in the navigation coordinate system can be obtained.
Vehicle odometer: a device that measures distance travelled (as on an automobile), such as a wheel pulse counter.
It should be noted that in the embodiments of the present invention, the inertial navigation system and the vehicle odometer together are usually referred to as the inertial odometry.
In view of the drawbacks of the prior art, the present invention proposes a scheme for improving vehicle pose estimation by fusing multiple monocular SLAM systems, based on multiple monocular cameras, with inertial navigation. The technical solution of the present invention is further explained below through embodiments. Fig. 1 is a schematic diagram of an embodiment of the method for determining vehicle pose in an embodiment of the present invention, which may include:
101. Obtain first inertial navigation information and first vehicle odometer information of the vehicle.
The onboard system obtains the first inertial navigation information and the first vehicle odometer information of the vehicle.
Illustratively, sensors such as an inertial measurement unit (IMU) and wheel pulse counters may be installed on the vehicle. These sensors can serve as the localization module of the vehicle (e.g., a vehicle-body odometer) to calculate the distance travelled by the vehicle, and can further be used to localize the vehicle. Fig. 2 is a schematic diagram of a workflow to which the embodiment shown in Fig. 1 of the present invention is applied.
102. Determine the first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information.
The onboard system pre-estimates, according to the first inertial navigation information and the first vehicle odometer information, the first current estimated pose of the vehicle in the global coordinate system.
The first inertial navigation information may include attitude, velocity, and position-change information obtained by integration. It is fused with the displacement information provided by the vehicle odometer using an algorithm such as Kalman filtering, so that the first current estimated pose of the vehicle in the global coordinate system can be determined.
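For illustration only, the following Python sketch shows one way such a fusion could look for a planar (x, y, yaw) pose: an INS distance prediction is blended with the wheel-odometer distance by a one-dimensional Kalman-style gain. This is a minimal sketch under assumed noise variances, not the patented implementation; the function name and all values are hypothetical.

    import numpy as np

    def fuse_ins_odometer(pose, yaw_rate, ins_speed, odo_dist, dt,
                          var_ins=0.04, var_odo=0.01):
        """pose = (x, y, yaw) in the global coordinate system."""
        x, y, yaw = pose
        yaw += yaw_rate * dt                 # integrate gyro yaw rate

        d_ins = ins_speed * dt               # distance predicted by the INS
        k = var_ins / (var_ins + var_odo)    # Kalman-style blending gain
        d = d_ins + k * (odo_dist - d_ins)   # corrected travelled distance

        x += d * np.cos(yaw)                 # project onto the global axes
        y += d * np.sin(yaw)
        return (x, y, yaw)

    # Example: 0.1 s step, slight right turn, odometer reads 0.52 m
    print(fuse_ins_odometer((0.0, 0.0, 0.0), -0.05, 5.0, 0.52, 0.1))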
103. Obtain the extrinsic parameters of the N monocular cameras of the vehicle, N being an integer greater than 1.
Obtaining the extrinsic parameters of the N monocular cameras of the vehicle may include: obtaining visual odometry information; and determining the extrinsic parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle. Further, the extrinsic parameters of the N monocular cameras of the vehicle may be determined from the visual odometry information and the first current estimated pose of the vehicle using an online extrinsic-calibration algorithm (such as a hand-eye calibration method).
It can be understood that the extrinsic parameters of a monocular camera represent the rotation and translation of the camera coordinate system relative to the vehicle's inertial coordinate system, as sketched below.
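For illustration only, the extrinsic parameters can be written as a 4x4 homogeneous transform built from the rotation R and translation t; the mounting values below are assumptions for the sketch, not values from the patent.

    import numpy as np

    def extrinsic_matrix(R, t):
        """Build T_vc such that p_vehicle = T_vc @ p_camera (homogeneous)."""
        T = np.eye(4)
        T[:3, :3] = R      # camera axes expressed in the vehicle frame
        T[:3, 3] = t       # camera origin expressed in the vehicle frame
        return T

    # Assumed forward-looking camera: optical axis along the vehicle x-axis,
    # mounted 1.5 m ahead of and 1.2 m above the vehicle origin.
    R = np.array([[0., 0., 1.],
                  [-1., 0., 0.],
                  [0., -1., 0.]])
    T_vc = extrinsic_matrix(R, np.array([1.5, 0.0, 1.2]))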
It should be noted that the monocular camera may be a single monocular camera mounted on the front windshield or on a rearview mirror of the vehicle. If more than one monocular camera, e.g., forward-view, rear-view, and side-view cameras, is mounted on the vehicle, the technical solution provided by the present invention is likewise applicable.
Optionally, the vehicle's electronic device determines at least two initial frames from the real-environment image sequence captured by a monocular camera of the vehicle, and obtains, through the localization module of the vehicle, the actual distance travelled by the vehicle between the capture times of the at least two initial frames.
In the embodiments of the present invention, the monocular camera may be a monocular camera mounted on the front windshield or the rearview mirror of the vehicle. The initial frames are the image frames used for monocular visual SLAM initialization: by comparing the differences in the image positions of the same feature points across the at least two initial frames, the relative pose change of the monocular camera between the initial frames is determined, thereby completing the monocular visual SLAM initialization. It can be understood that, to improve the success rate of monocular visual SLAM initialization, the initial frames should satisfy certain requirements. For example, the initial frames may be set to be two consecutive image frames whose feature-point counts each exceed a preset threshold (e.g., 100).
Monocular visual SLAM initialization includes the following steps: 1. performing feature-point matching on the at least two initial frames; 2. determining, based on the matched feature points and epipolar geometry, the inter-frame pose of the monocular camera from the initial frames, i.e., the relative pose change (including translation and rotation) of the monocular camera between the initial frames; 3. determining the depths of the feature points in the images by triangulation, based on the inter-frame pose of the monocular camera between the at least two initial frames; 4. constructing the initial SLAM map according to the inter-frame pose of the monocular camera between the at least two initial frames and the depths of the feature points in the images. A sketch of the triangulation in step 3 is given below.
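For illustration of step 3 only, the following sketch recovers a feature point's 3-D position from its pixel observations in the two initial frames by the standard direct linear transform (DLT); the projection matrices are assumed to come from the inter-frame pose determined in step 2.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],   # u1 * p3 - p1 = 0
            x1[1] * P1[2] - P1[1],   # v1 * p3 - p2 = 0
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)  # least-squares solution is the last
        X = Vt[-1]                   # right singular vector of A
        return X[:3] / X[3]          # de-homogenize to a 3-D point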
Further, during subsequent mapping, the pose of the monocular camera corresponding to each image frame can be expressed as the relative pose change of the monocular camera between that image frame and the preceding image frame. In general, the relative pose change between an image frame and its preceding frame can be expressed as M times the relative pose change of the monocular camera between the initial frames, M being a real number. Therefore, if the translation t of the monocular camera is simply normalized at initialization, the inter-frame pose of the monocular camera and the feature-point depths recovered at initialization contain no scale information; the resulting SLAM map can then only capture relative position changes of the monocular camera accurately and cannot reflect the real geographic environment.
It can be seen that the finally constructed SLAM map contains the correspondence between the poses of the monocular camera and the three-dimensional spatial positions of the feature points, with the images in the real-environment image sequence as constraints: when the monocular camera captures a certain feature point from a certain pose, the resulting image is a certain frame in the real-environment image sequence. That is, the SLAM map includes the pose sequence of the monocular camera and the three-dimensional spatial positions of the feature points contained in each image of the real-environment image sequence.
104. Fuse the inertial-odometry algorithm with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras.
The onboard system can determine the current pose of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose of the vehicle.
Illustratively, for a monocular camera system such as camera 1: if the camera extrinsics (the pose of the camera relative to the vehicle body or inertial navigation system) have not been calibrated, the onboard system can calibrate and optimize the camera extrinsics according to the pose pre-estimated by the visual odometry and the inertial odometry, i.e., the first current estimated pose.
If the camera extrinsics have been calibrated, the current pose of the camera can be estimated in real time, and a visual feature map established, by a relatively mature algorithm that fuses the visual SLAM of each monocular camera with the inertial odometry (such as a monocular version of VI-ORB-SLAM2 or VINS). It can be understood that the other monocular cameras follow the same principle to determine their corresponding current poses.
105. Perform coordinate conversion on the current poses of the N monocular cameras respectively, to obtain N converted poses of the vehicle.
The current camera poses obtained by the monocular SLAM systems are each converted between coordinate systems according to their extrinsic parameters, so as to obtain the N corresponding converted poses of the vehicle in the global coordinate system.
It can be understood that each SLAM system has determined the current pose of its camera in a visual global coordinate system. From the camera extrinsics, i.e., the relative pose of the camera with respect to the vehicle's inertial coordinate system, the transformation matrix from the visual coordinate system to the vehicle coordinate system can be obtained. Using this transformation matrix together with the aforementioned current camera pose in the visual coordinate system, the converted pose of the vehicle in the vehicle global coordinate system (the navigation coordinate system in this scheme) can be computed.
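For illustration only, this conversion can be written with homogeneous transforms; the frame names below are assumptions for the sketch, not the patent's notation.

    import numpy as np

    def camera_pose_to_vehicle_pose(T_gc, T_vc, T_ng):
        """
        T_gc: camera pose estimated by SLAM (camera -> visual global frame)
        T_vc: camera extrinsics (camera -> vehicle frame)
        T_ng: alignment (visual global -> navigation/global frame)
        Returns the converted vehicle pose (vehicle -> navigation frame).
        """
        T_gv = T_gc @ np.linalg.inv(T_vc)  # vehicle pose in the visual global frame
        return T_ng @ T_gv                 # vehicle pose in the navigation frame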
106. Obtain the fused pose of the vehicle according to the N converted poses.
Obtaining the fused pose of the vehicle according to the N converted poses comprises:
determining a weight for each converted pose according to the N converted poses and the first current estimated pose of the vehicle, and obtaining the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses;
alternatively,
determining the weight of each converted pose according to a preset algorithm and the N converted poses, and obtaining the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses. If there are time-synchronization issues among the N poses, pose synchronization is performed before the poses are fused, for example by dead reckoning with the inertial navigation or by interpolation, to obtain time-synchronized poses.
It can be understood that the onboard system can compute a pose-estimation weight for each monocular camera system (reflecting its pose-estimation accuracy) from the process parameters of that system, such as the extrinsics confidence, the pose residual between visual odometry and inertial navigation, the back-end optimization scale, and whether a loop closure is detected, and then compute the current vehicle pose fused from all systems by weighted averaging. If a monocular SLAM system fails at the current moment, it is ignored; if its process parameters are very poor, its weight can also be set to zero. Here, the extrinsics confidence is obtained together with the extrinsics of each monocular camera; the visual-odometry/inertial pose residual, the back-end optimization scale, and the loop-closure information are obtained during the fusion of monocular SLAM with inertial navigation.
Illustratively, if the converted poses corresponding to the N cameras are A1, A2, A3, ..., AN and the corresponding weights are M1, M2, M3, ..., MN, with the weights summing to 1, then the fused pose of the vehicle is:
A1*M1 + A2*M2 + A3*M3 + ... + AN*MN.
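For illustration only, the weighted average above can be realized for planar (x, y, yaw) poses as follows; positions are averaged linearly, while the heading is averaged through its sine and cosine so that angle wrap-around is handled. A failed system, as described above, simply receives weight zero.

    import numpy as np

    def fuse_poses(poses, weights):
        """poses: list of (x, y, yaw); weights: nonnegative, summing to 1."""
        poses = np.asarray(poses, dtype=float)
        w = np.asarray(weights, dtype=float)
        xy = w @ poses[:, :2]              # weighted mean position
        s = w @ np.sin(poses[:, 2])        # weighted mean heading, computed
        c = w @ np.cos(poses[:, 2])        # on the unit circle
        return (*xy, np.arctan2(s, c))

    # Two cameras agree closely; a third, failed system gets weight zero.
    print(fuse_poses([(1.0, 2.0, 0.10), (1.1, 2.1, 0.12), (9.0, 9.0, 3.0)],
                     [0.5, 0.5, 0.0]))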
Optionally, in some embodiments of the invention, N monocular-camera visual feature maps are constructed by fusing the inertial-odometry output information with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras. Further, the N monocular-camera visual feature maps are fused into a reference visual feature map according to the vehicle trajectory common to the N monocular-camera visual feature maps.
Illustratively, so that each monocular camera system can directly reuse the visual feature map it has built (e.g., for map-matching relocalization), the independence of each set of visual feature maps can be maintained. If there is a demand for scene mapping, the multiple sets of visual feature maps can also be fused into one visual feature map according to the common vehicle trajectory.
It can be understood that a visual feature map in the embodiments of the present invention is a map constructed by a SLAM algorithm and composed of visual features, such as ORB (Oriented FAST and Rotated BRIEF) feature points, where FAST is a common extraction method for visual corner points and BRIEF is a common feature-point descriptor.
Further, after the onboard system obtains the fused pose of the vehicle, the onboard system may also obtain second inertial navigation information and second vehicle odometer information of the vehicle, and determine a second current estimated pose of the vehicle according to the fused pose of the vehicle, the second inertial navigation information, and the second vehicle odometer information.
Illustratively, given the current pose, the next pose is estimated as x(k+1) = x(k) + dx(k), where dx(k) is derived from the inertial navigation and odometer outputs over the time interval dt.
Here, x includes the position and attitude vectors. The attitude change within dx(k) between x(k+1) and x(k) is obtained by integrating the gyroscope output (angular rate) of the IMU over time, and the position change is obtained by multiplying the odometer output (distance travelled) by trigonometric functions of the attitude.
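For illustration only, the dead-reckoning step x(k+1) = x(k) + dx(k) can be written in the plane as follows; the midpoint-heading projection is an assumption of this sketch, not a detail from the patent.

    import numpy as np

    def dead_reckon(x, gyro_yaw_rate, odo_dist, dt):
        """x = np.array([px, py, yaw]); returns x(k+1)."""
        yaw_mid = x[2] + 0.5 * gyro_yaw_rate * dt     # midpoint heading
        dx = np.array([odo_dist * np.cos(yaw_mid),    # position change from the
                       odo_dist * np.sin(yaw_mid),    # odometer distance
                       gyro_yaw_rate * dt])           # attitude change from the gyro
        return x + dx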
The onboard system outputs the fused pose of the vehicle and the multiple sets of independent visual feature maps built by the multi-camera system. The fused pose is used for the next inertial pose pre-estimation, and that pre-estimation in turn enters the computation of the next fused pose; this is equivalent to exploiting the fused output of the multiple monocular-camera SLAM systems, and iteratively optimizing each subsystem can reduce its localization drift error.
In the embodiments of the present invention, first inertial navigation information and first vehicle odometer information of the vehicle are obtained; the first current estimated pose of the vehicle is determined according to the first inertial navigation information and the first vehicle odometer information; the extrinsic parameters of the N monocular cameras of the vehicle are obtained, N being an integer greater than 1; the inertial-odometry algorithm is fused with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; coordinate conversion is performed on the current poses of the N monocular cameras respectively to obtain N converted poses of the vehicle; and the fused pose of the vehicle is obtained according to the N converted poses. Converted poses can thus be obtained from the extrinsic parameters of the multiple monocular cameras of the vehicle and fused into the fused pose of the vehicle, improving the accuracy of the vehicle's pose estimate. This overcomes the drawback of the limited field of view of a single monocular camera. Because the scheme builds on relatively mature monocular visual SLAM algorithms (such as the monocular version of ORB-SLAM2, VINS, etc.), it is easy to integrate and to port from a single camera to any number of cameras. The cameras are not required to have mutually overlapping fields of view, and no mutual extrinsic calibration between the cameras is required. The scheme realizes, on the onboard system, a system design in which multiple parallel monocular-camera visual SLAM algorithms are fused with the inertial-odometry algorithm; a weight is computed from the process parameters of each monocular-camera SLAM system, and the fused pose is computed from the weights; and the current fused pose is used for the next pose pre-estimation, so that each monocular SLAM system can be iteratively optimized to reduce its drift error.
Fig. 3 is a schematic diagram of an embodiment of the onboard system in an embodiment of the present invention. It may include:
an obtaining module 301, configured to obtain first inertial navigation information and first vehicle odometer information of a vehicle, and to obtain extrinsic parameters of N monocular cameras of the vehicle, N being an integer greater than 1;
a processing module 302, configured to determine the first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information; fuse the inertial-odometry algorithm with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; perform coordinate conversion on the current poses of the N monocular cameras respectively to obtain N converted poses of the vehicle; and obtain the fused pose of the vehicle according to the N converted poses.
Optionally, in some embodiments of the invention, the processing module 302 is specifically configured to determine a weight for each converted pose according to the N converted poses and the first current estimated pose of the vehicle, and to obtain the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
Optionally, in some embodiments of the invention, the processing module 302 is specifically configured to determine the weight of each converted pose according to a preset algorithm and the N converted poses, and to obtain the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
Optionally, in some embodiments of the invention, the processing module 302 is further configured to construct N monocular-camera visual feature maps by fusing the inertial-odometry output information with the SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras.
Optionally, in some embodiments of the invention, the processing module 302 is further configured to fuse the N monocular-camera visual feature maps into a reference visual feature map according to the vehicle trajectory common to the N monocular-camera visual feature maps.
Optionally, in some embodiments of the invention, the obtaining module 301 is specifically configured to obtain visual odometry information, and to determine the extrinsic parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
Optionally, in some embodiments of the invention, the obtaining module 301 is further configured to obtain second inertial navigation information and second vehicle odometer information of the vehicle; and
the processing module 302 is further configured to determine a second current estimated pose of the vehicle according to the fused pose of the vehicle, the second inertial navigation information, and the second vehicle odometer information.
Fig. 4 is a schematic diagram of an embodiment of the vehicle in an embodiment of the present invention. The vehicle includes the onboard system shown in Fig. 3.
Fig. 5 is a schematic diagram of another embodiment of the onboard system in an embodiment of the present invention. It may include:
a memory 501 storing executable program code;
a processor 502 coupled with the memory 501; and
a vehicle localization module 503;
wherein the vehicle localization module 503 obtains the first inertial navigation information and the first vehicle odometer information of the vehicle and transmits them to the processor 502, and the processor 502 calls the executable program code stored in the memory 501 to execute any of the methods for determining vehicle pose shown in Fig. 1.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)), etc.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the systems, devices, and units described above, and details are not repeated here.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely exemplary: the division of units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Moreover, the mutual coupling, direct coupling, or communication connection shown or discussed may be realized through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (15)

1. A method for determining vehicle pose, characterized by comprising:
obtaining first inertial navigation information and first vehicle odometer information of a vehicle;
determining a first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information;
obtaining extrinsic parameters of N monocular cameras of the vehicle, N being an integer greater than 1;
fusing an inertial-odometry algorithm with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras;
performing coordinate conversion on the current poses of the N monocular cameras respectively, to obtain N converted poses of the vehicle;
obtaining a fused pose of the vehicle according to the N converted poses.
2. The method according to claim 1, characterized in that obtaining the fused pose of the vehicle according to the N converted poses comprises:
determining a weight for each converted pose according to the N converted poses and the first current estimated pose of the vehicle;
obtaining the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
3. The method according to claim 1, characterized in that obtaining the fused pose of the vehicle according to the N converted poses comprises:
determining a weight for each converted pose according to a preset algorithm and the N converted poses;
obtaining the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
4. The method according to any one of claims 1-3, characterized in that the method further comprises:
constructing N monocular-camera visual feature maps by fusing the inertial-odometry output information with the SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras.
5. The method according to claim 4, characterized in that the method further comprises:
fusing the N monocular-camera visual feature maps into a reference visual feature map according to the vehicle trajectory common to the N monocular-camera visual feature maps.
6. The method according to any one of claims 1-3, characterized in that obtaining the extrinsic parameters of the N monocular cameras of the vehicle comprises:
obtaining visual odometry information;
determining the extrinsic parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
7. The method according to any one of claims 1-3, characterized in that the method further comprises:
obtaining second inertial navigation information and second vehicle odometer information of the vehicle;
determining a second current estimated pose of the vehicle according to the fused pose of the vehicle, the second inertial navigation information, and the second vehicle odometer information.
8. An onboard system, characterized by comprising:
an obtaining module, configured to obtain first inertial navigation information and first vehicle odometer information of a vehicle, and to obtain extrinsic parameters of N monocular cameras of the vehicle, N being an integer greater than 1;
a processing module, configured to determine a first current estimated pose of the vehicle according to the first inertial navigation information and the first vehicle odometer information; fuse an inertial-odometry algorithm with the visual SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras and the first current estimated pose, so as to estimate the current poses of the N monocular cameras; perform coordinate conversion on the current poses of the N monocular cameras respectively to obtain N converted poses of the vehicle; and obtain a fused pose of the vehicle according to the N converted poses.
9. The onboard system according to claim 8, characterized in that the processing module is specifically configured to determine a weight for each converted pose according to the N converted poses and the first current estimated pose of the vehicle, and to obtain the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
10. The onboard system according to claim 8, characterized in that the processing module is specifically configured to determine the weight of each converted pose according to a preset algorithm and the N converted poses, and to obtain the fused pose of the vehicle by weighted averaging according to the weights of the converted poses and the N converted poses.
11. The onboard system according to any one of claims 8-10, characterized in that the processing module is further configured to construct N monocular-camera visual feature maps by fusing the inertial-odometry output information with the SLAM algorithm of each monocular camera according to the extrinsic parameters of the N monocular cameras.
12. The onboard system according to claim 11, characterized in that the processing module is further configured to fuse the N monocular-camera visual feature maps into a reference visual feature map according to the vehicle trajectory common to the N monocular-camera visual feature maps.
13. The onboard system according to any one of claims 8-10, characterized in that the obtaining module is specifically configured to obtain visual odometry information, and to determine the extrinsic parameters of the N monocular cameras of the vehicle according to the visual odometry information and the first current estimated pose of the vehicle.
14. The onboard system according to any one of claims 8-10, characterized in that the obtaining module is further configured to obtain second inertial navigation information and second vehicle odometer information of the vehicle; and
the processing module is further configured to determine a second current estimated pose of the vehicle according to the fused pose of the vehicle, the second inertial navigation information, and the second vehicle odometer information.
15. A vehicle, characterized by comprising the onboard system according to any one of claims 8-14.
CN201910576519.7A 2019-06-28 2019-06-28 Method for determining vehicle pose, vehicle-mounted system and vehicle Active CN110207714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910576519.7A CN110207714B (en) 2019-06-28 2019-06-28 Method for determining vehicle pose, vehicle-mounted system and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910576519.7A CN110207714B (en) 2019-06-28 2019-06-28 Method for determining vehicle pose, vehicle-mounted system and vehicle

Publications (2)

Publication Number Publication Date
CN110207714A true CN110207714A (en) 2019-09-06
CN110207714B CN110207714B (en) 2021-01-19

Family

ID=67795335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910576519.7A Active CN110207714B (en) 2019-06-28 2019-06-28 Method for determining vehicle pose, vehicle-mounted system and vehicle

Country Status (1)

Country Link
CN (1) CN110207714B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127584A (en) * 2019-11-19 2020-05-08 奇点汽车研发中心有限公司 Method and device for establishing visual map, electronic equipment and storage medium
CN112577479A (en) * 2019-09-27 2021-03-30 北京初速度科技有限公司 Multi-sensor fusion vehicle positioning method and device based on map element data
CN112965076A (en) * 2021-01-28 2021-06-15 上海思岚科技有限公司 Multi-radar positioning system and method for robot
CN113074726A (en) * 2021-03-16 2021-07-06 深圳市慧鲤科技有限公司 Pose determination method and device, electronic equipment and storage medium
CN113203428A (en) * 2021-05-28 2021-08-03 拉扎斯网络科技(上海)有限公司 Mileage counting device, data counting method based on same and interface
CN113340313A (en) * 2020-02-18 2021-09-03 北京四维图新科技股份有限公司 Navigation map parameter determination method and device
CN113390411A (en) * 2021-06-10 2021-09-14 中国北方车辆研究所 Foot type robot navigation and positioning method based on variable configuration sensing device
CN113884098A (en) * 2021-10-15 2022-01-04 上海师范大学 Iterative Kalman filtering positioning method based on specific model
CN113899363A (en) * 2021-09-29 2022-01-07 北京百度网讯科技有限公司 Vehicle positioning method and device and automatic driving vehicle
CN114001742A (en) * 2021-10-21 2022-02-01 广州小鹏自动驾驶科技有限公司 Vehicle positioning method and device, vehicle and readable storage medium
CN114170320A (en) * 2021-10-29 2022-03-11 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion
CN114199275A (en) * 2020-09-18 2022-03-18 阿里巴巴集团控股有限公司 Parameter determination method and device for sensor

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080167814A1 (en) * 2006-12-01 2008-07-10 Supun Samarasekera Unified framework for precise vision-aided navigation
CN102768042A (en) * 2012-07-11 2012-11-07 清华大学 Visual-inertial combined navigation method
CN103940434A (en) * 2014-04-01 2014-07-23 西安交通大学 Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN104748727A (en) * 2013-12-31 2015-07-01 中国科学院沈阳自动化研究所 Array type high-speed visual odometer and realization method thereof
CN106406338A (en) * 2016-04-14 2017-02-15 中山大学 Omnidirectional mobile robot autonomous navigation apparatus and method based on laser range finder
CN106708037A (en) * 2016-12-05 2017-05-24 北京贝虎机器人技术有限公司 Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
CN107229063A (en) * 2017-06-26 2017-10-03 奇瑞汽车股份有限公司 A kind of pilotless automobile navigation and positioning accuracy antidote merged based on GNSS and visual odometry
CN107504969A (en) * 2017-07-24 2017-12-22 哈尔滨理工大学 Four rotor-wing indoor air navigation aids of view-based access control model and inertia combination
CN108759815A (en) * 2018-04-28 2018-11-06 温州大学激光与光电智能制造研究院 A kind of information in overall Vision localization method merges Combinated navigation method
CN109387198A (en) * 2017-08-03 2019-02-26 北京自动化控制设备研究所 A kind of inertia based on sequential detection/visual odometry Combinated navigation method
CN109887032A (en) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 A kind of vehicle positioning method and system based on monocular vision SLAM

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080167814A1 (en) * 2006-12-01 2008-07-10 Supun Samarasekera Unified framework for precise vision-aided navigation
CN102768042A (en) * 2012-07-11 2012-11-07 清华大学 Visual-inertial combined navigation method
CN104748727A (en) * 2013-12-31 2015-07-01 中国科学院沈阳自动化研究所 Array type high-speed visual odometer and realization method thereof
CN103940434A (en) * 2014-04-01 2014-07-23 西安交通大学 Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN106406338A (en) * 2016-04-14 2017-02-15 中山大学 Omnidirectional mobile robot autonomous navigation apparatus and method based on laser range finder
CN106708037A (en) * 2016-12-05 2017-05-24 北京贝虎机器人技术有限公司 Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
CN107229063A (en) * 2017-06-26 2017-10-03 奇瑞汽车股份有限公司 A kind of pilotless automobile navigation and positioning accuracy antidote merged based on GNSS and visual odometry
CN107504969A (en) * 2017-07-24 2017-12-22 哈尔滨理工大学 Four rotor-wing indoor air navigation aids of view-based access control model and inertia combination
CN109387198A (en) * 2017-08-03 2019-02-26 北京自动化控制设备研究所 A kind of inertia based on sequential detection/visual odometry Combinated navigation method
CN108759815A (en) * 2018-04-28 2018-11-06 温州大学激光与光电智能制造研究院 A kind of information in overall Vision localization method merges Combinated navigation method
CN109887032A (en) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 A kind of vehicle positioning method and system based on monocular vision SLAM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TURNER J. MONTGOMERY, MEIR PACHTER: "Visual-Aided INS Using Converted Measurements", IFAC-PapersOnLine *
ZHANG JIANYUE (张建越): "Research on Visual-Inertial SLAM Algorithms Based on Embedded Parallel Processing", China Master's Theses Full-text Database, Information Science and Technology *
RU XIANGYU (茹祥宇) ET AL.: "Application of a Monocular Visual-Inertial Fusion Method to UAV Pose Estimation", Control and Information Technology *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112577479A (en) * 2019-09-27 2021-03-30 北京初速度科技有限公司 Multi-sensor fusion vehicle positioning method and device based on map element data
CN112577479B (en) * 2019-09-27 2024-04-12 北京魔门塔科技有限公司 Multi-sensor fusion vehicle positioning method and device based on map element data
CN111127584A (en) * 2019-11-19 2020-05-08 奇点汽车研发中心有限公司 Method and device for establishing visual map, electronic equipment and storage medium
CN113340313A (en) * 2020-02-18 2021-09-03 北京四维图新科技股份有限公司 Navigation map parameter determination method and device
CN113340313B (en) * 2020-02-18 2024-04-16 北京四维图新科技股份有限公司 Navigation map parameter determining method and device
CN114199275A (en) * 2020-09-18 2022-03-18 阿里巴巴集团控股有限公司 Parameter determination method and device for sensor
CN112965076A (en) * 2021-01-28 2021-06-15 上海思岚科技有限公司 Multi-radar positioning system and method for robot
CN112965076B (en) * 2021-01-28 2024-05-24 上海思岚科技有限公司 Multi-radar positioning system and method for robot
CN113074726A (en) * 2021-03-16 2021-07-06 深圳市慧鲤科技有限公司 Pose determination method and device, electronic equipment and storage medium
CN113203428A (en) * 2021-05-28 2021-08-03 拉扎斯网络科技(上海)有限公司 Mileage counting device, data counting method based on same and interface
CN113390411A (en) * 2021-06-10 2021-09-14 中国北方车辆研究所 Foot type robot navigation and positioning method based on variable configuration sensing device
CN113899363A (en) * 2021-09-29 2022-01-07 北京百度网讯科技有限公司 Vehicle positioning method and device and automatic driving vehicle
US11953609B2 (en) 2021-09-29 2024-04-09 Beijing Baidu Netcom Science Technology Co., Ltd. Vehicle positioning method, apparatus and autonomous driving vehicle
CN113884098B (en) * 2021-10-15 2024-01-23 上海师范大学 Iterative Kalman filtering positioning method based on materialization model
CN113884098A (en) * 2021-10-15 2022-01-04 上海师范大学 Iterative Kalman filtering positioning method based on specific model
CN114001742A (en) * 2021-10-21 2022-02-01 广州小鹏自动驾驶科技有限公司 Vehicle positioning method and device, vehicle and readable storage medium
CN114001742B (en) * 2021-10-21 2024-06-04 广州小鹏自动驾驶科技有限公司 Vehicle positioning method, device, vehicle and readable storage medium
CN114170320A (en) * 2021-10-29 2022-03-11 广西大学 Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion

Also Published As

Publication number Publication date
CN110207714B (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN110207714A (en) A kind of method, onboard system and the vehicle of determining vehicle pose
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN109887032B (en) Monocular vision SLAM-based vehicle positioning method and system
CN109887057B (en) Method and device for generating high-precision map
CN112197770B (en) Robot positioning method and positioning device thereof
CN109885080B (en) Autonomous control system and autonomous control method
CN107478220B (en) Unmanned aerial vehicle indoor navigation method and device, unmanned aerial vehicle and storage medium
EP3864623A1 (en) System, device and method of generating a high resolution and high accuracy point cloud
CN110100190A (en) System and method for using the sliding window of global location epoch in vision inertia ranging
CN112074875B (en) Group optimization depth information method and system for constructing 3D feature map
JP2020528994A (en) Vehicle navigation system using attitude estimation based on point cloud
JP2021508814A (en) Vehicle positioning system using LiDAR
CN113678079A (en) Generating structured map data from vehicle sensors and camera arrays
CN111263960B (en) Apparatus and method for updating high definition map
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
KR102219843B1 (en) Estimating location method and apparatus for autonomous driving
US20210183100A1 (en) Data processing method and apparatus
EP2175237B1 (en) System and methods for image-based navigation using line features matching
KR101909953B1 (en) Method for vehicle pose estimation using LiDAR
CN105844692A (en) Binocular stereoscopic vision based 3D reconstruction device, method, system and UAV
CN112212852A (en) Positioning method, mobile device and storage medium
CN114638897B (en) Multi-camera system initialization method, system and device based on non-overlapping views
CN109490931A (en) Flight localization method, device and unmanned plane
Andert et al. Optical-aided aircraft navigation using decoupled visual SLAM with range sensor augmentation
CN113503873A (en) Multi-sensor fusion visual positioning method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210106

Address after: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000

Applicant after: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Address before: Room 245, No. 333, jiufo Jianshe Road, Zhongxin Guangzhou Knowledge City, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190906

Assignee: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

Assignor: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Contract record no.: X2021440000219

Denomination of invention: A method for determining vehicle posture, on-board system and vehicle

Granted publication date: 20210119

License type: Common License

Record date: 20211220

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240228

Address after: 510000 No.8 Songgang street, Cencun, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU XIAOPENG MOTORS TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000

Patentee before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.

Country or region before: China