CN116543138A - Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip

Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip

Info

Publication number
CN116543138A
CN116543138A (application CN202310463116.8A)
Authority
CN
China
Prior art keywords
vehicle
information
lane
tunnel
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310463116.8A
Other languages
Chinese (zh)
Inventor
胡东阳
刘峰学
王爱春
黄少堂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangling Motors Corp Ltd
Original Assignee
Jiangling Motors Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangling Motors Corp Ltd filed Critical Jiangling Motors Corp Ltd
Priority to CN202310463116.8A
Publication of CN116543138A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V10/40 Extraction of image or video features
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention relates to the technical field of vehicle positioning, and in particular discloses a vehicle tunnel positioning method and device, a vehicle, a readable storage medium and a chip. The method comprises the following steps: when the vehicle determines, according to its acquired current pose information, that it is about to enter a tunnel, it pre-stores the environmental parameter information of the tunnel and saves the current position as initial positioning information; current lane image information is then acquired and combined with the current pose information of the vehicle to judge whether the vehicle has changed lanes, and if so, the longitudinal position of the vehicle is corrected in real time. The method fully considers the road surface conditions of the lanes in the tunnel and achieves accurate positioning after the vehicle enters the tunnel, including in special tunnels where vehicles need to change lanes.

Description

Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip
Technical Field
The invention relates to the technical field of vehicle positioning, in particular to a vehicle tunnel positioning method, a vehicle tunnel positioning device, a vehicle, a readable storage medium and a chip.
Background
Advances in science and technology have driven continuous progress in positioning technology, and that progress is closely tied to people's demand for high-precision position information. Outdoor vehicle positioning generally relies on a Global Navigation Satellite System (GNSS) and real-time kinematic (RTK) carrier-phase differential positioning. In tunnels, however, GNSS and RTK signals are frequently lost, sometimes for long periods, so the vehicle cannot be positioned accurately in real time and positioning errors become large. A high-precision tunnel positioning method is therefore needed to solve these problems in the prior art.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. To this end, the invention provides a vehicle tunnel positioning method, a vehicle tunnel positioning device, a vehicle, a readable storage medium and a chip.
According to an embodiment of the first aspect of the present invention, there is provided a vehicle tunnel positioning method, including:
judging whether a vehicle is about to enter a tunnel, if so, pre-storing environmental parameter information in the tunnel by the vehicle and storing current positioning information as initial positioning information;
Acquiring current pose information of a vehicle, wherein the current pose information comprises current position information and current running pose information, and the current running pose information comprises acceleration magnitude and direction, running direction and wheel speed information of the vehicle;
correcting the positioning information of the current vehicle at intervals of a first preset time according to the acquired current pose information of the vehicle;
wherein correcting the positioning information of the current vehicle at intervals of the first preset time according to the acquired current pose information of the vehicle comprises the following steps:
acquiring current lane image information of the vehicle at intervals of a second preset time, fitting a lane simulation curve according to a trained image segmentation model, judging whether the vehicle has a lane change according to the fitted lane simulation curve, and correcting the lane position where the vehicle is transversely located;
the lane simulation curve is expressed as:
Y = aX^6 + bX^5 + cX^4 + dX^3 + eX^2 + fX
wherein X and Y respectively represent the spatial abscissa and ordinate of the target lane, and a, b, c, d, e and f respectively represent the sixth-order, fifth-order, fourth-order, third-order, second-order and first-order coefficients of the sixth-order polynomial;
and acquiring the environmental parameter information in the tunnel corresponding to the position of the vehicle at intervals of a third preset time, and correcting the position of the vehicle in the longitudinal running direction according to the acquired current pose information of the vehicle.
Optionally, the pre-storing the environmental parameter information in the tunnel by the vehicle specifically includes: and the vehicle pre-stores gradient information, curvature information and steering information of each lane in the tunnel.
Optionally, calling the environmental parameter information in the tunnel corresponding to the position of the vehicle at intervals of the third preset time, and correcting the position of the vehicle in the longitudinal running direction according to the obtained current pose information of the vehicle, includes:
acquiring current pose information of the vehicle at intervals of fourth preset time;
according to the current pose information, gradient information, curvature information and steering information of each lane in the tunnel corresponding to the current position are called;
correcting the obtained current pose information according to the gradient information, curvature information and steering information of each lane in the pre-stored tunnel corresponding to the current position, to obtain predicted pose information;
and according to the predicted pose information, the vehicle retrieves and updates the pre-stored in-tunnel environmental parameter information in real time;
optionally, according to the predicted pose information, the vehicle real-time retrieving and updating the pre-stored environmental parameter information in the tunnel includes:
determining new position information of the vehicle according to the obtained predicted pose information, wherein the new position information comprises: absolute position, relative position, accuracy radius, and confidence information;
And according to the new position information, the vehicle retrieves and updates the environment parameter information in the pre-stored tunnel.
Optionally, acquiring the current lane image information of the vehicle at intervals of the second preset time, fitting a lane simulation curve according to the trained image segmentation model, judging whether the vehicle has changed lanes according to the fitted lane simulation curve, and correcting the lane position where the vehicle is transversely located, includes:
performing binarization semantic segmentation on the current lane image according to the trained image segmentation model to obtain lane line elements and background elements in the lane image;
acquiring coordinate information of all real feature points contained in the lane line according to the lane line elements, and projecting the lane image according to the coordinate information of all the real feature points and a preset conversion matrix to obtain a top-view image corresponding to the lane image;
and according to the obtained top-view image corresponding to the lane image, the vehicle retrieves the pre-stored in-tunnel environmental parameter information corresponding to the current position and corrects the lane position where the vehicle is transversely located.
Optionally, obtaining the coordinate information of all the real feature points contained in the lane line according to the lane line elements, projecting the lane image according to the coordinate information of all the real feature points and the preset conversion matrix, and obtaining the top-view image corresponding to the lane image includes:
The lane image is projected according to the following formula:
[x', y', w']^T = A · [u, v, w]^T, where A = (a_ij), i = 1, 2, 3, j = 1, 2, 3
wherein x', y' and w' respectively represent the homogeneous coordinates of a projected feature point in the top-view image, u, v and w respectively represent the homogeneous coordinates of the corresponding feature point in the lane image, and a_ij represents a transformation parameter in the preset conversion matrix.
When the vehicle judges that it is about to enter a tunnel, it pre-stores the in-tunnel environmental parameter information and saves the current positioning information as initial positioning information; by acquiring the current lane image information and the current pose information of the vehicle it judges whether the vehicle has changed lanes, and if so, the longitudinal position information of the vehicle is corrected in real time. The vehicle tunnel positioning method fully considers the road surface conditions of the lanes in the tunnel and achieves accurate positioning after the vehicle enters the tunnel in the absence of GNSS and RTK signals, including in special tunnels where vehicles need to change lanes.
In order to achieve the above object, according to a second aspect of the embodiments of the present invention, there is provided a vehicle tunnel positioning device, including:
a first acquisition module, configured to acquire current pose information of the vehicle, wherein the current pose information comprises current position information and current running pose information, and the current running pose information comprises the acceleration magnitude and direction, the running direction and the wheel speed information of the vehicle;
the second acquisition module is configured to acquire environmental parameter information in a tunnel corresponding to the current position through the vehicle system call;
the information processing module is configured to correct the position information of the vehicle in the tunnel according to the current pose information of the vehicle acquired by the first acquisition module and the environment parameter information in the tunnel corresponding to the current position acquired by the second acquisition module;
the third acquisition module is configured to acquire the current regional driving information and the lane image information of the vehicle through the camera module;
a determining module, configured to determine the lane in which the vehicle is located according to the current regional driving information and the lane image information acquired by the third acquisition module;
and a fourth acquisition module, configured to retrieve and update the in-tunnel environmental parameter information pre-stored by the vehicle according to the vehicle position information in the tunnel corrected by the information processing module.
In order to achieve the above object, according to a third aspect of the embodiments of the present invention, there is provided a vehicle including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
implement the steps of the vehicle tunnel positioning method provided by the embodiment of the first aspect of the present invention.
To achieve the above object, according to a fourth aspect of the embodiments of the present invention, there is provided a non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of a vehicle tunnel positioning method provided by the embodiments of the first aspect of the present invention.
To achieve the above object, according to a fifth aspect of the embodiments of the present invention, there is provided a chip including a processor and an interface; the processor is configured to read instructions to perform the steps of the vehicle tunnel positioning method provided by the embodiment of the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart illustrating a method of locating a position in a vehicle tunnel according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of locating a position in a vehicle tunnel according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating a method of locating a position in a vehicle tunnel, according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method of locating a position in a vehicle tunnel, according to an exemplary embodiment;
FIG. 5 is a block diagram of a vehicle in-tunnel locating device, according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating an apparatus for vehicle in-tunnel positioning according to an exemplary embodiment;
FIG. 7 is a functional block diagram of a vehicle, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terms "first," second, "" third and the like in the description and in the claims and drawings are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprising," "including," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a series of steps or elements may be included, or alternatively, steps or elements not listed or, alternatively, other steps or elements inherent to such process, method, article, or apparatus may be included.
Only some, but not all, of the matters relevant to the present application are shown in the accompanying drawings. Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
As used in this specification, the terms "component," "module," "system," "unit," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a unit may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or being distributed between two or more computers. Furthermore, these units may be implemented from a variety of computer-readable media having various data structures stored thereon. The units may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., second unit data from another unit interacting with a local system, distributed system, and/or across a network).
Embodiment 1
Fig. 1 is a flowchart of a vehicle tunnel positioning method according to an exemplary embodiment. The method is applied to an electronic device with processing capability, such as an in-vehicle processor or controller. As shown in Fig. 1, the method comprises the following steps:
In step S100, it is judged whether the vehicle is about to enter a tunnel; if so, the vehicle pre-stores the environmental parameter information of the tunnel and saves the current positioning information as initial positioning information.
In this step, whether the vehicle is about to enter the tunnel can be determined by judging whether the GNSS and RTK signals of the vehicle are lost, or by identifying map data information cached by the vehicle-mounted GPS. When it is determined that the vehicle is about to enter the tunnel, the vehicle caches, through its communication module, the in-tunnel environmental parameter information of the corresponding tunnel from the provider's cloud service. In some embodiments, the in-tunnel environmental parameter information includes a tunnel model image, the length of the tunnel, the number of lanes in the tunnel, the width of each lane, and the gradient, curvature and steering information of each lane; the vehicle's central control display then shows a high-precision map of the current position.
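For illustration, a minimal Python sketch (all type and function names are assumptions, not from the patent) of how the pre-stored in-tunnel environment parameters and the tunnel-entry check described above might be organized:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LaneParams:
    """Per-lane parameters pre-stored for one tunnel segment."""
    width_m: float          # lane width
    gradient_pct: float     # signed gradient, e.g. -3.0 means 3% downhill
    curvature: float        # 1/m, 0 for a straight segment
    steering: str           # "left", "right" or "straight"

@dataclass
class TunnelSegment:
    start_s: float          # longitudinal offset from the tunnel entrance (m)
    end_s: float
    lanes: List[LaneParams]

@dataclass
class TunnelEnvironment:
    tunnel_id: str
    length_m: float
    segments: List[TunnelSegment] = field(default_factory=list)

def about_to_enter_tunnel(gnss_ok: bool, rtk_ok: bool,
                          map_says_tunnel_ahead: bool) -> bool:
    """Step S100: cached map data or signal loss indicates an upcoming tunnel."""
    return map_says_tunnel_ahead or not (gnss_ok and rtk_ok)

def prestore_tunnel_environment(tunnel_id: str) -> TunnelEnvironment:
    """Placeholder for the cloud call that caches the tunnel parameters locally."""
    # In a real vehicle this would query the map provider's cloud service.
    return TunnelEnvironment(tunnel_id=tunnel_id, length_m=1200.0)
```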
In step S200, current pose information of a vehicle is obtained, wherein the current pose information includes current position information and current running pose information, and the current running pose information includes acceleration magnitude and direction, running direction and wheel speed information of the vehicle;
In this step, the current pose information may include current position information and current running pose information. The current running pose information comprises the acceleration magnitude and direction, the running direction and the wheel speed information of the vehicle, and may further comprise at least one of the current pitch angle, current steering information and current track curvature.
In some embodiments, when satellite signals are available, GPS, the Beidou positioning system, carrier-phase differential (RTK) techniques and the like can be used to acquire the current position information of the vehicle in real time. Inertial sensors mounted on the vehicle can be used to acquire the pitch angle of the vehicle in real time. The steering information may include left turn, right turn and straight driving; specifically, the current steering information of the vehicle can be estimated from the heading of the vehicle and the wheel speed count data acquired by wheel speed counters mounted on the wheels. Meanwhile, the slope between the current position coordinates and the vehicle position coordinates acquired at the previous moment can be used as the current track curvature.
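As a rough illustration of the last two points, here is a hedged Python sketch (function names, the dead-band threshold and the differential wheel-speed formula are assumptions added for illustration, not taken from the patent) that classifies the steering state from left/right wheel speeds and computes the slope-based track-curvature proxy described above:

```python
def estimate_yaw_rate(v_left: float, v_right: float, track_width_m: float) -> float:
    """Differential wheel-speed estimate of the yaw rate (rad/s); speeds in m/s."""
    return (v_right - v_left) / track_width_m

def classify_steering(yaw_rate: float, dead_band: float = 0.02) -> str:
    """Map the estimated yaw rate to the coarse steering states mentioned above."""
    if yaw_rate > dead_band:
        return "left turn"
    if yaw_rate < -dead_band:
        return "right turn"
    return "straight"

def track_curvature_proxy(prev_xy: tuple, curr_xy: tuple) -> float:
    """Patent's proxy for track curvature: the slope between two consecutive fixes."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    if abs(dx) < 1e-6:
        return float("inf")  # near-vertical step in the map frame; limit case
    return dy / dx
```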
In step S300, the positioning information of the vehicle is corrected at intervals of the first preset time according to the obtained current pose information of the vehicle.
In this step, it should be noted that in step S100 the vehicle has cached the in-tunnel environmental parameter information of the corresponding tunnel, including the tunnel model image, the length of the tunnel, the number of lanes, the width of each lane, and the gradient, curvature and steering information of each lane. These data are retrieved in real time by the vehicle's micro control unit according to the current position information obtained by the vehicle. Then, according to the current pose information fed back by the vehicle's sensor elements and actuators, the current vehicle position information is corrected by jointly considering the environmental information around the vehicle and the vehicle's own dynamic data. The correction covers the vehicle position along the driving direction, whether the vehicle has changed lanes, and the lateral position; the corrected position information is used as the current vehicle positioning information and is updated and displayed in real time on the central control display screen.
It should also be noted that in general a vehicle cannot change lanes inside a tunnel, but in some special situations, for example just before entering or after leaving the tunnel, or in tunnels where lane changes are allowed, ignoring the lane-change information of the vehicle would increase its positioning error in the tunnel. To position the vehicle more accurately, in this step the vehicle acquires lane image information of the current position through the camera module and, combining the image information with a preset lane-change algorithm, corrects the lane change and the lateral position of the vehicle.
According to the vehicle tunnel positioning method, when the vehicle judges that it is about to enter the tunnel, it pre-stores the in-tunnel environmental parameter information and saves the current positioning information as initial positioning information, and judges whether the vehicle has changed lanes by acquiring the current lane image information and the current pose information of the vehicle. The gradient, curvature and steering information of each lane in the pre-stored tunnel corresponding to the current position is retrieved and combined with the vehicle's dynamic data to correct the position of the vehicle on the lane in real time. The method fully considers the road surface conditions of the lanes in the tunnel and achieves accurate positioning after the vehicle enters the tunnel, even without GNSS or RTK signals and in special tunnels where vehicles need to change lanes.
Embodiment 2
Fig. 2 is a flowchart showing a vehicle tunnel positioning method according to an exemplary embodiment, which further describes the vehicle tunnel positioning method on the basis of embodiment 1, and as shown in fig. 2, the vehicle tunnel positioning method further includes the steps of:
in step S310, obtaining current lane image information of the vehicle at intervals of a second preset time, fitting a lane simulation curve according to the trained image segmentation model, judging whether the vehicle has a lane change according to the fitted lane simulation curve, and correcting the lane position where the vehicle is transversely located;
In this step, the second preset time can be set by software and may be on the order of milliseconds, or even shorter. The current lane image information of the vehicle is captured and read by camera modules installed around the vehicle. It should be noted that the second preset time is a timing instruction set by the vehicle's micro control unit, which drives the camera modules at short, continuous intervals; when a camera module captures the current environmental image information of the target vehicle, it feeds the information back to the micro control unit for processing. The micro control unit stores the corresponding preset program control instructions.
the lane simulation curve is expressed as:
Y = aX^6 + bX^5 + cX^4 + dX^3 + eX^2 + fX
wherein X and Y respectively represent the spatial abscissa and ordinate of the target lane, and a, b, c, d, e and f respectively represent the sixth-order, fifth-order, fourth-order, third-order, second-order and first-order coefficients of the sixth-order polynomial;
in some embodiments, the distance between the vehicle and the left lane line and the right lane line is estimated by obtaining a sixth order polynomial of the left lane and the right lane in the image, and then the real-time transverse position of the vehicle lane changing process is judged when the vehicle changes lanes. Because the image shot by the camera can be actually considered as a two-dimensional coordinate grid, a plurality of points can be obtained by connecting the grids where the lane lines are positioned, and the points are fitted by an algorithm to form a six-degree polynomial.
In some embodiments, the position of a lane line is determined by assuming the form of the sixth-order polynomial and generating the parameters a, b, c, d, e and f. The correspondence between image-space coordinates and real-space coordinates is obtained from the camera, and the parameters are then mapped to the distances between the vehicle and the lane lines on both sides, giving f1(a, b, c, d, e) as the distance between the target vehicle and the left lane line and f2(a, b, c, d, e) as the distance between the target vehicle and the right lane line, where f1 and f2 denote the left and right distances obtained from the mapping between image-space and real-space coordinates under the sixth-order polynomial parameterization.
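To make the fitting step concrete, the following is a hedged Python/NumPy sketch (the least-squares fit, the zero-intercept design matrix and all names are illustration assumptions; the patent does not specify how the coefficients are mapped to the distances f1 and f2):

```python
import numpy as np

def fit_lane_polynomial(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Fit Y = a*X^6 + b*X^5 + ... + f*X (no constant term, as stated above)."""
    design = np.vstack([xs**k for k in range(6, 0, -1)]).T   # columns X^6 .. X^1
    coeffs, *_ = np.linalg.lstsq(design, ys, rcond=None)
    return coeffs                                            # [a, b, c, d, e, f]

def lateral_offset_at(coeffs: np.ndarray, x: float) -> float:
    """Evaluate the fitted lane-line curve at longitudinal position x."""
    return float(sum(c * x**k for c, k in zip(coeffs, range(6, 0, -1))))

# Hypothetical usage: left_pts / right_pts are (N, 2) arrays of lane-line grid
# points extracted from the top-view image; x0 is a short look-ahead distance.
# left_c  = fit_lane_polynomial(left_pts[:, 0],  left_pts[:, 1])
# right_c = fit_lane_polynomial(right_pts[:, 0], right_pts[:, 1])
# f1 = abs(lateral_offset_at(left_c,  x0))   # distance to the left lane line
# f2 = abs(lateral_offset_at(right_c, x0))   # distance to the right lane line
```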
Further, after the left and right distances are obtained, the lane-change direction of the vehicle is judged. The lane-change direction can be judged and predicted by reading the steering information of the steering wheel, and the lane-change distance of the vehicle is calculated by estimating its lateral velocity and lateral acceleration. The calculated lane-change distance is then compared with the left and right distances obtained from the image, the lateral lane-change position of the vehicle is updated according to the comparison result, and the positioning information of the vehicle is updated according to its current lateral position. It should be noted that when the vehicle does not change lanes in the tunnel, the lateral displacement of the vehicle within the same lane can also be obtained accurately through this step, which further improves the positioning accuracy of the vehicle in the tunnel.
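A hedged sketch of the comparison logic just described (the margin, the simple kinematics and all names are assumptions made for illustration):

```python
def lateral_displacement(v_lat: float, a_lat: float, dt: float) -> float:
    """Dead-reckoned lateral travel over one correction interval dt (seconds)."""
    return v_lat * dt + 0.5 * a_lat * dt**2

def detect_lane_change(d_left: float, d_right: float,
                       steer_direction: str, dx_lat: float,
                       margin: float = 0.3) -> str:
    """Compare the dead-reckoned lateral travel dx_lat with the image-based
    distances to the left (f1) and right (f2) lane lines; return the detected
    lane-change direction, or 'none' when the vehicle stays in its lane."""
    if steer_direction == "left turn" and dx_lat > d_left - margin:
        return "left"
    if steer_direction == "right turn" and dx_lat > d_right - margin:
        return "right"
    # No lane change: d_left / d_right still give the in-lane lateral offset.
    return "none"
```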
In step S320, the environmental parameter information in the tunnel corresponding to the position of the vehicle is called every third preset time, and the position of the vehicle in the longitudinal running direction is corrected according to the obtained current pose information of the vehicle.
In this step, the third preset time can be set by the micro control unit, which reads and retrieves the pre-stored in-tunnel environmental parameter information from the vehicle memory at that interval. It should be noted that the information read is the portion matching the current position information of the vehicle, not the complete in-tunnel environmental parameter set. Specifically, the in-tunnel environmental parameter information includes the tunnel model image, the length of the tunnel, the number of lanes in the tunnel, the width of each lane, and the gradient, curvature and steering information of each lane. Combining this information with the current pose information acquired through the sensors or actuators, the micro control unit accurately corrects the current vehicle position in real time, further improving the vehicle positioning accuracy.
In some embodiments, particularly when the current lane has a certain gradient, the gradient can be graded to reduce the computational load of the processor. Each grade can be expressed as a percentage, for example 10%, 20%, 30%, 40% and so on at equal intervals. When the in-tunnel environmental parameter information of the current position is read, the percentage grade of the lane gradient is read directly from the pre-stored information, and the speed-reduction value corresponding to each preset percentage is then read; the reduction values can be defined from actual measurements. After the current true vehicle speed and acceleration are acquired, they are processed a second time together with the obtained speed-reduction value, so that the current position of the vehicle is located accurately.
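An illustrative Python sketch of this gradient-graded longitudinal correction (the grade-to-reduction table and the kinematic update are assumptions for illustration, not values from the patent; in practice the reduction values would come from actual measurements):

```python
# Hypothetical lookup: gradient grade (%) -> speed-reduction value (m/s).
SPEED_REDUCTION_BY_GRADE = {10: 0.4, 20: 0.9, 30: 1.5, 40: 2.2}

def graded(gradient_pct: float) -> int:
    """Snap the pre-stored gradient to the nearest 10% grade, clamped to 10-40%."""
    return int(min(40, max(10, round(abs(gradient_pct) / 10.0) * 10)))

def corrected_longitudinal_step(v_meas: float, a_meas: float,
                                gradient_pct: float, dt: float) -> float:
    """Distance travelled along the lane during dt, after the gradient correction."""
    v_eff = max(0.0, v_meas - SPEED_REDUCTION_BY_GRADE[graded(gradient_pct)])
    return v_eff * dt + 0.5 * a_meas * dt**2
```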
Embodiment 3
Fig. 3 is a flowchart illustrating a vehicle tunnel positioning method according to an exemplary embodiment, which further describes a method for fitting a lane simulation curve according to a trained image segmentation model, based on embodiment 2, as shown in fig. 3, and includes the following steps:
in step S311, performing binarization semantic segmentation on the current lane image according to the trained image segmentation model to obtain lane line elements and background elements in the lane image;
in the step, after a lane image is acquired, a microprocessor control unit performs binary semantic segmentation on the lane image by adopting a trained image segmentation model so as to obtain lane line elements and background elements in the lane image;
in step S312, the coordinate information of all the real feature points included in the lane line is obtained according to the lane line elements, so as to project the lane image according to the coordinate information of all the real feature points and a preset conversion matrix, and obtain a top view image corresponding to the lane image;
In this step, the coordinate information of all real feature points contained in the lane line is obtained from the lane line elements, the lane image is projected according to this coordinate information and the preset conversion matrix to obtain the top-view image corresponding to the lane image, and the lane line is then fitted over five frames of images according to the coordinate information of all the projected feature points contained in the top-view image, giving the lane simulation curve.
In some embodiments, the lane image is projected according to the following formula:
[x', y', w']^T = A · [u, v, w]^T, with A = (a_ij), i = 1, 2, 3, j = 1, 2, 3
wherein x', y' and w' respectively represent the homogeneous coordinates of a projected feature point in the top-view image, u, v and w respectively represent the homogeneous coordinates of the same feature point in the lane image, and a_ij represents a transformation parameter in the preset conversion matrix.
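A small Python/NumPy sketch of this projection (the example matrix values in the usage comment are placeholders, not calibration data from the patent):

```python
import numpy as np

def project_to_top_view(points_uv: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Apply the 3x3 conversion matrix A to lane-image points and dehomogenize.

    points_uv: (N, 2) pixel coordinates (u, v) of lane-line feature points.
    Returns (N, 2) coordinates (x'/w', y'/w') in the top-view image.
    """
    uvw = np.hstack([points_uv, np.ones((points_uv.shape[0], 1))])  # (u, v, w=1)
    xyw = uvw @ A.T                                                 # (x', y', w')
    return xyw[:, :2] / xyw[:, 2:3]

# Hypothetical usage with a placeholder conversion matrix A = (a_ij):
# A = np.array([[1.0, 0.2,  -30.0],
#               [0.0, 1.5,  -80.0],
#               [0.0, 0.002,  1.0]])
# top_view_pts = project_to_top_view(lane_pixel_pts, A)
```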
In step S313, according to the obtained top-view image corresponding to the lane image, the vehicle retrieves the pre-stored in-tunnel environmental parameter information corresponding to the current position and corrects the lane position where the vehicle is transversely located.
In this step, according to the top-view image corresponding to the actual lane image and combined with the in-tunnel environmental parameter information cached by the vehicle, the lateral position of the vehicle on the lane is obtained and the lane position where the vehicle is transversely located is corrected, so that a vehicle moving laterally can also be positioned accurately.
Embodiment 4
FIG. 4 is a flowchart illustrating a vehicle tunnel positioning method according to an exemplary embodiment; as shown in FIG. 4, the method comprises the following steps:
in step S321, current pose information of the vehicle is obtained at intervals of a fourth preset time;
In the step, the fourth preset time can be set according to a program, and the micro-processing control unit is used for controlling the sensor element and the actuator to acquire the current pose information of the vehicle at intervals of set time so as to accurately position the vehicle;
in step S322, according to the current pose information, gradient information, curvature information and steering information of each lane in the pre-stored tunnel corresponding to the current position are called;
In this step, it should be noted that the gradient information, curvature information and steering information of each lane in the pre-stored tunnel corresponding to the current position is the lane information corresponding to the current position of the vehicle; by comprehensively considering the various lane information, positioning accuracy during driving is improved;
in step S323, according to the gradient information, the curvature information and the steering information of each lane in the pre-stored tunnel corresponding to the current position, correcting the obtained dynamic data information to obtain predicted pose information;
In this step, it should be noted that the dynamic data information acquired by the vehicle's micro control unit describes the idealized motion of the vehicle. To position the vehicle accurately during actual driving, the actual conditions of the road, such as the gradient information, curvature information and steering information of the lane, must be considered; the obtained dynamic data information is corrected with these actual lane conditions to obtain the predicted pose information, so that processing the position information according to the predicted pose information positions the vehicle more accurately;
In step S324, new position information of the vehicle is determined according to the predicted pose information, where the new position information includes: absolute position, relative position, accuracy radius, and confidence information;
In this step, it should be noted that because the running process of the vehicle is dynamic, i.e. the external environment of the vehicle changes continuously, the corresponding pre-stored in-tunnel environmental parameter information also changes continuously. Therefore, when the vehicle is positioned at the current position, the parameter information corresponding to the new environmental position the vehicle is about to enter needs to be processed and retrieved in advance in order to improve positioning accuracy; the new position information refers to the next position the vehicle is about to enter.
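For concreteness, a minimal Python sketch (field names and the pre-fetch rule are assumptions) of the new position information defined in step S324 and how it might trigger retrieval of the next segment's parameters:

```python
from dataclasses import dataclass

@dataclass
class NewPositionInfo:
    """Predicted position used to pre-fetch the next tunnel segment's parameters."""
    abs_x: float            # absolute position, e.g. map-frame easting (m)
    abs_y: float            # absolute position, e.g. map-frame northing (m)
    rel_s: float            # relative longitudinal offset from the tunnel entrance (m)
    rel_lane: int           # relative position: index of the occupied lane
    accuracy_radius: float  # estimated positioning error radius (m)
    confidence: float       # confidence in [0, 1] attached to this estimate

def should_prefetch(pos: NewPositionInfo, segment_end_s: float,
                    lookahead_m: float = 50.0) -> bool:
    """Retrieve and update the pre-stored parameters shortly before the next segment."""
    return pos.rel_s + pos.accuracy_radius + lookahead_m >= segment_end_s
```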
In step S325, the vehicle retrieves and updates the pre-stored tunnel environmental parameter information according to the new location information.
In this step, it should be noted that when the vehicle retrieves and updates the pre-stored in-tunnel environmental parameter information, it has already completed the positioning of the current position; this step can be understood as retrieving the new position information and the corresponding new pre-stored in-tunnel environmental parameter information from memory and placing them into the buffer stack of the micro control unit.
Embodiment 5
Based on the same inventive concept, the present disclosure also provides a vehicle tunnel positioning device. Fig. 5 is a block diagram of a vehicle in-tunnel positioning device according to an exemplary embodiment; as shown in Fig. 5, the vehicle tunnel positioning device 300 includes:
a first acquisition module, configured to acquire current pose information of the vehicle, wherein the current pose information comprises current position information and current running pose information, and the current running pose information comprises the acceleration magnitude and direction, the running direction and the wheel speed information of the vehicle;
the second acquisition module is configured to acquire environmental parameter information in a tunnel corresponding to the current position through the vehicle system call;
the information processing module is configured to correct the position information of the vehicle in the tunnel according to the current pose information of the vehicle acquired by the first acquisition module and the environment parameter information in the tunnel corresponding to the current position acquired by the second acquisition module;
the third acquisition module is configured to acquire the current regional driving information and the lane image information of the vehicle through the camera module;
a determining module, configured to determine the lane in which the vehicle is located according to the current regional driving information and the lane image information acquired by the third acquisition module;
and a fourth acquisition module, configured to retrieve and update the in-tunnel environmental parameter information pre-stored by the vehicle according to the vehicle position information in the tunnel corrected by the information processing module.
Embodiment 6
Based on the same inventive concept, the present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the vehicle tunnel locating method provided by the present disclosure.
Embodiment 7
FIG. 6 is a block diagram illustrating an apparatus for vehicle in-tunnel positioning, as shown in FIG. 6, the vehicle 400 may include one or more of the following components: a processing component 402, a power component 404, a multimedia component 406, an audio component 408, a memory 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls overall operation of the vehicle 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or part of the steps of the above-described vehicle tunnel positioning method. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 406 and the processing component 402.
The power components 404 provide power to the various components of the vehicle 400. The power components 404 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the vehicle 400.
The multimedia component 406 includes a screen between the vehicle 400 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 406 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the vehicle 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 408 is configured to output and/or input audio signals. For example, the audio component 408 includes a Microphone (MIC) configured to receive external audio signals when the vehicle 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 410 or transmitted via the communication component 416. In some embodiments, the audio component 408 further comprises a speaker for outputting audio signals.
The memory 410 is configured to store various types of data to support operation at the vehicle 400. Examples of such data include instructions for any application or method operating on the vehicle 400, contact data, phonebook data, messages, pictures, videos, and the like. The memory 410 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 414 includes one or more sensors for providing status assessment of various aspects of the vehicle 400. For example, the sensor assembly 414 may detect the on/off state of the vehicle 400 and the relative positioning of components such as the display and keypad of the vehicle 400; the sensor assembly 414 may also detect a change in position of the vehicle 400 or of a component of the vehicle 400, the presence or absence of user contact with the vehicle 400, the orientation or acceleration/deceleration of the vehicle 400, and a change in temperature of the vehicle 400. The sensor assembly 414 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 414 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate communication between the vehicle 400 and other devices in a wired or wireless manner. The vehicle 400 may access a wireless network based on a communication standard, such as WiFi,4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 416 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, vehicle 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for performing the vehicle tunnel positioning method described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 410 including instructions, which are executable by the processor 420 of the vehicle 400 to perform the above-described vehicle tunnel positioning method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
Embodiment 8
Referring to fig. 7, fig. 7 is a functional block diagram of a vehicle according to an exemplary embodiment; the vehicle 600 may be configured in a fully or partially autonomous mode. For example, the vehicle 600 may obtain environmental information of its surroundings through the perception system 620 and derive an automatic driving strategy based on analysis of the surrounding environmental information to achieve full automatic driving, or present the analysis results to the user to achieve partial automatic driving.
The vehicle 600 may include various subsystems, such as an infotainment system 610, a perception system 620, a decision control system 630, a drive system 640, and a computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the subsystems and components of vehicle 600 may be interconnected via wires or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system, which may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EV-DO or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth or ZigBee. Other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system may include one or more dedicated short-range communication (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone and a speaker. Based on the entertainment system, a user can listen to the radio or play music in the vehicle; alternatively, a mobile phone can communicate with the vehicle and mirror its screen onto the display device. The display device may be a touch screen, which the user can operate by touch.
In some cases, the user's voice signal may be acquired through a microphone and certain controls of the vehicle 600 by the user may be implemented based on analysis of the user's voice signal, such as adjusting the temperature within the vehicle, etc. In other cases, music may be played to the user through sound.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a travel route for the vehicle 600, and the navigation system 613 may be used with the global positioning system 621 and the inertial measurement unit 622 of the vehicle. The map service provided by the map provider may be a two-dimensional map or a high-precision map.
The perception system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, sensing system 620 may include a global positioning system 621 (which may be a GPS system, or may be a beidou system, or other positioning system), an inertial measurement unit (inertial measurement unit, IMU) 622, a lidar 623, a millimeter wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors (e.g., in-vehicle air quality monitors, fuel gauges, oil temperature gauges, etc.) of the internal systems of the monitored vehicle 600. Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (location, shape, direction, speed, etc.). Such detection and identification is a critical function of the safe operation of the vehicle 600.
The global positioning system 621 is used to estimate the geographic location of the vehicle 600.
The inertial measurement unit 622 is configured to sense a change in the pose of the vehicle 600 based on inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of an accelerometer and a gyroscope.
The lidar 623 uses a laser to sense objects in the environment in which the vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, millimeter-wave radar 624 may be used to sense the speed and/or heading of an object in addition to sensing the object.
The ultrasonic radar 625 may utilize ultrasonic signals to sense objects around the vehicle 600.
The image pickup device 626 is used to capture image information of the surrounding environment of the vehicle 600. The image capturing device 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information acquired by the image capturing device 626 may include still images or video stream information.
The decision control system 630 includes a computing system 631 that makes analysis decisions based on information acquired by the perception system 620, and the decision control system 630 also includes a vehicle controller 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, throttle 634, and braking system 635 for controlling the vehicle 600.
The computing system 631 may be operable to process and analyze the various information acquired by the perception system 620 in order to identify targets, objects, and/or features in the environment surrounding the vehicle 600. The targets may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles. The computing system 631 may use object recognition algorithms, structure-from-motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the various acquired information and derive control strategies for the vehicle.
The vehicle controller 632 may be configured to coordinate control of the power battery and the engine 641 of the vehicle to enhance the power performance of the vehicle 600.
Steering system 633 is operable to adjust the direction of travel of vehicle 600. For example, in one embodiment it may be a steering wheel system.
Throttle 634 is used to control the operating speed of engine 641 and thereby the speed of vehicle 600.
The braking system 635 is used to control deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheels 644. In some embodiments, the braking system 635 may convert kinetic energy of the wheels 644 into electrical current. The braking system 635 may take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered movement of the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, or a hybrid engine of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transfer mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other devices, such as a clutch. The drive shaft may include one or more axles that may be coupled to one or more of the wheels 644.
Some or all of the functions of the vehicle 600 are controlled by the computing platform 650. The computing platform 650 may include at least one processor 651, and the processor 651 may execute instructions 653 stored in a non-transitory computer-readable medium, such as memory 652. In some embodiments, computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of vehicle 600 in a distributed manner.
The processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 651 may also include a graphics processing unit (Graphics Processing Unit, GPU), a field programmable gate array (Field Programmable Gate Array, FPGA), a system on chip (System on Chip, SoC), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or a combination thereof. Although FIG. 6 functionally illustrates the processor, memory, and other elements of a computer in the same block, it will be understood by those of ordinary skill in the art that the processor, computer, or memory may in fact comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or other storage medium located in a different housing than the computer. Thus, references to a processor or computer will be understood to include references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only calculations related to the component-specific function.
In the disclosed embodiments, the processor 651 may perform the vehicle tunnel positioning method described above.
In various aspects described herein, the processor 651 can be located remotely from and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle and others are performed by a remote processor, including taking the necessary steps to perform a single maneuver.
In some embodiments, the memory 652 may contain instructions 653 (e.g., program logic), which instructions 653 may be executed by the processor 651 to perform various functions of the vehicle 600. The memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, and the drive system 640.
In addition to instructions 653, memory 652 may store data such as road maps, route information, vehicle location, direction, speed, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 650 may control the functions of the vehicle 600 based on inputs received from various subsystems (e.g., the drive system 640, the perception system 620, and the decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, computing platform 650 is operable to provide control over many aspects of vehicle 600 and its subsystems.
Alternatively, one or more of these components may be mounted separately from or associated with vehicle 600. For example, the memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
Alternatively, the above components are only an example, and in practical applications, components in the above modules may be added or deleted according to actual needs, and fig. 6 should not be construed as limiting the embodiments of the present disclosure.
Alternatively, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., the computing system 631, the computing platform 650) may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Since the behaviors of the identified objects may depend on one another, all of the identified objects can also be considered together to predict the behavior of a single identified object. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous car is able to determine what adjustment the vehicle will need to make (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road on which it is traveling, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on a roadway).
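The following sketch, with made-up thresholds and gains, merely illustrates the kind of speed and steering adjustment described above; it is not the control law of the vehicle 600.

```python
def adjust_speed_and_steering(current_speed, gap_to_lead, safe_gap,
                              lateral_offset, steering_gain=0.1):
    """Return a (target_speed, steering_correction) pair.

    gap_to_lead    -- predicted longitudinal distance to the nearest object ahead (m)
    safe_gap       -- desired safe following distance (m)
    lateral_offset -- distance from the lane centre, positive to the left (m)
    """
    if gap_to_lead < safe_gap:
        target_speed = max(0.0, current_speed - 2.0)        # decelerate to restore the gap
    else:
        target_speed = current_speed                        # keep the current speed
    steering_correction = -steering_gain * lateral_offset   # steer back toward the lane centre
    return target_speed, steering_correction

# Example: 15 m behind a slower vehicle with a 30 m desired gap, 0.4 m left of centre.
print(adjust_speed_and_steering(20.0, 15.0, 30.0, 0.4))
```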
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the vehicle tunnel positioning method described above when executed by the programmable apparatus.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples.
It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments. All other embodiments, which can be obtained by one of ordinary skill in the art from the present disclosure without undue burden, are within the scope of the present disclosure.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. A vehicle tunnel positioning method, comprising:
judging whether a vehicle is about to enter a tunnel, if so, pre-storing environmental parameter information in the tunnel by the vehicle and storing current positioning information as initial positioning information;
acquiring current pose information of a vehicle, wherein the current pose information comprises current position information and current running pose information, and the current running pose information comprises acceleration magnitude and direction, running direction and wheel speed information of the vehicle;
correcting the positioning information of the current vehicle at intervals of a first preset time according to the acquired current pose information of the vehicle;
the correcting the positioning information of the current vehicle at intervals of a first preset time according to the acquired current pose information of the vehicle comprises the following steps:
acquiring current lane image information of the vehicle at intervals of a second preset time, fitting a lane simulation curve according to a trained image segmentation model, judging whether the vehicle has a lane change according to the fitted lane simulation curve, and correcting the lane position where the vehicle is transversely located;
wherein the lane simulation curve is expressed as:
Y = aX^6 + bX^5 + cX^4 + dX^3 + eX^2 + fX
wherein X and Y respectively represent the spatial abscissa and the spatial ordinate of the target lane, and a, b, c, d, e and f respectively represent the sixth-order, fifth-order, fourth-order, third-order, second-order and first-order term coefficients of the sixth-order polynomial;
and acquiring the environmental parameter information in the tunnel corresponding to the position of the vehicle at intervals of a third preset time, and correcting the position of the vehicle in the longitudinal running direction according to the acquired current pose information of the vehicle.
2. The method for positioning a vehicle in a tunnel according to claim 1, wherein the pre-storing the environmental parameter information in the tunnel by the vehicle specifically comprises: pre-storing, by the vehicle, gradient information, curvature information and steering information of each lane in the tunnel.
3. The method for positioning a vehicle in a tunnel according to claim 1, wherein the step of retrieving the environmental parameter information in the tunnel corresponding to the position of the vehicle at intervals of a third preset time, and correcting the position of the vehicle in the longitudinal driving direction according to the obtained current pose information of the vehicle comprises:
acquiring current pose information of the vehicle at intervals of fourth preset time;
retrieving, according to the current pose information, gradient information, curvature information and steering information of each lane in the tunnel corresponding to the current position;
correcting the obtained current pose information according to the pre-stored gradient information, curvature information and steering information of each lane in the tunnel corresponding to the current position, to obtain predicted pose information;
and according to the predicted pose information, the vehicle retrieves and updates the pre-stored environment parameter information in the tunnel in real time.
4. A method of locating a vehicle in a tunnel according to claim 3, wherein said vehicle retrieving and updating pre-stored in-tunnel environmental parameter information in real time based on said predicted pose information comprises:
determining new position information of the vehicle according to the obtained predicted pose information, wherein the new position information comprises: absolute position, relative position, accuracy radius, and confidence information;
and according to the new position information, the vehicle retrieves and updates the environment parameter information in the pre-stored tunnel.
5. The method for positioning a vehicle in a tunnel according to claim 1, wherein the obtaining the current lane image information of the vehicle at intervals of a second preset time, fitting a lane simulation curve according to the trained image segmentation model, judging whether the vehicle has a lane change according to the fitted lane simulation curve, and correcting the lane position where the vehicle is transversely located comprises:
performing binarization semantic segmentation on the current lane image according to the trained image segmentation model to obtain lane line elements and background elements in the lane image;
acquiring coordinate information of all real feature points contained in a lane line according to the lane line elements, and projecting the lane image according to the coordinate information of all the real feature points and a preset conversion matrix to obtain an overhead image corresponding to the lane image;
and according to the obtained overhead image corresponding to the lane image, retrieving, by the vehicle, the pre-stored environmental parameter information in the tunnel corresponding to the current position, and correcting the lane position where the vehicle is transversely located.
6. The method for positioning a vehicle in a tunnel according to claim 5, wherein the acquiring the coordinate information of all real feature points contained in the lane line according to the lane line elements, and projecting the lane image according to the coordinate information of all the real feature points and the preset conversion matrix to obtain the overhead image corresponding to the lane image comprises:
the lane image is projected according to the following formula:
wherein x', y' and w' respectively represent the three-dimensional coordinates of a projected feature point in the overhead image, u, v and w respectively represent the three-dimensional coordinates of the corresponding feature point in the lane image, and a_ij represents a transformation parameter in the preset conversion matrix, where i = 1, 2, 3 and j = 1, 2, 3.
7. A vehicle in-tunnel positioning device, characterized by comprising:
a first acquisition module configured to acquire current pose information of a vehicle, wherein the current pose information comprises current position information and current running pose information, and the current running pose information comprises acceleration magnitude and direction, running direction and wheel speed information of the vehicle;
a second acquisition module configured to retrieve, through the vehicle system, environmental parameter information in a tunnel corresponding to the current position;
an information processing module configured to correct the position information of the vehicle in the tunnel according to the current pose information of the vehicle acquired by the first acquisition module and the environmental parameter information in the tunnel corresponding to the current position acquired by the second acquisition module;
a third acquisition module configured to acquire current regional driving information and lane image information of the vehicle through a camera module;
a determining module configured to determine the lane where the vehicle is located according to the current regional driving information and lane image information of the vehicle acquired by the third acquisition module;
and a fourth acquisition module configured to retrieve and update the environmental parameter information in the tunnel pre-stored by the vehicle according to the position information of the vehicle in the tunnel corrected by the information processing module.
8. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
implement the steps of the vehicle tunnel positioning method according to any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the vehicle tunnel positioning method according to any one of claims 1 to 6.
10. A chip, comprising a processor and an interface; the processor is configured to read instructions to perform the steps of the vehicle tunnel positioning method of any one of claims 1 to 6.
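By way of illustration only, the following Python sketch fits the sixth-order lane simulation curve recited in claim 1 above and evaluates it to check for a lane change; numpy, the hypothetical lane-line points and the lane-change threshold are assumptions of this sketch and are not part of the claimed method.

```python
import numpy as np

def fit_lane_curve(xs, ys):
    """Least-squares fit of Y = a*X^6 + b*X^5 + c*X^4 + d*X^3 + e*X^2 + f*X (no constant term)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    design = np.column_stack([xs ** p for p in range(6, 0, -1)])  # columns X^6 ... X^1
    coeffs, *_ = np.linalg.lstsq(design, ys, rcond=None)
    return coeffs  # [a, b, c, d, e, f]

def lateral_offset_at(coeffs, x):
    """Evaluate the fitted curve at longitudinal position x."""
    return float(coeffs @ np.array([x ** p for p in range(6, 0, -1)]))

# Hypothetical lane-line points (vehicle frame, metres): a gently curving lane.
xs = np.linspace(1.0, 30.0, 30)
ys = 0.02 * xs + 0.001 * xs ** 2
coeffs = fit_lane_curve(xs, ys)

# A large jump in the curve's lateral offset near the vehicle between two successive
# fits could be treated as a lane change (the 1.75 m threshold is arbitrary here).
offset = lateral_offset_at(coeffs, 1.0)
print(offset, abs(offset) > 1.75)
```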
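Similarly, the sketch below shows one plausible way (not necessarily the correction recited in claim 3 above) to use pre-stored gradient information when correcting the position in the longitudinal running direction: a distance accumulated from wheel-speed dead reckoning is travelled along the slope, so projecting it onto the horizontal plane gives the advance along the map.

```python
import math

def corrected_longitudinal_advance(wheel_distance, gradient_percent):
    """Project a distance travelled along a slope onto the horizontal plane.

    wheel_distance   -- distance accumulated from wheel-speed integration (m)
    gradient_percent -- pre-stored tunnel gradient, e.g. 4.0 for a 4 % slope
    """
    slope_angle = math.atan(gradient_percent / 100.0)
    return wheel_distance * math.cos(slope_angle)

# 100 m of odometry on a 4 % grade advances about 99.92 m along the horizontal map.
print(corrected_longitudinal_advance(100.0, 4.0))
```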
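Finally, a sketch of the top-view projection of claim 6 above, assuming the conventional form in which the overhead coordinates are obtained as [x', y', w'] = A * [u, v, w] (column vectors) and then normalised by w'; the matrix values and the feature points are hypothetical.

```python
import numpy as np

def project_to_top_view(points_uvw, A):
    """Apply the 3x3 conversion matrix A to each homogeneous feature point and normalise by w'."""
    points_uvw = np.asarray(points_uvw, dtype=float)  # shape (N, 3), rows are (u, v, w)
    projected = points_uvw @ A.T                      # rows become (x', y', w')
    return projected[:, :2] / projected[:, 2:3]       # normalised coordinates in the overhead image

# A hypothetical conversion matrix (a_ij) and two lane-line feature points
# given in homogeneous image coordinates (u, v, w = 1).
A = np.array([[1.0, 0.2, -50.0],
              [0.0, 1.5, -120.0],
              [0.0, 0.001, 1.0]])
points = [[320.0, 400.0, 1.0], [330.0, 380.0, 1.0]]
print(project_to_top_view(points, A))
```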
CN202310463116.8A 2023-04-26 2023-04-26 Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip Pending CN116543138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310463116.8A CN116543138A (en) 2023-04-26 2023-04-26 Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310463116.8A CN116543138A (en) 2023-04-26 2023-04-26 Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip

Publications (1)

Publication Number Publication Date
CN116543138A true CN116543138A (en) 2023-08-04

Family

ID=87453543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310463116.8A Pending CN116543138A (en) 2023-04-26 2023-04-26 Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip

Country Status (1)

Country Link
CN (1) CN116543138A (en)

Similar Documents

Publication Publication Date Title
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
US20240017719A1 (en) Mapping method and apparatus, vehicle, readable storage medium, and chip
CN114935334B (en) Construction method and device of lane topological relation, vehicle, medium and chip
CN115222941A (en) Target detection method and device, vehicle, storage medium, chip and electronic equipment
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN114756700B (en) Scene library establishing method and device, vehicle, storage medium and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN115223122A (en) Method and device for determining three-dimensional information of object, vehicle and storage medium
CN114880408A (en) Scene construction method, device, medium and chip
CN114973178A (en) Model training method, object recognition method, device, vehicle and storage medium
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN116543138A (en) Positioning method and device for vehicle tunnel, vehicle, readable storage medium and chip
CN114821511B (en) Rod body detection method and device, vehicle, storage medium and chip
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN114911630B (en) Data processing method and device, vehicle, storage medium and chip
CN115221260B (en) Data processing method, device, vehicle and storage medium
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN114789723B (en) Vehicle running control method and device, vehicle, storage medium and chip
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115082573B (en) Parameter calibration method and device, vehicle and storage medium
CN115221261A (en) Map data fusion method and device, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination