CN111311675A - Vehicle positioning method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111311675A
Authority
CN
China
Prior art keywords
sample
road
vehicle
image
segmentation map
Prior art date
Legal status
Granted
Application number
CN202010086068.1A
Other languages
Chinese (zh)
Other versions
CN111311675B (en)
Inventor
吴运声
梁晨
黄宇坤
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority claimed from CN202010086068.1A
Publication of CN111311675A
Application granted
Publication of CN111311675B
Legal status: Active

Classifications

    • G06T 7/70 — Image analysis; Determining position or orientation of objects or cameras
    • G06V 10/267 — Image preprocessing; Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 20/588 — Context or environment of the image exterior to a vehicle; Recognition of the road, e.g. of lane markings
    • G06T 2207/20081 — Special algorithmic details: Training; Learning
    • G06T 2207/20084 — Special algorithmic details: Artificial neural networks [ANN]
    • G06T 2207/30256 — Subject of image: Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a vehicle positioning method, apparatus, device and storage medium, relating to the technical field of AI. The method comprises the following steps: acquiring a road image; invoking a vehicle positioning model, the vehicle positioning model comprising a segmentation map prediction part and a position prediction part; acquiring, through the segmentation map prediction part, a road segmentation map corresponding to the road image; acquiring, through the position prediction part and according to the road segmentation map, the angle between the orientation of the target vehicle and a reference line and the distance between the target vehicle and the reference line; and determining the position of the target vehicle through geometric transformation according to the angle and the distance. In the related art, lane lines are obtained by fitting lane-line parameters, and the vehicle position is then derived from them. In the embodiments of the application, once the road image is obtained, the position of the vehicle can be determined directly by the vehicle positioning model, without processes such as lane-line fitting, so the accuracy of the finally determined vehicle position is improved.

Description

Vehicle positioning method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of Artificial Intelligence (AI), in particular to a vehicle positioning method, device, equipment and storage medium.
Background
In driver-assistance systems, image processing and computer vision techniques are used to detect and segment lanes and vehicles, and to locate the lane in which the vehicle is traveling, so as to assist driving.
In the related art, determining the position at which a vehicle travels within a lane may include the following steps: calibrating camera distortion; applying a perspective transformation to project the forward-facing image into the orthographic projection space of a top view; performing lane-line instance segmentation through deep learning; performing polynomial fitting on each lane instance to obtain lane-line parameters; and calculating the coordinates of the vehicle within the lane according to the lane-line parameters.
In the related art, the lane-line parameters are determined by fitting. When part of a lane is missing, the incomplete lane data makes the fitted lane-line parameters inaccurate, which in turn makes the finally obtained vehicle position inaccurate.
Disclosure of Invention
The embodiment of the application provides a vehicle positioning method, a vehicle positioning device, vehicle positioning equipment and a storage medium, which can be used for improving the accuracy of vehicle positioning. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a vehicle positioning method, where the method includes:
acquiring a road image, wherein the road image is an image containing a target road acquired by a camera device installed on a target vehicle;
calling a vehicle positioning model, wherein the vehicle positioning model comprises a segmentation map prediction part and a position prediction part;
acquiring a road segmentation map corresponding to the road image through the segmentation map prediction part, wherein the road segmentation map is an image segmented with lanes included in the target road;
acquiring an included angle between the orientation of the target vehicle and a reference line and a distance between the target vehicle and the reference line according to the road segmentation map through the position prediction part, wherein the reference line is a line parallel to the lane;
and determining the position of the target vehicle through geometric transformation according to the included angle and the distance.
In another aspect, an embodiment of the present application provides a training method for a vehicle positioning model, where the vehicle positioning model includes a segmentation map prediction portion and a location prediction portion, and the method includes:
acquiring a sample road image and sample position information, wherein the sample road image is an image which is acquired through a camera device installed on a sample vehicle and contains a sample road, the sample position information comprises a sample included angle and a sample distance, the sample included angle is an included angle between the orientation of the sample vehicle and a reference line, the reference line is a line parallel to a lane included in the sample road, and the sample distance is a distance between the sample vehicle and the reference line;
labeling the sample road image to obtain a sample road segmentation map corresponding to the sample road image, wherein the sample road segmentation map is an image obtained by segmenting lanes included in the sample road;
training a segmentation map prediction part by adopting the sample road image and the sample road segmentation map, wherein the segmentation map prediction part is used for acquiring a road segmentation map;
in response to stopping the training of the segmentation map prediction section, training the position prediction section using the sample road image and the sample position information, the position prediction section being used for vehicle positioning.
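Hypothetically, the staged schedule above — train the segmentation map prediction part first, then train the position prediction part only after segmentation training stops — can be illustrated with each part collapsed to a single scalar weight. This is a deliberate toy standing in for the real sub-networks; every name and number here is an illustrative assumption, not the patented training procedure.

```python
def train_two_stage(data, lr=0.04, steps=200):
    """data: list of (input, segmentation_target, position_target) samples.
    Each model 'part' is a single scalar weight standing in for a network."""
    w_seg, w_pos = 0.0, 0.0
    # Stage 1: train the segmentation part (prediction = w_seg * x)
    # on (sample road image, sample road segmentation map) pairs.
    for _ in range(steps):
        grad = sum(2 * (w_seg * x - s) * x for x, s, _ in data) / len(data)
        w_seg -= lr * grad
    # Stage 2: segmentation training has stopped; w_seg is frozen and the
    # position part is trained on (sample image, sample position info) pairs,
    # consuming the frozen segmentation output as its input.
    for _ in range(steps):
        grad = sum(2 * (w_pos * w_seg * x - p) * w_seg * x
                   for x, _, p in data) / len(data)
        w_pos -= lr * grad
    return w_seg, w_pos

# With targets s = 2x and p = 6x (= 3 * s), the stages recover 2, then 3.
w_seg, w_pos = train_two_stage([(1.0, 2.0, 6.0), (2.0, 4.0, 12.0)])
```

The point of the sketch is only the ordering: stage 2 treats the frozen stage-1 output as its input, mirroring how the position prediction part consumes the segmentation map.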
In another aspect, an embodiment of the present application provides a vehicle positioning apparatus, including:
the image acquisition module is used for acquiring a road image, wherein the road image is an image containing a target road acquired by a camera device installed on a target vehicle;
the model calling module is used for calling a vehicle positioning model, and the vehicle positioning model comprises a segmentation map prediction part and a position prediction part;
the segmentation map acquisition module is used for acquiring a road segmentation map corresponding to the road image through the segmentation map prediction part, wherein the road segmentation map is an image obtained by segmenting lanes included in the target road;
an information acquisition module, configured to acquire, by the location prediction part, an included angle between an orientation of the target vehicle and a reference line and a distance between the target vehicle and the reference line according to the road segmentation map, where the reference line is a line parallel to the lane;
and the position determining module is used for determining the position of the target vehicle through geometric transformation according to the included angle and the distance.
In another aspect, an embodiment of the present application provides a training apparatus for a vehicle positioning model, where the vehicle positioning model includes a segmentation map prediction part and a position prediction part, the apparatus includes:
the sample acquisition module is used for acquiring a sample road image and sample position information, wherein the sample road image is an image containing a sample road acquired by a camera device installed on a sample vehicle, the sample position information comprises a sample included angle and a sample distance, the sample included angle is the angle between the orientation of the sample vehicle and a reference line, the reference line is a line parallel to a lane included in the sample road, and the sample distance is the distance between the sample vehicle and the reference line;
the segmentation map acquisition module is used for labeling the sample road image to obtain a sample road segmentation map corresponding to the sample road image, wherein the sample road segmentation map is an image obtained by segmenting lanes included in the sample road;
the first training module is used for training the segmentation map prediction part by adopting the sample road image and the sample road segmentation map, and the segmentation map prediction part is used for acquiring a road segmentation map;
and the second training module is used for responding to the stopping of the training of the segmentation graph prediction part, and training the position prediction part by adopting the sample road image and the sample position information, wherein the position prediction part is used for positioning the vehicle.
In yet another aspect, embodiments of the present application provide a computer device, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the vehicle positioning method according to the above aspect, or implement the training method of the vehicle positioning model according to the above aspect.
In yet another aspect, an embodiment of the present application provides a computer-readable storage medium, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the vehicle positioning method according to the above aspect, or implement the training method of the vehicle positioning model according to the above aspect.
In a further aspect, an embodiment of the present application provides a computer program product, which when executed by a processor, is configured to implement the above vehicle positioning method, or implement the above training method for the vehicle positioning model.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
After the road image captured by the target vehicle is obtained, the vehicle positioning model can be invoked directly, and the position of the target vehicle is obtained directly from the model. In the related art, by contrast, lane lines are obtained by fitting lane-line parameters, and the vehicle position is then derived from them. With the technical solution of the application, once the road image is obtained, the position of the vehicle can be determined directly by the vehicle positioning model, without processes such as lane-line fitting, so the accuracy of the finally determined vehicle position is improved.
Drawings
FIG. 1 illustrates a flow chart of a vehicle localization method provided by an embodiment of the present application;
FIG. 2 is a flow chart of a vehicle locating method provided by one embodiment of the present application;
FIG. 3 illustrates a schematic view of a road image;
FIG. 4 is a schematic diagram illustrating a vehicle localization model;
FIG. 5 is a schematic diagram illustrating an exemplary virtual point of the present application;
FIG. 6 is a schematic diagram illustrating an angle and distance according to the present application;
FIG. 7 is a flow chart of a vehicle localization method provided by another embodiment of the present application;
FIG. 8 is a flow chart of training of a vehicle localization model provided by an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a sample road segmentation map of the present application;
FIG. 10 is a flow chart of training of a vehicle localization model provided by another embodiment of the present application;
FIG. 11 illustrates a schematic diagram of sample road images acquired at different locations;
FIG. 12 is a block diagram of a vehicle locating device provided in one embodiment of the present application;
FIG. 13 is a block diagram of a vehicle locating device provided in another embodiment of the present application;
FIG. 14 is a block diagram of a training apparatus for a vehicle localization model provided in one embodiment of the present application;
fig. 15 is a block diagram of a terminal according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
AI is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence is a comprehensive discipline covering a broad range of fields, spanning both hardware-level and software-level technologies. Its basic technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
CV (Computer Vision) is the science of how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the results into images better suited for human observation or for transmission to instruments for inspection. As a scientific discipline, computer vision studies the theories and techniques for building artificial intelligence systems that can extract information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
ML (Machine Learning) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It studies how computers can simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to keep improving their performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of AI. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The solution provided by the embodiments of the application involves AI technologies such as CV and ML, and provides a vehicle positioning method that can be applied to fields such as AGVs (Automated Guided Vehicles), autonomous driving of automobiles, and driver assistance.
Referring to fig. 1, a flowchart of a vehicle positioning method according to an embodiment of the present application is shown. The target vehicle 100 is equipped with an image capture device 10, through which it can capture an image of the road ahead to obtain a road image 200, the road image 200 being an image containing the target road. After the road image 200 is obtained, a vehicle positioning model 300 may be invoked; the model comprises a segmentation map prediction part 310 and a position prediction part 320. The segmentation map prediction part 310 produces the road segmentation map 400 corresponding to the road image 200, i.e. an image in which the lanes of the target road are segmented. The position prediction part 320 then obtains, from the road segmentation map 400, the angle between the orientation of the target vehicle 100 and a reference line, and the distance between the target vehicle 100 and the reference line. Finally, the position 500 of the target vehicle 100 is determined by geometric transformation from the angle and the distance.
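Under stated assumptions — both model parts replaced by trivial stand-ins (the patent publishes neither weights nor exact layers), and the geometric transformation reduced to de-normalizing the distance by an assumed road width — the flow of fig. 1 can be sketched as:

```python
def predict_segmentation(road_image):
    # Stand-in for the segmentation map prediction part 310:
    # threshold pixels into lane (1) vs. background (0).
    return [[1 if px > 0.5 else 0 for px in row] for row in road_image]

def predict_position(segmentation_map):
    # Stand-in for the position prediction part 320: returns (alpha, beta),
    # the heading angle vs. the reference line and the normalized distance.
    return 0.05, 0.25

def locate_vehicle(road_image, road_width=8.0):
    seg_map = predict_segmentation(road_image)   # segmentation map prediction
    alpha, beta = predict_position(seg_map)      # position prediction
    # Geometric transformation (assumed form): de-normalize beta into a
    # lateral offset, in meters, from the reference line.
    lateral_offset = beta * road_width
    return alpha, lateral_offset

alpha, offset = locate_vehicle([[0.9, 0.1], [0.8, 0.2]])
```

The stand-ins exist only to show the data flow claimed above: image in, segmentation map in the middle, angle and distance out, position by geometry at the end.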
In the following, the execution subject and the implementation environment of the embodiment of the present application are described:
In the vehicle positioning method provided by the embodiments of the application, each step may be performed by a vehicle, and the vehicle may include a vision module. The vision module may include a camera device for capturing an image of the driving road ahead of the vehicle, resulting in a road image.
Optionally, the image capturing apparatus may be any electronic apparatus with an image capturing function, such as a camera, a video camera, a still camera, or the like, which is not limited in this embodiment of the application.
Optionally, the vision module may further include a processing unit, where the processing unit is configured to process the road image to execute the vehicle positioning method to obtain the position information of the vehicle. The processing unit may be an electronic device, such as a processor, having image and data processing functionality.
It should be noted that the processing unit may be integrated on the vision module, or may be independent on the vehicle to form a processing module, and the processing module and the vision module may be electrically connected.
Optionally, in the vehicle positioning method provided in the embodiments of the application, each step may be performed by a vehicle-mounted terminal installed on a vehicle. The vehicle-mounted terminal has image acquisition and image processing functions: it may include a camera device and a processing device, and after the camera device captures the road image, the processing device can execute the vehicle positioning method based on that road image to obtain the position information of the vehicle.
In some other embodiments, the vehicle location method described above may also be performed by a server. The server, after obtaining the location information of the vehicle, may send the location information to the vehicle.
It should be noted that the server may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content delivery network), and a big data and artificial intelligence platform, which is not limited in this embodiment of the present application.
The technical solution of the present application will be described below by means of several embodiments.
Referring to fig. 2, a flowchart of a vehicle positioning method according to an embodiment of the present application is shown. In the present embodiment, the method is mainly exemplified as being applied to the vehicle or the in-vehicle terminal described above. The method may include the steps of:
step 201, acquiring a road image.
The road image is an image containing a target road acquired by an image pickup apparatus mounted on the target vehicle.
The target vehicle may be equipped with a camera device; for example, when the camera device is mounted on the front windshield of the target vehicle, it can capture an image of the road ahead of the vehicle. The road image is an image containing the target road.
Alternatively, the target road may include at least one lane. Further, each lane may be constituted by at least two lane lines.
Illustratively, as shown in fig. 3, a schematic diagram of a road image is exemplarily shown. The target road in the road image 200 includes four lanes.
Optionally, the image capturing apparatus may be any electronic apparatus with an image capturing function, such as a camera, a video camera, a still camera, or the like, which is not limited in this embodiment.
Optionally, the road image may be an image acquired by the camera device, or may be any image frame in a road video acquired by the camera device, which is not limited in this embodiment of the application.
Step 202, a vehicle positioning model is invoked, the vehicle positioning model comprising a segmentation map prediction component and a location prediction component.
After the road image is acquired, a vehicle positioning model may be invoked to perform subsequent vehicle positioning steps. The vehicle positioning model is an end-to-end model.
Alternatively, the vehicle positioning model may be a CNN (Convolutional Neural Network) model, which may contain at least one batch normalization layer, at least one pooling layer, at least one fully connected layer, and so on.
Illustratively, as shown in fig. 4, a schematic structural diagram of a vehicle positioning model is exemplarily shown. The vehicle positioning model may include a normalization layer, a pooling layer, a full-link layer, an output layer, and the like.
The vehicle positioning model may include a segmentation map prediction part and a position prediction part. The segmentation map prediction part may adopt any one of the following: UNet, ENet (Efficient Neural Network), PSPNet (Pyramid Scene Parsing Network), FCN (Fully Convolutional Network), Mask R-CNN (MRCNN), and the like, which is not limited in the embodiments of the present application.
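As a minimal sketch of the layer types just named — normalization, pooling, fully connected — the following toy forward pass uses illustrative shapes and a two-value output head (angle and distance). It is an assumption-laden illustration, not the patented architecture; the normalization here is simplified to per-row standardization without learned scale or shift.

```python
import math
import random

def batch_norm(xs, eps=1e-5):
    # Normalization layer (simplified): zero mean, unit variance.
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return [(v - mean) / math.sqrt(var + eps) for v in xs]

def max_pool_2x2(grid):
    # Pooling layer: 2x2 max pooling on a single-channel feature map.
    return [[max(grid[i][j], grid[i][j + 1], grid[i + 1][j], grid[i + 1][j + 1])
             for j in range(0, len(grid[0]) - 1, 2)]
            for i in range(0, len(grid) - 1, 2)]

def fully_connected(flat, weights, biases):
    # Fully connected layer: one output per weight row.
    return [sum(x * w for x, w in zip(flat, row)) + b
            for row, b in zip(weights, biases)]

random.seed(0)
feature_map = [[random.gauss(0, 1) for _ in range(8)] for _ in range(8)]
rows = [batch_norm(row) for row in feature_map]        # normalization layer
pooled = max_pool_2x2(rows)                            # pooling layer -> 4x4
flat = [v for row in pooled for v in row]
weights = [[random.gauss(0, 1) for _ in range(16)] for _ in range(2)]
alpha, beta = fully_connected(flat, weights, [0.0, 0.0])  # head: angle, distance
```

In a real model the convolutional stages of the segmentation backbone would precede these layers; here a random 8×8 map stands in for their output.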
And step 203, acquiring a road segmentation map corresponding to the road image through the segmentation map prediction part.
After the road image is input into the vehicle positioning model, the road segmentation map corresponding to the road image can be obtained through the segmentation map prediction part in the vehicle positioning model.
The road segmentation map is an image obtained by segmenting lanes included in the target road. The road segmentation map can be used for displaying the background and the lanes in the road image in a distinguishing manner, and further can be used for displaying each lane in a distinguishing manner.
Optionally, the road segmentation map may be a grayscale map in which different lane lines are represented by different grayscale values; different lane lines may also be distinguished by different colors or different brightness values, which is not limited in the embodiments of the present application.
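A minimal sketch of such a grayscale rendering, assuming a four-category labeling (background plus three lane-line types, as in the category examples given later in this description) and arbitrarily chosen gray values:

```python
# Map per-pixel class indices to display gray values. The category set
# (background, single solid, dashed, double solid) and the values 0/85/170/255
# are illustrative assumptions, not fixed by the patent.
GRAY_VALUE = {0: 0, 1: 85, 2: 170, 3: 255}

def to_grayscale(seg_map):
    """seg_map[i][j] is the class index of pixel (i, j)."""
    return [[GRAY_VALUE[c] for c in row] for row in seg_map]

gray = to_grayscale([[0, 0, 1],
                     [0, 2, 3]])
```

Swapping `GRAY_VALUE` for an index-to-RGB table gives the color variant mentioned above.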
Illustratively, with continued reference to fig. 4, after the road image 200 is input into the segmentation map prediction part of the vehicle positioning model, the road segmentation map 400 corresponding to the road image 200 can be obtained.
And step 204, acquiring an included angle between the orientation of the target vehicle and the reference line and a distance between the target vehicle and the reference line according to the road segmentation map through the position prediction part.
After the road segmentation map is obtained, it may be further passed through the position prediction part to obtain the angle between the orientation of the target vehicle and a reference line, and the distance between the target vehicle and the reference line.
The distance may be a normalized distance between the reference line and a virtual point in front of the target vehicle, where the virtual point may be the projection, onto the target road, of the center point of the bottom edge of the camera device's field of view.
Illustratively, as shown in fig. 5, a schematic diagram of the virtual point is shown. The line through points O and p0 is the bottom line of the field of view of the camera device, the line through O and p1 is its middle line, and the line through O and p2 is its top line.
Optionally, the reference line includes a road centerline of the target road.
Taking as an example a target road containing four lanes, with the reference line being the road centerline of the target road (i.e. a double solid line), the angle between the orientation of the target vehicle and the reference line may be denoted α, and the distance between the target vehicle and the reference line may be denoted β.
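As a worked example of the two quantities, assume the reference line (road centerline) runs along the world y-axis and that β is normalized by the half road width — the normalization scheme and all names here are assumptions; the description says only that the distance is normalized:

```python
import math

def angle_to_reference(heading_deg):
    # alpha: angle between the vehicle orientation and the reference line,
    # with the heading measured relative to the line's direction.
    return math.radians(heading_deg)

def normalized_distance(vehicle_x, line_x, half_road_width):
    # beta: lateral offset from the reference line, normalized (assumed:
    # by the half road width, so the road edge maps to 1.0).
    return (vehicle_x - line_x) / half_road_width

alpha = angle_to_reference(5.0)            # vehicle yawed 5 degrees off the line
beta = normalized_distance(3.5, 0.0, 7.0)  # 3.5 m right of the centerline
```

So a vehicle 3.5 m right of the centerline on a 14 m-wide carriageway would carry β = 0.5 under this assumed normalization.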
And step 205, determining the position of the target vehicle through geometric transformation according to the included angle and the distance.
Once the camera device is installed, its position relative to the target vehicle is fixed; therefore, after the angle and the distance are acquired, the position of the target vehicle can be obtained through a further geometric transformation.
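One plausible form of this geometric transformation — hypothetical, since the patent states only that the position follows from the angle and the distance by geometric transformation given the fixed camera-to-vehicle mounting: assume β is normalized by a known road width and the virtual point lies a fixed distance d ahead of the vehicle along its heading, so the vehicle's own lateral offset differs from the point's by d·sin α.

```python
import math

def vehicle_lateral_offset(alpha, beta, d_ahead, road_width):
    # De-normalize beta (assumed normalization width), then walk back from
    # the virtual point to the vehicle along the heading direction.
    point_offset = beta * road_width
    return point_offset - d_ahead * math.sin(alpha)

# Heading parallel to the line: vehicle offset equals the point's offset.
straight = vehicle_lateral_offset(alpha=0.0, beta=0.25, d_ahead=4.0, road_width=8.0)
# Yawed 30 degrees toward the line: the 4 m look-ahead cancels the 2 m offset.
yawed = vehicle_lateral_offset(alpha=math.pi / 6, beta=0.25, d_ahead=4.0, road_width=8.0)
```

The look-ahead distance, road width, and the sin-based correction are all assumptions for illustration; the patented transformation may differ.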
To sum up, with the technical solution provided by the embodiments of the application, after the road image captured by the target vehicle is obtained, the vehicle positioning model can be invoked directly and the position of the target vehicle obtained directly from it. In the related art, by contrast, lane lines are obtained by fitting lane-line parameters, and the vehicle position is then derived from them. With the present solution, once the road image is obtained, the position of the vehicle can be determined directly by the vehicle positioning model, without processes such as lane-line fitting, so the accuracy of the finally determined vehicle position is improved.
Referring to fig. 7, a flowchart of a vehicle positioning method according to another embodiment of the present application is shown. In the present embodiment, the method is mainly exemplified as being applied to the vehicle or the in-vehicle terminal described above. The method may include the steps of:
step 701, acquiring a road image.
This step is the same as or similar to the step 201 in the embodiment of fig. 2, and is not described here again.
Step 702, a vehicle positioning model is invoked, the vehicle positioning model comprising a segmentation map prediction component and a location prediction component.
This step is the same as or similar to the step 202 in the embodiment of fig. 2, and is not repeated here.
The training process of the vehicle positioning model is described in detail in the following embodiments, and will not be described herein.
And 703, performing feature extraction on the road image to obtain a feature map corresponding to the road image.
After the road image is acquired, the segmentation map prediction part may perform feature extraction on the road image to extract its abstract features, thereby obtaining a feature map corresponding to the road image.
Optionally, the size of the feature map is the same as the size of the road image.
Step 704, classifying the pixels in the feature map, and determining the category to which the pixels belong.
After the feature map is obtained, each pixel in the feature map may be classified to determine a category to which each pixel belongs.
In one example, the categories may include background, single solid line, dashed line, and double solid line; in another example, the categories may include background and single solid lines; in yet another example, the categories may include background, single solid line, and dashed line; in yet another example, the categories may include background, single solid line, and double solid line; and so on. The embodiments of the present application do not limit this.
Optionally, the classifying the pixels in the feature map and determining the category to which the pixels belong may include: and determining probability values of the pixels belonging to the various categories, and determining the categories of the pixels by comparing the probability values.
Optionally, the segmentation map prediction part may classify each pixel by using a SoftMax regression model.
Step 705, a road segmentation map is obtained according to the category to which the pixel belongs.
After determining the category to which each pixel belongs, the road segmentation map may be obtained by setting the pixel values of the pixels belonging to the same category to the same value.
For example, the road segmentation map may be obtained by setting the pixel values of the pixels belonging to the background to 0, the pixel values of the pixels belonging to the single solid line to 1, the pixel values of the pixels belonging to the dotted line to 2, and the pixel values of the pixels belonging to the double solid line to 3. In some other examples, the pixel values of the pixels of the background, the single solid line, the dashed line and the double solid line may also be set to other values, respectively, which is not limited in this embodiment.
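Steps 704 to 705 can be sketched minimally as follows. This is an illustrative sketch only, assuming hypothetical per-pixel logits produced by the segmentation map prediction part; the tiny 2×2 map and its values are invented for demonstration:

```python
import numpy as np

# Hypothetical per-pixel logits for a tiny 2x2 feature map with 4 classes:
# 0 = background, 1 = single solid line, 2 = dashed line, 3 = double solid line.
logits = np.array([
    [[4.0, 0.1, 0.2, 0.1], [0.2, 3.5, 0.1, 0.3]],
    [[0.1, 0.2, 5.0, 0.1], [0.3, 0.1, 0.2, 4.2]],
])  # shape (H, W, num_classes)

# SoftMax over the class axis gives, for each pixel, the probability value
# of that pixel belonging to each category.
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)

# The category of a pixel is the class with the highest probability; the
# road segmentation map stores that class index as the pixel value.
segmentation_map = probs.argmax(axis=-1)
print(segmentation_map)  # [[0 1]
                         #  [2 3]]
```

Here the resulting map assigns 0/1/2/3 to background, single solid line, dashed line, and double solid line respectively, matching the value scheme in the example above.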
And step 706, acquiring an included angle between the orientation of the target vehicle and the reference line and a distance between the target vehicle and the reference line according to the road segmentation map through the position prediction part.
This step is the same as or similar to the step 204 in the embodiment of fig. 2, and is not repeated here.
And step 707, determining the position of the target vehicle through geometric transformation according to the included angle and the distance.
This step is the same as or similar to step 205 in the embodiment of fig. 2, and is not repeated here.
Optionally, the determining the position of the target vehicle through geometric transformation according to the included angle and the distance may include the following steps:
(1) determining the field angle of the camera equipment according to the included angle and the distance;
The above-mentioned distance is a normalized distance between a virtual point in front of the target vehicle and the reference line, where the virtual point is the projection of a midpoint on the target road within the field angle of the image pickup device. Therefore, after the included angle and the distance are acquired, the field angle of the image pickup device can be further determined from them.
The above-mentioned field angle refers to the angular range over which the image pickup device can capture an image; the larger the field angle, the wider this range.
(2) Acquiring a downward inclination angle of the camera equipment;
the downward inclination angle refers to an angle between a viewing angle central line and the vertical direction.
Illustratively, with continued reference to fig. 5, the line through points O and p1 is the center line of the field angle of the image pickup device, the line through points O and p3 represents the vertical direction, and the included angle between the two is the above-mentioned down-tilt angle.
(3) Determining the position of the camera equipment according to the field angle of the camera equipment and the downward inclination angle of the camera equipment;
after the angle of view of the image pickup apparatus and the down tilt angle of the image pickup apparatus are acquired, the position of the image pickup apparatus can be further calculated.
(4) And determining the position of the target vehicle according to the position of the camera device and the relative position of the camera device and the target vehicle.
Since the position of the image pickup device relative to the target vehicle is fixed after mounting, once the position of the image pickup device is determined, the position of the target vehicle can be obtained by combining it with this relative position.
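The exact geometry depends on the camera model used; as a purely illustrative planar sketch (the function name, `half_road_width`, and `cam_offset` are assumptions, not taken from the patent), the included angle and the normalized distance can be converted into a lateral position and heading for the vehicle like this:

```python
import math

def vehicle_position(alpha_deg, beta, half_road_width, cam_offset=(0.0, 0.0)):
    """Hypothetical planar sketch: alpha_deg is the included angle between
    the vehicle orientation and the reference line; beta is the normalized
    distance in [-1, 1], so beta * half_road_width is the lateral distance
    (in metres) from the reference line; cam_offset is the fixed
    camera-to-vehicle offset (along-road, across-road)."""
    heading = math.radians(alpha_deg)       # heading relative to the reference line
    lateral_cam = beta * half_road_width    # lateral position of the camera
    _, dy = cam_offset                      # fixed across-road offset
    lateral_vehicle = lateral_cam + dy      # shift camera position to vehicle position
    return lateral_vehicle, heading

# A vehicle seen at a normalized distance of 0.5 on a road whose half
# width is 2 m, oriented parallel to the reference line (90 deg convention):
x, h = vehicle_position(90.0, 0.5, 2.0)
```

The key point mirrored from the text is the last step: the camera position is converted to the vehicle position through the fixed camera-to-vehicle offset.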
In one possible embodiment, after determining the position of the target vehicle by geometric transformation according to the included angle and the distance in step 707, the following steps may be further performed:
at step 708, a desired position of the target vehicle is obtained.
The desired position is a position to which the target vehicle is actually required to travel.
Step 709, determine a position offset between the position of the target vehicle and the desired position.
The position offset refers to the deviation between the position of the target vehicle and the desired position.
And step 710, controlling the target vehicle to move to a desired position according to the position offset.
After determining the amount of positional deviation between the position of the target vehicle and the desired position, the target vehicle may be controlled to move to the desired position based on the amount of positional deviation.
Optionally, the target vehicle may be controlled to move to the desired position by controlling the steering angle of the steering wheel of the target vehicle or the steering of its tires according to the position offset.
For example, when the vehicle positioning model provided in the embodiments of the present application is applied to the field of AGV carts, after the position of the target vehicle is obtained, it may be sent to a PID (proportional-integral-derivative) controller. The PID controller calculates the position offset between the desired position and the position of the target vehicle, generates a control command according to that offset, and uses the command to drive the AGV cart to the desired position.
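A minimal discrete PID controller for this control loop could look like the following sketch (the class, gains, and time step are illustrative assumptions, not part of the patent):

```python
class PID:
    """Minimal discrete PID controller sketch; gains are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, offset, dt):
        # offset = desired position - measured position of the AGV cart
        self.integral += offset * dt
        deriv = 0.0 if self.prev_error is None else (offset - self.prev_error) / dt
        self.prev_error = offset
        # The returned value would be turned into a steering command.
        return self.kp * offset + self.ki * self.integral + self.kd * deriv

pid = PID(kp=1.2, ki=0.1, kd=0.05)
steer = pid.step(offset=0.5, dt=0.1)  # command toward the desired position
```

Calling `step` on each new position estimate from the vehicle positioning model closes the loop described in the paragraph above.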
In another possible embodiment, after determining the position of the target vehicle by geometric transformation according to the included angle and the distance in step 707, the following steps may be further performed:
and step 711, determining the lane where the target vehicle is located according to the position of the target vehicle.
After determining the position information of the target vehicle, the lane in which the target vehicle is located may be further determined according to the position of the target vehicle.
In step 712, the lane in which the target vehicle is located is indicated in the road image.
Further, the lane in which the target vehicle is currently located may be indicated in the road image. Based on this, the driver can accurately plan the subsequent driving route by combining the navigation information with the indicated lane.
To sum up, according to the technical solution provided by the embodiments of the application, after the road image collected by the target vehicle is acquired, the vehicle positioning model can be directly invoked, and the position of the target vehicle is obtained directly from the vehicle positioning model. In the related art, by contrast, lane lines are obtained by fitting lane-line parameters, and the position of the vehicle is then derived from the fitted lines. With the technical solution provided by the embodiments of the application, the position of the vehicle is determined directly by the vehicle positioning model, without intermediate steps such as lane-line fitting, which improves the accuracy of the finally determined vehicle position.
Referring to FIG. 8, a flow chart of training of a vehicle localization model provided by an embodiment of the present application is shown. The vehicle positioning model includes a segmentation map prediction part and a position prediction part. In the present embodiment, the method is described, by way of illustration, as being applied to the server described above. The method may include the steps of:
step 801, acquiring a sample road image and sample position information.
An image pickup device may be mounted on the sample vehicle; for example, when the image pickup device is mounted on the front windshield of the sample vehicle, it can capture an image of the road ahead of the sample vehicle. The sample road image is an image containing a sample road. The sample position information includes a sample included angle and a sample distance: the sample included angle is the angle between the orientation of the sample vehicle and a reference line, where the reference line is a line parallel to a lane included in the sample road, and the sample distance is the distance between the sample vehicle and the reference line.
The angles and distances have been described in detail above and will not be described in detail here.
Optionally, the sample road image may be an image acquired by the image capturing device, or may be any image frame in a sample road video acquired by the image capturing device, which is not limited in this embodiment of the application.
And 802, marking the sample road image to obtain a sample road segmentation map corresponding to the sample road image.
After the sample road image is obtained, the sample road image can be labeled, so that a sample road segmentation map corresponding to the sample road image is obtained. The sample road segmentation map is an image obtained by segmenting lanes included in the sample road.
Optionally, when the image capturing device captures a sample road video at a certain position, since the position remains unchanged, any one image frame in the sample road video may be selected for labeling, so as to obtain a sample road segmentation map.
Illustratively, as shown in fig. 9, a schematic diagram of a sample road segmentation map is exemplarily shown. Part (a) of fig. 9 may be a sample road image, and part (b) of fig. 9 may be a sample road segmentation map corresponding to the sample road image.
And step 803, training the segmentation map prediction part using the sample road image and the sample road segmentation map.
After the sample road image and the sample road segmentation map are obtained, the segmentation map prediction part may be trained by using the sample road image and the sample road segmentation map.
The segmentation map prediction section is used to obtain a road segmentation map.
When the first training-stop condition is satisfied, the training of the segmentation map prediction part is stopped, and the subsequent steps are executed.
And step 804, in response to stopping the training of the segmentation map prediction part, training the position prediction part by using the sample road image and the sample position information.
After stopping the training of the segmentation map prediction section, the segmentation map prediction section is set to be non-trainable, and then the position prediction section is trained using the sample road image and the sample position information. The position prediction part is used for vehicle positioning.
And when the second training stopping condition is met, stopping training the position prediction part, thereby obtaining the trained vehicle positioning model.
The trained vehicle positioning model can be called by other equipment to position the vehicle.
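The two-stage schedule of steps 803 and 804 can be sketched framework-agnostically. This is a toy skeleton under stated assumptions: the real components are neural network parts, the training updates are gradient steps on the sample pairs, and the stop conditions are those of the embodiment (loss threshold or iteration count):

```python
class Component:
    """Stand-in for a trainable model part."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

def train_vehicle_positioning_model(seg_part, pos_part, stop1, stop2):
    log = []
    # Stage 1: train the segmentation map prediction part on
    # (sample road image, sample road segmentation map) pairs.
    while not stop1():
        log.append(seg_part.name)
    seg_part.trainable = False  # freeze once the first stop condition is met
    # Stage 2: train the position prediction part on
    # (sample road image, sample position information) pairs.
    while not stop2():
        log.append(pos_part.name)
    return log

def stop_after(n):
    """Toy stop condition: allow n training steps."""
    count = {"steps": 0}
    def stop():
        count["steps"] += 1
        return count["steps"] > n
    return stop

seg = Component("segmentation")
pos = Component("position")
log = train_vehicle_positioning_model(seg, pos, stop_after(2), stop_after(3))
```

The essential point mirrored from the text is the ordering: the segmentation map prediction part is fully trained and then set to non-trainable before the position prediction part is trained.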
In summary, according to the technical scheme provided by the embodiment of the application, after the sample road image, the sample position information and the sample road segmentation map are obtained, the segmentation map prediction part is trained by using the sample road image and the sample road segmentation map, and after the training of the segmentation map prediction part is stopped, the position prediction part is trained by using the sample road image and the sample position information, so that a trained vehicle positioning model is obtained; the vehicle location model may then be used directly to determine location information of the vehicle. Compared with the prior art, the technical scheme provided by the embodiment of the application directly determines the position of the vehicle through the vehicle positioning model without processes of fitting lane lines and the like, so that the accuracy of the finally determined vehicle position is improved.
Referring to FIG. 10, a flow chart of training of a vehicle localization model provided by another embodiment of the present application is shown. The vehicle positioning model includes a segmentation map prediction part and a position prediction part. In the present embodiment, the method is described, by way of illustration, as being applied to the server described above. The method may include the steps of:
step 1001, n pieces of sample position information of a sample vehicle are acquired, where n is a positive integer.
The sample vehicle is placed at n positions to acquire n pieces of sample position information of the sample vehicle. The sample location information is used to indicate the location where the sample vehicle was when the sample road image was acquired.
In step 1002, acquiring n sample road images acquired by a sample vehicle under different sample position information through a camera device installed on the sample vehicle.
The sample vehicle is mounted with an image pickup apparatus by which n sample road images can be acquired.
The n sample road images have different environmental parameters, the environmental parameters are used for representing environmental characteristics when the sample road images are collected, and the environmental parameters include at least one of the following items: illumination intensity, color temperature, illumination direction.
Optionally, the sample position information includes a sample angle and a sample distance; the sample included angle is an included angle between the orientation of the sample vehicle and a reference line, and the reference line is a line parallel to the lane; the sample distance refers to the distance between the sample vehicle and the reference line.
Alternatively, the reference line may be a road center line of the sample road.
Optionally, the n pieces of sample position information include: a target sample included angle and at least one sample distance corresponding to the target sample included angle; and a target sample distance and at least one sample included angle corresponding to the target sample distance. That is, at one sample included angle, a plurality of different sample distances may be set for collection; for example, collection may be performed by adjusting a plurality of different sample distances at positions where the sample included angle is 0°, 30°, 60°, 90°, 120°, 150°, or 180°. Similarly, at one sample distance, a plurality of different sample included angles may be set for collection; for example, collection may be performed by adjusting a plurality of different included angles at positions where the sample distance is -1.0, -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75, or 1.0.
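The sampling scheme above amounts to enumerating a grid of (included angle, distance) pairs; a minimal sketch using the example values given in the text:

```python
from itertools import product

# Example values from the text; degrees for angles, normalized distances.
sample_angles = [0, 30, 60, 90, 120, 150, 180]
sample_distances = [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0]

# Each (angle, distance) pair is one capture position at which the camera
# on the sample vehicle collects a sample road image.
positions = list(product(sample_angles, sample_distances))
print(len(positions))  # 63 capture positions
```

In practice the grid need not be complete; any subset of angle-distance combinations yields valid sample position information.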
Illustratively, as shown in fig. 11, a schematic diagram of sample road images acquired at different positions is exemplarily shown. Part (a) in fig. 11 is a sample road image acquired at a position where the sample included angle is 90 ° and the sample distance is 0.5; part (b) in fig. 11 is a sample road image acquired at a position where the sample included angle is 88 ° and the sample distance is 0.0; part (c) in fig. 11 is a sample road image collected at a position where the sample included angle is 120 ° and the sample distance is-0.25; part (d) of fig. 11 is a sample road image acquired at a position where the sample angle is 120 ° and the sample distance is 0.0.
Optionally, the sample road images may also be collected at different mounting angles of the camera device.
By collecting training samples at a plurality of different positions and training the model on them, the model can learn information such as lens distortion and three-dimensional perspective relationships, which further improves the generalization capability of the model.
And 1003, marking the sample road image to obtain a sample road segmentation map corresponding to the sample road image.
The sample road segmentation map is an image into which lanes included in the sample road are segmented.
This step is the same as or similar to the step 802 in the embodiment of fig. 8, and is not repeated here.
Optionally, the pixels in the sample road segmentation map include the following categories: background, single solid line, dashed line and double solid line.
Illustratively, with continued reference to fig. 9, the pixel values of the pixels of the background, single solid line, dashed line, and double solid line may be set to 0, 1, 2, and 3, respectively. In some other examples, they may also be set to other values, which is not limited by the embodiments of the present application.
In this case, training the segmentation map prediction part using the sample road image and the sample road segmentation map may include the following steps 1004 to 1006.
And step 1004, acquiring loss weights corresponding to the categories.
The pixels in the sample road segmentation map include a background, a single solid line, a dashed line, and a double solid line, and based on this, loss weights corresponding to the background, the single solid line, the dashed line, and the double solid line, respectively, can be obtained.
Illustratively, the loss weights for the background, single solid line, dashed line, and double solid line may be assigned in a ratio of 0.01 : 0.33 : 0.33 : 0.33. In some other embodiments, the weights may also be set according to practical situations, which is not limited in this application.
Step 1005, determining a first loss function according to the sample road image, the sample road segmentation map and the loss weight corresponding to the category.
After determining the loss weight of each category in the sample road segmentation map, a first loss function may be determined according to the sample road image, the sample road segmentation map, and the loss weight corresponding to the category.
Optionally, the first loss function employs a cross-entropy loss. In some other embodiments, other types of loss may be used, which is not limited by the embodiments of the present application.
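A class-weighted cross-entropy loss of this kind can be sketched as follows. This is an illustrative implementation only, using the example weight ratio above; the array shapes and values are assumptions:

```python
import numpy as np

# Class loss weights in the example ratio 0.01 : 0.33 : 0.33 : 0.33
# (background, single solid line, dashed line, double solid line).
weights = np.array([0.01, 0.33, 0.33, 0.33])

def weighted_cross_entropy(probs, labels, weights):
    """probs: (N, C) predicted class probabilities for N pixels;
    labels: (N,) ground-truth class indices from the sample segmentation map.
    Each pixel's negative log-likelihood is scaled by its class weight,
    so the abundant background class contributes little to the loss."""
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(-weights[labels] * np.log(picked)))

probs = np.array([[0.7, 0.1, 0.1, 0.1],    # pixel predicted as background
                  [0.1, 0.8, 0.05, 0.05]]) # pixel predicted as single solid line
labels = np.array([0, 1])
loss = weighted_cross_entropy(probs, labels, weights)
```

Down-weighting the background class in this way prevents the loss from being dominated by the many background pixels, which is the usual motivation for such a weighting.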
Step 1006, adjusting the parameters of the segmentation map prediction part according to the value of the first loss function.
After the first loss function is determined, the parameters of the segmentation map prediction part may be adjusted according to its value, thereby training the segmentation map prediction part.
When the first training-stop condition is satisfied, the training of the segmentation map prediction part is stopped.
Optionally, the first training stopping condition may be that the value of the first loss function is smaller than a first threshold, or may be that the number of times of training is greater than or equal to a first preset number of times, which is not limited in this embodiment of the present application.
Optionally, before the segmentation map prediction part is trained, the following steps may also be performed: transforming the image parameters of the sample road image to obtain a transformed sample road image; the transformed sample road image and the sample road segmentation map may then be used to train the segmentation map prediction part. The image parameters include at least one of the following: hue, contrast, brightness, saturation.
Optionally, before the segmentation map prediction part is trained, the following steps may also be performed: performing position transformation on the sample road image and the sample road segmentation map to obtain a transformed sample road image and a transformed sample road segmentation map; the segmentation map prediction part may then be trained using the transformed sample road image and the transformed sample road segmentation map. The position transformation includes at least one of the following: translation transformation and rotation transformation.
It should be noted that, before the training of the segmentation map prediction part, only the image parameter transformation may be performed, only the position transformation may be performed, or both of them may be performed, which is not limited in the embodiment of the present application.
On the one hand, the sample vehicle need collect only a small number of sample road images; applying parameter transformation and position transformation to them effectively augments the training samples and reduces the collection cost. On the other hand, through parameter transformation and position transformation, the model can learn various conditions (such as occlusion and distortion), further improving the generalization capability and accuracy of the model.
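The two augmentation families above differ in one important respect: photometric transforms touch only the image, while geometric transforms must be applied identically to the image and the segmentation map so the labels stay aligned. A toy sketch (function names and value ranges are assumptions; real pipelines operate on image tensors):

```python
import random

def translate(rows, shift):
    """Circular horizontal shift of a 2-D list of pixel values; the same
    shift must be applied to the image and to the segmentation map."""
    if shift == 0:
        return [list(r) for r in rows]
    return [r[-shift:] + r[:-shift] for r in rows]

def augment(image, seg_map, rng):
    """Toy version of the two augmentation families described above."""
    # Photometric transform: image only; changing brightness does not move
    # any lane line, so the segmentation map is left untouched.
    gain = rng.uniform(0.8, 1.2)
    image = [[min(255, int(p * gain)) for p in row] for row in image]
    # Geometric transform: applied to image and segmentation map together.
    shift = rng.choice([-1, 0, 1])
    return translate(image, shift), translate(seg_map, shift)

img, seg = augment([[100, 200, 50]], [[0, 1, 2]], random.Random(0))
```

Keeping image and label transforms in lock-step is what lets the augmented pairs remain valid training samples.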
Step 1007, in response to stopping training of the segmentation map prediction section, trains the position prediction section using the sample road image and the sample position information.
After stopping the training of the segmentation map prediction section, the segmentation map prediction section is set to be non-trainable, and then the position prediction section is trained using the sample road image and the sample position information. The position predicting part is used for determining the position information of the target vehicle according to the target road segmentation map.
Optionally, a second loss function is determined, the value of which characterizes the degree of difference between the position information predicted by the vehicle positioning model and the sample position information; the parameters of the position prediction part are adjusted according to the value of the second loss function.
Optionally, the second training stopping condition may be that the value of the second loss function is smaller than a second threshold, or may be that the number of times of training is greater than or equal to a second preset number of times, which is not limited in this embodiment of the present application.
In summary, according to the technical scheme provided by the embodiment of the application, after the sample road image, the sample position information and the sample road segmentation map are obtained, the segmentation map prediction part is trained by using the sample road image and the sample road segmentation map, and after the training of the segmentation map prediction part is stopped, the position prediction part is trained by using the sample road image and the sample position information, so that a trained vehicle positioning model is obtained; the vehicle location model may then be used directly to determine location information of the vehicle. Compared with the prior art, the technical scheme provided by the embodiment of the application directly determines the position of the vehicle through the vehicle positioning model without processes of fitting lane lines and the like, so that the accuracy of the finally determined vehicle position is improved.
In addition, by collecting training samples at a plurality of different positions and training the model on them, the model can learn information such as lens distortion and three-dimensional perspective relationships, which further improves the generalization capability of the model.
In addition, parameter transformation and position transformation are applied to the collected samples. On the one hand, the sample vehicle need collect only a small number of sample road images, and applying parameter transformation and position transformation to them effectively augments the training samples and reduces the collection cost; on the other hand, through parameter transformation and position transformation, the model can learn various conditions (such as occlusion and distortion), further improving the generalization capability and accuracy of the model.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 12, a block diagram of a vehicle positioning apparatus provided in an embodiment of the present application is shown. The apparatus has the function of implementing the above vehicle positioning method examples; the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be the vehicle, the vehicle-mounted terminal, or the server described above, or may be provided on the vehicle, the vehicle-mounted terminal, or the server. The apparatus 1200 may include: an image acquisition module 1210, a model calling module 1220, a segmentation map acquisition module 1230, an information acquisition module 1240, and a position determination module 1250.
An image acquisition module 1210 is configured to acquire a road image, which is an image including a target road acquired by a camera device mounted on a target vehicle.
The model calling module 1220 is used for calling a vehicle positioning model, and the vehicle positioning model comprises a segmentation map prediction part and a position prediction part.
The segmentation map obtaining module 1230 is configured to obtain a road segmentation map corresponding to the road image through the segmentation map prediction part, where the road segmentation map is an image obtained by segmenting lanes included in the target road.
An information obtaining module 1240, configured to obtain, according to the road segmentation map, an included angle between the heading of the target vehicle and a reference line and a distance between the target vehicle and the reference line, where the reference line is a line parallel to the lane.
A position determining module 1250, configured to determine the position of the target vehicle through geometric transformation according to the included angle and the distance.
In summary, after the road image collected by the target vehicle is acquired, the vehicle positioning model may be directly invoked, and the position of the target vehicle is obtained directly from the vehicle positioning model. In the related art, by contrast, lane lines are obtained by fitting lane-line parameters, and the position of the vehicle is then derived from the fitted lines. With the technical solution provided here, once the road image is obtained, the position of the vehicle can be determined directly by the vehicle positioning model, without intermediate steps such as lane-line fitting, which improves the accuracy of the finally determined vehicle position.
In some possible designs, the position determining module 1250 is configured to determine a field angle of the image capturing apparatus according to the included angle and the distance; acquiring a downward inclination angle of the camera device; determining the position of the camera equipment according to the field angle of the camera equipment and the downward inclination angle of the camera equipment; and determining the position of the target vehicle according to the position of the camera equipment and the relative position of the camera equipment and the target vehicle.
In some possible designs, the reference line includes a road centerline of the target road.
In some possible designs, the segmentation map obtaining module 1230 is configured to perform feature extraction on the road image to obtain a feature map corresponding to the road image; classify the pixels in the feature map and determine the category to which each pixel belongs, where the categories include background, single solid line, dashed line, and double solid line; and obtain the road segmentation map according to the categories of the pixels.
In some possible designs, the training process for the vehicle positioning model is as follows:
acquiring a sample road image and sample position information, wherein the sample road image is an image which is acquired through a camera device installed on a sample vehicle and contains a sample road, the sample position information comprises a sample included angle and a sample distance, the sample included angle is an included angle between the orientation of the sample vehicle and a reference line, and the sample distance is a distance between the sample vehicle and the reference line; labeling the sample road image to obtain a sample road segmentation map corresponding to the sample road image, wherein the sample road segmentation map is an image obtained by segmenting lanes included in the sample road; training the prediction part of the segmentation graph by adopting the sample road image and the sample road segmentation graph; in response to stopping the training of the segmentation map prediction section, training the position prediction section using the sample road image and the sample position information.
In some possible designs, the obtaining of the sample road image and the sample position information includes: acquiring n sample position information of the sample vehicle, wherein n is a positive integer; acquiring n sample road images acquired by the sample vehicle under different sample position information through camera equipment arranged on the sample vehicle; the n sample road images are different in environmental parameter, the environmental parameter is used for representing the environmental characteristics when the sample road images are collected, and the environmental parameter comprises at least one of the following items: illumination intensity, color temperature, illumination direction.
In some possible designs, before training the segmentation map prediction part using the sample road image and the sample road segmentation map, the method further includes: transforming image parameters of the sample road image to obtain a transformed sample road image, where the image parameters include at least one of: hue, contrast, brightness, and saturation.
Accordingly, training the segmentation map prediction part using the sample road image and the sample road segmentation map includes: training the segmentation map prediction part using the transformed sample road image and the sample road segmentation map.
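A minimal sketch of this photometric augmentation, covering brightness and contrast only; the assumption that pixel values are floats in [0, 1] is mine, not the patent's. Note that the segmentation labels are deliberately left unchanged, which is why the original sample road segmentation map is still used for training.

```python
# Hedged sketch of the image-parameter augmentation (brightness and
# contrast only); pixel values are assumed to be floats in [0, 1].

def adjust_brightness_contrast(pixels, brightness=1.0, contrast=1.0):
    """Scale contrast about the mean, then scale brightness, clamping
    to [0, 1]. Labels are untouched by this photometric transform."""
    mean = sum(pixels) / len(pixels)
    out = []
    for p in pixels:
        p = mean + (p - mean) * contrast   # contrast about the mean
        p = p * brightness                 # brightness scaling
        out.append(min(1.0, max(0.0, p)))
    return out

augmented = adjust_brightness_contrast([0.2, 0.4, 0.6], brightness=1.0, contrast=2.0)
```

In practice a library transform (e.g. a color-jitter utility) would cover hue and saturation as well; the point here is only that the augmentation changes pixels, not labels.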
In some possible designs, before training the segmentation map prediction part using the sample road image and the sample road segmentation map, the method further includes: performing a position transformation on the sample road image and the sample road segmentation map to obtain a transformed sample road image and a transformed sample road segmentation map, where the position transformation includes at least one of: translation transformation and rotation transformation.
Accordingly, training the segmentation map prediction part using the sample road image and the sample road segmentation map includes: training the segmentation map prediction part using the transformed sample road image and the transformed sample road segmentation map.
In some possible designs, as shown in fig. 13, the apparatus 1200 further comprises: a position acquisition module 1260, an offset determination module 1270, and a position adjustment module 1280.
A location acquisition module 1260 to acquire a desired location of the target vehicle.
An offset determination module 1270 to determine a position offset between the position of the target vehicle and the desired position.
A position adjustment module 1280, configured to control the target vehicle to move to the desired position according to the position offset.
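The offset-and-adjust loop of modules 1260 to 1280 can be sketched as follows. The 2-D point representation and the single proportional control step are assumptions for illustration; a real controller would act through steering and speed commands.

```python
# Minimal sketch of the adjustment step: compute the offset between the
# current and desired positions, then step a fraction of the way there.

def position_offset(current, desired):
    """Offset from the current position to the desired position."""
    return (desired[0] - current[0], desired[1] - current[1])

def step_toward(current, desired, gain=0.5):
    """Move a fraction `gain` of the remaining offset per control step."""
    dx, dy = position_offset(current, desired)
    return (current[0] + gain * dx, current[1] + gain * dy)

pos = step_toward((0.0, 0.0), (2.0, 4.0), gain=0.5)  # -> (1.0, 2.0)
```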
In some possible designs, as shown in fig. 13, the apparatus 1200 further comprises: a lane determination module 1290 and a lane labeling module 1300.
A lane determining module 1290, configured to determine a lane in which the target vehicle is located according to the position of the target vehicle.
A lane indication module 1300 configured to indicate a lane in which the target vehicle is located in the road image.
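One plausible way for module 1290 to map the predicted lateral distance to a lane index is integer division by the lane width. The 3.5 m lane width and the 0-based indexing convention are assumptions, not stated in the patent.

```python
import math

# Hedged sketch of mapping the predicted lateral distance to a lane
# index; the 3.5 m lane width and indexing convention are assumptions.

LANE_WIDTH_M = 3.5

def lane_index(distance_to_reference_m):
    """Return the 0-based lane counted outward from the reference line."""
    return math.floor(distance_to_reference_m / LANE_WIDTH_M)

idx = lane_index(4.0)  # 4.0 m from the reference line -> index 1
```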
Referring to fig. 14, a block diagram of a training apparatus for a vehicle positioning model according to an embodiment of the present application is shown. The vehicle positioning model includes a segmentation map prediction part and a position prediction part. The apparatus has the function of implementing the above example of the training method for the vehicle positioning model; the function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be the server described above, or may be provided on the server. The apparatus 1400 may include: a sample acquisition module 1410, a segmentation map acquisition module 1420, a first training module 1430, and a second training module 1440.
The sample obtaining module 1410 is configured to obtain a sample road image and sample position information, where the sample road image is an image that includes a sample road and is obtained by a camera device installed on a sample vehicle, the sample position information includes a sample included angle and a sample distance, the sample included angle is an included angle between an orientation of the sample vehicle and a reference line, the reference line is a line parallel to a lane included in the sample road, and the sample distance is a distance between the sample vehicle and the reference line.
And a segmentation map obtaining module 1420, configured to label the sample road image to obtain a sample road segmentation map corresponding to the sample road image, where the sample road segmentation map is an image obtained by segmenting lanes included in the sample road.
The first training module 1430 is configured to train the segmentation map prediction part by using the sample road image and the sample road segmentation map, where the segmentation map prediction part is used to obtain a road segmentation map.
A second training module 1440, configured to train the location prediction part using the sample road image and the sample location information in response to stopping training the segmentation map prediction part, where the location prediction part is used for vehicle positioning.
In summary, in the technical solution provided by the embodiments of the present application, after the sample road image, the sample position information, and the sample road segmentation map are obtained, the segmentation map prediction part is trained using the sample road image and the sample road segmentation map; after training of the segmentation map prediction part stops, the position prediction part is trained using the sample road image and the sample position information, yielding a trained vehicle positioning model. The vehicle positioning model can then be used directly to determine the position information of a vehicle. Compared with the related art, this technical solution determines the position of the vehicle directly through the vehicle positioning model, without fitting lane lines or similar intermediate processing, which improves the accuracy of the finally determined vehicle position.
In some possible designs, the sample obtaining module 1410 is configured to: acquire n pieces of sample position information of the sample vehicle, where n is a positive integer; and acquire, through the camera device installed on the sample vehicle, n sample road images collected by the sample vehicle under the different pieces of sample position information. The n sample road images differ in environmental parameters, where an environmental parameter characterizes the environment at the time a sample road image is collected and includes at least one of: illumination intensity, color temperature, and illumination direction.
In some possible designs, the pixels in the sample road segmentation map fall into the following categories: background, single solid line, dashed line, and double solid line. The first training module 1430 is configured to: obtain the loss weight corresponding to each category; determine a first loss function according to the sample road image, the sample road segmentation map, and the loss weights corresponding to the categories; and adjust parameters of the segmentation map prediction part according to the value of the first loss function.
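A per-class loss weight of this kind typically down-weights the dominant background class so the rare lane-marking pixels drive the gradient. The sketch below shows a class-weighted cross-entropy; the specific weight values are illustrative assumptions, not from the patent.

```python
import math

# Sketch of the class-weighted loss: each pixel's -log(p) term is
# scaled by its category's loss weight. Weight values are assumed.

WEIGHTS = {"background": 0.1, "single_solid": 1.0,
           "dashed": 1.0, "double_solid": 1.0}

def weighted_cross_entropy(pixels):
    """pixels: list of (true_class_name, predicted_prob_of_true_class).
    Returns the mean of the class-weighted -log(p) terms."""
    total = sum(WEIGHTS[c] * -math.log(p) for c, p in pixels)
    return total / len(pixels)

loss = weighted_cross_entropy([("background", 0.9), ("dashed", 0.5)])
```

With these weights, a misclassified dashed-line pixel costs ten times as much as an equally misclassified background pixel, counteracting the class imbalance.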
It should be noted that when the apparatus provided in the foregoing embodiments implements its functions, the description illustrates only the division into the functional modules shown; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; for details of the specific implementation process, refer to the method embodiments, which are not repeated here.
Referring to fig. 15, a block diagram of a terminal according to an embodiment of the present application is shown. In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for handling machine-learning-related computation.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1502 is used to store at least one instruction, at least one program, set of codes, or set of instructions for execution by the processor 1501 to implement the vehicle localization methods provided by method embodiments herein, or to implement the training methods of vehicle localization models as described in the above aspects.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device may include: at least one of a communication interface 1504, a display 1505, an audio circuit 1506, a camera assembly 1507, a positioning assembly 1508, and a power supply 1509.
Those skilled in the art will appreciate that the configuration shown in fig. 15 does not constitute a limitation of terminal 1500, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
Referring to fig. 16, a schematic structural diagram of a server according to an embodiment of the present application is shown. Specifically, the method comprises the following steps:
the server 1600 includes a CPU (Central Processing Unit) 1601, a system Memory 1604 including a RAM (Random Access Memory) 1602 and a ROM (Read Only Memory) 1603, and a system bus 1605 connecting the system Memory 1604 and the Central Processing Unit 1601. The server 1600 also includes a basic I/O (Input/Output) system 1606, which facilitates information transfer between various devices within the computer, and a mass storage device 1607 for storing an operating system 1613, application programs 1614, and other program modules 1612.
The basic input/output system 1606 includes a display 1608 for displaying information and an input device 1609, such as a mouse or keyboard, through which a user inputs information. The display 1608 and the input device 1609 are both connected to the central processing unit 1601 through an input-output controller 1610, which is connected to the system bus 1605. The basic input/output system 1606 may also use the input-output controller 1610 to receive and process input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input-output controller 1610 may also provide output to a display screen, a printer, or another type of output device.
The mass storage device 1607 is connected to the central processing unit 1601 by a mass storage controller (not shown) connected to the system bus 1605. The mass storage device 1607 and its associated computer-readable media provide non-volatile storage for the server 1600. That is, the mass storage device 1607 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM (Compact disk Read-Only Memory) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read Only Memory), flash Memory or other solid state Memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1604 and mass storage device 1607 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 1600 may also operate by connecting, through a network such as the Internet, to remote computers on the network. That is, the server 1600 may be connected to the network 1612 through the network interface unit 1611 coupled to the system bus 1605; the network interface unit 1611 may also be used to connect to other types of networks or remote computer systems (not shown).
The memory also includes at least one instruction, at least one program, set of codes, or set of instructions stored in the memory and configured to be executed by the one or more processors to implement the vehicle positioning method described above, or to implement the training method of the vehicle positioning model described above.
In an exemplary embodiment, a computer device is also provided. The computer device may be a terminal or a server. The computer device includes a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the above-mentioned vehicle localization method, or to implement the above-mentioned training method of a vehicle localization model.
In an exemplary embodiment, a computer readable storage medium is also provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which when executed by a processor implements the above-mentioned vehicle localization method, or implements the above-mentioned training method of a vehicle localization model.
In an exemplary embodiment, a computer program product is also provided which, when executed by a processor, implements the above-mentioned vehicle positioning method or the above-mentioned training method of the vehicle positioning model.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following associated objects.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A vehicle positioning method, characterized in that the method comprises:
acquiring a road image, wherein the road image is an image containing a target road acquired by a camera device installed on a target vehicle;
calling a vehicle positioning model, wherein the vehicle positioning model comprises a segmentation map prediction part and a position prediction part;
acquiring a road segmentation map corresponding to the road image through the segmentation map prediction part, wherein the road segmentation map is an image in which the lanes included in the target road are segmented;
acquiring an included angle between the orientation of the target vehicle and a reference line and a distance between the target vehicle and the reference line according to the road segmentation map through the position prediction part, wherein the reference line is a line parallel to the lane;
and determining the position of the target vehicle through geometric transformation according to the included angle and the distance.
2. The method of claim 1, wherein determining the location of the target vehicle by geometric transformation based on the included angle and the distance comprises:
determining the field angle of the camera equipment according to the included angle and the distance;
acquiring a downward inclination angle of the camera device;
determining the position of the camera equipment according to the field angle of the camera equipment and the downward inclination angle of the camera equipment;
and determining the position of the target vehicle according to the position of the camera equipment and the relative position of the camera equipment and the target vehicle.
3. The method according to claim 1, wherein the obtaining of the road segmentation map corresponding to the road image by the segmentation map prediction section includes:
extracting the features of the road image to obtain a feature map corresponding to the road image;
classifying pixels in the feature map, and determining the category to which each pixel belongs, wherein the categories comprise background, single solid line, dashed line and double solid line;
and obtaining the road segmentation graph according to the category of the pixel.
4. The method of claim 1, wherein the vehicle localization model is trained as follows:
acquiring a sample road image and sample position information, wherein the sample road image is an image which is acquired through a camera device installed on a sample vehicle and contains a sample road, the sample position information comprises a sample included angle and a sample distance, the sample included angle is an included angle between the orientation of the sample vehicle and a reference line, and the sample distance is a distance between the sample vehicle and the reference line;
labeling the sample road image to obtain a sample road segmentation map corresponding to the sample road image, wherein the sample road segmentation map is an image obtained by segmenting lanes included in the sample road;
training the segmentation map prediction part by adopting the sample road image and the sample road segmentation map;
in response to stopping the training of the segmentation map prediction part, training the position prediction part by adopting the sample road image and the sample position information.
5. The method of claim 4, wherein the obtaining of the sample road image and the sample location information comprises:
acquiring n sample position information of the sample vehicle, wherein n is a positive integer;
acquiring n sample road images acquired by the sample vehicle under different sample position information through camera equipment arranged on the sample vehicle;
the n sample road images differ in environmental parameters, wherein an environmental parameter is used for representing the environmental characteristics when the sample road image is collected, and the environmental parameter comprises at least one of the following items: illumination intensity, color temperature, and illumination direction.
6. The method of claim 4, wherein before training the segmentation map prediction part using the sample road image and the sample road segmentation map, further comprising:
transforming the image parameters of the sample road image to obtain a transformed sample road image, wherein the image parameters comprise at least one of the following items: hue, contrast, brightness, saturation;
the training of the segmentation map prediction part by adopting the sample road image and the sample road segmentation map comprises:
training the segmentation map prediction part by adopting the transformed sample road image and the sample road segmentation map.
7. The method of claim 4, wherein before training the segmentation map prediction part using the sample road image and the sample road segmentation map, further comprising:
performing position transformation on the sample road image and the sample road segmentation map to obtain a transformed sample road image and a transformed sample road segmentation map, wherein the position transformation comprises at least one of the following items: translation transformation and rotation transformation;
the training of the segmentation map prediction part by adopting the sample road image and the sample road segmentation map comprises:
training the segmentation map prediction part by adopting the transformed sample road image and the transformed sample road segmentation map.
8. The method of any one of claims 1 to 7, wherein after determining the position of the target vehicle by geometric transformation based on the included angle and the distance, further comprising:
acquiring a desired position of the target vehicle;
determining a position offset between the position of the target vehicle and the desired position;
and controlling the target vehicle to move to the expected position according to the position offset.
9. The method of any one of claims 1 to 7, wherein after determining the position of the target vehicle by geometric transformation based on the included angle and the distance, further comprising:
determining a lane where the target vehicle is located according to the position of the target vehicle;
indicating in the road image a lane in which the target vehicle is located.
10. A method of training a vehicle localization model, the vehicle localization model comprising a segmentation map prediction component and a location prediction component, the method comprising:
acquiring a sample road image and sample position information, wherein the sample road image is an image which is acquired through a camera device installed on a sample vehicle and contains a sample road, the sample position information comprises a sample included angle and a sample distance, the sample included angle is an included angle between the orientation of the sample vehicle and a reference line, the reference line is a line parallel to a lane included in the sample road, and the sample distance is a distance between the sample vehicle and the reference line;
labeling the sample road image to obtain a sample road segmentation map corresponding to the sample road image, wherein the sample road segmentation map is an image obtained by segmenting lanes included in the sample road;
training a segmentation map prediction part by adopting the sample road image and the sample road segmentation map, wherein the segmentation map prediction part is used for acquiring a road segmentation map;
in response to stopping the training of the segmentation map prediction part, training the position prediction part by adopting the sample road image and the sample position information, the position prediction part being used for vehicle positioning.
11. The method of claim 10, wherein the obtaining of the sample road image and the sample location information comprises:
acquiring n sample position information of the sample vehicle, wherein n is a positive integer;
acquiring n sample road images acquired by the sample vehicle under different sample position information through camera equipment arranged on the sample vehicle;
the n sample road images differ in environmental parameters, wherein an environmental parameter is used for representing the environmental characteristics when the sample road image is collected, and the environmental parameter comprises at least one of the following items: illumination intensity, color temperature, and illumination direction.
12. The method according to claim 10 or 11, wherein the pixels in the sample road segmentation map comprise the following categories: background, single solid line, dashed line, and double solid line;
the training of the segmentation map prediction part by adopting the sample road image and the sample road segmentation map comprises the following steps:
obtaining loss weights corresponding to the categories;
determining a first loss function according to the sample road image, the sample road segmentation map and the loss weights corresponding to the categories;
adjusting parameters of the segmentation map prediction part according to the value of the first loss function.
13. A vehicle locating apparatus, characterized in that the apparatus comprises:
the system comprises an image acquisition module, a road image acquisition module and a road image processing module, wherein the image acquisition module is used for acquiring a road image which is an image containing a target road and is acquired by a camera device installed on a target vehicle;
the model calling module is used for calling a vehicle positioning model, and the vehicle positioning model comprises a segmentation map prediction part and a position prediction part;
the segmentation map acquisition module is used for acquiring a road segmentation map corresponding to the road image through the segmentation map prediction part, wherein the road segmentation map is an image obtained by segmenting lanes included in the target road;
an information acquisition module, configured to acquire, by the location prediction part, an included angle between an orientation of the target vehicle and a reference line and a distance between the target vehicle and the reference line according to the road segmentation map, where the reference line is a line parallel to the lane;
and the position determining module is used for determining the position of the target vehicle through geometric transformation according to the included angle and the distance.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the method of any one of claims 1 to 9 or to implement the method of any one of claims 10 to 12.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of any one of claims 1 to 9 or to implement the method of any one of claims 10 to 12.
CN202010086068.1A 2020-02-11 2020-02-11 Vehicle positioning method, device, equipment and storage medium Active CN111311675B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010086068.1A CN111311675B (en) 2020-02-11 2020-02-11 Vehicle positioning method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111311675A true CN111311675A (en) 2020-06-19
CN111311675B CN111311675B (en) 2022-09-16

Family

ID=71160032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010086068.1A Active CN111311675B (en) 2020-02-11 2020-02-11 Vehicle positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111311675B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108253974A (en) * 2017-12-29 2018-07-06 深圳市城市交通规划设计研究中心有限公司 Floating Car location data automatic adaptation cushion route matching system and method
CN108805891A (en) * 2018-05-23 2018-11-13 北京工业大学 A kind of lane detection and vehicle positioning method based on carinate figure Yu improvement sequence RANSAC
CN109345589A (en) * 2018-09-11 2019-02-15 百度在线网络技术(北京)有限公司 Method for detecting position, device, equipment and medium based on automatic driving vehicle
US20190340783A1 (en) * 2018-09-11 2019-11-07 Baidu Online Network Technology (Beijing) Co., Ltd. Autonomous Vehicle Based Position Detection Method and Apparatus, Device and Medium
CN110770741A (en) * 2018-10-31 2020-02-07 深圳市大疆创新科技有限公司 Lane line identification method and device and vehicle
CN109828577A (en) * 2019-02-25 2019-05-31 北京主线科技有限公司 The opposite automation field bridge high accuracy positioning parking method of unmanned container truck
CN110706509A (en) * 2019-10-12 2020-01-17 东软睿驰汽车技术(沈阳)有限公司 Parking space and direction angle detection method, device, equipment and medium thereof

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200884A (en) * 2020-09-08 2021-01-08 浙江大华技术股份有限公司 Method and device for generating lane line
CN112200884B (en) * 2020-09-08 2024-05-03 浙江大华技术股份有限公司 Lane line generation method and device
CN112102398A (en) * 2020-09-10 2020-12-18 腾讯科技(深圳)有限公司 Positioning method, device, equipment and storage medium
CN112230663B (en) * 2020-10-28 2023-11-10 腾讯科技(深圳)有限公司 Method and device for monitoring vehicle positioning data
CN112230663A (en) * 2020-10-28 2021-01-15 腾讯科技(深圳)有限公司 Vehicle positioning data monitoring method and device
CN112990009A (en) * 2021-03-12 2021-06-18 平安科技(深圳)有限公司 End-to-end-based lane line detection method, device, equipment and storage medium
CN113132931A (en) * 2021-04-16 2021-07-16 University of Electronic Science and Technology of China Deep-transfer indoor positioning method based on parameter prediction
CN113132931B (en) * 2021-04-16 2022-01-28 University of Electronic Science and Technology of China Deep-transfer indoor positioning method based on parameter prediction
CN113408413A (en) * 2021-06-18 2021-09-17 Suzhou Keda Technology Co., Ltd. Emergency lane identification method, system and device
CN113408413B (en) * 2021-06-18 2023-03-24 Suzhou Keda Technology Co., Ltd. Emergency lane identification method, system and device
CN113191342A (en) * 2021-07-01 2021-07-30 China Mobile (Shanghai) Information and Communication Technology Co., Ltd. Lane positioning method and electronic equipment
WO2023280135A1 (en) * 2021-07-09 2023-01-12 Huawei Technologies Co., Ltd. Communication method and apparatus, storage medium and program
CN114593739A (en) * 2022-03-17 2022-06-07 Changsha Huilian Intelligent Technology Co., Ltd. Vehicle global positioning method and device based on visual detection and reference line matching
CN114593739B (en) * 2022-03-17 2023-11-21 Changsha Huilian Intelligent Technology Co., Ltd. Vehicle global positioning method and device based on visual detection and reference line matching

Also Published As

Publication number Publication date
CN111311675B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN111311675B (en) Vehicle positioning method, device, equipment and storage medium
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
CN108734058B (en) Obstacle type identification method, device, equipment and storage medium
CN112740268B (en) Target detection method and device
CN112419368A (en) Method, device, equipment and storage medium for tracking the trajectory of a moving target
CN113819890A (en) Distance measuring method, distance measuring device, electronic equipment and storage medium
CN111488812B (en) Obstacle position recognition method and device, computer equipment and storage medium
CN112699834B (en) Traffic identification detection method, device, computer equipment and storage medium
CN113591872A (en) Data processing system, object detection method and device
CN114299464A (en) Lane positioning method, device and equipment
CN112801236B (en) Image recognition model migration method, device, equipment and storage medium
CN111931683A (en) Image recognition method, image recognition device and computer-readable storage medium
CN112654998B (en) Lane line detection method and device
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
CN110780325A (en) Method and device for positioning moving object and electronic equipment
CN111008622B (en) Image object detection method and device and computer readable storage medium
CN111210411B (en) Method for detecting vanishing points in image, method for training detection model and electronic equipment
CN113298044B (en) Obstacle detection method, system, device and storage medium based on positioning compensation
CN117011481A (en) Method and device for constructing three-dimensional map, electronic equipment and storage medium
WO2023283929A1 (en) Method and apparatus for calibrating external parameters of binocular camera
Yang et al. Real-Time field road freespace extraction for agricultural machinery autonomous driving based on LiDAR
CN114882372A (en) Target detection method and device
CN114119757A (en) Image processing method, apparatus, device, medium, and computer program product
Cattaruzza Design and simulation of autonomous driving algorithms

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (country: HK; legal event code: DE; document number: 40025250)
SE01 Entry into force of request for substantive examination
GR01 Patent grant