WO2020119567A1 - Data processing method, apparatus, device and machine readable medium - Google Patents


Info

Publication number
WO2020119567A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane
vehicle
feature
image
boundary
Prior art date
Application number
PCT/CN2019/123214
Other languages
French (fr)
Chinese (zh)
Inventor
Liu Jinfeng (刘进锋)
Zhan Zhongwei (詹中伟)
Liu Xin (刘欣)
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited
Publication of WO2020119567A1 publication Critical patent/WO2020119567A1/en

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled

Definitions

  • the present application relates to the field of intelligent transportation technology, and in particular, to a data processing method, a data processing device, a device, and a machine-readable medium.
  • Intelligent transportation systems apply advanced electronic information technology to transportation to achieve efficient and value-added services. Many of these services are based on vehicle location information, so positioning is a foundation in intelligent transportation systems.
  • the lane is the basic unit of the road occupied by a vehicle during driving. Lane positioning is a key technology in the field of intelligent transportation; it can provide technical support for automatic/semi-automatic driving control, navigation, and lane departure warning.
  • one positioning method realizes the positioning of the lane through high-precision GPS (Global Positioning System) and high-precision electronic maps.
  • the cost of high-precision GPS and high-precision electronic maps is relatively high.
  • the accuracy of high-precision GPS is generally less than the lateral dimension of half a lane, which limits the application range of the above-mentioned positioning method.
  • the technical problem to be solved by the embodiments of the present application is to provide a data processing method, which can realize the positioning of the lane at a relatively low cost.
  • the embodiments of the present application also provide a data processing device, a device, a machine-readable medium, a navigation method, and a driving assistance method to ensure the implementation and application of the above method.
  • a data processing method, including:
  • determining, according to a road image corresponding to a vehicle, a lane image feature corresponding to the vehicle;
  • determining, according to positioning data, a lane position feature corresponding to the vehicle;
  • determining, according to map data, a lane number feature corresponding to the vehicle;
  • determining, according to the lane image feature, the lane position feature and the lane number feature, a lane boundary corresponding to the vehicle; and
  • determining, according to the lane boundary, a target lane corresponding to the vehicle.
  • an embodiment of the present application also discloses a data processing device, including:
  • the image processing module is used to determine the characteristics of the lane image corresponding to the vehicle according to the road image corresponding to the vehicle;
  • the positioning module is used to determine the lane position feature corresponding to the vehicle according to the positioning data;
  • the map processing module is used to determine the lane number feature corresponding to the vehicle based on the map data;
  • a lane boundary determining module configured to determine a lane boundary corresponding to the vehicle based on the lane image feature, the lane position feature and the lane number feature;
  • the target lane determination module is used to determine a target lane corresponding to the vehicle according to the lane boundary.
  • an embodiment of the present application further discloses a device, including:
  • one or more processors; and
  • one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the device to perform one or more of the methods described above.
  • embodiments of the present application disclose one or more machine-readable media on which instructions are stored, which when executed by one or more processors, cause the device to perform one or more of the aforementioned methods.
  • an embodiment of the present application further discloses a navigation method, including:
  • determining, according to a road image corresponding to a vehicle, a lane image feature corresponding to the vehicle;
  • determining, according to positioning data, a lane position feature corresponding to the vehicle;
  • determining, according to map data, a lane number feature corresponding to the vehicle;
  • determining, according to the lane image feature, the lane position feature and the lane number feature, a lane boundary corresponding to the vehicle;
  • determining, according to the lane boundary, a target lane corresponding to the vehicle; and
  • determining, according to the target lane, the navigation information corresponding to the vehicle.
  • an embodiment of the present application discloses a driving assistance method, including:
  • determining, according to a road image corresponding to a vehicle, a lane image feature corresponding to the vehicle;
  • determining, according to positioning data, a lane position feature corresponding to the vehicle;
  • determining, according to map data, a lane number feature corresponding to the vehicle;
  • determining, according to the lane image feature, the lane position feature and the lane number feature, a lane boundary corresponding to the vehicle;
  • determining, according to the lane boundary, a target lane corresponding to the vehicle; and
  • determining, according to the target lane, the auxiliary driving information corresponding to the vehicle.
  • the embodiments of the present application include the following advantages:
  • the embodiment of the present application comprehensively utilizes image data, positioning data and map data to realize the positioning of the lane where the vehicle is located.
  • the image data can be used as the basis for determining the image features of the lane;
  • the positioning data can be used as the basis for determining the position features of the lane;
  • the map data can be used as the basis for determining the features of the number of lanes;
  • the embodiments of the present application can fuse the lane image feature and the lane position feature to convert the lane features from the image coordinate system to the map coordinate system; in this way, the lane boundary corresponding to the vehicle can be determined according to the lane features in the map coordinate system and the lane number feature, and the target lane corresponding to the vehicle can then be determined according to the above lane boundary, that is, it can be determined which lane the vehicle is in.
  • therefore, the embodiments of the present application can realize lane positioning at a lower cost.
  • FIG. 1 is a schematic diagram of an application environment of a data processing method of this application
  • FIG. 2 is a flowchart of steps of Embodiment 1 of a data processing method of the present application;
  • FIG. 3 is a schematic diagram of a road condition according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a road condition according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a road situation according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a relationship between a vehicle and a lane boundary according to an embodiment of the present application.
  • FIG. 7 is a flowchart of steps of Embodiment 2 of a data processing method of the present application;
  • FIG. 8 is a flowchart of steps of Embodiment 3 of a data processing method of the present application;
  • FIG. 9 is a structural block diagram of an embodiment of a data processing device of the present application.
  • FIG. 10 is a schematic diagram of data interaction of a data processing apparatus according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a device provided by an embodiment of the present application.
  • Reference to "one embodiment", "an embodiment", "a specific embodiment", etc. in this specification means that the described embodiments may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that particular feature, structure, or characteristic. Furthermore, such phrases do not necessarily refer to the same embodiment.
  • when a specific feature, structure, or characteristic is described in connection with one embodiment, whether or not it is explicitly described, it can be considered that such a feature, structure, or characteristic is also related to other embodiments, within the scope known to those skilled in the art.
  • items in a list included in the form of "at least one of A, B, and C" may include the following possible items: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B and C).
  • items listed in the form of "at least one of A, B, or C" may mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B and C).
  • the disclosed embodiments may be implemented as hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments may also be implemented as instructions carried or stored in one or more transitory or non-transitory machine-readable (eg, computer-readable) storage media, which may be executed by one or more processors.
  • a machine-readable storage medium may be implemented as a storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (such as volatile or non-volatile memory, a media disc, or another physical structure device).
  • the embodiments of the present application provide a data processing solution, which may include: determining, according to a road image corresponding to a vehicle, a lane image feature corresponding to the vehicle; determining, according to positioning data, a lane position feature corresponding to the vehicle; determining, according to map data, a lane number feature corresponding to the vehicle; determining, according to the lane image feature, the lane position feature, and the lane number feature, a lane boundary corresponding to the vehicle; and determining, according to the lane boundary, a target lane corresponding to the vehicle.
  • the embodiments of the present application comprehensively use image data, positioning data, and map data to determine the target lane corresponding to the vehicle, and can realize the positioning of the lane where the vehicle is located.
  • the image data may be obtained by an image acquisition device such as a camera, which may be used as a basis for determining the image features of the lane; the image features of the lane may be the lane features of the image dimension.
  • the positioning data can be used as the basis for determining the position feature of the lane.
  • the position feature of the lane can be the lane feature of the position dimension.
  • the embodiments of the present application have low requirements on the accuracy of positioning data.
  • the positioning data can be derived from sensors with ordinary positioning accuracy, such as GPS sensors, GNSS (Global Navigation Satellite System) sensors, etc.
  • ordinary positioning accuracy is usually about 10 meters.
  • the map data can be used as the basis for determining the lane number feature. Therefore, the accuracy requirements on the map data in the embodiments of the present application are low, and the precision of the map data may be non-high precision, that is, ordinary precision.
  • the lane image feature and the lane position feature can be fused to convert the lane features from the image coordinate system to the map coordinate system; further, the lane boundary corresponding to the vehicle can be determined according to the lane features in the map coordinate system and the lane number feature, and in turn the target lane corresponding to the vehicle can be determined according to the above lane boundary, that is, which lane the vehicle is located in can be determined.
  • therefore, the embodiments of the present application can realize lane positioning at a lower cost.
  • a lane, also known as a traffic lane or carriageway, is the part of a road used by vehicles to travel.
  • the above-mentioned intelligent traffic scene may be a navigation scene.
  • lane-level guidance may be provided.
  • the above-mentioned lane-level guidance can improve the accuracy and precision of navigation when entering a complex intersection, a multi-level interchange, or a road with multiple entrances and exits.
  • lane-level positioning can provide basic data for AR (Augmented Reality) navigation.
  • the foregoing intelligent traffic scene may be an assisted driving scene or an unmanned driving scene.
  • assisted driving scenarios require manual monitoring, while unmanned driving scenarios do not require manual monitoring.
  • the driving assistance information may include: target lane announcement information, or lane keeping information, or lane change information.
  • for example, when the target lane is clear, the lane keeping information can be output; when the target lane is not suitable for turning, the lane change information can be output.
  • the data processing solution provided by the embodiments of the present application can be applied to the application environment shown in FIG. 1.
  • the client 100 and the server 200 are located in a wired or wireless network, and the client 100 performs data interaction with the server 200 through the wired or wireless network.
  • the client may run on the device.
  • the client may be an APP running on the terminal, such as a navigation APP, an e-commerce APP, an instant messaging APP, an input method APP, or an APP that comes with the operating system.
  • the embodiment of the present application does not limit the specific APP corresponding to the client.
  • the above device may have a built-in or external screen, and the above screen is used to display information.
  • the above device may also have a built-in or external speaker, and the above-mentioned speaker is used for playing information.
  • the above information may include: information of the target lane.
  • the road where the vehicle is located includes 4 lanes, and the 4 lanes may be numbered.
  • the numbers of the 4 lanes are 1, 2, 3 and 4 respectively; the information of the target lane can then be the number of the target lane.
  • the following information can be played: "You are currently in lane X", where X ranges from 1 to 4.
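  • As a minimal illustration of the spoken prompt above (assuming lanes are numbered 1 to 4 from one side of the road; the function name is illustrative, not a fixed interface), the following Python sketch formats the announcement:

      def lane_announcement(target_lane: int, lane_count: int = 4) -> str:
          # The wording mirrors the example in the text: "You are currently in lane X".
          assert 1 <= target_lane <= lane_count
          return f"You are currently in lane {target_lane}"

      print(lane_announcement(3))  # -> "You are currently in lane 3"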
  • the above-mentioned device may be a vehicle-mounted device or a device owned by a user.
  • the above devices may specifically include, but are not limited to: smartphones, tablets, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, in-vehicle equipment, PCs (Personal Computers), set-top boxes, smart TVs, wearable devices, etc. It can be understood that the embodiments of the present application do not limit the specific devices.
  • Examples of in-vehicle equipment may include: HUD (Head Up Display, Head Up Display), etc.
  • a HUD is usually installed in front of the driver and, during driving, can provide the driver with necessary driving information, such as navigation information, which can include the information of the target lane; in other words, a HUD can integrate multiple functions into one, which makes it convenient for the driver to keep attention on the traffic conditions.
  • Embodiment 1 of a data processing method of the present application may specifically include the following steps:
  • Step 201: Determine the lane image feature corresponding to the vehicle according to the road image corresponding to the vehicle;
  • Step 202: Determine the lane position feature corresponding to the vehicle according to the positioning data;
  • Step 203: Determine the lane number feature corresponding to the vehicle according to the map data;
  • Step 204: Determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
  • Step 205: Determine a target lane corresponding to the vehicle according to the lane boundary.
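  • The following Python sketch illustrates one possible end-to-end arrangement of steps 201 to 205. All function names and the simple placeholder data (lane width, lane count, lateral offsets) are assumptions made for illustration, not the patent's interfaces or algorithms:

      from typing import List, Tuple

      def determine_lane_image_feature(road_image) -> List[Tuple[float, float]]:
          # Step 201 placeholder: lane feature points in image coordinates (u, v).
          return [(100.0, 400.0), (220.0, 400.0)]

      def determine_lane_position_feature(positioning_data, image_features) -> List[Tuple[float, float]]:
          # Step 202 placeholder: the same features expressed in map coordinates (x, y).
          return [(3.5 * i, 0.0) for i, _ in enumerate(image_features)]

      def determine_lane_number_feature(map_data, position_features) -> int:
          # Step 203 placeholder: number of lanes of the current road, read from map data.
          return 4

      def determine_lane_boundaries(image_features, position_features, lane_count) -> List[float]:
          # Step 204 placeholder: lateral offsets (in meters) of all lane boundaries in map coordinates.
          lane_width = position_features[1][0] - position_features[0][0]
          return [i * lane_width for i in range(lane_count + 1)]

      def determine_target_lane(boundaries: List[float], vehicle_offset: float) -> int:
          # Step 205: the lane whose two boundaries bracket the vehicle's lateral offset.
          for lane, (left, right) in enumerate(zip(boundaries, boundaries[1:]), start=1):
              if left <= vehicle_offset < right:
                  return lane
          return -1  # vehicle is outside the known boundaries

      boundaries = determine_lane_boundaries(None, [(0.0, 0.0), (3.5, 0.0)], 4)
      print(determine_target_lane(boundaries, vehicle_offset=8.0))  # -> 3 (third lane)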
  • At least one step included in the method of the embodiment of the present application may be executed by a client and/or a server.
  • the embodiment of the present application does not limit the specific execution subject of the method step.
  • the road image can be obtained by an image acquisition device such as a camera or a camera.
  • the number of image acquisition devices may be one, or may be greater than one.
  • the image acquisition device may be disposed at the periphery of the vehicle.
  • the above-mentioned periphery may include the front of the vehicle, which may be directly ahead or diagonally ahead.
  • the camera may be a monocular camera or a binocular camera.
  • the camera can be installed on the longitudinal center axis of the vehicle at a position toward the front of the roof, aimed at the area in front of the vehicle; the height of the camera above the ground and the camera's pitch angle, yaw angle and roll angle can be determined according to actual application requirements.
  • the camera can continuously collect images of the road directly in front of the vehicle.
  • the camera may also be located on the longitudinal center axis of the vehicle at a position in front of the vehicle projection. It can be understood that the embodiment of the present application does not limit the specific image acquisition device and its orientation.
  • step 201 may use image processing techniques to determine the lane image features corresponding to the vehicle from the road images corresponding to the vehicle.
  • the above image processing technology may include: filtering technology.
  • the above filtering technique can be used to filter out the noise in the road image to reduce the interference of the noise on the characteristics of the lane image.
  • the above image processing technology may include: image recognition technology.
  • Image recognition refers to the technology of using machines to process, analyze and understand images in order to recognize image objects in various modes.
  • specifically, a machine may be used to process, analyze, and understand road images in order to identify various image targets.
  • the image target may include: the lane image feature of the embodiment of the present application.
  • the image features of the lane in the road image can correspond to a certain image area in the road image.
  • an edge detection technique may be used to determine an image area corresponding to a single lane image feature, and determine a lane image feature corresponding to the image area.
  • the process of determining the lane image features corresponding to the vehicle in step 201 may include: detecting image targets in the road image, and analyzing the detected image targets using deep learning methods to obtain the corresponding image target information, that is, the lane image features.
  • the image target information may include: image target image, name, category and other information.
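  • As a hedged sketch of the image processing mentioned above (filtering plus recognition of line-type lane image features), the following Python code uses classical OpenCV operations (Gaussian blur, Canny edges, probabilistic Hough transform). It is one possible realization under the assumption that OpenCV is available, not the specific method of the embodiments:

      import cv2
      import numpy as np

      def extract_lane_line_segments(road_image_bgr: np.ndarray) -> np.ndarray:
          gray = cv2.cvtColor(road_image_bgr, cv2.COLOR_BGR2GRAY)
          # Filtering: suppress noise so it does not interfere with the lane image features.
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)
          # Edge detection: candidate boundaries such as lane lines and road edges.
          edges = cv2.Canny(blurred, 50, 150)
          # Keep only the lower half of the image, where the road surface normally appears.
          h = edges.shape[0]
          mask = np.zeros_like(edges)
          mask[h // 2:, :] = 255
          edges = cv2.bitwise_and(edges, mask)
          # Hough transform: line-type lane image features (each row is x1, y1, x2, y2).
          segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                     minLineLength=40, maxLineGap=20)
          return segments if segments is not None else np.empty((0, 1, 4), dtype=np.int32)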
  • step 201 may use photogrammetric technology to determine the lane image features corresponding to the vehicle from the road images corresponding to the vehicle.
  • photogrammetry can use photos obtained by optical cameras or digital cameras to determine the position, shape, size, characteristics and interrelationships of the photographed objects.
  • the above-mentioned lane image feature may specifically include at least one of the following features: a lane feature point, a lane feature line, and a lane feature area.
  • the lane feature point may be a point-type lane image feature.
  • the lane feature points may specifically include at least one of the following feature points:
  • the end points of the lane boundaries may include: end points of lane lines, or end points of road edges.
  • the lane feature line may be a lane image feature of line type.
  • the lane characteristic line may specifically include at least one of the following characteristic lines:
  • the contour line, also called the "outer line", refers to the outer boundary of an object, that is, the boundary between one object and another object, or between an object and the background.
  • the lane feature area may be a lane image feature of area type or area type.
  • the above lane characteristic area may specifically include at least one of the following areas:
  • the vehicle here may refer to a vehicle other than the vehicle on which the image acquisition device is located on the road.
  • the positioning data may be derived from ordinary positioning accuracy sensors, such as GPS sensors, GNSS sensors, and the like.
  • the lane position characteristics determined in step 202 may include: vehicle position characteristics.
  • the vehicle location feature is used to characterize the location of the vehicle.
  • the vehicle position feature may be determined based on the positioning data and the lane image feature obtained in step 201.
  • the features of the lane image can reflect the surrounding environment where the vehicle is located, so the accuracy of the vehicle's position features can be improved.
  • the lane position feature determined in step 202 may further include: a position feature corresponding to the lane image feature.
  • the lane image features obtained in step 201 can be received, and the position features corresponding to the lane image features can be determined according to the positioning data.
  • SLAM (Simultaneous Localization and Mapping) may be used to determine the above position features. The principle of SLAM can be: a robot starts to move from an unknown position in an unknown environment, positions itself based on position estimation and the map during the movement, and at the same time builds an incremental map on the basis of its self-positioning, so as to achieve autonomous positioning and mapping.
  • the position feature of the lane feature point, the lane feature line, or the lane feature area obtained in step 201 may be determined according to the SLAM method.
  • the map data can be used as the basis for determining the lane number feature. Therefore, the accuracy requirements on the map data in the embodiments of the present application are low, and the precision of the map data may be non-high precision, that is, ordinary precision.
  • step 203 may determine the lane number feature based on the lane position feature obtained in step 202.
  • the lane number feature can be used to characterize the number of lanes in the road where the vehicle is currently or will be.
  • step 203 can also determine the lane direction feature based on the map data.
  • the lanes can include: one-way lanes or two-way lanes.
  • the directions of a two-way road may include two opposite directions, and the lane direction feature may represent one of the directions, that is, the driving direction of the vehicle.
  • Step 204 may fuse lane image features, lane position features, and lane number features to obtain lane boundaries corresponding to vehicles.
  • the lane boundary may refer to the boundary of the lane, which may serve as the boundary between the lane and the lane, or between the lane and other objects. Lane boundaries may include: lane lines, road edges, etc.
  • the principle of determining the lane boundary corresponding to the vehicle in the embodiment of the present application may be: the lane boundary corresponding to the vehicle is solved according to the lane image feature, the lane position feature, and the lane number feature.
  • the above solution can convert the lane feature from the image coordinate system to the map coordinate system to determine the parameters of the lane boundary in the map coordinate system (such as lane boundary position feature, name, etc.), and then can be based on the vehicle position feature and the lane boundary position feature The relative position between them determines the target lane corresponding to the vehicle.
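  • The conversion from the image coordinate system to the map coordinate system can be sketched as below, assuming a locally planar road and an image-to-ground homography H calibrated from the camera's known height and attitude; H, the pixel, and the vehicle pose are illustrative values, not calibration data from the disclosure:

      import numpy as np

      # Assumed image-to-ground homography (would come from camera calibration in practice).
      H = np.array([[0.02, 0.0, -6.4],
                    [0.0, 0.05, -12.0],
                    [0.0, 0.001, 1.0]])

      def image_point_to_map(u: float, v: float, vehicle_xy: np.ndarray, heading_rad: float) -> np.ndarray:
          # Pixel -> ground coordinates in the vehicle frame via the homography.
          p = H @ np.array([u, v, 1.0])
          x_vehicle, y_vehicle = p[0] / p[2], p[1] / p[2]
          # Vehicle frame -> map frame using the vehicle position feature from the positioning data.
          c, s = np.cos(heading_rad), np.sin(heading_rad)
          rotation = np.array([[c, -s], [s, c]])
          return vehicle_xy + rotation @ np.array([x_vehicle, y_vehicle])

      print(image_point_to_map(320.0, 400.0, vehicle_xy=np.array([500.0, 200.0]), heading_rad=0.1))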
  • the above lane image features may include information of all lane boundaries.
  • the information of all lane boundaries can be determined according to the features of the lane image, that is, the parameters of all lane boundaries in the map coordinate system can be determined.
  • however, the image acquisition device is greatly affected by factors such as the environment, climate, light, and the complexity and discontinuity of the lane lines themselves (for example at intersections), which makes it possible that the above-mentioned lane image features include only part of the lane boundary information, that is, the current lane boundary corresponding to the vehicle is incomplete.
  • the reasons for discontinuous lane lines may include: lane line defacement, or intersections. In addition, the collection range of the image acquisition device is limited; for example, when pedestrians and vehicles block the road surface or the turning angle is large, the camera cannot capture all lane lines.
  • under weak light conditions, the acquisition clarity of the image acquisition device is reduced, so that the above-mentioned lane image features may include only part of the lane boundary information.
  • the above-mentioned weak light conditions may include: severe weather conditions or night conditions.
  • under congested conditions, the road surface is blocked by pedestrians and vehicles, and the effective collection range of the image collection device is reduced, so that the above-mentioned lane image features may include only part of the lane boundary information.
  • the above congested conditions may include: road congestion, crowded traffic conditions, etc.
  • for the case where the lane image features include only part of the lane boundary information, embodiments of the present application may determine all lane boundaries, and thus the target lane corresponding to the vehicle, through the following technical solutions:
  • in technical solution 1, the process of determining the lane boundary corresponding to the vehicle in step 204 may specifically include: obtaining two lane feature points according to the lane image features, where the two lane feature points belong to different first lane boundaries; determining the distance between the two lane feature points based on the lane position feature; and determining the second lane boundary corresponding to the vehicle based on the distance between the two lane feature points and the lane number feature.
  • technical solution 1 can determine the second lane boundary when two lane feature points on the first lane boundaries are known; since the lane boundaries can thus be expanded, all lane boundaries can be determined.
  • referring to FIG. 3, a schematic diagram of a road condition according to an embodiment of the present application is shown. The road condition may be derived from a road image; the vehicle may be driving in an area near an intersection, and the road may include: a crosswalk 301 and several lanes 302.
  • the lane feature points may be the end points PA and PB of a lane line.
  • the end points PA and PB of the lane line may be the starting end points of the lane line, and the lane lines to which the PA and PB belong are adjacent.
  • the distance between PA and PB can be taken as the width of a lane (referred to as the lane width); because the widths of different lanes on the same road are usually the same, the unknown lane lines can be determined based on the number of lanes and the lane width.
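  • A minimal sketch of technical solution 1, assuming straight, parallel lanes of equal width: the distance between PA and PB is taken as the lane width and the remaining boundaries are generated from the lane number feature (the function name and geometry are illustrative):

      import numpy as np

      def expand_lane_boundaries(pa: np.ndarray, pb: np.ndarray, lane_count: int) -> list:
          lane_width = float(np.linalg.norm(pb - pa))   # distance between PA and PB
          step = (pb - pa) / lane_width                  # unit vector across the road
          # lane_count lanes have lane_count + 1 boundaries; PA is taken as boundary 0.
          return [pa + i * lane_width * step for i in range(lane_count + 1)]

      boundaries = expand_lane_boundaries(np.array([0.0, 0.0]), np.array([3.5, 0.0]), lane_count=4)
      print([b.tolist() for b in boundaries])  # [[0.0, 0.0], [3.5, 0.0], [7.0, 0.0], [10.5, 0.0], [14.0, 0.0]]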
  • in technical solution 2, the lane image features may specifically include: contour lines of road surrounding facilities, and the process of determining the lane boundary corresponding to the vehicle in step 204 may specifically include: determining the road width according to the contour lines of the road surrounding facilities and the lane position feature; and determining the lane boundary corresponding to the vehicle according to the road width and the lane number feature.
  • the technical solution 2 can realize the expansion of the lane boundary when the contour lines of the surrounding facilities of the road are known, so that all lane boundaries can be determined.
  • referring to FIG. 4, the road condition may be derived from a road image, and the road condition may include: several lane lines 401, an above-road facility 402, and road edge lines 403.
  • Some lane lines are blocked by vehicles on the road.
  • the width of the road can be determined according to the contour lines of the facilities above the road.
  • the width of the road can represent the total width of all lanes; since the widths of different lanes on the same road are usually the same, the unknown lane lines can be determined according to the number of lanes and the width of the road.
  • the above-road facility 402 shown in FIG. 4 is specifically a traffic gantry. It can be understood that the above-road facility shown in FIG. 4 is only an example; in fact, the above-road facility can also be a tunnel, etc. It can be understood that the embodiments of the present application do not restrict the specific above-road facilities.
  • the facilities above the road are only optional embodiments.
  • the facilities around the road may also include: green belts, etc.
  • the width of the road may be determined according to the green belts on both sides of the road. It can be understood that the embodiments of the present application do not limit specific road surrounding facilities.
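  • A minimal sketch of technical solution 2, assuming equal lane widths: the road width obtained from the contour lines of road surrounding facilities is divided by the lane number feature to place the unknown lane boundaries (names and values are illustrative):

      def boundaries_from_road_width(left_edge_x: float, road_width: float, lane_count: int) -> list:
          # Equal lane widths are assumed, mirroring the text above.
          lane_width = road_width / lane_count
          return [left_edge_x + i * lane_width for i in range(lane_count + 1)]

      print(boundaries_from_road_width(left_edge_x=0.0, road_width=14.0, lane_count=4))
      # -> [0.0, 3.5, 7.0, 10.5, 14.0]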
  • Step 205 may determine the target lane corresponding to the vehicle according to the lane boundary obtained in step 204.
  • the principle of determining the target lane corresponding to the vehicle in the embodiment of the present application may be: determining the target lane corresponding to the vehicle according to the relative position between the vehicle and the lane boundary, which is also the lane where the vehicle is located.
  • the process of determining the target lane corresponding to the vehicle in step 205 may specifically include: determining lane feature points and first directions respectively corresponding to a plurality of lane boundaries; and determining the target lane corresponding to the vehicle according to the relationship between the first direction and a second direction, where the second direction may be the direction corresponding to the lane feature point and the vehicle feature point.
  • the road condition may be derived from a road image, and the road condition may include: several lane lines 501, vehicles 502, and road edge lines 503.
  • the vertical line 504 at the same position on the road where the vehicle 502 is located can be determined.
  • the lane feature points can be the intersections of the vertical line 504 and the lane lines, the first direction can be the direction of the lane line, and the second direction can be the direction of the connection between the lane feature point and the vehicle feature point; the vehicle feature point can be the location of the image acquisition device. In this way, the target lane corresponding to the vehicle 502 can be determined according to the positional relationship (such as the included angle) between the first direction and the second direction. In FIG. 5, the included angle corresponding to the third lane is the smallest, so it can be determined that the target lane is the third lane.
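  • The angle comparison of FIG. 5 can be sketched as follows, assuming the first direction is the lane-line direction and the second direction is the vector from the vehicle feature point to each lane feature point; the lane with the smallest included angle is selected (coordinates are illustrative):

      import math

      def included_angle(d1, d2):
          dot = d1[0] * d2[0] + d1[1] * d2[1]
          return math.acos(max(-1.0, min(1.0, dot / (math.hypot(*d1) * math.hypot(*d2)))))

      def pick_target_lane(vehicle_point, lane_feature_points, lane_direction):
          angles = []
          for index, point in enumerate(lane_feature_points, start=1):
              second_direction = (point[0] - vehicle_point[0], point[1] - vehicle_point[1])
              angles.append((included_angle(lane_direction, second_direction), index))
          return min(angles)[1]  # lane whose feature point gives the smallest included angle

      # Vehicle roughly under the third of four lane feature points on a transverse line.
      print(pick_target_lane(vehicle_point=(7.5, 0.0),
                             lane_feature_points=[(0.0, 20.0), (3.5, 20.0), (7.0, 20.0), (10.5, 20.0)],
                             lane_direction=(0.0, 1.0)))  # -> 3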
  • the process of determining the target lane corresponding to the vehicle in step 205 may specifically include: determining the target lane corresponding to the vehicle according to the lane boundary and the continuous motion information corresponding to the vehicle.
  • continuous motion information can characterize the movement of the vehicle within a period of time. Based on the continuous motion information, the embodiment of the present application can determine the target time at which the vehicle enters the lane boundary, and determine the target lane based on the relative position between the vehicle and the lane boundary at the target time. Since the vehicle enters the lane boundary at the target time, determining the target lane at this timing can avoid erroneously determining the target lane before the lane boundary has been entered, thereby improving the accuracy of the target lane.
  • referring to FIG. 6, there is shown a schematic diagram of a relationship between a vehicle and a lane boundary according to an embodiment of the present application, where the time T0 may be the time at which all lane lines are determined. If the target lane is determined at the time T0, a wrong target lane may easily result.
  • the time T1 may be the time when the vehicle enters the lane line.
  • therefore, the embodiment of the present application may accumulate the motion vectors over the period from time T0 to time T1, and determine the time T1 according to the accumulated motion vector, thereby achieving an accurate determination of the target lane.
  • the continuous motion information in the embodiment of the present application may be obtained through an inertial sensor.
  • the inertial sensor can be an existing sensor on the vehicle, so no additional sensor cost is incurred.
  • the inertial sensor may include: IMU (Inertial Measurement Unit, Inertial Measurement Unit).
  • the IMU may include: accelerometer, gyroscope, etc.
  • continuous motion information can be determined by an INS (Inertial Navigation System).
  • the principle by which the INS determines the continuous motion information may be: according to the changes in the vehicle's motion state measured by the inertial sensor, the position and posture at the current time are estimated from the position and posture at the previous time.
  • INS can also utilize trip data provided by the odometer.
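  • A minimal 2-D dead-reckoning sketch of how continuous motion information can be accumulated from inertial/odometer measurements, so that the position and heading at the current time are estimated from the previous time and the moment T1 of entering the lane boundary can be detected; the sample rate and measurements are illustrative assumptions:

      import math

      def integrate_motion(x, y, heading, samples, dt=0.1):
          # samples: iterable of (forward_speed_mps, yaw_rate_radps) from the IMU/odometer.
          for speed, yaw_rate in samples:
              heading += yaw_rate * dt
              x += speed * math.cos(heading) * dt
              y += speed * math.sin(heading) * dt
          return x, y, heading

      # Accumulate the motion vector from time T0 (all lane lines determined) to time T1.
      print(integrate_motion(0.0, 0.0, 0.0, samples=[(10.0, 0.0)] * 20))  # ~20 m straight ahead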
  • the lane occupied by the vehicle after it initially enters the road may be determined according to steps 201 to 205 of FIG. 2; that is, the target lane in the embodiment of the present application may include the initial target lane. After determining the initial target lane, lane tracking and/or lane change detection methods can be used to update the target lane in real time.
  • the intersection situation is complicated and changeable, and an intersection has a blank area; therefore, the embodiment of the present application describes the processing corresponding to the intersection situation in detail. It can be understood that the embodiment of the present application can also be applied to situations other than the intersection situation.
  • the embodiments of the present application may output the information of the target lane in a visual manner and/or an auditory manner.
  • the visual way can display the target lane information through the screen
  • the auditory way can play the target lane information through the speaker.
  • the data processing method of the embodiment of the present application comprehensively utilizes image data, positioning data and map data to realize the positioning of the lane where the vehicle is located.
  • the image data can be used as the basis for determining the image features of the lane;
  • the positioning data can be used as the basis for determining the position features of the lane;
  • the map data can be used as the basis for determining the features of the number of lanes;
  • the embodiments of the present application can fuse the lane image feature and the lane position feature to convert the lane features from the image coordinate system to the map coordinate system; in this way, the lane boundary corresponding to the vehicle can be determined according to the lane features in the map coordinate system and the lane number feature, and the target lane corresponding to the vehicle can then be determined according to the above lane boundary, that is, it can be determined which lane the vehicle is in.
  • therefore, the embodiments of the present application can realize lane positioning at a lower cost.
  • FIG. 7 shows a flowchart of steps of Embodiment 2 of a data processing method of the present application, which may specifically include the following steps:
  • Step 701: Determine the lane image feature corresponding to the vehicle according to the road image corresponding to the vehicle;
  • Step 702: Determine the lane position feature corresponding to the vehicle according to the positioning data;
  • Step 703: Determine the lane number feature corresponding to the vehicle according to the map data;
  • Step 704: Determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
  • Step 705: When the current lane boundary corresponding to the vehicle is incomplete, determine the latest lane image feature and the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle and the current lane boundary;
  • Step 706: Determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the lane number feature;
  • Step 707: Determine the target lane corresponding to the vehicle according to the latest lane boundary.
  • the embodiment of the present application can re-determine the lane boundary when the current lane boundary corresponding to the vehicle is incomplete.
  • specifically, the lane image feature and lane position feature can be re-determined based on the current lane boundary, and the re-determined lane image feature and lane position feature can be used to determine the latest lane boundary.
  • the current lane boundary may refer to the lane boundary at the current time
  • the current time may refer to the device time when the step is performed
  • the current lane boundary may be updated as the current time is updated.
  • the current lane boundary can provide rich information for the re-determination of the lane image features, and thus can be used as the basis for determining the lane image features.
  • the current lane boundary can provide the basis for the secondary feature extraction of the image area.
  • the road image includes a discontinuous lane line L1.
  • in step 704, the lane image feature, the lane position feature, and the lane number feature can be fused to obtain a continuous lane line L1; in step 705, the lane image feature can be re-determined in combination with this continuous lane line L1, so that step 706 may obtain more lane image features based on the continuous lane line L1 and the lane line L2, such as the end point feature of the lane line L2, the feature of the lane line L3 adjacent to the lane line L2, and so on.
  • the determination of the lane boundary based on the latest lane image feature and the latest lane position feature can improve the accuracy of the lane boundary.
  • FIG. 8 there is shown a flowchart of steps of Embodiment 3 of a data processing method of the present application, which may specifically include the following steps:
  • Step 801: Determine the lane image feature corresponding to the vehicle according to the road image corresponding to the vehicle;
  • Step 802: Determine the lane position feature corresponding to the vehicle according to the positioning data;
  • Step 803: Determine the lane number feature corresponding to the vehicle according to the map data;
  • Step 804: Determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature, and the lane number feature;
  • Step 805: When the current lane boundary corresponding to the vehicle is incomplete, determine the predicted lane boundary;
  • Step 806: Determine the latest lane image feature and the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
  • Step 807: Determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the lane number feature;
  • Step 808: Determine the target lane corresponding to the vehicle according to the latest lane boundary.
  • the embodiment of the present application may determine the predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete.
  • the aforementioned technical solution 1 and/or technical solution 2 may be used, with the expanded lane boundary taken as the predicted lane boundary.
  • the predicted lane boundary can be used as the basis for determining the latest lane image features.
  • the feature extraction of the lane image may be performed again based on the predicted lane boundary.
  • the image features of the lane line L3 cannot be extracted from the road image initially.
  • the embodiment of the present application can predict the lane line L3 and mark the lane line L3 at the corresponding position of the road image.
  • feature extraction can then be performed on the road image marked with the lane line L3 to extract the image features of the lane line L3; the image features of the lane line L3 are added to the lane boundary determination process again. Since the information available in the lane boundary determination process is increased, the accuracy of the lane boundary can be improved.
  • the latest lane image features can be located to obtain the latest lane position features.
  • the latest lane position feature may include: the position feature corresponding to the predicted lane boundary.
  • the data processing method of the embodiment of the present application determines the predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete.
  • the predicted lane boundary can be used as the basis for determining the latest lane image features to obtain more lane image features. Since the information in the process of determining the lane boundary (such as the latest lane image feature and the latest lane position feature) can be added, the accuracy of the lane boundary can be improved.
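  • The iterative idea of Embodiments 2 and 3 can be sketched as a loop: while the current lane boundaries are incomplete, predict the missing boundaries, re-extract features on the road image using the prediction, and solve the boundaries again. The helper callables and the max_iterations guard are illustrative assumptions:

      def complete_lane_boundaries(road_image, boundaries, lane_count,
                                   predict_missing, re_extract_features, solve_boundaries,
                                   max_iterations=5):
          for _ in range(max_iterations):
              if len(boundaries) >= lane_count + 1:      # all boundaries determined
                  break
              predicted = predict_missing(boundaries, lane_count)                        # step 805
              latest_image_f, latest_position_f = re_extract_features(road_image,        # step 806
                                                                      boundaries, predicted)
              boundaries = solve_boundaries(latest_image_f, latest_position_f, lane_count)  # step 807
          return boundaries

      # Trivial usage with stub callables, just to show the control flow.
      done = complete_lane_boundaries(
          road_image=None, boundaries=[0.0, 3.5], lane_count=4,
          predict_missing=lambda known, n: [known[0] + i * (known[1] - known[0]) for i in range(n + 1)],
          re_extract_features=lambda image, known, predicted: (predicted, predicted),
          solve_boundaries=lambda image_f, position_f, n: position_f)
      print(done)  # -> [0.0, 3.5, 7.0, 10.5, 14.0]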
  • An embodiment of the present application also provides a navigation method, which may specifically include the following steps:
  • determining, according to a road image corresponding to a vehicle, a lane image feature corresponding to the vehicle;
  • determining, according to positioning data, a lane position feature corresponding to the vehicle;
  • determining, according to map data, a lane number feature corresponding to the vehicle;
  • determining, according to the lane image feature, the lane position feature and the lane number feature, a lane boundary corresponding to the vehicle;
  • determining, according to the lane boundary, a target lane corresponding to the vehicle; and
  • determining, according to the target lane, the navigation information corresponding to the vehicle.
  • the above navigation information is used to guide the driving of the vehicle.
  • the navigation information may include: target lane information in voice form, or a lane boundary line corresponding to the target lane drawn on the map, so that the user can determine the target lane where the vehicle is located.
  • the above navigation information may include: a navigation route based on a target lane and the like.
  • the embodiments of the present application can provide lane-level guidance in a navigation scenario.
  • the above-mentioned lane-level guidance can improve the accuracy and precision of navigation when entering a complex intersection or a multi-level interchange with multiple entrances and exits.
  • lane-level positioning can provide basic data for AR navigation.
  • An embodiment of the present application also provides a driving assistance method, which may specifically include the following steps:
  • determining, according to a road image corresponding to a vehicle, a lane image feature corresponding to the vehicle;
  • determining, according to positioning data, a lane position feature corresponding to the vehicle;
  • determining, according to map data, a lane number feature corresponding to the vehicle;
  • determining, according to the lane image feature, the lane position feature and the lane number feature, a lane boundary corresponding to the vehicle;
  • determining, according to the lane boundary, a target lane corresponding to the vehicle; and
  • determining, according to the target lane, the auxiliary driving information corresponding to the vehicle.
  • the above-mentioned driving assistance information may include: target lane announcement information, or lane keeping information, or lane change information, etc.
  • for example, when the target lane is clear, the lane keeping information can be output; when the target lane is not suitable for turning, the lane change information can be output.
  • the embodiments of the present application do not limit specific auxiliary driving information, for example, the foregoing auxiliary driving information may further include: brake prompt information and the like.
  • since the embodiments of the present application can provide assisted driving information based on lane-level positioning, the safety of the vehicle and the rationality of the assisted driving information can be improved.
  • An embodiment of the present application also provides a data processing device.
  • FIG. 9 shows a structural block diagram of an embodiment of a data processing device of the present application, which may specifically include the following modules:
  • the image processing module 901 is used to determine the lane image features corresponding to the vehicle according to the road image corresponding to the vehicle;
  • the positioning module 902 is used to determine the lane position feature corresponding to the vehicle according to the positioning data;
  • the map processing module 903 is used to determine the lane number feature corresponding to the vehicle according to the map data;
  • the lane boundary determination module 904 is configured to determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
  • the target lane determination module 905 is used to determine the target lane corresponding to the vehicle according to the lane boundary.
  • the lane boundary determination module 904 may include:
  • the lane feature point determination module is used to obtain two lane feature points according to the lane image features; the two lane feature points belong to different first lane boundaries;
  • a distance determination module for determining the distance between the two lane feature points based on the lane position feature
  • the first boundary determining module is configured to determine a second lane boundary corresponding to the vehicle according to the distance between the two lane feature points and the lane number feature.
  • the lane image features may include: contour lines of road surrounding facilities, and the lane boundary determination module 904 may include:
  • the road width determination module is used to determine the road width according to the contour lines of the surrounding facilities of the road and the characteristics of the position of the lane;
  • the second boundary determination module is used to determine the lane boundary corresponding to the vehicle according to the road width and the lane number feature.
  • the target lane determination module 905 may include:
  • the first target lane determination module is configured to determine the target lane corresponding to the vehicle according to the lane boundary and continuous motion information corresponding to the vehicle.
  • the target lane determination module 905 may include:
  • the feature point and direction determination module is used to determine the lane feature points corresponding to the multiple lane boundaries and the first direction respectively;
  • the second target lane determination module is used to determine the target lane corresponding to the vehicle according to the relationship between the first direction and the second direction; the second direction is the direction corresponding to the lane feature point and the vehicle feature point.
  • the image processing module 901 is also used to, when the current lane boundary corresponding to the vehicle is incomplete, determine the latest lane image feature corresponding to the vehicle according to the road image corresponding to the vehicle and the current lane boundary;
  • the positioning module 902 is also used to determine the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle and the current lane boundary when the current lane boundary corresponding to the vehicle is incomplete;
  • the lane boundary determination module 904 is also used to determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the lane number feature;
  • the target lane determination module 905 is also used to determine the target lane corresponding to the vehicle according to the latest lane boundary.
  • the lane boundary determination module 904 is also used to determine the predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete;
  • the image processing module 901 is also used to determine the latest lane image feature corresponding to the vehicle according to the road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
  • the positioning module 902 is also used to determine the latest lane position feature based on the road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
  • the lane boundary determination module 904 is also used to determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the lane number feature;
  • the target lane determination module 905 is also used to determine the target lane corresponding to the vehicle according to the latest lane boundary.
  • the lane image feature may include at least one of the following features:
  • lane feature points, lane feature lines, and lane feature areas.
  • the lane feature points may include at least one of the following feature points:
  • the lane characteristic line may include at least one of the following characteristic lines:
  • the lane feature area may include at least one of the following areas:
  • since the device embodiment is basically similar to the method embodiment, the description is relatively simple; for relevant parts, reference may be made to the description of the method embodiment.
  • the image processing module 901 is configured to perform image processing on a road image acquired by an image collection device.
  • the image processing may include: image recognition, feature extraction, etc.; the image processing module 901 interacts with the lane boundary determination module 904 and the positioning module 902.
  • the image processing result can be input to the lane boundary determination module 904; in addition, the image processing result can also be passed to the positioning module 902 to improve the accuracy of the lane position feature.
  • the positioning module 902 is used to determine the position features of the vehicle and the position features corresponding to the lane image features.
  • the positioning module 902 can convert the image processing result into a map coordinate system.
  • since the image recognition result can provide a basis for the positioning module 902, the positioning accuracy of the lane position feature can be improved.
  • the lane position feature can also be fused with the image processing result as input to the lane boundary determination module 904.
  • the positioning module 902 can also interact with the map processing module 903 to obtain the road features of the road where the vehicle is located.
  • the above-mentioned road features may include: the number of lanes, lane direction features, and the like.
  • the map processing module 903 is used to determine the road feature where the vehicle position is located.
  • the above road feature is merged with the image processing result and the lane position feature in the lane boundary determination module 904.
  • the lane boundary determination module 904 and the target lane determination module 905 may be set integrally or separately. Both are used to determine the initial target lane. After the initial target lane is determined, the lane tracking and/or lane change detection method can be used to update the target lane in real time.
  • the data transmitted by the image processing module 901 to the lane boundary determination module 904 may include: lane image features, and the lane image features may specifically include but are not limited to: lane lines, road edges, features on both sides of the road, etc.
  • the data transmitted by the lane boundary determination module 904 to the image processing module 901 may include: current lane boundary, lane position characteristics, continuous motion information of the vehicle, posture of the image acquisition device, road characteristics, and the like.
  • the posture of the image acquisition device can be used as a basis for image processing, and the posture may include upward-looking, level (head-up), and downward-looking (top-view) orientations.
  • the image processing module 901 may try to extract the image features corresponding to a disturbed lane line, and the lane boundary determination module 904 may determine the disturbed lane line based on the multiple kinds of fused data.
  • the data transmitted by the positioning module 902 to the lane boundary determination module 904 may include: vehicle position characteristics, continuous motion information of the vehicle, posture of the image acquisition device, lane position characteristics, and lane change detection results, etc. .
  • the data transmitted by the lane boundary determination module 904 to the positioning module may include: current lane boundary, vehicle position characteristics, continuous motion information of the vehicle, posture of the image acquisition device, and road characteristics.
  • the image processing module 901 is used to determine the lane image features corresponding to the vehicle according to the road image corresponding to the vehicle;
  • the positioning module 902 is used to determine the lane position feature corresponding to the vehicle according to the positioning data;
  • the map processing module 903 is used to determine the lane number feature corresponding to the vehicle according to the map data;
  • the data interaction between the lane boundary determination module 904 and the image processing module 901, the positioning module 902, and the map processing module 903 can, on the one hand, better determine the lane boundary and, on the other hand, allow useful information to flow between different modules, which can improve the performance and efficiency of the modules.
  • the lane boundary determination module 904 may determine, based on the positioning result (that is, the lane position feature), lane characteristics such as the number of lanes, lane type and lane direction of the road segment where the vehicle is located, and, combined with the image recognition result, the positioning result and the lane characteristics, pre-process the image recognition results.
  • the pre-processing can include: gross error detection and elimination.
  • the pre-processed data is used to solve the lane boundary.
  • the result of the solution can be the information of the lane boundaries in the unified positioning-and-map coordinate system. If all lane boundaries have been determined, the next step of determining the target lane can proceed; otherwise, the remaining lane boundaries can be predicted on the basis of the existing first lane boundaries, and the predicted lane boundaries are passed back to the image processing module 901, which performs secondary extraction on the specific areas according to the predicted lane boundaries; the extracted results are added to the lane boundary solving process again until the determination of all lane boundaries is completed.
  • the above-mentioned return of the predicted lane boundary can apply more information to the determination of the lane boundary.
  • after all lane boundaries are determined, the vehicle's motion vectors start to be accumulated.
  • the reason for this is to consider that the vehicle may be in a lane change or just driving in a blank area of the intersection at the time.
  • once the vehicle enters the lane boundary and the target lane is determined, the initial target lane can be obtained.
  • FIG. 11 schematically shows an exemplary device 1300 that can be used to implement various embodiments described in this application.
  • FIG. 11 shows an exemplary device 1300, which may include: one or more processors 1302; a system control module (chipset) 1304 coupled to at least one of the processors 1302; system memory 1306 coupled to the system control module 1304; non-volatile memory (NVM)/storage 1308 coupled to the system control module 1304; one or more input/output devices 1310 coupled to the system control module 1304; and a network interface 1312 coupled to the system control module 1304.
  • the system memory 1306 may include instructions 1362, which may be executed by one or more processors 1302.
  • the processor 1302 may include one or more single-core or multi-core processors, and the processor 1302 may include any combination of general-purpose processors or special-purpose processors (eg, graphics processors, application processors, baseband processors, etc.).
  • the device 1300 can serve as the server, target device, wireless device, etc. described in the embodiments of the present application.
  • the device 1300 may include one or more machine-readable media having instructions stored thereon (e.g., the system memory 1306 or the NVM/storage 1308) and one or more processors 1302 configured, in conjunction with the one or more machine-readable media, to execute the instructions so as to implement the modules included in the foregoing apparatus and perform the actions described in the embodiments of the present application.
  • the system control module 1304 of an embodiment may include any suitable interface controller for providing any suitable interface to at least one of the processors 1302 and/or any suitable device or component in communication with the system control module 1304.
  • the system control module 1304 of an embodiment may include one or more memory controllers for providing an interface to the system memory 1306.
  • the memory controller may be a hardware module, a software module, and/or a firmware module.
  • the system memory 1306 of one embodiment may be used to load and store data and/or instructions 1362.
  • the system memory 1306 may include any suitable volatile memory, for example, a suitable DRAM (Dynamic Random Access Memory).
  • the system memory 1306 may include: double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
  • the system control module 1304 of one embodiment may include one or more input/output controllers to provide an interface to the NVM/storage device 1308 and the input/output device(s) 1310.
  • the NVM/storage device 1308 of one embodiment may be used to store data and/or instructions 1382.
  • the NVM/storage device 1308 may include any suitable non-volatile memory (such as flash memory, etc.) and/or may include any suitable non-volatile storage device(s), for example, one or more hard drives (HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives, etc.
  • the NVM/storage device 1308 may include storage resources that are physically part of the device on which the device 1300 is installed, or it may be accessed by the device without having to be part of the device.
  • the NVM/storage device 1308 can be accessed via the network interface 1312 through the network and/or through the input/output device 1310.
  • the input/output device(s) 1310 of one embodiment may provide an interface for the device 1300 to communicate with any other suitable device.
  • the input/output device 1310 may include communication components, audio components, sensor components, and the like.
  • the network interface 1312 of an embodiment may provide an interface for the device 1300 to communicate through one or more networks and/or with any other suitable device.
  • the device 1300 may communicate wirelessly with one or more components of a wireless network based on one or more wireless network standards and/or protocols, for example, by accessing the wireless network based on a communication standard such as WiFi, 2G or 3G, or a combination thereof.
  • At least one of the processors 1302 may be packaged with the logic of one or more controllers (eg, memory controllers) of the system control module 1304.
  • at least one of the processors 1302 may be packaged with the logic of one or more controllers of the system control module 1304 to form a system-in-package (SiP).
  • at least one of the processors 1302 may be integrated with the logic of one or more controllers of the system control module 1304 on the same die.
  • at least one of the processors 1302 may be integrated with the logic of one or more controllers of the system control module 1304 on the same chip to form a system on chip (SoC).
  • the device 1300 may include, but is not limited to, desktop computing devices or mobile computing devices (eg, laptop computing devices, handheld computing devices, tablet computers, netbooks, etc.) and other computing devices.
  • the device 1300 may have more or fewer components and/or different architectures.
  • the device 1300 may include one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application specific integrated circuits (ASIC) and speakers.
  • the display screen may be implemented as a touch screen display to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel.
  • the touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • An embodiment of the present application also provides a non-volatile readable storage medium in which one or more modules are stored; when the one or more modules are applied to a device, the device can be caused to execute the instructions of the steps of the methods in the embodiments of the present application.
  • An embodiment of the present application also provides a device, including: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to execute the methods of the embodiments of the present application, where the methods may include the method shown in FIG. 2, FIG. 3, FIG. 4, FIG. 5, FIG. 6, FIG. 7 or FIG. 8.
  • One or more machine-readable media are also provided, on which instructions are stored which, when executed by one or more processors, cause the apparatus to execute the methods of the embodiments of the present application, where the methods may include the method shown in FIG. 2, FIG. 3, FIG. 4, FIG. 5, FIG. 6, FIG. 7 or FIG. 8.
  • These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.


Abstract

A data processing method, an apparatus, a device and a machine readable medium, the method comprising: according to a lane image corresponding to a vehicle, determining lane image characteristics corresponding to the vehicle (201); according to positioning data, determining lane position characteristics corresponding to the vehicle (202); according to map data, determining lane quantity characteristics corresponding to the vehicle (203); according to the lane image characteristics, lane position characteristics, and lane quantity characteristics, determining a lane boundary corresponding to the vehicle (204); and according to the lane boundary, determining a target lane corresponding to the vehicle (205). By means of the described method, a lane may be positioned at a lower cost.

Description

Data processing method, device, equipment and machine-readable medium
This application claims priority to the Chinese patent application No. 201811519987.2, entitled "A data processing method, device, equipment and machine-readable medium", filed on December 12, 2018, the entire content of which is incorporated herein by reference.
Technical field
The present application relates to the field of intelligent transportation technology, and in particular, to a data processing method, a data processing device, a device, and a machine-readable medium.
Background
Intelligent transportation systems apply advanced electronic information technology to transportation to achieve efficient and value-added services. Many of these services are based on vehicle location information, so positioning is a foundation of intelligent transportation systems. The lane is the basic unit in which a vehicle travels; lane positioning is a key technology in the field of intelligent transportation and can provide technical support for automatic/semi-automatic driving control, navigation, and lane departure warning.
One positioning method realizes lane positioning through high-precision GPS (Global Positioning System) and high-precision electronic maps.
However, the cost of both high-precision GPS and high-precision electronic maps is relatively high. For example, high-precision GPS generally has an accuracy finer than the lateral dimension of half a lane, which limits the application range of the above-mentioned positioning method.
Summary of the invention
The technical problem to be solved by the embodiments of the present application is to provide a data processing method that can realize lane positioning at a relatively low cost.
Correspondingly, the embodiments of the present application also provide a data processing device, a device, a machine-readable medium, a navigation method, and a driving assistance method to ensure the implementation and application of the above method.
In order to solve the above problems, the embodiments of the present application disclose a data processing method, including:
determining lane image features corresponding to a vehicle according to a road image corresponding to the vehicle;
determining lane position features corresponding to the vehicle according to positioning data;
determining a lane quantity feature corresponding to the vehicle according to map data;
determining a lane boundary corresponding to the vehicle according to the lane image features, the lane position features and the lane quantity feature;
determining a target lane corresponding to the vehicle according to the lane boundary.
On the other hand, an embodiment of the present application also discloses a data processing device, including:
an image processing module, configured to determine lane image features corresponding to a vehicle according to a road image corresponding to the vehicle;
a positioning module, configured to determine lane position features corresponding to the vehicle according to positioning data;
a map processing module, configured to determine a lane quantity feature corresponding to the vehicle according to map data;
a lane boundary determination module, configured to determine a lane boundary corresponding to the vehicle according to the lane image features, the lane position features and the lane quantity feature; and
a target lane determination module, configured to determine a target lane corresponding to the vehicle according to the lane boundary.
In another aspect, an embodiment of the present application further discloses a device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to perform one or more of the methods described above.
In yet another aspect, embodiments of the present application disclose one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause a device to perform one or more of the methods described above.
In still another aspect, an embodiment of the present application further discloses a navigation method, including:
determining lane image features corresponding to a vehicle according to a road image corresponding to the vehicle;
determining lane position features corresponding to the vehicle according to positioning data;
determining a lane quantity feature corresponding to the vehicle according to map data;
determining a lane boundary corresponding to the vehicle according to the lane image features, the lane position features and the lane quantity feature;
determining a target lane corresponding to the vehicle according to the lane boundary;
determining navigation information corresponding to the vehicle according to the target lane.
In yet another aspect, an embodiment of the present application discloses a driving assistance method, including:
determining lane image features corresponding to a vehicle according to a road image corresponding to the vehicle;
determining lane position features corresponding to the vehicle according to positioning data;
determining a lane quantity feature corresponding to the vehicle according to map data;
determining a lane boundary corresponding to the vehicle according to the lane image features, the lane position features and the lane quantity feature;
determining a target lane corresponding to the vehicle according to the lane boundary;
determining driving assistance information corresponding to the vehicle according to the target lane.
Compared with the prior art, the embodiments of the present application have the following advantages:
The embodiments of the present application comprehensively utilize image data, positioning data and map data to realize the positioning of the lane where the vehicle is located. The image data can serve as the basis for determining the lane image features; the positioning data can serve as the basis for determining the lane position features; the map data can serve as the basis for determining the lane quantity feature. The embodiments of the present application can fuse the lane image features and the lane position features so as to convert the lane features from the image coordinate system to the map coordinate system; in this way, the lane boundary corresponding to the vehicle can be determined according to the lane features in the map coordinate system and the lane quantity feature, and then the target lane corresponding to the vehicle, that is, which lane the vehicle is located in, can be determined according to the above lane boundary.
Since the image data of the embodiments of the present application can be obtained by an image acquisition device such as a camera, and the accuracy requirements for the positioning data and the map data are low, the embodiments of the present application can realize lane positioning at a relatively low cost.
Brief description of the drawings
FIG. 1 is a schematic diagram of an application environment of a data processing method of the present application;
FIG. 2 is a flowchart of the steps of Embodiment 1 of a data processing method of the present application;
FIG. 3 is a schematic diagram of a road condition according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a road condition according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a road condition according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a relationship between a vehicle and a lane boundary according to an embodiment of the present application;
FIG. 7 is a flowchart of the steps of Embodiment 2 of a data processing method of the present application;
FIG. 8 is a flowchart of the steps of Embodiment 3 of a data processing method of the present application;
FIG. 9 is a structural block diagram of an embodiment of a data processing device of the present application;
FIG. 10 is a schematic diagram of the data interaction of a data processing device according to an embodiment of the present application; and
FIG. 11 is a schematic structural diagram of a device provided by an embodiment of the present application.
Detailed description
In order to make the above objects, features and advantages of the present application more obvious and understandable, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art fall within the protection scope of the present application.
The concept of the present application is susceptible to various modifications and alternative forms, and specific embodiments thereof have been shown by way of the drawings and will be described in detail here. However, it should be understood that the above content is not intended to limit the concept of the present application to the specific forms disclosed; on the contrary, the description and appended claims of the present application are intended to cover all modifications, equivalents and alternative forms.
"One embodiment", "an embodiment", "a specific embodiment" and the like in this specification mean that the described embodiment may include a particular feature, structure or characteristic, but every embodiment may or may not necessarily include that particular feature, structure or characteristic. Furthermore, such phrases do not necessarily refer to the same embodiment. In addition, when a particular feature, structure or characteristic is described in connection with one embodiment, whether or not it is explicitly described, it can be considered, within the knowledge of those skilled in the art, that such a feature, structure or characteristic is also related to other embodiments. In addition, it should be understood that items in a list of the form "at least one of A, B and C" may include the following possibilities: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B and C). Likewise, items listed in the form "at least one of A, B or C" may mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B and C).
In some cases, the disclosed embodiments may be implemented as hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored in one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be executed by one or more processors. A machine-readable storage medium may be implemented as a storage device, mechanism or other physical structure for storing or transmitting information in a form readable by a machine (e.g., volatile or non-volatile memory, a media disk, or another physical structure device).
In the drawings, some structural or method features may be shown in a specific arrangement and/or order. However, such a specific arrangement and/or ordering is not necessarily required. Rather, in some embodiments, such features may be arranged in a manner and/or order different from that shown in the drawings. In addition, the inclusion of a structural or method feature in a particular drawing is not meant to imply that such a feature is required in all embodiments; in some embodiments, such a feature may not be included or may be combined with other features.
In view of the technical problem that both high-precision GPS and high-precision electronic maps are costly, an embodiment of the present application provides a data processing solution, which may include: determining lane image features corresponding to a vehicle according to a road image corresponding to the vehicle; determining lane position features corresponding to the vehicle according to positioning data; determining a lane quantity feature corresponding to the vehicle according to map data; determining a lane boundary corresponding to the vehicle according to the lane image features, the lane position features and the lane quantity feature; and determining a target lane corresponding to the vehicle according to the lane boundary.
The embodiments of the present application comprehensively utilize image data, positioning data and map data to determine the target lane corresponding to the vehicle, and can thereby realize the positioning of the lane where the vehicle is located.
The image data may be obtained by an image acquisition device such as a camera and can serve as the basis for determining the lane image features; the lane image features may be lane features in the image dimension.
The positioning data can serve as the basis for determining the lane position features, and the lane position features may be lane features in the position dimension. The embodiments of the present application have low requirements on the accuracy of the positioning data; the positioning data may come from sensors with ordinary positioning accuracy, such as GPS sensors, GNSS (Global Navigation Satellite System) sensors and the like, where ordinary positioning accuracy is usually about 10 meters.
The map data can serve as the basis for determining the lane quantity feature. Therefore, the embodiments of the present application have low requirements on the accuracy of the map data; the map data may be of non-high precision, that is, ordinary precision.
In the embodiments of the present application, the lane image features and the lane position features can be fused so as to convert the lane features from the image coordinate system to the map coordinate system; further, the lane boundary corresponding to the vehicle can be determined according to the lane features in the map coordinate system and the lane quantity feature, and then the target lane corresponding to the vehicle, that is, which lane the vehicle is located in, can be determined according to the above lane boundary.
Since the image data of the embodiments of the present application can be obtained by an image acquisition device such as a camera, and the accuracy requirements for the positioning data and the map data are low, the embodiments of the present application can realize lane positioning at a relatively low cost.
The embodiments of the present application can be applied to intelligent transportation scenarios; providing lane-level positioning in intelligent transportation scenarios can improve the user's driving experience. A lane, also known as a traffic lane or carriageway, is the part of a road used by vehicles to travel. Lanes are provided on both ordinary roads and expressways, and expressways apply legal rules to lane use, for example traffic lanes and overtaking lanes.
According to an embodiment, the above intelligent transportation scenario may be a navigation scenario. For example, lane-level guidance may be provided in a navigation scenario; such lane-level guidance can improve the accuracy and precision of navigation when driving into a complex intersection, a multi-level interchange, or a road with multiple entrances and exits. In addition, lane-level positioning can provide basic data for AR (Augmented Reality) navigation.
According to another embodiment, the above intelligent transportation scenario may be an assisted driving scenario or an unmanned driving scenario. An assisted driving scenario requires human monitoring, while an unmanned driving scenario does not. Since driving assistance information can be provided on the basis of lane-level positioning, the safety of the vehicle can be improved. The driving assistance information may include: target lane announcement information, lane keeping information, or lane change information. For example, when the traffic in the target lane is clear, lane keeping information may be output; for another example, when the target lane is not suitable for turning, lane change information may be output.
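As one illustration of the kind of decision logic this paragraph describes, the following hedged sketch maps a target lane and a few simple road-condition flags to assistance messages. The function name, the flags and the message texts are invented for the example and are not taken from this application.

```python
# Illustrative only: one possible mapping from the target lane and simple
# road-condition flags to the assistance information mentioned above.
def driving_assistance_info(target_lane, lane_clear, suitable_for_turn, need_turn):
    messages = [f"You are currently in lane {target_lane}"]   # target lane announcement
    if need_turn and not suitable_for_turn:
        messages.append("Please change lanes for the upcoming turn")
    elif lane_clear:
        messages.append("Keep to the current lane")
    return messages

print(driving_assistance_info(target_lane=3, lane_clear=True,
                              suitable_for_turn=False, need_turn=True))
```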
It can be understood that the above intelligent transportation scenarios are only optional embodiments. In fact, any application scenario that requires lane-level positioning falls within the protection scope of the application scenarios of the embodiments of the present application, and the embodiments of the present application do not limit the specific application scenario.
The data processing solution provided by the embodiments of the present application can be applied to the application environment shown in FIG. 1. As shown in FIG. 1, a client 100 and a server 200 are located in a wired or wireless network, through which the client 100 exchanges data with the server 200.
Optionally, the client may run on a device. For example, the client may be an APP running on a terminal, such as a navigation APP, an e-commerce APP, an instant messaging APP, an input method APP, or an APP provided with the operating system. The embodiments of the present application do not limit the specific APP corresponding to the client.
Optionally, the above device may have a built-in or external screen for displaying information, and may also have a built-in or external speaker for playing information. The information may include information of the target lane. For example, if the road where the vehicle is located includes 4 lanes, the 4 lanes may be numbered; for example, in the order of the lane direction from left to right, the numbers of the 4 lanes are 1, 2, 3 and 4, and the information of the target lane may be the number of the target lane. Optionally, the following information may be played: "You are currently in lane X", where X ranges from 1 to 4.
The above device may be a vehicle-mounted device or a device owned by the user. The device may specifically include, but is not limited to: a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a vehicle-mounted device, a PC (Personal Computer), a set-top box, a smart TV, a wearable device, and the like. It can be understood that the embodiments of the present application do not limit the specific device.
Examples of vehicle-mounted devices may include a HUD (Head Up Display) and the like. A HUD is usually installed in front of the driver and can provide the driver with some necessary driving information during driving, such as navigation information; the navigation information may include the information of the target lane. In other words, the HUD can integrate multiple functions into one, which is convenient for the driver to focus on the road conditions.
Method Embodiment 1
Referring to FIG. 2, a flowchart of the steps of Embodiment 1 of a data processing method of the present application is shown, which may specifically include the following steps:
Step 201: determining lane image features corresponding to a vehicle according to a road image corresponding to the vehicle;
Step 202: determining lane position features corresponding to the vehicle according to positioning data;
Step 203: determining a lane quantity feature corresponding to the vehicle according to map data;
Step 204: determining a lane boundary corresponding to the vehicle according to the lane image features, the lane position features and the lane quantity feature;
Step 205: determining a target lane corresponding to the vehicle according to the lane boundary.
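A minimal sketch of how steps 201 to 205 could be chained is given below. Every helper function is a placeholder that stands in for a whole module and simply returns canned values; all names and numbers are assumptions for illustration only, not the application's implementation.

```python
# Hypothetical end-to-end skeleton of steps 201-205.
def lane_image_features(road_image):          # step 201
    return {"lane_lines_px": [(100, 480, 300, 240), (520, 480, 340, 240)]}

def lane_position_features(gnss_fix):         # step 202
    return {"vehicle_xy": (0.0, 0.0)}

def lane_quantity_feature(map_data):          # step 203
    return {"num_lanes": 4}

def solve_lane_boundaries(img_feat, pos_feat, qty_feat):   # step 204
    # fuse image, position and map features into map-frame boundaries
    return [-5.25, -1.75, 1.75, 5.25]         # lateral offsets in metres

def target_lane(boundaries, pos_feat):        # step 205
    x = pos_feat["vehicle_xy"][0]
    for i, (left, right) in enumerate(zip(boundaries, boundaries[1:]), start=1):
        if left <= x < right:
            return i
    return None

img = lane_image_features(road_image=None)
pos = lane_position_features(gnss_fix=None)
qty = lane_quantity_feature(map_data=None)
print(target_lane(solve_lane_boundaries(img, pos, qty), pos))   # -> 2
```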
At least one step included in the method of the embodiments of the present application may be executed by a client and/or a server. Of course, the embodiments of the present application do not limit the specific execution subject of the steps of the method.
In step 201, the road image can be obtained by an image acquisition device such as a camera or a video camera. Optionally, the number of image acquisition devices may be one or more than one. Optionally, the image acquisition device may be arranged at the periphery of the vehicle; the periphery may include the front, and the front may include directly ahead or diagonally ahead. The camera may be a monocular camera or a binocular camera.
In an application example of the present application, the camera may be installed on the longitudinal center axis of the vehicle at a position in front of the roof, aimed at the area in front of the vehicle; the height of the camera above the ground and the camera's pitch angle, yaw angle and roll angle can be determined according to actual application requirements. While the vehicle is driving, the camera can continuously collect images of the road directly in front of the vehicle. Of course, the camera may also be located on the longitudinal center axis of the vehicle at a position in front of the vehicle head. It can be understood that the embodiments of the present application do not limit the specific image acquisition device and its orientation.
According to an embodiment, step 201 may use image processing technology to determine the lane image features corresponding to the vehicle from the road image corresponding to the vehicle.
The above image processing technology may include filtering technology, which can be used to filter out noise in the road image so as to reduce the interference of noise with the lane image features.
The above image processing technology may include image recognition technology. Image recognition refers to the technology of using a machine to process, analyze and understand images in order to recognize image targets of various different modes. In the embodiments of the present application, a machine may be used to process, analyze and understand the road image in order to recognize image targets of various different modes. The image targets may include the lane image features of the embodiments of the present application.
Usually the lane image features in a road image correspond to certain image regions in the road image. In the embodiments of the present application, an edge detection technique may be used to determine the image region corresponding to a single lane image feature and to determine the lane image feature corresponding to that image region.
In an optional embodiment of the present application, the process of determining the lane image features corresponding to the vehicle in step 201 may include: detecting image targets in the road image, and analyzing the acquired image targets by a deep learning method to obtain the corresponding image target information, that is, the lane image features. The image target information may include the image, name, category and other information of the image target.
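One common, generic way to obtain candidate lane-line features from a road image is an edge detector followed by a Hough transform, as sketched below. This is not the specific detection or deep-learning method of the application; the synthetic image and all thresholds are arbitrary choices for the example.

```python
# Generic sketch: Canny edges + probabilistic Hough transform on a synthetic image.
import cv2
import numpy as np

img = np.zeros((480, 640), dtype=np.uint8)
cv2.line(img, (100, 479), (320, 240), 255, 5)      # synthetic "lane line"
cv2.line(img, (540, 479), (330, 240), 255, 5)

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)
print(len(lines) if lines is not None else 0, "candidate segments")
```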
According to another embodiment, step 201 may use photogrammetry technology to determine the lane image features corresponding to the vehicle from the road image corresponding to the vehicle. Photogrammetry technology can process photographs obtained by an optical camera or a digital camera to obtain the position, shape, size and characteristics of the photographed object and the relationships between them.
Referring to Table 1, an example of lane image features according to an embodiment of the present application is shown. The above lane image features may specifically include at least one of the following features: lane feature points, lane feature lines, and lane feature regions.
Table 1
Figure PCTCN2019123214-appb-000001
The lane feature points may be point-type lane image features. The above lane feature points may specifically include at least one of the following feature points:
end points of a lane boundary; and
intersection points between a lane boundary and a line perpendicular to the lane boundary.
The end points of a lane boundary may include end points of a lane line or end points of a road edge.
The lane feature lines may be line-type lane image features. The lane feature lines may specifically include at least one of the following feature lines:
lane lines;
road edge lines;
contour lines of road edge facilities;
contour lines of facilities above the road; and
contour lines of tunnel entrances.
A contour line, also called an "outer line", refers to the outer boundary of a thing, that is, the dividing line between one object and another object or between an object and the background.
The lane feature regions may be region-type or area-type lane image features. The above lane feature regions may specifically include at least one of the following regions:
zebra crossing regions;
green belt regions; and
vehicle regions. The vehicles here may refer to vehicles on the road other than the vehicle on which the image acquisition device is located.
In step 202, the positioning data may come from sensors with ordinary positioning accuracy, such as GPS sensors, GNSS sensors and the like.
The lane position features determined in step 202 may include a vehicle position feature, which is used to characterize the position of the vehicle.
Optionally, the vehicle position feature may be determined according to the positioning data and the lane image features obtained in step 201. The lane image features can reflect the surrounding environment in which the vehicle is located, and therefore can improve the accuracy of the vehicle position feature.
The lane position features determined in step 202 may further include position features corresponding to the lane image features. Correspondingly, step 202 may receive the lane image features obtained in step 201 and determine the position features corresponding to the lane image features according to the positioning data.
Optionally, a SLAM (Simultaneous Localization and Mapping) method may be used to determine the position features corresponding to the lane image features. The principle of SLAM may be: a robot starts to move from an unknown position in an unknown environment, locates itself during the movement according to position estimates and a map, and at the same time builds an incremental map on the basis of its own positioning, thereby achieving autonomous positioning and navigation. For example, the position features of the lane feature points, lane feature lines or lane feature regions obtained in step 201 may be determined according to the SLAM method.
In step 203, the map data can serve as the basis for determining the lane quantity feature. Therefore, the embodiments of the present application have low requirements on the accuracy of the map data; the map data may be of non-high precision, that is, ordinary precision.
Optionally, step 203 may determine the lane quantity feature according to the lane position features obtained in step 202. The lane quantity feature can be used to characterize the number of lanes in the road where the vehicle is currently located or is about to be located.
Optionally, step 203 may further determine a lane direction feature according to the map data. A lane may belong to a one-way road or a two-way road; a two-way road has two opposite directions, and the lane direction feature may characterize one of these directions, that is, the driving direction of the vehicle.
In step 204, the lane image features, the lane position features and the lane quantity feature may be fused to obtain the lane boundary corresponding to the vehicle. A lane boundary may refer to the boundary of a lane, which may serve as the dividing line between one lane and another lane, or between a lane and other objects. Lane boundaries may include lane lines, road edges, and the like.
The principle of determining the lane boundary corresponding to the vehicle in the embodiments of the present application may be: solving for the lane boundary corresponding to the vehicle according to the lane image features, the lane position features and the lane quantity feature. This solving can convert the lane features from the image coordinate system to the map coordinate system so as to determine the parameters of the lane boundary in the map coordinate system (such as the lane boundary position feature, name, etc.), and the target lane corresponding to the vehicle can then be determined based on the relative position between the vehicle position feature and the lane boundary position feature.
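Assuming the ground is locally planar, the conversion of a lane feature from the image coordinate system to the map coordinate system can be sketched as applying a homography. The matrix values below are invented; in practice such a matrix would come from the camera calibration and the vehicle pose, and the application's actual solving procedure may differ.

```python
# Minimal sketch of moving a lane feature point from image coordinates to
# map (ground-plane) coordinates with a planar homography H.
import numpy as np

H = np.array([[0.02, 0.00, -6.4],     # illustrative values only
              [0.00, 0.05, -12.0],
              [0.00, 0.002, 1.0]])

def image_to_map(u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w                # metres in the map/vehicle frame

print(image_to_map(320, 400))
```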
According to an embodiment, the above lane image features may include information of all lane boundaries. In this case, the information of all lane boundaries, that is, the parameters of all lane boundaries in the map coordinate system, can be determined according to the lane image features.
According to another embodiment, the image acquisition device is greatly affected by factors such as the environment, the weather and lighting; in addition, lane lines themselves may be complex and discontinuous (for example at intersections). As a result, the above lane image features may include only information of some of the lane boundaries, that is, the current lane boundary corresponding to the vehicle may be incomplete.
For example, when the lane lines are discontinuous, the above lane image features are likely to include only information of some of the lane boundaries. Reasons for discontinuous lane lines may include worn or stained lane lines, or a limited acquisition range of the image acquisition device at an intersection. For example, in some road conditions the road surface is blocked by pedestrians and vehicles and the turning angle is large, so the camera cannot capture all the lane lines.
For another example, when the light is weak, the acquisition clarity of the image acquisition device decreases, so that the above lane image features may include only information of some of the lane boundaries. Weak light conditions may include severe weather or night conditions.
For another example, when the road is congested, the road surface is blocked by pedestrians and vehicles and the acquisition range of the image acquisition device is reduced, so that the above lane image features may include only information of some of the lane boundaries. The above road congestion may include congestion in the road, congested road conditions, and the like.
When the current lane boundary corresponding to the vehicle is incomplete, the embodiments of the present application can determine all lane boundaries through the following technical solutions for determining the target lane corresponding to the vehicle:
Technical solution 1
In technical solution 1, the process of determining the lane boundary corresponding to the vehicle in step 204 may specifically include: obtaining two lane feature points according to the lane image features, where the two lane feature points belong to different first lane boundaries; determining the distance between the two lane feature points according to the lane position features; and determining a second lane boundary corresponding to the vehicle according to the distance between the two lane feature points and the lane quantity feature.
Technical solution 1 can determine the second lane boundary when two lane feature points of the first lane boundaries are known. Since the lane boundaries can thereby be extended, all lane boundaries can be determined.
Referring to FIG. 3, a schematic diagram of a road condition according to an embodiment of the present application is shown, where the road condition may be derived from a road image. The vehicle may be driving in the area near an intersection, and the road may include a crosswalk 301 and several lanes 302. The lane feature points may be the end points PA and PB of lane lines. In FIG. 3, the end points PA and PB may be the starting end points of their lane lines, and the lane lines to which PA and PB belong are adjacent. In this case, the distance between PA and PB may be the width of one lane (lane width for short); since the widths of different lanes in the same road are usually the same, the unknown lane lines can be determined according to the number of lanes and the lane width.
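A minimal sketch of technical solution 1 follows, with invented coordinates for PA and PB and an assumed lane count; it only illustrates the geometry of extending boundaries by the recovered lane width.

```python
# PA and PB: map-frame positions of the starting end points of two adjacent
# lane lines, so |PA - PB| gives the lane width.
import numpy as np

PA = np.array([3.0, 10.0])      # invented coordinates (metres)
PB = np.array([6.5, 10.0])
num_lanes = 4                   # lane quantity feature from the map data

lane_width = float(np.linalg.norm(PB - PA))            # ~3.5 m
direction = (PB - PA) / lane_width                      # unit vector across the road
# assuming PA is the leftmost boundary, n lanes need n + 1 boundary lines
boundaries = [PA + i * lane_width * direction for i in range(num_lanes + 1)]
print(lane_width, [tuple(map(float, b)) for b in boundaries])
```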
Technical solution 2
In technical solution 2, the lane image features may specifically include contour lines of road surrounding facilities, and the process of determining the lane boundary corresponding to the vehicle in step 204 may specifically include: determining the road width according to the contour lines of the road surrounding facilities and the lane position features; and determining the lane boundary corresponding to the vehicle according to the road width and the lane quantity feature.
Technical solution 2 can extend the lane boundaries when the contour lines of the road surrounding facilities are known, so all lane boundaries can be determined.
Referring to FIG. 4, a schematic diagram of a road condition according to an embodiment of the present application is shown, where the road condition may be derived from a road image. The road condition may include several lane lines 401, a facility above the road 402, and a road edge line 403. Some lane lines are blocked by vehicles on the road. In this case, the road width can be determined according to the contour line of the facility above the road, and the road width can characterize the total width of all lanes; since the widths of different lanes in the same road are usually the same, the unknown lane lines can be determined according to the number of lanes and the road width.
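A correspondingly minimal sketch of technical solution 2 follows; the road width and left-edge position are invented values used only to show the division of the road width by the lane count.

```python
# Road width recovered from the contour of an overhead facility (e.g. a gantry),
# divided by the lane count from the map data to lay out the boundaries.
road_width = 14.0                      # invented, metres
num_lanes = 4                          # lane quantity feature
left_edge = 0.0                        # lateral position of the left road edge

lane_width = road_width / num_lanes    # 3.5 m per lane
boundaries = [left_edge + i * lane_width for i in range(num_lanes + 1)]
print(boundaries)                      # [0.0, 3.5, 7.0, 10.5, 14.0]
```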
The facility above the road 402 shown in FIG. 4 is specifically a traffic gantry. It can be understood that the facility above the road shown in FIG. 4 is only an example; in fact, the facility above the road may also be a tunnel or the like, and the embodiments of the present application do not limit the specific facility above the road.
In addition, facilities above the road are only an optional embodiment; road surrounding facilities may in fact also include green belts and the like. For example, the road width may be determined according to the green belts on both sides of the road. It can be understood that the embodiments of the present application do not limit the specific road surrounding facilities.
When only some of the lane boundaries are captured in the road image, technical solution 1 and technical solution 2 extend the lane boundaries, which can to a certain extent overcome the problem of inaccurate lane boundaries caused by lighting and occlusion affecting the image acquisition device and by the incompleteness and discontinuity of the lane lines themselves, and can improve the accuracy, reliability and continuous availability of the lane boundaries.
It can be understood that those skilled in the art may adopt either or a combination of technical solution 1 and technical solution 2 according to actual application requirements, and the embodiments of the present application do not limit the specific process of determining the target lane corresponding to the vehicle.
Step 205 may determine the target lane corresponding to the vehicle according to the lane boundary obtained in step 204.
The principle of determining the target lane corresponding to the vehicle in the embodiments of the present application may be: determining the target lane corresponding to the vehicle, that is, the lane in which the vehicle is located, according to the relative position between the vehicle and the lane boundaries.
In an optional embodiment of the present application, the process of determining the target lane corresponding to the vehicle in step 205 may specifically include: determining the lane feature points respectively corresponding to a plurality of lane boundaries and a first direction; and determining the target lane corresponding to the vehicle according to the relationship between the first direction and a second direction, where the second direction may be the direction corresponding to the lane feature point and a vehicle feature point.
Referring to FIG. 5, a schematic diagram of a road condition according to an embodiment of the present application is shown, where the road condition may be derived from a road image. The road condition may include several lane lines 501, a vehicle 502 and a road edge line 503. A perpendicular line 504 at the vehicle's position on the road can be determined; the lane feature points may be the intersection points of the perpendicular line 504 with the lane lines, the first direction may be the direction of the lane lines, the second direction may be the direction of the line connecting a feature point and the vehicle feature point, and the vehicle feature point may be the position of the image acquisition device. In this way, the target lane corresponding to the vehicle 502 can be determined according to the positional relationship (such as the included angle) between the first direction and the second direction. In FIG. 5, the included angle corresponding to the third lane is the smallest, so the target lane can be determined to be the third lane.
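The angle test illustrated by FIG. 5 can be sketched as follows; all coordinates are invented, and the lane whose feature point gives the smallest angle between the two directions is selected.

```python
# For each lane, compare the lane-line direction (first direction) with the
# direction from the vehicle feature point to that lane's feature point
# (second direction) and pick the lane with the smallest included angle.
import math

vehicle = (0.0, 0.0)                                   # camera position
lane_dir = (0.0, 1.0)                                  # lane lines run along +y
feature_points = {1: (-5.0, 8.0), 2: (-1.5, 8.0), 3: (0.4, 8.0), 4: (4.0, 8.0)}

def angle_between(a, b):
    dot = a[0] * b[0] + a[1] * b[1]
    na = math.hypot(*a)
    nb = math.hypot(*b)
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

angles = {}
for lane, p in feature_points.items():
    second_dir = (p[0] - vehicle[0], p[1] - vehicle[1])
    angles[lane] = angle_between(lane_dir, second_dir)

print(min(angles, key=angles.get))                     # -> 3
```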
在实际应用中,由于车辆在路口行驶的情况下,不确定因素很多,这给目标车道的确定带来难度。In practical applications, because the vehicle is driving at the intersection, there are many uncertain factors, which makes it difficult to determine the target lane.
In view of the above situation, in an optional embodiment of the present application, the process of determining the target lane corresponding to the vehicle in step 205 may specifically include: determining the target lane corresponding to the vehicle according to the lane boundary and continuous motion information corresponding to the vehicle.
Continuous motion information can characterize the movement of the vehicle over a period of time. Based on the continuous motion information, the embodiment of the present application can determine the target time at which the vehicle enters the lane boundary, and determine the target lane according to the relative position between the vehicle and the lane boundary at that target time. Since the vehicle has entered the lane boundary at the target time, determining the target lane at this moment avoids erroneously determining the target lane before the lane boundary has been entered, thereby improving the accuracy of the target lane.
Referring to FIG. 6, a schematic diagram of the relationship between a vehicle and a lane boundary according to an embodiment of the present application is shown, where time T0 may be the time at which all lane lines are determined. If the target lane were determined at time T0, the wrong target lane could easily be obtained.
Time T1 may be the time at which the vehicle enters a lane line. The embodiment of the present application may accumulate the motion vectors over the period from time T0 to time T1 and determine time T1 according to the accumulated motion vector, thereby enabling accurate determination of the target lane.
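Purely as an illustration of accumulating motion vectors between T0 and T1, the following sketch sums per-frame lateral displacements and reports the first time at which the vehicle has moved far enough to have entered the lane boundary. The sample format and the lateral-offset input are assumptions made for the example.

    def detect_entry_time(motion_samples, entry_offset):
        """motion_samples: iterable of (timestamp, lateral_dx, longitudinal_dy)
        displacements since the previous sample, in metres. entry_offset:
        lateral distance from the vehicle at T0 to the boundary of the lane
        being entered. Returns the first timestamp (T1) at which the summed
        lateral motion reaches that offset, or None if it never does."""
        accumulated = 0.0
        for timestamp, lateral_dx, _longitudinal_dy in motion_samples:
            accumulated += lateral_dx
            if abs(accumulated) >= abs(entry_offset):
                return timestamp
        return None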
The continuous motion information in the embodiment of the present application may be obtained through an inertial sensor. The inertial sensor may be a sensor already present on the vehicle, so no additional sensor cost is incurred. The inertial sensor may include an IMU (Inertial Measurement Unit), and the IMU may include an accelerometer, a gyroscope, and the like.
可选地,可以通过INS(惯性导航***,Inertial Navigation System)确定连续运动信息。INS确定连续运动信息的原理可以为:根据惯性传感器测得的车辆的运动状态变化,通过上一时刻的位置、姿态推算当前时刻的位置和姿态。可选地,INS还可以利用里程计提供的行程数据。Alternatively, continuous motion information can be determined by INS (Inertial Navigation System, Inertial Navigation System). The principle that INS determines the continuous motion information may be: according to the change of the motion state of the vehicle measured by the inertial sensor, the position and posture at the current time can be estimated from the position and posture at the previous time. Optionally, INS can also utilize trip data provided by the odometer.
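A minimal planar dead-reckoning sketch of this INS principle is given below; it propagates position and heading from the previous epoch using a gyroscope yaw rate and an odometer speed, and is intentionally much simpler than a real inertial navigation solution. The state layout and units are assumptions for the example.

    import math

    def propagate(state, yaw_rate, speed, dt):
        """state: (x, y, heading) in map coordinates, heading in radians.
        yaw_rate: gyroscope measurement in rad/s; speed: odometer speed in m/s;
        dt: time step in seconds. Returns the dead-reckoned state."""
        x, y, heading = state
        heading += yaw_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        return (x, y, heading)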
在实际应用中,可以依据图2包括的步骤201至步骤205确定车辆初始进入道路后的车道,也即,本申请实施例的目标车道可以包括:初始目标车道。在确定初始目标车道后,可以利用车道跟踪和/或变道检测方法,进行目标车道的实时更新。In practical applications, the lane after the vehicle initially enters the road may be determined according to steps 201 to 205 included in FIG. 2, that is, the target lane in the embodiment of the present application may include: the initial target lane. After determining the initial target lane, you can use lane tracking and/or lane change detection methods to update the target lane in real time.
Since intersection conditions are complex and changeable, and intersections contain blank areas, the embodiment of the present application describes in detail the processing corresponding to the intersection situation. It can be understood that the embodiment of the present application is also applicable to situations other than intersections.
在实际应用中,本申请实施例可以通过视觉方式和/或听觉方式,输出目标车道的信息。其中,视觉方式可以通过屏幕显示目标车道的信息,听觉方式可以通过扬声器播放目标车道的信息。In practical applications, the embodiments of the present application may output the information of the target lane in a visual manner and/or an auditory manner. Among them, the visual way can display the target lane information through the screen, and the auditory way can play the target lane information through the speaker.
In summary, the data processing method of the embodiment of the present application comprehensively utilizes image data, positioning data and map data to locate the lane in which the vehicle is located. The image data may serve as the basis for determining the lane image features; the positioning data may serve as the basis for determining the lane position features; and the map data may serve as the basis for determining the lane quantity feature. The embodiment of the present application may fuse the lane image features and the lane position features so as to convert the lane features from the image coordinate system to the map coordinate system. In this way, the lane boundary corresponding to the vehicle can be determined according to the lane features in the map coordinate system and the lane quantity feature, and the target lane corresponding to the vehicle, that is, the lane in which the vehicle is located, can then be determined according to the lane boundary.
Since the image data in the embodiment of the present application can be obtained by an image acquisition device such as a camera, and the accuracy requirements on the positioning data and the map data are relatively low, the embodiment of the present application can achieve lane-level positioning at a relatively low cost.
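As one hypothetical way of converting lane features from the image coordinate system to the map coordinate system, the following sketch back-projects a pixel onto a flat ground plane using an assumed camera height and pitch, then transforms the resulting vehicle-frame point into map coordinates using the vehicle pose. The flat-road and small-angle approximations, and all parameter names, are assumptions made for the example, not a prescribed implementation.

    import math

    def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height, pitch):
        """Back-project pixel (u, v) to a ground point in the vehicle frame,
        assuming a flat road and a forward-looking camera mounted cam_height
        metres above the road with a downward pitch (radians). Returns
        (forward, right) in metres; raises ValueError for pixels above the
        horizon under this model."""
        depression = pitch + math.atan2(v - cy, fy)   # angle below horizontal
        if depression <= 0:
            raise ValueError("pixel does not intersect the ground plane")
        forward = cam_height / math.tan(depression)
        right = forward * (u - cx) / fx               # small-angle approximation
        return forward, right

    def vehicle_to_map(forward, right, veh_x, veh_y, heading):
        """Transform a vehicle-frame ground point into map coordinates, given
        the vehicle position (veh_x, veh_y) and heading (radians, measured
        counter-clockwise from the map x-axis)."""
        map_x = veh_x + forward * math.cos(heading) + right * math.sin(heading)
        map_y = veh_y + forward * math.sin(heading) - right * math.cos(heading)
        return map_x, map_y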
方法实施例二Method Example 2
参照图7,示出了本申请的一种数据处理方法实施例二的步骤流程图,具体可以包括如下步骤:Referring to FIG. 7, it shows a flowchart of steps of Embodiment 2 of a data processing method of the present application, which may specifically include the following steps:
步骤701、依据车辆对应的道路图像,确定所述车辆对应的车道图像特征;Step 701: Determine the characteristics of the lane image corresponding to the vehicle according to the road image corresponding to the vehicle;
步骤702、依据定位数据,确定车辆对应的车道位置特征;Step 702: Determine the lane position feature corresponding to the vehicle according to the positioning data;
步骤703、依据地图数据,确定车辆对应的车道数量特征;Step 703: Determine the number of lane features corresponding to the vehicle based on the map data;
步骤704、依据所述车道图像特征、所述车道位置特征和所述车道数量特征,确定 所述车辆对应的车道边界;Step 704: Determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane quantity feature;
步骤705、在所述车辆对应的当前车道边界不完整的情况下,依据车辆对应的道路图像、以及所述当前车道边界,确定所述车辆对应的最新车道图像特征和最新车道位置特征;Step 705: When the current lane boundary corresponding to the vehicle is incomplete, determine the latest lane image feature and the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle and the current lane boundary;
步骤706、依据所述车辆对应的最新车道图像特征、所述最新车道位置特征和所述车道数量特征,确定所述车辆对应的最新车道边界;Step 706: Determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the lane quantity feature;
步骤707、依据所述最新车道边界,确定所述车辆对应的目标车道。Step 707: Determine the target lane corresponding to the vehicle according to the latest lane boundary.
The embodiment of the present application may re-determine the lane boundary when the current lane boundary corresponding to the vehicle is incomplete. Specifically, the lane image features and the lane position features may be re-determined according to the current lane boundary, and the latest lane boundary may then be determined according to the re-determined lane image features and lane position features. Here, the current lane boundary may refer to the lane boundary at the current time, the current time may refer to the device time at which the step is performed, and the current lane boundary may be updated as the current time is updated.
The current lane boundary can provide rich information for the re-determination of the lane image features and can therefore serve as a basis for determining them; in particular, the current lane boundary can provide a basis for secondary feature extraction in the image region. For example, if the road image includes a discontinuous lane line L1, step 704 may combine the lane image features, the lane position features and the lane quantity feature to obtain a continuous lane line L1; step 705 may further combine the lane image features, the lane position features and the lane quantity feature to obtain a lane line L2 adjacent to the lane line L1; step 706 may then obtain more lane image features based on the continuous lane line L1 and the lane line L2, such as the endpoint features of the lane line L2 or the features of a lane line L3 adjacent to the lane line L2.
由于最新车道图像特征和最新车道位置特征为经过更新的、更为准确的特征,因此依据最新车道图像特征和最新车道位置特征,进行车道边界的确定,可以提高车道边界的准确度。Since the latest lane image feature and the latest lane position feature are updated and more accurate features, the determination of the lane boundary based on the latest lane image feature and the latest lane position feature can improve the accuracy of the lane boundary.
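The refinement loop of steps 705 and 706 might be organised as in the following control-flow sketch, in which the feature extraction, positioning and boundary-solving routines are injected as callables because their concrete interfaces are not specified here; the loop structure and termination condition are illustrative assumptions only.

    def refine_boundaries(road_image, lane_count, extract_features, locate,
                          solve_boundaries, max_rounds=3):
        """Repeatedly extract features, locate them and solve for boundaries,
        feeding the current (possibly incomplete) boundaries back into the
        extraction step, until all lane_count + 1 boundaries are available or
        max_rounds is reached. The three callables are placeholders."""
        current_boundaries = None
        for _ in range(max_rounds):
            image_features = extract_features(road_image, current_boundaries)
            position_features = locate(image_features, current_boundaries)
            current_boundaries = solve_boundaries(image_features,
                                                  position_features, lane_count)
            if len(current_boundaries) >= lane_count + 1:
                break
        return current_boundaries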
方法实施例三Method Example 3
参照图8,示出了本申请的一种数据处理方法实施例三的步骤流程图,具体可以包括如下步骤:Referring to FIG. 8, there is shown a flowchart of steps of Embodiment 3 of a data processing method of the present application, which may specifically include the following steps:
步骤801、依据车辆对应的道路图像,确定所述车辆对应的车道图像特征;Step 801: Determine the characteristics of the lane image corresponding to the vehicle according to the road image corresponding to the vehicle;
步骤802、依据定位数据,确定车辆对应的车道位置特征;Step 802: Determine the lane position feature corresponding to the vehicle according to the positioning data;
步骤803、依据地图数据,确定车辆对应的车道数量特征;Step 803: According to the map data, determine the number of lane features corresponding to the vehicle;
步骤804、依据所述车道图像特征、所述车道位置特征和所述车道数量特征,确定所述车辆对应的车道边界;Step 804: Determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature, and the lane number feature;
步骤805、在所述车辆对应的当前车道边界不完整的情况下,确定预测车道边界;Step 805: Determine the predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete;
步骤806、依据车辆对应的道路图像、所述当前车道边界、以及所述预测车道边界,确定所述车辆对应的最新车道图像特征和最新车道位置特征;Step 806: Determine the latest lane image feature and the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
步骤807、依据所述车辆对应的最新车道图像特征、所述最新车道位置特征和所述车道数量特征,确定所述车辆对应的最新车道边界;Step 807: Determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the lane quantity feature;
步骤808、依据所述最新车道边界,确定所述车辆对应的目标车道。Step 808: Determine the target lane corresponding to the vehicle according to the latest lane boundary.
The embodiment of the present application may determine a predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete. For example, the lane boundary obtained by extension using the aforementioned technical solution 1 and/or technical solution 2 may be taken as the predicted lane boundary.
预测车道边界可以作为最新车道图像特征的确定依据。可选地,可以依据预测车道边界,重新进行车道图像特征的提取。The predicted lane boundary can be used as the basis for determining the latest lane image features. Alternatively, the feature extraction of the lane image may be performed again based on the predicted lane boundary.
For example, under weak lighting, the image features of a lane line L3 may initially fail to be extracted from the road image. The embodiment of the present application can predict the lane line L3 and mark it at the corresponding position in the road image, and the image features of the lane line L3 can then be extracted from the road image carrying the mark of the lane line L3. These image features of the lane line L3 are added back into the process of determining the lane boundary; since this increases the information available in the lane boundary determination process, the accuracy of the lane boundary can be improved.
同理,在车道线不连续的情况下,可以预测出连续的车道线。在此不作赘述,相互参照即可。Similarly, when the lane line is discontinuous, a continuous lane line can be predicted. Without repeating details here, cross-reference is sufficient.
需要说明的是,可以针对最新车道图像特征进行定位,以得到最新车道位置特征。例如,最新车道位置特征可以包括:预测车道边界对应的位置特征。It should be noted that the latest lane image features can be located to obtain the latest lane position features. For example, the latest lane position feature may include: predicting the position feature corresponding to the lane boundary.
综上,本申请实施例的数据处理方法,在所述车辆对应的当前车道边界不完整的情况下,确定预测车道边界。预测车道边界可以作为最新车道图像特征的确定依据,以得到更多的车道图像特征。由于可以增加车道边界的确定过程中的信息(如最新车道图像特征和最新车道位置特征),因此可以提高车道边界的准确度。In summary, the data processing method of the embodiment of the present application determines the predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete. The predicted lane boundary can be used as the basis for determining the latest lane image features to obtain more lane image features. Since the information in the process of determining the lane boundary (such as the latest lane image feature and the latest lane position feature) can be added, the accuracy of the lane boundary can be improved.
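As an illustration of predicting a missing lane line from already determined neighbours (the L1/L2/L3 example above), the following sketch extrapolates the lateral spacing of two known lines. The point-list representation and the equal-spacing assumption are made only for the example.

    def predict_next_lane_line(line_a, line_b):
        """Given neighbouring lane lines A and B as lists of (x, y) map points
        sampled at matching stations, return a predicted line C lying beyond B
        at the same spacing, i.e. C = B + (B - A) point by point."""
        return [(2 * bx - ax, 2 * by - ay)
                for (ax, ay), (bx, by) in zip(line_a, line_b)]

    # Example: two straight lines 3.5 m apart yield a predicted third line a
    # further 3.5 m away, which can guide secondary extraction in that region.
    l1 = [(0.0, 0.0), (10.0, 0.0)]
    l2 = [(0.0, 3.5), (10.0, 3.5)]
    l3_predicted = predict_next_lane_line(l1, l2)   # [(0.0, 7.0), (10.0, 7.0)]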
本申请实施例还提供了一种导航方法,具体可以包括如下步骤:An embodiment of the present application also provides a navigation method, which may specifically include the following steps:
依据车辆对应的车道图像特征、车道位置特征和车道数量特征,确定所述车辆对应的车道边界;其中,所述车道图像特征依据所述车辆对应的道路图像确定,所述位置特征依据定位数据确定,所述车道数量特征依据地图数据确定;Determine the lane boundary corresponding to the vehicle according to the lane image feature, lane position feature and lane number feature corresponding to the vehicle; wherein, the lane image feature is determined according to the road image corresponding to the vehicle, and the position feature is determined according to the positioning data , The number of lane features is determined based on map data;
依据所述车道边界,确定所述车辆对应的目标车道;Determine the target lane corresponding to the vehicle according to the lane boundary;
依据所述目标车道,确定所述车辆对应的导航信息。According to the target lane, the navigation information corresponding to the vehicle is determined.
在实际应用中,上述导航信息用于引导所述车辆的行驶。In practical applications, the above navigation information is used to guide the driving of the vehicle.
根据一种实施例,上述导航信息可以包括:语音形式的目标车道信息、或者,在地图上绘制的目标车道对应的车道边界线,以使用户确定车辆所处的目标车道。According to an embodiment, the navigation information may include: target lane information in voice form, or a lane boundary line corresponding to the target lane drawn on the map, so that the user can determine the target lane where the vehicle is located.
根据另一种实施例,上述导航信息可以包括:基于目标车道的导航路线等。According to another embodiment, the above navigation information may include: a navigation route based on a target lane and the like.
In summary, the embodiment of the present application can provide lane-level guidance in a navigation scenario. Such lane-level guidance can improve the accuracy and precision of navigation when the vehicle drives into complex intersections or multi-level interchanges with multiple entrance and exit roads. In addition, lane-level positioning can provide basic data for AR navigation.
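A hypothetical sketch of turning the target lane into lane-level guidance is shown below; the message wording, the lane numbering (1 = leftmost) and the turn rules are invented for illustration and are not part of this disclosure.

    def lane_guidance(target_lane, lane_count, next_maneuver):
        """target_lane: 1-based index counted from the left; next_maneuver is
        one of 'straight', 'left_turn', 'right_turn'."""
        message = "You are in lane {} of {}.".format(target_lane, lane_count)
        if next_maneuver == "right_turn" and target_lane < lane_count:
            message += " Move right for the upcoming right turn."
        elif next_maneuver == "left_turn" and target_lane > 1:
            message += " Move left for the upcoming left turn."
        return message

    print(lane_guidance(3, 4, "right_turn"))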
本申请实施例还提供了一种辅助驾驶方法,具体可以包括如下步骤:An embodiment of the present application also provides a driving assistance method, which may specifically include the following steps:
依据车辆对应的车道图像特征、车道位置特征和车道数量特征,确定所述车辆对应的车道边界;其中,所述车道图像特征依据所述车辆对应的道路图像确定,所述位置特征依据定位数据确定,所述车道数量特征依据地图数据确定;Determine the lane boundary corresponding to the vehicle according to the lane image feature, lane position feature and lane number feature corresponding to the vehicle; wherein, the lane image feature is determined according to the road image corresponding to the vehicle, and the position feature is determined according to the positioning data , The number of lane features is determined based on map data;
依据所述车道边界,确定所述车辆对应的目标车道;Determine the target lane corresponding to the vehicle according to the lane boundary;
依据所述目标车道,确定所述车辆对应的辅助驾驶信息。According to the target lane, the auxiliary driving information corresponding to the vehicle is determined.
上述辅助驾驶信息可以包括:目标车道播报信息、或者车道保持信息、或者变道信息等。例如,在目标车道的路况通畅的情况下,可以输出车道保持信息;又如,在目标车道不适于转弯的情况下,可以输出变道信息。可以理解,本申请实施例对于具体的辅助驾驶信息不加以限制,例如上述辅助驾驶信息还可以包括:刹车提示信息等。The above-mentioned driving assistance information may include: target lane announcement information, or lane keeping information, or lane change information, etc. For example, when the target lane is clear, the lane keeping information can be output; for example, when the target lane is not suitable for turning, the lane change information can be output. It can be understood that the embodiments of the present application do not limit specific auxiliary driving information, for example, the foregoing auxiliary driving information may further include: brake prompt information and the like.
由于本申请实施例可以依据车道级别的定位提供辅助驾驶信息,可以提高车辆的安全性和辅助驾驶信息的合理性。Since the embodiments of the present application can provide assisted driving information according to the lane-level positioning, the safety of the vehicle and the rationality of the assisted driving information can be improved.
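The selection of driving-assistance output could, for example, be sketched as follows, mirroring the lane-keep, lane-change and brake-prompt examples above; the input flags and the decision rules are assumptions made for illustration only.

    def assistance_info(target_lane_allows_turn, target_lane_clear):
        """Returns a coarse driving-assistance decision for the current target
        lane: suggest a lane change when the lane does not suit the planned
        turn, keep the lane when it is clear, otherwise advise braking."""
        if not target_lane_allows_turn:
            return "lane_change_suggested"
        if target_lane_clear:
            return "lane_keep"
        return "brake_advised"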
It should be noted that, for simplicity of description, the method embodiments are each expressed as a series of combinations of actions, but those skilled in the art should appreciate that the embodiments of the present application are not limited by the described sequence of actions, because according to the embodiments of the present application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
本申请实施例还提供了一种数据处理装置。An embodiment of the present application also provides a data processing device.
参照图9,示出了本申请的一种数据处理装置实施例的结构框图,具体可以包括如下模块:Referring to FIG. 9, it shows a structural block diagram of an embodiment of a data processing device of the present application, which may specifically include the following modules:
图像处理模块901,用于依据车辆对应的道路图像,确定所述车辆对应的车道图像特征;The image processing module 901 is used to determine the lane image features corresponding to the vehicle according to the road image corresponding to the vehicle;
定位模块902,用于依据定位数据,确定车辆对应的车道位置特征;The positioning module 902 is used to determine the lane position characteristics of the vehicle according to the positioning data;
地图处理模块903,用于依据地图数据,确定车辆对应的车道数量特征;The map processing module 903 is used to determine the number of lane features corresponding to the vehicle according to the map data;
车道边界确定模块904,用于依据所述车道图像特征、所述车道位置特征和所述车道数量特征,确定所述车辆对应的车道边界;以及The lane boundary determination module 904 is configured to determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature; and
目标车道确定模块905,用于依据所述车道边界,确定所述车辆对应的目标车道。The target lane determination module 905 is used to determine the target lane corresponding to the vehicle according to the lane boundary.
可选地,所述车道边界确定模块904可以包括:Optionally, the lane boundary determination module 904 may include:
车道特征点确定模块,用于依据所述车道图像特征,得到两个车道特征点;所述两个车道特征点属于不同的第一车道边界;The lane feature point determination module is used to obtain two lane feature points according to the lane image features; the two lane feature points belong to different first lane boundaries;
距离确定模块,用于依据所述车道位置特征,确定所述两个车道特征点之间的距离;以及A distance determination module for determining the distance between the two lane feature points based on the lane position feature; and
第一边界确定模块,用于依据所述两个车道特征点之间的距离、以及所述车道数量特征,确定所述车辆对应的第二车道边界。The first boundary determining module is configured to determine a second lane boundary corresponding to the vehicle according to the distance between the two lane feature points and the number of lane features.
可选地,所述车道图像特征可以包括:道路周边设施的轮廓线,所述车道边界确定模块904可以包括:Optionally, the lane image features may include: contour lines of road surrounding facilities, and the lane boundary determination module 904 may include:
道路宽度确定模块,用于依据所述道路周边设施的轮廓线、以及车道位置特征,确定道路宽度;The road width determination module is used to determine the road width according to the contour lines of the surrounding facilities of the road and the characteristics of the position of the lane;
第二边界确定模块,用于依据所述道路宽度、以及所述车道数量特征,确定所述车辆对应的车道边界。The second boundary determination module is used to determine the lane boundary corresponding to the vehicle according to the road width and the number of lane features.
可选地,所述目标车道确定模块905可以包括:Optionally, the target lane determination module 905 may include:
第一目标车道确定模块,用于依据所述车道边界、以及所述车辆对应的连续运动信息,确定所述车辆对应的目标车道。The first target lane determination module is configured to determine the target lane corresponding to the vehicle according to the lane boundary and continuous motion information corresponding to the vehicle.
可选地,所述目标车道确定模块905可以包括:Optionally, the target lane determination module 905 may include:
特征点及方向确定模块,用于确定多个车道边界分别对应的车道特征点、以及第一方向;The feature point and direction determination module is used to determine the lane feature points corresponding to the multiple lane boundaries and the first direction respectively;
The second target lane determination module is configured to determine the target lane corresponding to the vehicle according to the relationship between the first direction and the second direction, where the second direction is the direction corresponding to the lane feature point and the vehicle feature point.
Optionally, the image processing module 901 is further configured to, when the current lane boundary corresponding to the vehicle is incomplete, determine the latest lane image features corresponding to the vehicle according to the road image corresponding to the vehicle and the current lane boundary;
所述定位模块902还用于在所述车辆对应的当前车道边界不完整的情况下,依据车辆对应的道路图像、以及所述当前车道边界,确定所述车辆对应的最新车道位置特征;The positioning module 902 is also used to determine the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle and the current lane boundary when the current lane boundary corresponding to the vehicle is incomplete;
所述车道边界确定模块904,还用于依据所述车辆对应的最新车道图像特征、所述最新车道位置特征和所述车道数量特征,确定所述车辆对应的最新车道边界;The lane boundary determination module 904 is also used to determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the number of lane features;
所述目标车道确定模块905,还用于依据所述最新车道边界,确定所述车辆对应的目标车道。The target lane determination module 905 is also used to determine the target lane corresponding to the vehicle according to the latest lane boundary.
Optionally, the lane boundary determination module 904 is further configured to determine a predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete;
所述图像处理模块901,还用于依据车辆对应的道路图像、所述当前车道边界、以及所述预测车道边界,确定所述车辆对应的最新车道图像特征;The image processing module 901 is also used to determine the latest lane image feature corresponding to the vehicle according to the road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
所述定位模块902,还用于依据车辆对应的道路图像、所述当前车道边界、以及所述预测车道边界,确定最新车道位置特征;The positioning module 902 is also used to determine the latest lane position feature based on the road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
所述车道边界确定模块904,还用于依据所述车辆对应的最新车道图像特征、所述最新车道位置特征和所述车道数量特征,确定所述车辆对应的最新车道边界;The lane boundary determination module 904 is also used to determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the number of lane features;
所述目标车道确定模块905,还用于依据所述最新车道边界,确定所述车辆对应的目标车道。The target lane determination module 905 is also used to determine the target lane corresponding to the vehicle according to the latest lane boundary.
可选地,所述车道图像特征可以包括如下特征中的至少一种:Optionally, the lane image feature may include at least one of the following features:
车道特征点、车道特征线、以及车道特征区域。Lane feature points, lane feature lines, and lane feature areas.
可选地,所述车道特征点可以包括如下特征点中的至少一种:Optionally, the lane feature points may include at least one of the following feature points:
车道边界的端点;以及The end of the lane boundary; and
车道边界与车道边界的垂线之间的交点。The intersection between the lane boundary and the vertical line of the lane boundary.
可选地,所述车道特征线可以包括如下特征线中的至少一种:Optionally, the lane characteristic line may include at least one of the following characteristic lines:
车道线;Lane line
道路边沿线;Roadside;
道路边沿设施的轮廓线;The outline of road edge facilities;
道路上方设施的轮廓线;以及The outline of the facility above the road; and
隧道口轮廓线。Contour of tunnel entrance.
可选地,所述车道特征区域可以包括如下区域中的至少一种:Optionally, the lane feature area may include at least one of the following areas:
斑马线区域;Zebra crossing area;
绿化带区域;以及Green belt area; and
车辆区域。Vehicle area.
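One possible, purely illustrative container mirroring the feature categories listed above is sketched below; the field names and the point/polygon representations are assumptions rather than definitions from this disclosure.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class LaneImageFeatures:
        # Endpoints of lane boundaries and intersections with perpendicular lines.
        feature_points: List[Point] = field(default_factory=list)
        # Lane lines, road edge lines, facility contours, tunnel entrance contours.
        feature_lines: List[List[Point]] = field(default_factory=list)
        # Zebra crossing, green belt and vehicle regions, stored as polygons.
        feature_areas: List[List[Point]] = field(default_factory=list)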
对于装置实施例而言,由于其与方法实施例基本相似,所以描述的比较简单,相关 之处参见方法实施例的部分说明即可。For the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant part can be referred to the description of the method embodiment.
Referring to FIG. 10, a schematic diagram of the data interaction of a data processing apparatus according to an embodiment of the present application is shown. The image processing module 901 is configured to perform image processing on the road image acquired by the image acquisition device, where the image processing may include image recognition, feature extraction and the like, and the module interacts with both the lane boundary determination module 904 and the positioning module 902. The image processing result may be input to the lane boundary determination module 904; in addition, the image processing result may also be passed to the positioning module 902 to improve the accuracy of the lane position features.
定位模块902用于确定车辆位置特征、以及车道图像特征对应的位置特征。定位模块902可以将图像处理结果转换到地图坐标系中。The positioning module 902 is used to determine the position features of the vehicle and the position features corresponding to the lane image features. The positioning module 902 can convert the image processing result into a map coordinate system.
由于图像识别结果可以为定位模块902提供依据,故可以提升车道位置特征的定位精度。另外,车道位置特征还可以作为车道边界确定模块904的输入与图像处理结果进行融合。并且,定位模块902还可以与地图处理模块903进行交互,以获取车辆所在道路的道路特征,上述道路特征可以包括:车道数量、车道方向特征等。Since the image recognition result can provide a basis for the positioning module 902, the positioning accuracy of the lane position feature can be improved. In addition, the lane position feature can also be fused as the input of the lane boundary determination module 904 and the image processing result. In addition, the positioning module 902 can also interact with the map processing module 903 to obtain the road features of the road where the vehicle is located. The above-mentioned road features may include: the number of lanes, lane direction features, and the like.
地图处理模块903,用于确定车辆位置所处的道路特征,上述道路特征在车道边界确定模块904中与图像处理结果和车道位置特征进行融合。The map processing module 903 is used to determine the road feature where the vehicle position is located. The above road feature is merged with the image processing result and the lane position feature in the lane boundary determination module 904.
车道边界确定模块904和目标车道确定模块905可以集成设置,或者分开设置。二者用于确定初始目标车道,在确定初始目标车道后,可以利用车道跟踪和/或变道检测方法,进行目标车道的实时更新。The lane boundary determination module 904 and the target lane determination module 905 may be set integrally or separately. Both are used to determine the initial target lane. After the initial target lane is determined, the lane tracking and/or lane change detection method can be used to update the target lane in real time.
In an embodiment of the present application, the data passed from the image processing module 901 to the lane boundary determination module 904 may include lane image features, which may specifically include but are not limited to: lane lines and road edges, feature points on both sides of the road, and the like. The data passed from the lane boundary determination module 904 to the image processing module 901 may include: the current lane boundary, lane position features, continuous motion information of the vehicle, the posture of the image acquisition device, road features, and the like. The posture of the image acquisition device may serve as a basis for the image processing, and the posture may include: level, upward-looking, downward-looking, and the like.
In an example of the present application, assuming that the number of lanes is 4 and one of the lane lines is disturbed, the image processing module 901 may attempt to extract the image features corresponding to the disturbed lane line, and the lane boundary determination module 904 may determine the disturbed lane line based on the fused data from multiple sources.
在本申请的一种实施例中,定位模块902向车道边界确定模块904传递的数据可以包括:车辆位置特征、车辆的连续运动信息、图像采集装置的姿态、车道位置特征及变道检测结果等。车道边界确定模块904向定位模块传递的数据可以包括:当前车道边界、车辆位置特征、车辆的连续运动信息、图像采集装置的姿态、以及道路特征等。In an embodiment of the present application, the data transmitted by the positioning module 902 to the lane boundary determination module 904 may include: vehicle position characteristics, continuous motion information of the vehicle, posture of the image acquisition device, lane position characteristics, and lane change detection results, etc. . The data transmitted by the lane boundary determination module 904 to the positioning module may include: current lane boundary, vehicle position characteristics, continuous motion information of the vehicle, posture of the image acquisition device, and road characteristics.
图像处理模块901,用于依据车辆对应的道路图像,确定所述车辆对应的车道图像特征;The image processing module 901 is used to determine the lane image features corresponding to the vehicle according to the road image corresponding to the vehicle;
定位模块902,用于依据定位数据,确定车辆对应的车道位置特征;The positioning module 902 is used to determine the lane position characteristics of the vehicle according to the positioning data;
地图处理模块903,用于依据地图数据,确定车辆对应的车道数量特征;The map processing module 903 is used to determine the number of lane features corresponding to the vehicle according to the map data;
The data interaction between the lane boundary determination module 904 and the image processing module 901, the positioning module 902 and the map processing module 903 can, on the one hand, better achieve the determination of the lane boundary, and, on the other hand, allow useful information to flow between different modules, thereby improving the performance and efficiency of multiple modules.
Optionally, the lane boundary determination module 904 may determine, from the positioning result (that is, the lane position features), lane characteristics such as the number of lanes, the lane type and the lane direction of the road segment where the vehicle is located, and combine the image recognition result, the positioning result and these lane characteristics. The image recognition result may first be preprocessed, where the preprocessing may include gross error detection and elimination; the preprocessed data is then used to solve for the lane boundaries, and the solution result may be the lane boundary information in the coordinate system shared by the positioning and the map. If all lane boundaries have been determined, the process may proceed to the next step of determining the target lane; otherwise, the remaining lane boundaries may be predicted on the basis of the existing first lane boundaries, the predicted lane boundaries are passed back to the image processing module 901, the image processing module 901 performs secondary extraction on the corresponding regions according to the predicted lane boundaries, and the extracted results are added back into the lane boundary solving process until the determination of all lane boundaries is completed. This feedback of the predicted lane boundaries allows more information to be applied to the determination of the lane boundaries.
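The gross error detection and elimination mentioned above could, for instance, be sketched as a spacing check on candidate boundary offsets; the nominal lane width, the tolerance and the keep/drop rules below are example values chosen for illustration, not part of this disclosure.

    def remove_gross_errors(boundary_offsets, nominal_lane_width=3.5, tolerance=0.35):
        """boundary_offsets: lateral offsets (metres) of detected boundary
        candidates. Candidates whose spacing to the previously kept candidate is
        far smaller than the nominal lane width are treated as gross errors and
        dropped; much larger gaps are kept and left for the prediction step."""
        kept = []
        for offset in sorted(boundary_offsets):
            if not kept:
                kept.append(offset)
                continue
            gap = offset - kept[-1]
            if abs(gap - nominal_lane_width) <= tolerance * nominal_lane_width:
                kept.append(offset)                 # spacing looks like one lane
            elif gap > nominal_lane_width * (1 + tolerance):
                kept.append(offset)                 # a boundary was probably missed
            # otherwise the candidate is too close to its neighbour: drop it
        return kept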
Optionally, after all lane boundaries are determined, the motion vectors of the vehicle begin to be accumulated; this is done because the vehicle may at that moment be changing lanes or happen to be driving in a blank area of an intersection. Finally, the target lane in which the vehicle is located is determined according to the accumulated motion vector, completing the determination of the initial target lane.
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。The embodiments in this specification are described in a progressive manner. Each embodiment focuses on the differences from other embodiments, and the same or similar parts between the embodiments may refer to each other.
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。Regarding the device in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment related to the method, and will not be elaborated here.
The embodiments of the present application may be implemented as a system or apparatus configured as desired using any suitable hardware and/or software. FIG. 11 schematically shows an exemplary device 1300 that can be used to implement the various embodiments described in the present application.
For one embodiment, FIG. 11 shows an exemplary device 1300, which may include: one or more processors 1302, a system control module (chipset) 1304 coupled to at least one of the processors 1302, system memory 1306 coupled to the system control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the system control module 1304, one or more input/output devices 1310 coupled to the system control module 1304, and a network interface 1312 coupled to the system control module 1304. The system memory 1306 may include instructions 1362, which may be executed by the one or more processors 1302.
处理器1302可包括一个或多个单核或多核处理器,处理器1302可包括通用处理器或专用处理器(例如图形处理器、应用程序处理器、基带处理器等)的任意组合。在一些实施例中,设备1300能够作为本申请实施例中所述的服务器、目标设备、无线设备等。The processor 1302 may include one or more single-core or multi-core processors, and the processor 1302 may include any combination of general-purpose processors or special-purpose processors (eg, graphics processors, application processors, baseband processors, etc.). In some embodiments, the device 1300 can serve as the server, target device, wireless device, etc. described in the embodiments of the present application.
In some embodiments, the device 1300 may include one or more machine-readable media having instructions (for example, the system memory 1306 or the NVM/storage 1308) and, combined with the one or more machine-readable media, one or more processors 1302 configured to execute the instructions so as to implement the modules included in the foregoing apparatus and thereby perform the actions described in the embodiments of the present application.
一个实施例的***控制模块1304可包括任何适合的接口控制器,用于提供任何适合的接口给处理器1302中的至少一个和/或与***控制模块1304通信的任意适合的装置或部件。The system control module 1304 of an embodiment may include any suitable interface controller for providing any suitable interface to at least one of the processors 1302 and/or any suitable device or component in communication with the system control module 1304.
一个实施例的***控制模块1304可包括一个或多个存储器控制器,用于提供接口给***存储器1306。存储器控制器可以是硬件模块、软件模块和/或固件模块。The system control module 1304 of an embodiment may include one or more memory controllers for providing an interface to the system memory 1306. The memory controller may be a hardware module, a software module, and/or a firmware module.
一个实施例的***存储器1306可被用于加载和存储数据和/或指令1362。对于一个实施例,***存储器1306可包括任何适合的易失性存储器,例如,适合的DRAM(动态随机存取存储器)。在一些实施例中,***存储器1306可包括:双倍数据速率类型四同步动态随机存取存储器(DDR4SDRAM)。The system memory 1306 of one embodiment may be used to load and store data and/or instructions 1362. For one embodiment, the system memory 1306 may include any suitable volatile memory, for example, a suitable DRAM (Dynamic Random Access Memory). In some embodiments, the system memory 1306 may include: double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
一个实施例的***控制模块1304可包括一个或多个输入/输出控制器,以向NVM/存储装置1308及(一个或多个)输入/输出设备1310提供接口。The system control module 1304 of one embodiment may include one or more input/output controllers to provide an interface to the NVM/storage device 1308 and the input/output device(s) 1310.
一个实施例的NVM/存储装置1308可被用于存储数据和/或指令1382。NVM/存储装置1308可包括任何适合的非易失性存储器(例如闪存等)和/或可包括任何适合的(一个或多个)非易失性存储设备,例如,一个或多个硬盘驱动器(HDD)、一个或多个光盘(CD)驱动器和/或一个或多个数字通用光盘(DVD)驱动器等。The NVM/storage device 1308 of one embodiment may be used to store data and/or instructions 1382. The NVM/storage device 1308 may include any suitable non-volatile memory (such as flash memory, etc.) and/or may include any suitable non-volatile storage device(s), for example, one or more hard drives ( HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives, etc.
NVM/存储装置1308可包括在物理上是设备1300被安装在其上的装置的一部分的存储资源,或者其可被该装置访问而不必作为该装置的一部分。例如,NVM/存储装置1308可经由网络接口1312通过网络和/或通过输入/输出设备1310进行访问。The NVM/storage device 1308 may include storage resources that are physically part of the device on which the device 1300 is installed, or it may be accessed by the device without having to be part of the device. For example, the NVM/storage device 1308 can be accessed via the network interface 1312 through the network and/or through the input/output device 1310.
一个实施例的(一个或多个)输入/输出设备1310可为设备1300提供接口以与任意其他适当的设备通信,输入/输出设备1310可以包括通信组件、音频组件、传感器组件 等。The input/output device(s) 1310 of one embodiment may provide an interface for the device 1300 to communicate with any other suitable device. The input/output device 1310 may include communication components, audio components, sensor components, and the like.
The network interface 1312 of one embodiment may provide an interface for the device 1300 to communicate over one or more networks and/or with any other suitable apparatus. The device 1300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as WiFi, 2G or 3G, or a combination thereof.
For one embodiment, at least one of the processors 1302 may be packaged together with the logic of one or more controllers (for example, a memory controller) of the system control module 1304. For one embodiment, at least one of the processors 1302 may be packaged together with the logic of one or more controllers of the system control module 1304 to form a system in package (SiP). For one embodiment, at least one of the processors 1302 may be integrated on the same die with the logic of one or more controllers of the system control module 1304. For one embodiment, at least one of the processors 1302 may be integrated on the same chip with the logic of one or more controllers of the system control module 1304 to form a system on chip (SoC).
在各个实施例中,设备1300可以包括但不限于:台式计算设备或移动计算设备(例如,膝上型计算设备、手持计算设备、平板电脑、上网本等)等计算设备。在各个实施例中,设备1300可具有更多或更少的组件和/或不同的架构。例如,在一些实施例中,设备1300可以包括一个或多个摄像机、键盘、液晶显示器(LCD)屏幕(包括触屏显示器)、非易失性存储器端口、多个天线、图形芯片、专用集成电路(ASIC)和扬声器。In various embodiments, the device 1300 may include, but is not limited to, desktop computing devices or mobile computing devices (eg, laptop computing devices, handheld computing devices, tablet computers, netbooks, etc.) and other computing devices. In various embodiments, the device 1300 may have more or fewer components and/or different architectures. For example, in some embodiments, the device 1300 may include one or more cameras, keyboards, liquid crystal display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, application specific integrated circuits (ASIC) and speakers.
其中,如果显示器包括触摸面板,显示屏可以被实现为触屏显示器,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。Among them, if the display includes a touch panel, the display screen may be implemented as a touch screen display to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
An embodiment of the present application further provides a non-volatile readable storage medium in which one or more modules (programs) are stored; when the one or more modules are applied to a device, they can cause the device to execute the instructions of the methods in the embodiments of the present application.
In one example, a device is provided, including: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to perform the method in the embodiments of the present application, where the method may include the method shown in FIG. 2, FIG. 3, FIG. 4, FIG. 5, FIG. 6, FIG. 7 or FIG. 8.
In one example, one or more machine-readable media are also provided, on which instructions are stored which, when executed by one or more processors, cause an apparatus to perform the method in the embodiments of the present application, where the method may include the method shown in FIG. 2, FIG. 3, FIG. 4, FIG. 5, FIG. 6, FIG. 7 or FIG. 8.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here; for related details, reference may be made to the description of the method embodiments.
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。The embodiments in this specification are described in a progressive manner. Each embodiment focuses on the differences from other embodiments, and the same or similar parts between the embodiments may refer to each other.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the method, apparatus (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing apparatus to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing apparatus, such that a series of operation steps are performed on the computer or other programmable apparatus to produce computer-implemented processing, so that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
尽管已描述了本申请实施例的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请实施例范围的所有变更和修改。Although the preferred embodiments of the embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一 个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者装置中还存在另外的相同要素。Finally, it should also be noted that in this article, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply these entities Or there is any such actual relationship or order between operations. Moreover, the terms "include", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements, but also those not explicitly listed Or other elements that are inherent to this process, method, article, or device. Without further restrictions, the element defined by the sentence "include one..." does not exclude that there are other identical elements in the process, method, article or device that includes the element.
The data processing method, the data processing apparatus, the device and the machine-readable medium provided in the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the scope of application according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (16)

  1. 一种数据处理方法,其特征在于,包括:A data processing method, including:
    依据车辆对应的道路图像,确定所述车辆对应的车道图像特征;Determine the image features of the lane corresponding to the vehicle according to the road image corresponding to the vehicle;
    依据定位数据,确定车辆对应的车道位置特征;According to the positioning data, determine the corresponding lane position characteristics of the vehicle;
    依据地图数据,确定车辆对应的车道数量特征;According to the map data, determine the characteristics of the number of lanes corresponding to the vehicle;
    依据所述车道图像特征、所述车道位置特征和所述车道数量特征,确定所述车辆对应的车道边界;Determine the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
    依据所述车道边界,确定所述车辆对应的目标车道。According to the lane boundary, a target lane corresponding to the vehicle is determined.
  2. 根据权利要求1所述的方法,其特征在于,所述确定所述车辆对应的车道边界,包括:The method according to claim 1, wherein the determining the lane boundary corresponding to the vehicle comprises:
    依据所述车道图像特征,得到两个车道特征点;所述两个车道特征点属于不同的第一车道边界;According to the lane image features, two lane feature points are obtained; the two lane feature points belong to different first lane boundaries;
    依据所述车道位置特征,确定所述两个车道特征点之间的距离;Determine the distance between the two lane feature points based on the lane position feature;
    依据所述两个车道特征点之间的距离、以及所述车道数量特征,确定所述车辆对应的第二车道边界。The second lane boundary corresponding to the vehicle is determined according to the distance between the two lane feature points and the number of lane features.
  3. 根据权利要求1所述的方法,其特征在于,所述车道图像特征包括:道路周边设施的轮廓线,所述确定所述车辆对应的车道边界,包括:The method of claim 1, wherein the lane image features include: contour lines of road surrounding facilities, and the determining of the lane boundary corresponding to the vehicle includes:
    依据所述道路周边设施的轮廓线、以及车道位置特征,确定道路宽度;Determine the width of the road based on the contours of the surrounding facilities of the road and the characteristics of the position of the lane;
    依据所述道路宽度、以及所述车道数量特征,确定所述车辆对应的车道边界。The lane boundary corresponding to the vehicle is determined according to the road width and the number of lane features.
  4. 根据权利要求1所述的方法,其特征在于,所述确定所述车辆对应的目标车道,包括:The method according to claim 1, wherein the determining the target lane corresponding to the vehicle comprises:
    依据所述车道边界、以及所述车辆对应的连续运动信息,确定所述车辆对应的目标车道。The target lane corresponding to the vehicle is determined according to the lane boundary and continuous motion information corresponding to the vehicle.
  5. 根据权利要求1所述的方法,其特征在于,所述确定所述车辆对应的目标车道,包括:The method according to claim 1, wherein the determining the target lane corresponding to the vehicle comprises:
    确定多个车道边界分别对应的车道特征点、以及第一方向;Determine the lane feature points and the first direction corresponding to the multiple lane boundaries respectively;
    依据所述第一方向与第二方向之间的关系,确定所述车辆对应的目标车道;所述第二方向为所述车道特征点与车辆特征点对应的方向。The target lane corresponding to the vehicle is determined according to the relationship between the first direction and the second direction; the second direction is the direction corresponding to the lane feature point and the vehicle feature point.
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    在所述车辆对应的当前车道边界不完整的情况下,依据车辆对应的道路图像、以及 所述当前车道边界,确定所述车辆对应的最新车道图像特征和最新车道位置特征;In the case where the current lane boundary corresponding to the vehicle is incomplete, the latest lane image feature and the latest lane position feature corresponding to the vehicle are determined according to the road image corresponding to the vehicle and the current lane boundary;
    依据所述车辆对应的最新车道图像特征、所述最新车道位置特征和所述车道数量特征,确定所述车辆对应的最新车道边界;Determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the number of lane features;
    依据所述最新车道边界,确定所述车辆对应的目标车道。According to the latest lane boundary, a target lane corresponding to the vehicle is determined.
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    在所述车辆对应的当前车道边界不完整的情况下,确定预测车道边界;Determine the predicted lane boundary when the current lane boundary corresponding to the vehicle is incomplete;
    依据车辆对应的道路图像、所述当前车道边界、以及所述预测车道边界,确定所述车辆对应的最新车道图像特征和最新车道位置特征;Determine the latest lane image feature and the latest lane position feature corresponding to the vehicle based on the road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
    依据所述车辆对应的最新车道图像特征、所述最新车道位置特征和所述车道数量特征,确定所述车辆对应的最新车道边界;Determine the latest lane boundary corresponding to the vehicle according to the latest lane image feature corresponding to the vehicle, the latest lane position feature and the number of lane features;
    依据所述最新车道边界,确定所述车辆对应的目标车道。According to the latest lane boundary, a target lane corresponding to the vehicle is determined.
  8. 根据权利要求1至7中任一所述的方法,其特征在于,所述车道图像特征包括如下特征中的至少一种:The method according to any one of claims 1 to 7, wherein the lane image features include at least one of the following features:
    车道特征点、车道特征线、以及车道特征区域。Lane feature points, lane feature lines, and lane feature areas.
  9. 根据权利要求8所述的方法,其特征在于,所述车道特征点包括如下特征点中的至少一种:The method according to claim 8, wherein the lane feature points include at least one of the following feature points:
    车道边界的端点;以及The end of the lane boundary; and
    车道边界与车道边界的垂线之间的交点。The intersection between the lane boundary and the vertical line of the lane boundary.
  10. 根据权利要求8所述的方法,其特征在于,所述车道特征线包括如下特征线中的至少一种:The method according to claim 8, wherein the lane characteristic line comprises at least one of the following characteristic lines:
    车道线;Lane line
    道路边沿线;Roadside;
    道路边沿设施的轮廓线;The outline of road edge facilities;
    道路上方设施的轮廓线;以及The outline of the facility above the road; and
    隧道口轮廓线。Contour of tunnel entrance.
  11. The method according to claim 8, wherein the lane feature area comprises at least one of the following areas:
    a zebra crossing area;
    a green belt area; and
    a vehicle area.
  12. A data processing apparatus, comprising:
    an image processing module, configured to determine a lane image feature corresponding to a vehicle according to a road image corresponding to the vehicle;
    a positioning module, configured to determine a lane position feature corresponding to the vehicle according to positioning data;
    a map processing module, configured to determine a lane number feature corresponding to the vehicle according to map data;
    a lane boundary determining module, configured to determine a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature, and the lane number feature; and
    a target lane determining module, configured to determine a target lane corresponding to the vehicle according to the lane boundary.
  13. A device, comprising:
    one or more processors; and
    one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the device to perform the method according to one or more of claims 1 to 11.
  14. One or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the method according to one or more of claims 1 to 11.
  15. A navigation method, comprising:
    determining a lane boundary corresponding to a vehicle according to a lane image feature, a lane position feature, and a lane number feature corresponding to the vehicle, wherein the lane image feature is determined according to a road image corresponding to the vehicle, the lane position feature is determined according to positioning data, and the lane number feature is determined according to map data;
    determining a target lane corresponding to the vehicle according to the lane boundary; and
    determining navigation information corresponding to the vehicle according to the target lane.
  16. A driving assistance method, comprising:
    determining a lane boundary corresponding to a vehicle according to a lane image feature, a lane position feature, and a lane number feature corresponding to the vehicle, wherein the lane image feature is determined according to a road image corresponding to the vehicle, the lane position feature is determined according to positioning data, and the lane number feature is determined according to map data;
    determining a target lane corresponding to the vehicle according to the lane boundary; and
    determining driving assistance information corresponding to the vehicle according to the target lane.
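To make the geometric test in claim 5 easier to follow, the sketch below shows one possible reading of it, in which the first direction is taken as a boundary's direction of travel and the second direction runs from a lane feature point to a vehicle feature point; the vehicle's lane index is then the count of boundaries it lies to the right of. The function names, the 2-D cross-product test, and the left-to-right indexing are assumptions made for illustration, not the patent's prescribed implementation.

```python
# Hedged sketch of one way the claim-5 direction relationship could be evaluated.
# Assumed convention: "first direction" = boundary travel direction, "second
# direction" = vector from a lane feature point to a vehicle feature point.
from typing import List, Tuple

Vec = Tuple[float, float]  # (x, y) in a road-aligned plane


def vehicle_is_right_of(lane_point: Vec, first_dir: Vec, vehicle_point: Vec) -> bool:
    """True if the vehicle feature point lies to the right of the boundary,
    judged from the sign of the 2-D cross product of the first direction and
    the second direction (lane feature point -> vehicle feature point)."""
    second_dir = (vehicle_point[0] - lane_point[0], vehicle_point[1] - lane_point[1])
    cross = first_dir[0] * second_dir[1] - first_dir[1] * second_dir[0]
    return cross < 0.0  # negative z-component => vehicle on the right-hand side


def target_lane_index(lane_points: List[Vec], first_dirs: List[Vec],
                      vehicle_point: Vec) -> int:
    """1-based lane index counted from the left: the number of boundaries the
    vehicle sits to the right of."""
    left_boundaries = sum(
        1 for point, direction in zip(lane_points, first_dirs)
        if vehicle_is_right_of(point, direction, vehicle_point))
    return max(left_boundaries, 1)
```

Under this convention, for a three-lane road described by four boundaries, a vehicle lying to the right of exactly two of them would be assigned lane 2.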
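Claims 6 and 7 handle the case where the currently detected lane boundary is incomplete by re-deriving a latest boundary set, in claim 7 with the help of a predicted lane boundary. The following is a minimal sketch of that idea, assuming boundaries are represented as lateral offsets from the vehicle and that the prediction simply shifts the previous frame's boundaries by the vehicle's lateral motion; the function names, the offset representation, and the matching tolerance are all hypothetical.

```python
# Minimal sketch, assuming boundaries are lateral offsets (metres) from the
# vehicle centreline and that a predicted boundary is the previous boundary
# shifted by the vehicle's lateral motion between frames.
from typing import List


def predict_boundaries(previous_offsets_m: List[float],
                       lateral_motion_m: float) -> List[float]:
    """Predicted boundary offsets: the previous offsets shifted opposite to the
    vehicle's own lateral displacement."""
    return [offset - lateral_motion_m for offset in previous_offsets_m]


def complete_boundaries(detected_offsets_m: List[float],
                        predicted_offsets_m: List[float],
                        lane_count: int,
                        match_tolerance_m: float = 1.0) -> List[float]:
    """Merge detections with predictions: every detection is kept, and a
    predicted offset is added only when no detection lies within the matching
    tolerance, until lane_count + 1 boundaries are available."""
    merged = list(detected_offsets_m)
    for predicted in predicted_offsets_m:
        if len(merged) >= lane_count + 1:
            break
        if all(abs(predicted - detected) > match_tolerance_m for detected in merged):
            merged.append(predicted)
    return sorted(merged)[:lane_count + 1]
```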
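Claims 8 to 11 enumerate the kinds of lane image features (points, lines, areas) that may be extracted from the road image. A small, purely illustrative data model for carrying those features is sketched below; the class and field names are assumptions, not part of the claimed apparatus.

```python
# Hypothetical container for the lane image features listed in claims 8-11.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

Point = Tuple[float, float]  # pixel or road-plane coordinates


class FeatureLineKind(Enum):
    LANE_LINE = "lane line"
    ROAD_EDGE_LINE = "road edge line"
    ROADSIDE_FACILITY_CONTOUR = "contour of a roadside facility"
    OVERHEAD_FACILITY_CONTOUR = "contour of a facility above the road"
    TUNNEL_ENTRANCE_CONTOUR = "contour of a tunnel entrance"


class FeatureAreaKind(Enum):
    ZEBRA_CROSSING = "zebra crossing area"
    GREEN_BELT = "green belt area"
    VEHICLE = "vehicle area"


@dataclass
class LaneImageFeature:
    # Claim 9: boundary endpoints and boundary/perpendicular intersections.
    feature_points: List[Point] = field(default_factory=list)
    # Claim 10: feature lines, each a polyline tagged with its kind.
    feature_lines: List[Tuple[FeatureLineKind, List[Point]]] = field(default_factory=list)
    # Claim 11: feature areas, each a polygon tagged with its kind.
    feature_areas: List[Tuple[FeatureAreaKind, List[Point]]] = field(default_factory=list)
```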
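The apparatus of claim 12 and the methods of claims 15 and 16 share the same three-feature pipeline: a lane image feature from the road image, a lane position feature from positioning data, and a lane number feature from map data are combined into a lane boundary, from which the target lane is derived. The sketch below strings those stages together in the simplest possible way; the uniform-lane-width completion, the offset representation, and every name are assumptions made only to show the data flow, not the claimed implementation.

```python
# End-to-end sketch of the claimed pipeline, with deliberately naive stand-ins
# for each stage. Offsets are metres; negative = left of the vehicle centreline.
from typing import List


def determine_lane_boundaries(image_offsets_m: List[float],
                              position_offset_m: float,
                              lane_count: int,
                              lane_width_m: float = 3.5) -> List[float]:
    """Fuse the three feature types: start from image-detected boundary offsets,
    seed with the positioning-based offset if nothing was detected, and pad with
    nominal lane widths until the map-based lane count is satisfied."""
    boundaries = sorted(image_offsets_m) or [position_offset_m]
    while len(boundaries) < lane_count + 1:
        boundaries.append(boundaries[-1] + lane_width_m)
    return boundaries[:lane_count + 1]


def determine_target_lane(boundaries: List[float]) -> int:
    """1-based index of the lane whose two boundaries straddle offset 0."""
    for i in range(len(boundaries) - 1):
        if boundaries[i] <= 0.0 <= boundaries[i + 1]:
            return i + 1
    return 1  # fallback: vehicle outside every detected boundary


def navigation_hint(boundaries: List[float], destination_lane: int) -> str:
    """Claim-15-style output: a coarse instruction derived from the target lane."""
    current = determine_target_lane(boundaries)
    if current == destination_lane:
        return "keep lane"
    return "change lane left" if destination_lane < current else "change lane right"
```

For example, navigation_hint([-5.2, -1.7, 1.8, 5.3], destination_lane=3) would report "change lane right", because offset 0 falls between the second and third boundaries and the vehicle is therefore taken to be in lane 2.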
PCT/CN2019/123214 2018-12-12 2019-12-05 Data processing method, apparatus, device and machine readable medium WO2020119567A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811519987.2A CN111311902B (en) 2018-12-12 2018-12-12 Data processing method, device, equipment and machine readable medium
CN201811519987.2 2018-12-12

Publications (1)

Publication Number Publication Date
WO2020119567A1 (en)

Family

ID=71076765

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123214 WO2020119567A1 (en) 2018-12-12 2019-12-05 Data processing method, apparatus, device and machine readable medium

Country Status (3)

Country Link
CN (1) CN111311902B (en)
TW (1) TW202033932A (en)
WO (1) WO2020119567A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112115219A (en) * 2020-08-31 2020-12-22 汉海信息技术(上海)有限公司 Position determination method, device, equipment and storage medium
CN113286096B (en) * 2021-05-19 2022-08-16 中移(上海)信息通信科技有限公司 Video identification method and system
CN115451982A (en) * 2021-06-09 2022-12-09 腾讯科技(深圳)有限公司 Positioning method and related device
CN113642533B (en) * 2021-10-13 2022-08-09 宁波均联智行科技股份有限公司 Lane level positioning method and electronic equipment
WO2024092559A1 (en) * 2022-11-02 2024-05-10 华为技术有限公司 Navigation method and corresponding device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002175599A (en) * 2000-12-05 2002-06-21 Hitachi Ltd Lane position estimating device for precedent vehicle or target
DE10327869A1 (en) * 2003-06-18 2005-01-13 Siemens Ag Navigation system with lane references
JP4377284B2 (en) * 2004-06-02 2009-12-02 株式会社ザナヴィ・インフォマティクス Car navigation system
JP4437556B2 (en) * 2007-03-30 2010-03-24 アイシン・エィ・ダブリュ株式会社 Feature information collecting apparatus and feature information collecting method
JP4886597B2 (en) * 2007-05-25 2012-02-29 アイシン・エィ・ダブリュ株式会社 Lane determination device, lane determination method, and navigation device using the same
JP4780534B2 (en) * 2009-01-23 2011-09-28 トヨタ自動車株式会社 Road marking line detection device
US9077958B2 (en) * 2010-08-30 2015-07-07 Honda Motor Co., Ltd. Road departure warning system
CN102184535B (en) * 2011-04-14 2013-08-14 西北工业大学 Method for detecting boundary of lane where vehicle is
CN103942959B (en) * 2014-04-22 2016-08-24 深圳市宏电技术股份有限公司 A kind of lane detection method and device
KR101610502B1 (en) * 2014-09-02 2016-04-07 현대자동차주식회사 Apparatus and method for recognizing driving enviroment for autonomous vehicle
US9721471B2 (en) * 2014-12-16 2017-08-01 Here Global B.V. Learning lanes from radar data
US20160209219A1 (en) * 2015-01-15 2016-07-21 Applied Telemetrics Holdings Inc. Method of autonomous lane identification for a multilane vehicle roadway
CN106740841B (en) * 2017-02-14 2018-07-10 驭势科技(北京)有限公司 Method for detecting lane lines, device and mobile unit based on dynamic control
US10942525B2 (en) * 2017-05-09 2021-03-09 Uatc, Llc Navigational constraints for autonomous vehicles
KR101870229B1 (en) * 2018-02-12 2018-06-22 주식회사 사라다 System and method for determinig lane road position of vehicle
CN108957475A (en) * 2018-06-26 2018-12-07 东软集团股份有限公司 A kind of Method for Road Boundary Detection and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150235485A1 (en) * 2011-12-19 2015-08-20 Lytx, Inc. Driver identification based on driving maneuver signature
CN105674992A (en) * 2014-11-20 2016-06-15 高德软件有限公司 Navigation method and apparatus
US20170116477A1 (en) * 2015-10-23 2017-04-27 Nokia Technologies Oy Integration of positional data and overhead images for lane identification
CN106056100A (en) * 2016-06-28 2016-10-26 重庆邮电大学 Vehicle auxiliary positioning method based on lane detection and object tracking

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210394763A1 (en) * 2020-06-19 2021-12-23 Toyota Jidosha Kabushiki Kaisha Vehicle control device
US11897507B2 (en) * 2020-06-19 2024-02-13 Toyota Jidosha Kabushiki Kaisha Vehicle control device
CN111914651A (en) * 2020-07-01 2020-11-10 浙江大华技术股份有限公司 Method and device for judging driving lane and storage medium
CN114518120A (en) * 2020-11-18 2022-05-20 阿里巴巴集团控股有限公司 Navigation guidance method, road shape data generation method, apparatus, device and medium
CN117058647A (en) * 2023-10-13 2023-11-14 腾讯科技(深圳)有限公司 Lane line processing method, device and equipment and computer storage medium
CN117058647B (en) * 2023-10-13 2024-01-23 腾讯科技(深圳)有限公司 Lane line processing method, device and equipment and computer storage medium

Also Published As

Publication number Publication date
CN111311902A (en) 2020-06-19
TW202033932A (en) 2020-09-16
CN111311902B (en) 2022-05-24

Similar Documents

Publication Publication Date Title
WO2020119567A1 (en) Data processing method, apparatus, device and machine readable medium
JP7349792B2 (en) How to provide information for vehicle operation
EP3343172B1 (en) Creation and use of enhanced maps
US10380890B2 (en) Autonomous vehicle localization based on walsh kernel projection technique
JP7213293B2 (en) Road information data determination method, device and computer storage medium
US20190204094A1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
JP2020064068A (en) Visual reinforcement navigation
CN109489673A (en) Data-driven map updating system for automatic driving vehicle
JP2018084573A (en) Robust and efficient algorithm for vehicle positioning and infrastructure
CN108362295A (en) Vehicle route guides device and method
JP2019099138A (en) Lane-keep auxiliary method and device
KR20210061722A (en) Method, apparatus, computer program and computer readable recording medium for producing high definition map
WO2011053335A1 (en) System and method of detecting, populating and/or verifying condition, attributions, and/or objects along a navigable street network
JP7481534B2 (en) Vehicle position determination method and system
KR102123844B1 (en) Electronic apparatus, control method of electronic apparatus and computer readable recording medium
WO2020156923A2 (en) Map and method for creating a map
CN109754636A (en) Parking stall collaborative perception identification, parking assistance method, device
CN111091037A (en) Method and device for determining driving information
JP5742558B2 (en) POSITION DETERMINING DEVICE, NAVIGATION DEVICE, POSITION DETERMINING METHOD, AND PROGRAM
WO2022021982A1 (en) Travelable region determination method, intelligent driving system and intelligent vehicle
CN109085818A (en) The method and system of car door lock based on lane information control automatic driving vehicle
US20240142254A1 (en) Image processing apparatus, image processing method, computer program and computer readable recording medium
EP4019900A1 (en) Navigation system with parking space identification mechanism and method of operation thereof
US12036992B2 (en) Lane change planning method and vehicle-mounted device
TW201616464A (en) Updating method of parking information and electronic device performing the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19895388

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19895388

Country of ref document: EP

Kind code of ref document: A1