WO2022082571A1 - A lane line detection method and device - Google Patents

A lane line detection method and device

Info

Publication number
WO2022082571A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
image
lane
pixels
lines
Application number
PCT/CN2020/122716
Other languages
English (en)
French (fr)
Inventor
罗达新
高鲁涛
马莎
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to PCT/CN2020/122716 (WO2022082571A1)
Priority to CN202080004827.3A (CN112654998B)
Publication of WO2022082571A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • the present application relates to the field of sensor technology, and in particular, to a lane line detection method and device.
  • lane line detection can be performed based on sensors. For example, when the vehicle is driving, the camera is used to obtain road pictures, and the vehicle driving system detects and recognizes the lane lines in the road pictures to assist in deciding whether to take measures such as adjusting the direction and changing lanes.
  • the first is a detection method based on deep learning.
  • machine learning methods such as convolutional neural networks are used to learn the features of the lane lines, segment the lane lines, and then fit the lane lines.
  • the traditional computer vision detection method uses the Hough transform to estimate the positions of multiple lane lines, extracts the area where the lane lines are located, and then fits each area separately.
  • the embodiments of the present application provide a lane line detection method and device, which can obtain at least one first area according to a first image, obtain the first lane line in the first area, and then constrain the first lane lines according to the rules followed by lane lines. This can avoid problems such as excessive curvature, non-parallel lane lines, or intersecting lane lines among the identified lane lines, thereby improving the accuracy of lane line detection.
  • an embodiment of the present application provides a lane line detection method, which determines at least one first area according to a first image; obtains at least one first lane line according to the at least one first area; and determines, according to the at least one first lane line, a second lane line that satisfies a constraint condition, where the constraint condition includes a rule that the lane lines follow.
  • This embodiment of the present application constrains the relationship between the first lane lines according to the rules followed by lane lines and obtains a lane line detection result that satisfies the constraint condition, so as to avoid problems such as excessive curvature, non-parallel lane lines, or intersecting lane lines among the identified lane lines, thereby improving the accuracy of lane line detection.
  • the rules followed by the lane lines include at least one of the following: the width between pixels with the same ordinate in two adjacent first lane lines satisfies a first range; the curvature of the first lane line satisfies a second range; the distance between two adjacent first lane lines satisfies a third range; and the curvature difference between two adjacent first lane lines satisfies a fourth range.
  • the embodiment of the present application determines at least one first area according to the first image, including: acquiring a third lane line according to the first image; and determining at least one first area according to the third lane line and a first distance, where the first distance is related to the width of the lane.
  • the first area is determined according to the first distance and the third lane line, which has a better recognition effect, so that the first lane line determined in the first area is also relatively more accurate.
  • the embodiment of the present application determines at least one first region according to the first image, including: acquiring a third lane line according to the first image; and determining a plurality of first regions in the first image according to the third lane line and an integral map constructed by using the first image, where the abscissa of the integral map is the pixel column of the image, and the ordinate is the number of pixels in that column along the vertical axis of the image.
  • the first region is determined according to the third lane line and the maxima of the integral map, where the position of a maximum of the integral map may be a position where the lane line pixels are concentrated, so that the first area determined at the maximum is also more accurate.
  • the embodiment of the present application determines at least one first area according to the third lane line and the integral map constructed by using the first image, including: determining, according to the third lane line, the area where the third lane line is located; obtaining a plurality of maxima of the integral map; and determining, at the positions corresponding to the plurality of maxima, at least one first region parallel to the area where the third lane line is located.
  • the embodiment of the present application obtains the multiple maxima of the integral map, including: straightening the first image according to the third lane line to obtain a second image, where the third lane line in the straightened second image is parallel to the vertical axis; generating the integral map according to the second image; and obtaining the multiple maxima of the integral map.
  • the embodiment of the present application uses any pixel point of the third lane line as a reference point, and straightens the third lane line into a fourth lane line parallel to the longitudinal axis; then, according to the positions and directions by which the other pixels of the third lane line move during the straightening, the pixels in the first image with the same ordinates as those other pixels are straightened to obtain the second image.
  • the third lane line is the lane line with the largest number of pixels in the first image; or, the number of pixels of the third lane line is greater than the first threshold.
  • obtaining the first lane line in the at least one first area in the embodiment of the present application includes: using a random sample consensus (RANSAC) algorithm to respectively fit the pixel points in the at least one first area to obtain the first lane line in the at least one first area.
  • the embodiment of the present application uses the random sample consensus algorithm to respectively fit the pixel points in the at least one first region, including: using the random sample consensus algorithm to fit the pixels in the at least one first region in parallel.
  • the RANSAC algorithm is used to simultaneously fit the first area, which can improve the efficiency of detecting lane lines.
  • lane lines that satisfy the constraint condition are determined in the first area N times to obtain multiple lane lines, where N is a non-zero natural number; the lane line with the largest number of pixels among the multiple lane lines is determined to obtain the second lane line.
  • This embodiment of the present application constrains the relationship between the first lane lines according to the rules followed by lane lines and selects the lane line with the largest number of pixels among the first lane lines that satisfy the constraint condition as the second lane line, so the obtained lane line detection result is also more accurate.
  • the first image is an overhead image of the lane line.
  • an embodiment of the present application provides a lane line detection device.
  • the lane line detection device can be a vehicle with a lane line detection function, or other components with a lane line detection function.
  • the lane line detection device includes but is not limited to: on-board terminals, on-board controllers, on-board modules, on-board components, on-board chips, on-board units, and sensors such as on-board radars or on-board cameras.
  • the lane line detection device can be an intelligent terminal, or set in other intelligent terminals with lane line detection function except the vehicle, or set in a component of the intelligent terminal.
  • the intelligent terminal may be other terminal equipment such as intelligent transportation equipment, smart home equipment, and robots.
  • the lane line detection device includes, but is not limited to, a smart terminal or a controller, a chip, a radar or a camera and other sensors in the smart terminal, and other components.
  • the lane line detection device may be a general-purpose device or a special-purpose device.
  • the apparatus can also be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or another device with a processing function.
  • the embodiment of the present application does not limit the type of the lane line detection device.
  • the lane line detection device may also be a chip or processor with a processing function, and the lane line detection device may include at least one processor.
  • the processor can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the chip or processor with processing function may be arranged in the sensor, or may not be arranged in the sensor, but arranged at the receiving end of the output signal of the sensor.
  • the processor includes but is not limited to at least one of a central processing unit (CPU), a graphics processing unit (GPU), a micro control unit (MCU), a microprocessor unit (MPU), or a coprocessor.
  • the lane line detection device may also be a terminal device, or a chip or a chip system in the terminal device.
  • the lane line detection device may include a processing unit.
  • the processing unit may be a processor.
  • the lane line detection device may further include a storage unit, which may be a memory. The storage unit is used for storing instructions, and the processing unit executes the instructions stored in the storage unit, so that the terminal device implements the lane line detection method described in the first aspect or any possible implementation manner of the first aspect.
  • the storage unit may be a storage unit (eg, a register, a cache, etc.) in the chip, or a storage unit (eg, a read-only memory, a random access memory, etc.) located outside the chip in the terminal device.
  • the processing unit is specifically configured to determine at least one first area according to the first image; the processing unit is specifically configured to obtain at least one first lane line according to the at least one first area; and the processing unit is specifically configured to determine, according to the at least one first lane line, a second lane line that satisfies a constraint condition, where the constraint condition includes a rule followed by the lane lines.
  • the rules followed by the lane lines include at least one of the following: the width between pixels with the same ordinate in two adjacent first lane lines satisfies a first range; the curvature of the first lane line satisfies a second range; the distance between two adjacent first lane lines satisfies a third range; and the curvature difference between two adjacent first lane lines satisfies a fourth range.
  • the processing unit is specifically configured to acquire the third lane line according to the first image; the processing unit is further configured to determine at least one first area according to the third lane line and a first distance, where the first distance is related to the width of the lane.
  • the processing unit is specifically configured to acquire the third lane line according to the first image; the processing unit is further configured to determine a plurality of first regions in the first image according to the third lane line and an integral map constructed by using the first image, where the abscissa of the integral map is the pixel column of the image, and the ordinate is the number of pixels in that column along the vertical axis of the image.
  • the processing unit is specifically configured to determine, according to the third lane line, the area where the third lane line is located; the processing unit is specifically configured to acquire multiple maxima of the integral map; and the processing unit is specifically further configured to determine, at the positions corresponding to the multiple maxima, at least one first area parallel to the area where the third lane line is located.
  • the processing unit is specifically configured to straighten the first image according to the third lane line to obtain the second image, where the third lane line in the straightened second image is parallel to the vertical axis; the processing unit is specifically configured to generate the integral map according to the second image; and the processing unit is specifically further configured to acquire the multiple maxima of the integral map.
  • the processing unit is specifically configured to, using any pixel of the third lane line as a reference point, straighten the third lane line into a fourth lane line parallel to the longitudinal axis; the processing unit is specifically further configured to straighten, according to the positions and directions by which the other pixels of the third lane line move during the straightening, the pixels in the first image with the same ordinates as those other pixels, to obtain the second image.
  • the third lane line is the lane line with the largest number of pixels in the first image; or, the number of pixels of the third lane line is greater than the first threshold.
  • the processing unit is specifically configured to use a random sampling consistency algorithm to respectively fit the pixel points in the at least one first area to obtain the first lane line in the at least one first area.
  • the processing unit is specifically configured to use the random sample consensus algorithm to fit the pixels in the at least one first region in parallel.
  • the processing unit is specifically configured to determine lane lines satisfying the constraint condition in the first area N times to obtain multiple lane lines, where N is a non-zero natural number; the processing unit is specifically further configured to determine the lane line with the largest number of pixels among the multiple lane lines to obtain the second lane line.
  • the first image is an overhead image of the lane line.
  • an embodiment of the present application further provides a sensor system for providing a vehicle with a lane line detection function, which includes at least one of the lane line detection devices mentioned in the above embodiments of the present application and other sensors such as cameras and radars. At least one sensor device in the system can be integrated into a whole machine or equipment, or can be provided independently as an element or device.
  • the embodiments of the present application further provide a system applied in unmanned driving or intelligent driving, which includes at least one of the lane line detection devices, cameras, radars, and other sensors mentioned in the above embodiments of the present application; at least one device in the system can be integrated into a whole machine or equipment, or can be provided independently as an element or device.
  • any of the above systems may interact with the vehicle's central controller to provide detection and/or fusion information for decision-making or control of the vehicle's driving.
  • an embodiment of the present application further provides a terminal, where the terminal includes at least one lane line detection device mentioned in the above embodiments of the present application or any of the above systems.
  • the terminal may be smart home equipment, smart manufacturing equipment, smart industrial equipment, smart transportation equipment (including drones, vehicles, etc.) and the like.
  • the present application provides a chip or a chip system, where the chip or chip system includes at least one processor and a communication interface, the communication interface and the at least one processor are interconnected through a line, and the at least one processor is configured to run a computer program or instruction to perform the lane line detection method described in any one of the implementation manners of the first aspect.
  • the communication interface in the chip may be an input/output interface, a pin, a circuit, or the like.
  • the chip or chip system described above in this application further includes at least one memory, where instructions are stored in the at least one memory.
  • the memory may be a storage unit inside the chip, such as a register, a cache, etc., or may be a storage unit located outside the chip (eg, a read-only memory, a random access memory, etc.).
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program or instruction is stored in the computer-readable storage medium, and when the computer program or instruction is run on a computer, the computer is caused to execute the lane line detection method described in any one of the implementation manners of the first aspect.
  • an embodiment of the present application provides a target tracking device, including: at least one processor and an interface circuit, where the interface circuit is configured to provide information input and/or information output for the at least one processor; at least one processor is configured to run code instructions to implement any method of the first aspect or any possible implementations of the first aspect.
  • FIG. 1 is a schematic diagram of an automatic driving scenario provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of problems existing in an existing detection method
  • FIG. 3 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of the present application.
  • FIG. 4 is an integral map constructed according to an embodiment of the present application
  • FIG. 5 is a schematic flowchart of a lane line detection method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a first area determined in an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of determining a first area according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of determining a lane line position according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a first area determined in an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of determining a first area according to an embodiment of the present application.
  • FIG. 11 is a flowchart of determining a maximum value provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of straightening a first image according to an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of determining a second lane line according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a lane line detection device provided by an embodiment of the application.
  • FIG. 15 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • For example, a first log and a second log are only for distinguishing network logs in different time windows, and the sequence of the logs is not limited.
  • the words “first”, “second” and the like do not limit the quantity, and the words “first”, “second” and the like do not limit certain differences.
  • “at least one” means one or more, and “plurality” means two or more.
  • “And/or”, which describes the association relationship of the associated objects, indicates that there can be three kinds of relationships, for example, A and/or B, which can indicate: the existence of A alone, the existence of A and B at the same time, and the existence of B alone, where A, B can be singular or plural.
  • possible lane line detection methods include: detection methods based on deep learning and detection methods based on traditional computer vision.
  • the vehicle driving system uses a machine learning method such as a convolutional neural network to learn lane line features, segment the lane lines, and then fit the lane lines to obtain the lane line detection result.
  • a machine learning method such as a convolutional neural network
  • detection methods based on deep learning require specially labeled data, which may lead to insufficient data or low data quality.
  • the labeled data needs to be trained by high-performance computers to obtain models, which has certain limitations.
  • a possible implementation of the computer vision-based detection method is: using Hough transform to fit a road image to determine multiple lane lines, and obtain a lane line detection result.
  • the embodiment of the present application provides a lane line detection method, which can obtain at least one first area according to the first image and obtain the first lane line in the first area; then, according to the rules followed by lane lines, the relationship between the first lane lines is constrained, and a lane line detection result that meets the constraint condition is obtained, which can avoid problems such as excessive curvature, non-parallel lane lines, or intersecting lane lines among the identified lane lines, thereby improving the accuracy of lane line detection.
  • FIG. 3 is a functional block diagram of a vehicle 300 according to an embodiment of the present invention.
  • the vehicle 300 is configured in a fully or partially autonomous driving mode.
  • the vehicle 300 can control itself while in an autonomous driving mode, and can, through human manipulation, determine the current state of the vehicle and its surroundings, determine the likely behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle performs the possible behavior, and control the vehicle 300 based on the determined information.
  • the vehicle 300 may be placed to operate without human interaction.
  • Vehicle 300 may include various subsystems, such as travel system 302 , sensor system 304 , control system 306 , one or more peripherals 308 and power supply 310 , computer system 312 and user interface 316 .
  • vehicle 300 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the vehicle 300 may be interconnected by wire or wirelessly. The following is a detailed description of the computer system 312 related to the present invention.
  • Computer system 312 may include at least one processor 313 that executes instructions 315 stored in a non-transitory computer-readable medium such as data storage device 314 .
  • Computer system 312 may also be a plurality of computing devices that control individual components or subsystems of vehicle 300 in a distributed fashion.
  • Processor 313 may be any conventional processor, such as a commercially available CPU.
  • the processor may be a dedicated device such as an ASIC or other hardware-based processor.
  • although FIG. 3 functionally illustrates the processor, memory, and other elements of the computer 310 in the same block, one of ordinary skill in the art will understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored within the same physical housing.
  • the memory may be a hard drive or other storage medium located within an enclosure other than computer 310 .
  • reference to a processor or computer will be understood to include reference to a collection of processors or computers or memories that may or may not operate in parallel.
  • some components such as the steering and deceleration components, may each have their own processors that only perform computations related to component-specific functions.
  • a processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • data storage device 314 may include instructions 315 (eg, program logic) executable by processor 313 to perform various functions of vehicle 300 , including those described above.
  • the data storage device 314 may contain lane line detection instructions 315 that may be executed by the processor 313 to perform the function of lane line detection of the vehicle 300 .
  • the data storage device 314 may store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by the vehicle 300 and the computer system 312 during operation of the vehicle 300 in autonomous, semi-autonomous and/or manual modes.
  • the data storage device 314 may store environmental information obtained from the sensor system 304 or other components of the vehicle 300 .
  • the environmental information may be, for example, whether there are green belts, traffic lights, pedestrians, etc. near the current environment of the vehicle. Algorithms such as machine learning can be used to calculate whether there are green belts, traffic lights, pedestrians, etc. near the current environment.
  • the data storage device 314 may also store state information of the vehicle itself, as well as state information of other vehicles with which the vehicle interacts.
  • the state information includes, but is not limited to, the speed, acceleration, heading angle, etc. of the vehicle.
  • the vehicle obtains the distance between other vehicles and itself, the speed of other vehicles, etc. based on the speed measurement and distance measurement functions of the radar 326 .
  • the processor 313 can obtain the above-mentioned environmental information or state information from the data storage device 314 and execute the instructions 315 including the lane line detection program to obtain the lane line detection result for the road. Then, based on the environmental information of the environment where the vehicle is located, the state information of the vehicle itself, the state information of other vehicles, and the traditional rule-based driving strategy, combined with the lane line detection result, the final driving strategy is obtained, and the steering system 332 is used to control the vehicle to drive autonomously (such as steering, making a U-turn, etc.).
  • one or more of these components described above may be installed or associated with the vehicle 300 separately.
  • data storage device 314 may exist partially or completely separate from vehicle 300 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • the above components are merely an example; in practical applications, components in each of the above modules may be added or deleted according to actual needs, and FIG. 3 should not be construed as a limitation on the embodiments of the present invention.
  • the above-mentioned vehicle 300 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc.
  • the embodiments of the invention are not particularly limited.
  • the integral map described in the embodiments of the present application may be constructed based on a grayscale image.
  • the grayscale image may be a grayscale image obtained by performing grayscale processing on the first image.
  • the abscissa of the integral graph is the pixel column of the image, and the ordinate is the number of pixels in that column along the vertical axis of the image.
  • FIG. 4 is an integral graph constructed by an embodiment of the present application for the grayscale image of the first image.
  • the range of the abscissa of the integral graph is 0 to 250, and the range of the ordinate is 0 to 700, where points A, B, and C are the maxima of the integral graph.
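For illustration, the integral map described above can be sketched in a few lines of Python. This is a minimal sketch under assumed inputs (a grayscale NumPy array and an illustrative binarization threshold), not the application's implementation:

```python
import numpy as np
from scipy.signal import find_peaks

def build_integral_map(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """For each pixel column (abscissa), count the pixels brighter than
    `thresh` (ordinate): columns crossed by a lane line score high."""
    return (gray > thresh).sum(axis=0)

def integral_map_maxima(integral: np.ndarray, min_height: int = 50,
                        min_distance: int = 30) -> np.ndarray:
    """Return the column positions of the maxima (points such as A, B, C)."""
    peaks, _ = find_peaks(integral, height=min_height, distance=min_distance)
    return peaks
```

For the 250*700 example above, `build_integral_map` would return an array of 250 column counts, each between 0 and 700.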
  • FIG. 5 shows a lane line detection method provided by an embodiment of the present application, comprising the following steps:
  • S501 Determine at least one first area according to the first image.
  • the first image described in this embodiment of the present application may be a road picture acquired by a camera.
  • the first image may be a color image.
  • the camera in the embodiment of the present application may be a camera of a driver monitoring system, a cockpit-type camera, an infrared camera, a driving recorder (ie, a video recording terminal), etc., which is not limited in the specific embodiment of the present application.
  • the first area described in this embodiment of the present application may be an area estimated in the road where lane lines may exist.
  • the first area described in the embodiments of the present application is not the first area in a specific image, but an area that may include lane lines in each image; the first area may correspond to different content in different images.
  • the part of the first image corresponding to the first area may be an image located in the first area in the first image; the part of the second image corresponding to the first area may be an image located in the first area in the second image .
  • a first possible implementation of determining at least one first region according to the first image is: the vehicle driving system acquires a grayscale image of the first image, constructs an integral map according to the grayscale image of the first image, and determines at least one first region at the location of at least one maximum value of the integral map.
  • the grayscale image of the first image is obtained by performing grayscale processing on the first image.
  • Grayscale processing is a relatively common technology, and details are not described here.
  • the number of maximum values corresponds to the number of lane lines in the road, and a first area is determined at the position of each maximum value.
  • the width of the first area may be set by the machine, and the height of the first area may be the same as the height of the first image.
  • the left image of FIG. 6 is a grayscale image of the first image
  • the middle image of FIG. 6 is an integral graph constructed according to the grayscale image of the first image.
  • a rectangular area can be used to frame the possible areas of the lane lines, and the three first areas shown in the right figure of FIG. 6 can be obtained.
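Continuing the sketch, the rectangular first areas of the right figure of FIG. 6 could be framed around those maxima as follows; the fixed half-width stands in for the machine-set width mentioned above:

```python
def first_regions(peaks, img_height, half_width=35):
    """One rectangle (x0, y0, x1, y1) per maximum: machine-set width,
    height equal to the height of the first image."""
    return [(int(p) - half_width, 0, int(p) + half_width, img_height)
            for p in peaks]
```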
  • the second possible implementation of determining the at least one first region according to the first image is: straightening the first image, obtaining at least one maximum value according to the integral map constructed from the grayscale image of the straightened first image, and then determining at least one first region at the position of the at least one maximum value of the integral map.
  • the vehicle driving system may rotate the first image, so that the pixels of the lane line in the rotated first image are more concentrated in the vertical direction.
  • the angle at which the first image is rotated may be set by the machine.
  • the grayscale image of the straightened first image is obtained by performing grayscale processing on the straightened first image.
  • Grayscale processing is a relatively common technology, and details are not described here.
  • the position of the maximum value is the position where the lane line pixels are more concentrated, and the lane line pixels are more concentrated in the vertical direction after straightening the first image.
  • the first region determined at the position of the maximum value of the integral map constructed by the straightened first image is also relatively more accurate.
  • S502 Obtain at least one first lane line according to at least one first area.
  • the first lane line described in this embodiment of the present application may be a lane line obtained by detecting pixel points in the first area.
  • the first lane line may be a lane line obtained by detecting pixels in the first area by using methods such as Hough transform, sliding window, or random sample consensus (RANSAC).
  • the vehicle driving system uses the Hough transform algorithm to fit the pixel points in the at least one first area to obtain the at least one first lane line. A possible implementation is: the coordinate values of the pixels are transformed into curves in the parameter space, and the intersections of the curves are obtained in the parameter space, thereby determining the at least one first lane line.
  • the Hough transform is suitable for detecting straight lines; in other cases, the sliding window or RANSAC algorithm can be considered for detection.
  • the vehicle driving system uses the sliding window algorithm to fit the pixels in the at least one first area; a possible implementation of obtaining the at least one first lane line is: according to the positions of the lane line pixels at the bottom of the image, select N (N can be a natural number greater than or equal to 1) pixels as search starting points, and then generate initial sliding windows centered on the selected search starting points to complete the search from the bottom to the top.
  • the number of search starting points may correspond to the number of first regions.
  • the search from the bottom to the top of each initial sliding window can be understood as the process of finding a pixel of a lane line in a first area.
  • the number of sliding windows in the vertical direction and the width of the sliding windows can be set manually or by machine, and the height of the sliding windows can be obtained by dividing the number of pixels in the vertical direction in the first area by the set number of sliding windows.
  • After determining the width and height of the initial sliding window, the vehicle driving system determines the center of the next sliding window according to the mean value of the coordinates of the lane line pixels in the initial sliding window, and then repeats this operation; that is, the position of each sliding window in the search determines the center of the next window, until the sliding windows cover the lane line pixels in the image. Finally, a second-order polynomial fit is performed on these center points to obtain the at least one first lane line.
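As a concrete illustration of this bottom-to-top search, the following sketch assumes a binarized first image and one starting column per first area; the window count, half-width, and pixel threshold are illustrative values, not values from the application:

```python
import numpy as np

def sliding_window_fit(binary: np.ndarray, x_start: int, n_windows: int = 9,
                       half_width: int = 40, min_pixels: int = 30) -> np.ndarray:
    """Search lane pixels bottom-to-top and fit a second-order polynomial
    x = a*y^2 + b*y + c through the window centers."""
    h = binary.shape[0]
    win_h = h // n_windows            # window height = image height / window count
    x, centers = x_start, []
    for i in range(n_windows):
        y_hi = h - i * win_h
        y_lo = y_hi - win_h
        x_lo = max(x - half_width, 0)
        ys, xs = binary[y_lo:y_hi, x_lo:x + half_width].nonzero()
        if len(xs) >= min_pixels:     # recenter on the mean of the lane pixels
            x = x_lo + int(xs.mean())
        centers.append(((y_lo + y_hi) // 2, x))
    cy, cx = np.array(centers).T
    return np.polyfit(cy, cx, 2)
```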
  • the vehicle driving system uses the RANSAC algorithm to fit the pixel points in the at least one first area to obtain the at least one first lane line.
  • a possible implementation is: randomly sample the lane line pixels in the first area to obtain part of the lane line pixels; fit the acquired pixels to obtain a corresponding lane line, and record the number of pixels on that lane line; repeat the above steps to obtain multiple lane lines, and select the lane line with the largest number of pixels among them to obtain the at least one first lane line.
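A minimal sketch of this random-sample-and-count loop, assuming the lane pixel coordinates of one first area are given as NumPy arrays; the iteration count, sample size, and inlier tolerance are assumed values:

```python
import numpy as np

def ransac_fit_lane(xs: np.ndarray, ys: np.ndarray, n_iter: int = 100,
                    sample_size: int = 3, tol: float = 2.0) -> np.ndarray:
    """Repeatedly sample a few lane pixels, fit x = f(y), and keep the
    candidate lane line supported by the largest number of pixels."""
    best, best_count = None, -1
    for _ in range(n_iter):
        idx = np.random.choice(len(xs), sample_size, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], 2)          # second-order fit
        count = int((np.abs(np.polyval(coeffs, ys) - xs) < tol).sum())
        if count > best_count:
            best, best_count = coeffs, count
    return best
```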
  • S503 Determine a second lane line that satisfies the constraint condition according to the at least one first lane line.
  • the constraint condition satisfied by the first lane line may include a rule followed by the lane line.
  • the constraint condition satisfied by the first lane lines may be that two adjacent first lane lines are parallel lane lines, or two adjacent first lane lines do not intersect, and so on.
  • In a possible implementation, the curvatures corresponding to two adjacent first lane lines are calculated respectively.
  • When the curvatures corresponding to the two adjacent first lane lines are not equal, the two adjacent first lane lines may be too close to each other or intersect, resulting in non-parallel adjacent lane lines; on an actual road, lane lines usually conform to the driving rules of vehicles, and situations such as being too close or crossing do not occur, so it can be judged that the obtained lane line detection result is inaccurate.
  • When the curvatures corresponding to the two adjacent first lane lines are equal, it can be judged that the lane line detection result is accurate, and a second lane line that conforms to the rules followed by lane lines is obtained.
  • In another possible implementation, the coordinate values of the pixels of each first lane line in the image are counted separately. If identical coordinates are found among the coordinate values corresponding to different lane lines, two adjacent first lane lines may intersect; on an actual road, lane lines usually conform to the driving rules of vehicles and do not intersect, so it can be judged that the obtained lane line detection result is inaccurate. When no identical coordinates are found among the coordinate values corresponding to the first lane lines, it can be judged that the lane line detection result is accurate, and a second lane line that conforms to the rules followed by lane lines is obtained.
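As an illustration of these checks, the sketch below evaluates two adjacent fitted lane lines (second-order polynomials x = f(y)) against a same-ordinate width range, an intersection test, and a curvature difference; the numeric ranges are assumptions, not values from the application:

```python
import numpy as np

def satisfies_constraints(line_a, line_b, ys,
                          width_range=(50.0, 90.0),
                          max_curv_diff=1e-3) -> bool:
    """Return True if two adjacent lane lines follow the rules above."""
    xa, xb = np.polyval(line_a, ys), np.polyval(line_b, ys)
    widths = xb - xa
    if np.sign(widths).min() != np.sign(widths).max():
        return False                      # sign change: the two lines intersect
    widths = np.abs(widths)
    if widths.min() < width_range[0] or widths.max() > width_range[1]:
        return False                      # same-ordinate width out of the first range
    # For x = a*y^2 + b*y + c, the leading coefficient dominates the curvature.
    if abs(line_a[0] - line_b[0]) > max_curv_diff:
        return False                      # curvature difference out of range
    return True
```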
  • An embodiment of the present application provides a lane line detection method, which can obtain at least one first area according to a first image, obtain the first lane line in the first area, and then constrain the first lane lines according to the rules followed by lane lines. In this way, problems such as excessive curvature, non-parallel lane lines, or intersecting lane lines among the identified lane lines can be reduced, thereby improving the accuracy of lane line detection.
  • the first image is an overhead image of the lane line.
  • the road picture acquired by the camera undergoes perspective transformation, for example, the lane line in the distant view is closer to the middle, and the thickness of the lane line in the distant view and the close view is different.
  • the vehicle driving system can perform inverse perspective transformation on the road picture undergoing perspective transformation, such as converting the road picture to a top-view perspective to obtain the first image.
  • the lane lines in the first image obtained after inverse perspective transformation are parallel to each other, and the widths of the lane lines are equal.
  • a possible implementation of performing inverse perspective transformation on the road picture to obtain the first image is: calculating the transformation matrix of the camera, wherein the transformation matrix of the camera can be obtained by multiplying the internal parameter matrix and the external parameter matrix of the camera.
  • the transformation matrix of the camera represents the imaging of the camera. If the transformation matrix of the camera is inversely transformed, the inverse perspective transformation can be realized to eliminate the perspective deformation.
  • the transformation process can be expressed by the following formula:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} $$

  • where $[R\ t]$ is the extrinsic parameter matrix calibrated for the camera, $(x', y', z')$ are the coordinates after inverse perspective transformation, $(u, v)$ are the coordinates before inverse perspective transformation, $s$ is a scale factor, $f_x$ and $f_y$ in the internal parameter matrix are related to the lens focal length of the camera, and $c_x$ and $c_y$ are the position of the optical center of the camera in the pixel coordinate system, corresponding to the center coordinates of the image matrix.
  • the parameters in the internal parameter matrix and the external parameter matrix of the camera can be obtained through camera calibration.
  • the method of performing inverse perspective transformation on the first image is not limited to the above calculation method, and those skilled in the art can also obtain the overhead image of the road picture by calculating in other ways, which are not specifically limited in the embodiments of the present application.
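As one of those other ways, a common OpenCV-based sketch maps four assumed road-plane points in the camera picture to a rectangle, yielding the overhead first image without explicitly multiplying the intrinsic and extrinsic matrices; the point coordinates below are placeholders, not calibrated values:

```python
import cv2
import numpy as np

# Four corners of a straight road segment in the camera picture (assumed),
# and where they should land in the top-view first image.
SRC = np.float32([[560, 460], [720, 460], [1100, 680], [200, 680]])
DST = np.float32([[200, 0], [1080, 0], [1080, 720], [200, 720]])

def to_birds_eye(road_picture: np.ndarray) -> np.ndarray:
    """Warp the perspective road picture into an overhead first image."""
    m = cv2.getPerspectiveTransform(SRC, DST)
    h, w = road_picture.shape[:2]
    return cv2.warpPerspective(road_picture, m, (w, h))
```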
  • the road picture obtained by the camera of the vehicle driving system may not undergo perspective transformation.
  • In this case, performing inverse perspective transformation on the road picture obtained by the camera is an optional step, and the road picture obtained by the camera can be directly used as the first image.
  • FIG. 7 shows a possible implementation manner of S501.
  • S501 includes:
  • S701 Acquire a third lane line according to the first image.
  • the third lane line described in the embodiment of the present application may be any lane line obtained by identifying the first image, and the third lane line may be used as a reference for obtaining the first area.
  • the third lane line can be a lane line with distinctive features, or can be understood as a lane line with better recognition effect.
  • the third lane line may be the lane line with the largest number of pixels in the first image.
  • the number of pixels of the third lane line is greater than the first threshold.
  • the first threshold can be set manually or by a machine. When the number of pixels of the third lane line is greater than the first threshold, the third lane line can be considered relatively complete, and subsequently determining the first region based on the area where the third lane line is located yields a more accurate first region.
  • the vehicle driving system detects the first image to obtain multiple lane lines, and selects the one with the largest number of pixels among the multiple lane lines to obtain the third lane line.
  • a possible implementation of the third lane line being a lane line whose number of pixels is greater than the first threshold is: the vehicle driving system sets a first threshold for the number of pixels of the third lane line, and selects, from the plurality of lane lines obtained by detecting the first image, one whose number of pixels is greater than the first threshold to obtain the third lane line.
  • the vehicle driving system when the number of pixels in multiple lane lines does not reach the first threshold, the vehicle driving system performs image enhancement on the first image, and then re-detects the first image obtained after the image enhancement.
  • the vehicle driving system selects one of the plurality of lane lines detected again whose number of pixels is greater than the first threshold to obtain a third lane line.
  • a possible implementation of obtaining the third lane line according to the first image in the embodiment of the present application is: the vehicle driving system detects the first image to obtain a plurality of lane lines. Select any one of the obtained multiple lane lines as the third lane line.
  • the method for detecting the first image in the embodiment of the present application may include: a method based on deep learning, a method based on computer vision, and the like.
  • the vehicle driving system uses a method based on deep learning to detect the first image. For example, an image sample containing lane lines can be used to train a neural network model capable of outputting multiple lane lines, and multiple lane lines can be obtained by inputting the first image into the neural network model. Then select one lane line from the obtained multiple lane lines as the third lane line.
  • the image samples of the lane lines in this embodiment of the present application may include road image samples, and the road image samples may be obtained through a database.
  • the database used in this embodiment of the present application may be an existing public database or a created database.
  • the embodiment of the present application uses a method based on computer vision to detect the first image, and obtains multiple lane lines after processing such as lane line pixel extraction and lane line fitting. Then select one lane line from the obtained multiple lane lines as the third lane line.
  • the vehicle driving system obtains the lane line pixel information by performing edge detection on the first image.
  • the vehicle driving system may perform grayscale processing on the first image, and change the first image containing brightness and color into a grayscale image, so as to facilitate subsequent edge detection on the image.
  • Gaussian blurring may be performed on the grayscale image of the first image.
  • some relatively unclear noises in the grayscale image of the first image can be removed, so that edge information of the lane line can be obtained more accurately.
  • the vehicle driving system uses an algorithm such as Canny to perform edge detection on the processed first image to obtain edge information of the processed first image.
  • the edge information obtained by performing edge detection on the processed first image may include, in addition to edge information of lane lines, other edge information, such as edge information of trees and houses beside the road.
  • the vehicle driving system can infer the position of the lane line in the first image according to the angle of the camera, the shooting direction, etc., filter out the other edge information, retain the edge information of the lane lines, and finally obtain an image containing the lane line pixel information in the road.
  • a possible implementation in which the vehicle driving system infers the position of the lane line in the first image according to the angle of the camera, the shooting direction, etc. is as follows: when the vehicle is moving forward, the shooting direction of the camera is the road area in front of the vehicle, and it can be inferred that the lane lines are located in the lower part of the first image; when the vehicle is reversing, the shooting direction of the camera is the road area behind the rear of the vehicle, and it can likewise be inferred that the lane lines are located in the lower part of the first image; when the camera is a 360-degree multi-angle camera, the shooting direction can be the 360-degree road area around the vehicle, and it can also be inferred that the lane lines are located in the lower part of the first image.
  • For example, the left image of FIG. 8 is obtained after the vehicle driving system performs edge detection on the processed first image. Here, the shooting direction of the camera is the area in front of the front of the vehicle, so it can be inferred that the lane lines are located in the lower area of the image. The vehicle driving system therefore sets the area below the image as the region of interest and obtains the lane line pixel information.
  • In a possible implementation, the lane lines are located throughout the entire acquired image. In this case, the edge information obtained by performing edge detection on the processed first image is the lane line pixel information; that is, inferring the position of the lane line in the first image according to the angle of the camera, the shooting direction, etc., and filtering out the edge information other than that of the lane lines, is an optional step.
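The grayscale/blur/Canny/region-of-interest pipeline described above might look as follows; the kernel size, Canny thresholds, and the "lower half" region are illustrative choices:

```python
import cv2

def lane_edge_pixels(first_image):
    """Grayscale -> Gaussian blur -> Canny edges, keeping only the lower
    region where the lane lines are inferred to lie."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # remove unclear noise
    edges = cv2.Canny(blurred, 50, 150)
    edges[: edges.shape[0] // 2, :] = 0           # filter edges outside the ROI
    return edges
```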
  • the vehicle driving system may perform color segmentation on the first image according to the color features of the lane lines in the first image to obtain an image containing lane line pixel information.
  • the vehicle driving system may set corresponding color intervals in a color space (such as the RGB color space) to extract the pixel information of the lane lines of the corresponding colors in the first image. When lane lines of two colors exist in the acquired first image, the vehicle driving system combines the lane line pixel information extracted in the different color intervals to obtain an image containing the lane line pixel information.
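A sketch of the color-interval extraction: HSV intervals for white and yellow markings are used here as an assumed example (the text names the RGB color space as one option), and the two masks are merged as described:

```python
import cv2

def lane_color_pixels(first_image):
    """Extract lane pixels of two colors and combine the masks."""
    hsv = cv2.cvtColor(first_image, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 200), (180, 30, 255))
    yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
    return cv2.bitwise_or(white, yellow)   # combine the two color intervals
```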
  • the vehicle driving system uses algorithms such as the sliding window and the Hough transform to fit the image containing the lane line pixel information to obtain multiple lane lines, and determines the third lane line according to the multiple lane lines obtained by fitting.
  • a possible implementation in which the vehicle driving system uses algorithms such as the sliding window and the Hough transform to fit an image containing lane line pixel information and obtains multiple lane lines is: according to the positions of the lane line pixels at the bottom of the image, select N (N can be a natural number greater than or equal to 1) pixels as search starting points, and then generate initial sliding windows centered on the selected search starting points to complete the search from bottom to top.
  • the number of search starting points and the number of lane lines in the road can be the same, and the search from the bottom to the top of each initial sliding window can be understood as the process of finding a pixel of a lane line.
  • the number of sliding windows in the vertical direction and the width of the sliding windows can be set manually or by machines, and the height of the sliding windows can be obtained by dividing the number of pixels in the vertical direction in the picture by the set number of sliding windows.
  • After determining the width and height of the initial sliding window, the vehicle driving system determines the center of the next sliding window according to the mean value of the coordinates of the lane line pixels in the initial sliding window, and then repeats this operation; that is, the position of each sliding window in the search determines the center of the next window, until the sliding windows cover the lane line pixels in the image. Finally, a second-order polynomial fit is performed on these center points to obtain multiple lane lines.
  • a possible implementation of determining the third lane line according to the multiple obtained lane lines is: the vehicle driving system selects one lane line from the obtained multiple lane lines to obtain the third lane line.
  • another possible implementation of determining the third lane line according to the multiple lane lines obtained by fitting is: according to the positions of the multiple fitted lane lines, generate the area where each lane line is located, and then use the random sample consensus (RANSAC) algorithm to respectively fit the pixels in the areas where the lane lines are located to obtain multiple lane lines.
  • the multiple lane lines obtained by fitting the areas with the RANSAC algorithm have a better recognition effect than the multiple lane lines obtained directly by fitting the image.
  • the vehicle driving system selects one lane line from the multiple lane lines fitted by the RANSAC algorithm to obtain the third lane line.
  • S702 Determine at least one first area according to the third lane line and the first distance.
  • the first distance is related to the width of the lane.
  • the relationship between the first distance and the lane width may be determined by acquiring internal parameters and external parameters of the camera. For example, a linear relationship between the width of the lane and the pixels in the first image is obtained from the intrinsic and extrinsic parameter matrices of the camera. Then, according to the linear relationship between the width of the lane and the pixels in the first image, a first distance corresponding to the width of the lane in the first image is determined.
  • the first distance corresponding to the lane width in the first image may be the number of pixels corresponding to the lane width in the first image.
  • the relationship between the first distance and the lane width may also be determined through prior knowledge.
  • the prior knowledge may be a table established based on the relationship between the pixels in the picture acquired according to the history of the camera and the distances corresponding to the pixels in practice.
  • the first distances corresponding to different road widths are also different. After obtaining the specific road width, the first distance can be obtained by querying the table.
  • the embodiment of the present application may determine the area where the third lane line is located according to the position of the third lane line and the first distance in the first image, and then translate the area where the third lane line is located, At least one first area is determined according to the area where the third lane line is located and the area obtained after the translation.
  • the distance for translating the area where the third lane line is located may be determined by the first distance.
  • For example, as shown in the left figure of FIG. 9, the third lane line is at the left position in the figure, and a rectangular area is used to frame the area where the third lane line is located, obtaining the area where the third lane line is located as shown in the middle figure of FIG. 9.
  • the resolution of the image is known to be 250*700, that is, there are 250 pixels in the horizontal direction. If the width of the lane is 3 meters, it can be obtained from the internal and external parameter matrices of the camera that the 3-meter lane width corresponds to 70 pixels in the horizontal direction of the first image, that is, the first distance corresponding to the lane width in the first image.
  • the vehicle driving system translates the area where the third lane line is located to obtain the areas where the other lane lines are located. As shown on the right of FIG. 9, the area where the third lane line is located and the areas obtained after translation constitute three first areas.
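For the 250*700 example, the first distance would be 70 pixels; a sketch of translating the framed area, assuming the third lane line is the leftmost of three lane lines (the region format and lane count are illustrative):

```python
def regions_from_third_lane(third_region, first_distance, n_lanes=3):
    """Translate the area where the third lane line is located by the
    first distance to frame the remaining first areas."""
    x0, y0, x1, y1 = third_region
    return [(x0 + i * first_distance, y0, x1 + i * first_distance, y1)
            for i in range(n_lanes)]

# e.g. regions_from_third_lane((20, 0, 60, 700), 70) yields three first areas.
```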
  • the embodiment of the present application can also estimate the positions of the other lane lines according to the position of the third lane line and the first distance in the first image, and use rectangular areas to frame the areas where the third lane line and the other lane lines are located, respectively, to obtain at least one first area.
  • An embodiment of the present application provides a lane line detection method, which can determine a first area according to a first distance and a third lane line with a better recognition effect, so that the first lane line determined in the first area is also relatively more accurate .
  • In another possible implementation, as shown in FIG. 10, S501 includes:
  • S1001 Acquire a third lane line according to the first image.
  • the specific implementation of S1001 corresponds to the description of the third lane line in the terminology section, and is not repeated here.
  • S1002 Determine an area where the third lane line is located according to the third lane line.
  • a rectangular area is used to frame the area where the third lane line is located.
  • the vehicle driving system may determine, from the position of the third lane line shown in the left figure of FIG. 9, the area where the third lane line is located shown in the middle figure of FIG. 9, i.e., the rectangle framing the third lane line.
  • S1003 Construct an integral map according to the first image.
  • a grayscale image of the first image is acquired, and an integral map is constructed according to the grayscale image of the first image.
  • S1003 may correspond to the description of the integral map in the terminology section, and details are not repeated here.
  • S1004 Acquire at least one maximum value of the integral map.
  • the ordinate of the integral map is the number of pixels of the image in the vertical-axis direction.
  • the maxima of the integral map occur at positions where lane line pixels are concentrated, and the number of maxima of the integral map equals the number of lane lines in the road.
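  • A rough sketch of how such an integral map and its maxima could be computed, assuming lane line pixels appear bright in the grayscale image: the map is a column-wise count of bright pixels, and a simple local-maximum search returns one peak per lane line. The binarization threshold, the peak-height and separation parameters, and the synthetic test image are illustrative assumptions.

```python
import numpy as np

def integral_map(gray):
    """Column-wise count of lane line pixels: the abscissa is the pixel
    column index, the ordinate the number of bright pixels in the column."""
    binary = gray > 128               # assumed lane/background threshold
    return binary.sum(axis=0)         # one value per column

def find_maxima(hist, min_height=50, min_separation=30):
    """Simple local-maximum search over the integral map."""
    peaks = []
    for x in range(1, len(hist) - 1):
        if hist[x] >= hist[x - 1] and hist[x] > hist[x + 1] and hist[x] >= min_height:
            if not peaks or x - peaks[-1] >= min_separation:
                peaks.append(x)
    return peaks

gray = np.zeros((700, 250), dtype=np.uint8)   # synthetic 250*700 image
gray[:, 48:52] = 255; gray[:, 118:122] = 255; gray[:, 188:192] = 255
print(find_maxima(integral_map(gray)))        # three maxima near 50, 120, 190
```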
  • S1005 At the position corresponding to the maximum value, determine at least one first area parallel to the area where the third lane line is located.
  • a first area parallel to the area where the third lane line is located is respectively generated at the position of the maximum value.
  • the positions of three maxima are obtained respectively according to the integral map constructed from the grayscale image of the first image.
  • Three first areas parallel to the area where the third lane line is located are generated at the positions of the maxima, as shown in the right figure of FIG. 9.
  • An embodiment of the present application provides a lane line detection method in which the first areas are determined according to the third lane line and the maxima of the integral map. Since a maximum of the integral map lies where lane line pixels are concentrated, the first areas determined at the maxima are also more accurate.
  • S1004 includes:
  • S1101 Straighten the first image according to the third lane line to obtain a second image.
  • in a possible implementation, taking any pixel of the third lane line as a reference point, the third lane line is straightened into a fourth lane line parallel to the vertical axis. Then, according to the positions and directions in which the other pixels of the third lane line move during the straightening, the pixels in the first image with the same ordinates as those other pixels are straightened to obtain the second image.
  • the left figure of FIG. 12 shows the third lane line obtained from the first image. If the image is 250*700 pixels, it has 700 rows of pixels; if the third lane line contains 700 pixels, its pixels likewise span 700 rows, with one pixel per row. Taking the pixel in the first row of the third lane line as the reference point, the other pixels of the third lane line are moved to the same abscissa as the reference point.
  • with the first-row pixel as the reference point, the other pixels of the third lane line are those in rows 2 through 700. The position and direction each pixel moves is recorded; for example, the pixel in the second row moves two pixels along the positive half of the horizontal axis, and so on, yielding a fourth lane line parallel to the vertical axis.
  • then, following the recorded positions and directions, the pixels in the first image (middle figure of FIG. 12) that have the same ordinates as those other pixels are moved in the same way; for example, the pixels in the second row of the first image are moved two pixels along the positive half of the horizontal axis, and so on, producing the second image shown in the right figure of FIG. 12.
  • the pixels in the first image with the same ordinates as the other pixels may be the pixels in the first image lying in the same rows as those pixels.
  • when the third lane line is already a vertical lane line, the first image does not need to be straightened; in that case, S1101 is an optional step.
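  • A minimal sketch of the straightening in S1101, assuming the third lane line has one pixel per row as in the FIG. 12 example: each row of the first image is shifted horizontally by the offset that moves that row's lane line pixel to the reference abscissa. The NumPy row-shift implementation is an illustrative choice, not the embodiment's procedure.

```python
import numpy as np

def straighten(image, lane_xs):
    """Shift each row so the third lane line becomes a vertical (fourth)
    lane line; lane_xs[r] is the abscissa of the third lane line pixel in
    row r (one pixel per row, as in FIG. 12)."""
    h, w = image.shape[:2]
    ref_x = lane_xs[0]                 # first-row pixel is the reference point
    out = np.zeros_like(image)
    for r in range(h):
        shift = ref_x - lane_xs[r]     # recorded move for this row
        if shift == 0:                 # row already aligned (vertical line)
            out[r] = image[r]
            continue
        src_lo, src_hi = max(0, -shift), min(w, w - shift)
        out[r, src_lo + shift:src_hi + shift] = image[r, src_lo:src_hi]
    return out
```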
  • S1102 Generate an integral map according to the second image.
  • a grayscale image of the second image may be acquired, and an integral map is constructed according to the grayscale image of the second image.
  • S1103 Acquire at least one maximum value of the integral map.
  • the embodiment of the present application provides a lane line detection method in which the positions of the maxima are determined from the straightened first image. Because the lane line pixels in the straightened second image are more concentrated in the vertical direction, the positions of the maxima obtained in this way are more accurate, and the first areas determined from them are correspondingly accurate.
  • S502 includes: the vehicle driving system uses the random sample consensus (RANSAC) algorithm to fit the pixels in the at least one first area to obtain at least one first lane line.
  • the vehicle driving system randomly samples the pixels in the at least one first area to obtain some of the pixels in a first area, fits those sampled pixels to obtain a corresponding lane line, and records the number of pixels on that lane line. These steps are repeated to obtain multiple candidate lane lines, and the one with the largest number of pixels is selected, yielding at least one first lane line.
  • using the RANSAC algorithm to fit the pixel points in the at least one first region may be to perform fitting on the pixel points in the at least one first region in parallel.
  • the RANSAC algorithm is used to simultaneously and separately fit the pixels in the at least one first region.
  • An embodiment of the present application provides a lane line detection method, which uses the RANSAC algorithm to simultaneously fit the first area, which can improve the efficiency of lane line detection.
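  • The following sketch shows one way such a fit could look, assuming a second-order polynomial lane model x = f(y): each RANSAC iteration fits a random sample of pixels, the candidate supported by the most pixels is kept, and a thread pool fits the first areas in parallel. The iteration count, sample size, and inlier tolerance are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def ransac_polyfit(ys, xs, n_iter=100, sample=3, tol=2.0, seed=0):
    """Minimal RANSAC: fit x = f(y) as a 2nd-order polynomial and keep the
    candidate supported by the most pixels (the inliers)."""
    rng = np.random.default_rng(seed)
    best_coeffs, best_inliers = None, 0
    for _ in range(n_iter):
        idx = rng.choice(len(ys), size=sample, replace=False)
        coeffs = np.polyfit(ys[idx], xs[idx], 2)    # fit the random sample
        err = np.abs(np.polyval(coeffs, ys) - xs)   # residual of every pixel
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best_coeffs, best_inliers = coeffs, inliers
    return best_coeffs, best_inliers

def fit_regions_in_parallel(regions):
    """regions: list of (ys, xs) pixel-coordinate arrays, one per first area."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda r: ransac_polyfit(*r), regions))
```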
  • the relationships between the first lane lines are constrained according to the law that lane lines follow, and a lane line detection result satisfying the constraint condition is obtained. This avoids problems caused by fitting each area separately, such as excessive lane line curvature, non-parallel lane lines, or intersecting lane lines, thereby improving the accuracy of lane line detection.
  • S503 includes:
  • S1301 Determine lane lines satisfying the constraint conditions in the first area N times, and obtain multiple lane lines.
  • the constraint condition satisfied by the lane line may include a rule followed by the lane line.
  • the law followed by the lane line may include at least one of the following: the width between the pixels with the same ordinate in the two adjacent first lane lines satisfies the first range, the curvature of the first lane line satisfies the second range, The distance between two adjacent first lane lines satisfies the third range, and the curvature difference between the two adjacent first lane lines satisfies the fourth range.
  • when the width between pixels with the same ordinate in two adjacent first lane lines does not satisfy the first range, the two lines may be too close together or may intersect. On an actual road, lane lines conform to vehicle driving rules and such situations do not occur, so the obtained lane line detection result can be judged inaccurate.
  • when the width between pixels with the same ordinate in two adjacent first lane lines satisfies the first range, the detection result can be judged accurate, and a second lane line conforming to the law that lane lines follow is obtained.
  • the first range can be set according to the actual application scenario; for example, it can include a value equal or close to the width of the vehicle, or a common lane width value. The first range is not specifically limited in this embodiment of the present application.
  • when the curvature of the first lane line does not satisfy the second range, the curvature may be excessively large. On an actual road, lane lines can be divided by shape into vertical lane lines and curved lane lines, and even among different curved lane lines they conform to vehicle driving rules, so excessive curvature does not occur; the obtained lane line detection result can therefore be judged inaccurate.
  • when the curvature of the first lane line satisfies the second range, the detection result can be judged accurate, and a second lane line conforming to the law that lane lines follow is obtained.
  • the second range may be set according to the actual application scenario; for example, it may include common lane line curvature values. The second range is not specifically limited in this embodiment of the present application.
  • similarly, when the distance between two adjacent first lane lines does not satisfy the third range, the two lines may be too close together, which does not occur on an actual road; when it satisfies the third range, the detection result can be judged accurate. The third range may be set according to the actual application scenario; for example, it may include a value equal or close to the width of the vehicle, or a common lane width value, which is not specifically limited in this embodiment of the present application.
  • likewise, when the curvature difference between two adjacent first lane lines does not satisfy the fourth range, the two lines may be too close together or may intersect; when it satisfies the fourth range, a second lane line conforming to the law that lane lines follow is obtained. The fourth range may be set according to the actual application scenario; for example, it may include common lane line curvature differences, and is not specifically limited in this embodiment of the present application.
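  • Assuming each first lane line is modeled as a second-order polynomial x = a*y^2 + b*y + c and the lines are ordered left to right, the four ranges above can be checked roughly as follows; all range values are placeholders to be set from the lane geometry of the target roads.

```python
import numpy as np

WIDTH_RANGE   = (50, 90)      # pixels between same-ordinate points
CURVATURE_MAX = 0.01          # |2a| bound for x = a*y^2 + b*y + c
CURV_DIFF_MAX = 0.005         # bound on adjacent curvature difference

def satisfies_constraints(lines, ys):
    """lines: polynomial coefficient arrays ordered left to right."""
    xs = [np.polyval(c, ys) for c in lines]
    for c in lines:
        if abs(2 * c[0]) > CURVATURE_MAX:
            return False                     # second range: curvature
    for left, right, cl, cr in zip(xs, xs[1:], lines, lines[1:]):
        widths = right - left                # same-ordinate widths
        if widths.min() < WIDTH_RANGE[0] or widths.max() > WIDTH_RANGE[1]:
            return False                     # first/third range: width, spacing
        if abs(2 * cl[0] - 2 * cr[0]) > CURV_DIFF_MAX:
            return False                     # fourth range: curvature difference
    return True
```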
  • N is a non-zero natural number, such as 1, 2, 3, etc.
  • in a possible implementation, the RANSAC algorithm is used to detect the pixels in the first area, and a first lane line is determined in the first area. When the first lane line satisfies the constraint condition, the number of pixels on it is recorded; when it does not, the pixels in the first area are resampled and refitted, and lane lines satisfying the constraint condition are determined in the first area N times, obtaining multiple lane lines and their corresponding pixel counts.
  • S1302 Determine a lane line with the largest number of pixels among the plurality of lane lines to obtain a second lane line.
  • based on the multiple lane lines determined in S1301 and their corresponding pixel counts, the lane line with the largest number of pixels among them is determined, obtaining the second lane line.
  • An embodiment of the present application provides a lane line detection method that constrains the relationships between the first lane lines according to the law that lane lines follow and selects, among the first lane lines satisfying the constraint condition, the one with the largest number of pixels as the second lane line; the resulting detection result is accordingly more accurate.
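  • Reusing the illustrative helpers `ransac_polyfit` and `satisfies_constraints` sketched above, S1301 and S1302 can be approximated by repeating the constrained fit N times and keeping the candidate set with the most supporting pixels; the structure, not the embodiment's exact procedure, is what is shown.

```python
def second_lane_lines(region_pixels, ys, n_tries=3):
    """region_pixels: list of (ys_r, xs_r) arrays, one per first area;
    returns the constraint-satisfying set of lines with the most pixels."""
    best_lines, best_support = None, 0
    for seed in range(n_tries):                    # N = n_tries determinations
        fits = [ransac_polyfit(ys_r, xs_r, seed=seed)
                for ys_r, xs_r in region_pixels]
        lines = [coeffs for coeffs, _ in fits]
        support = sum(n for _, n in fits)          # total pixel count
        if satisfies_constraints(lines, ys) and support > best_support:
            best_lines, best_support = lines, support
    return best_lines
```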
  • the vehicle driving system may mark the obtained second lane line in the first image, and then output it to the display screen in the vehicle driving system.
  • if the first image was obtained by performing inverse perspective transformation on the road picture captured by the camera, the first image containing the lane line detection result can be perspective-transformed back and then output to the display screen of the vehicle driving system.
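  • If OpenCV is available, the transform back to the camera view can be sketched as follows. The four point correspondences are placeholders; in practice they come from the same camera calibration that produced the inverse perspective transform.

```python
import cv2
import numpy as np

# Placeholder correspondences between the camera view (src) and the
# bird's-eye view (dst) used to build the first image.
src = np.float32([[435, 330], [845, 330], [1240, 700], [40, 700]])
dst = np.float32([[40, 0], [1240, 0], [1240, 720], [40, 720]])
M = cv2.getPerspectiveTransform(src, dst)          # camera -> bird's-eye

def to_display_view(marked_birdseye, out_size=(1280, 720)):
    """Warp the first image, with the second lane line drawn on it, back
    to the camera view before sending it to the display."""
    return cv2.warpPerspective(marked_birdseye, np.linalg.inv(M), out_size)
```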
  • after determining the lane line detection result, the vehicle driving system may combine it with the environmental information of the vehicle's surroundings, the state information of the vehicle itself, and/or the state information of other vehicles to obtain a driving strategy (such as steering or making a U-turn) that ensures driving safety.
  • the vehicle driving system can also send out warning information when the vehicle is about to deviate from its own lane (by means of screen display, voice broadcast or vibration, etc.), and the user can manually intervene according to the warning information to ensure the safety of the vehicle.
  • the first image may also be a processed image; for example, it may be a grayscale image obtained by processing a road picture, in which case the grayscale-processing step on the first image may be omitted from the steps above, and details are not repeated here.
  • FIG. 14 shows a schematic structural diagram of a lane line detection device provided by an embodiment of the present application.
  • the lane line detection device includes: a processing unit 1401 .
  • the processing unit 1401 is used to complete the step of lane line detection.
  • the processing unit 1401 is configured to support the lane line detection apparatus in performing S501 to S503 in the above embodiments, or S1001 to S1005, or S1101 to S1103, etc.
  • the lane line detection apparatus may further include: a communication unit 1402 and a storage unit 1403 .
  • the processing unit 1401, the communication unit 1402, and the storage unit 1403 are connected through a communication bus.
  • the storage unit 1403 may include one or more memories, and the memories may be devices in one or more devices or circuits for storing programs or data.
  • the storage unit 1403 may exist independently, and is connected to the processing unit 1401 of the lane line detection apparatus through a communication bus.
  • the storage unit 1403 may also be integrated with the processing unit.
  • the lane line detection device can be used in communication equipment, circuits, hardware components or chips.
  • the communication unit 1402 may be an input or output interface, a pin, a circuit, or the like.
  • the storage unit 1403 may store computer-executable instructions of the method of the terminal device, so that the processing unit 1401 executes the method of the terminal device in the foregoing embodiments.
  • the storage unit 1403 may be a register, a cache, a RAM, or the like, and the storage unit 1403 may be integrated with the processing unit 1401.
  • the storage unit 1403 may be a ROM or other types of static storage devices that may store static information and instructions, and the storage unit 1403 may be independent of the processing unit 1401 .
  • An embodiment of the present application provides a lane line detection apparatus. The apparatus includes one or more modules for implementing the method in the steps included in FIG. 4 to FIG. 13 above, and the one or more modules may correspond to the steps of that method. Specifically, for each step in the method performed by the terminal device, the terminal device contains a unit or module that performs that step.
  • a module that performs detection of lane lines may be referred to as a processing module.
  • a module that performs the steps of processing messages or data on the side of the lane line detection device may be referred to as a communication module.
  • FIG. 15 is a schematic structural diagram of a chip 150 provided by an embodiment of the present invention.
  • the chip 150 includes one or more (including two) processors 1510 and a communication interface 1530 .
  • the chip 150 shown in FIG. 15 further includes a memory 1540 , which may include read-only memory and random access memory, and provides operation instructions and data to the processor 1510 .
  • a portion of memory 1540 may also include non-volatile random access memory (NVRAM).
  • memory 1540 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set of them:
  • the corresponding operation is performed by calling the operation instruction stored in the memory 1540 (the operation instruction may be stored in the operating system).
  • a possible implementation manner is: the structure of the chips used by the terminal equipment, the wireless access network device or the session management network element is similar, and different devices may use different chips to realize their respective functions.
  • the processor 1510 controls the operation of the terminal device, and the processor 1510 may also be referred to as a central processing unit (central processing unit, CPU).
  • Memory 1540 may include read-only memory and random access memory, and provides instructions and data to processor 1510 .
  • a portion of memory 1540 may also include non-volatile random access memory (NVRAM).
  • the processor 1510, the communication interface 1530, and the memory 1540 are coupled together through the bus system 1520, where the bus system 1520 may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like.
  • for clarity of illustration, the various buses are all labeled as the bus system 1520 in FIG. 15.
  • the above communication unit may be an interface circuit or a communication interface of the device for receiving signals from other devices.
  • the communication unit is an interface circuit or a communication interface used by the chip to receive or transmit signals from other chips or devices.
  • the methods disclosed in the above embodiments of the present invention may be applied to the processor 1510 or implemented by the processor 1510 .
  • the processor 1510 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method may be completed by an integrated logic circuit of hardware in the processor 1510 or an instruction in the form of software.
  • the above-mentioned processor 1510 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present invention may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 1540, and the processor 1510 reads the information in the memory 1540, and completes the steps of the above method in combination with its hardware.
  • the communication interface 1530 is configured to perform the steps of receiving and sending the terminal equipment, radio access network device or session management network element in the embodiments shown in FIG. 4-FIG. 13 .
  • the processor 1510 is configured to perform processing steps of the terminal device, the radio access network device or the session management network element in the embodiments shown in FIGS. 4-13 .
  • the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product.
  • the computer program product can be pre-written in the memory, or downloaded and installed in the memory in the form of software.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored on or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from a website site, computer, server, or data center over a wire (e.g. coaxial cable, fiber optic, digital subscriber line (DSL)) or wireless (eg, infrared, wireless, microwave, etc.) to another website site, computer, server, or data center.
  • the computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or data center, integrating one or more available media.
  • Available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives (SSDs)), and the like.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media can include both computer storage media and communication media and also include any medium that can transfer a computer program from one place to another.
  • the storage medium can be any target medium that can be accessed by a computer.
  • the computer-readable medium may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • also, any connection is properly termed a computer-readable medium. For example, if software is transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Abstract

A lane line detection method and apparatus, relating to the field of sensor technology and applicable to security, assisted driving, and autonomous driving. The method includes: determining at least one first area according to a first image (S501); obtaining at least one first lane line according to the at least one first area (S502); and determining, according to the at least one first lane line, a second lane line satisfying a constraint condition (S503), wherein the constraint condition includes a law that lane lines follow. Constraining the relationships between the first lane lines according to the law that lane lines follow and obtaining a lane line detection result satisfying the constraint condition avoids problems such as excessive lane line curvature, non-parallel lane lines, or intersecting lane lines among the recognized lane lines, thereby improving the accuracy of lane line detection. This enhances the advanced driver assistance system (ADAS) capability in autonomous or assisted driving, and can be applied to the Internet of Vehicles, e.g., vehicle-to-everything (V2X), long term evolution-vehicle (LTE-V), and vehicle-to-vehicle (V2V) communication.


Claims (27)

  1. A lane line detection method, characterized by comprising:
    determining at least one first area according to a first image;
    obtaining at least one first lane line according to the at least one first area; and
    determining, according to the at least one first lane line, a second lane line satisfying a constraint condition, wherein the constraint condition comprises a law that lane lines follow.
  2. The method according to claim 1, wherein the law that the lane lines follow comprises at least one of the following: the width between pixels with the same ordinate in two adjacent first lane lines satisfies a first range, the curvature of the first lane line satisfies a second range, the distance between two adjacent first lane lines satisfies a third range, and the curvature difference between two adjacent first lane lines satisfies a fourth range.
  3. The method according to claim 1 or 2, wherein determining the at least one first area according to the first image comprises:
    acquiring a third lane line according to the first image; and
    determining the at least one first area according to the third lane line and a first distance, wherein the first distance is related to the width of a lane.
  4. The method according to claim 1 or 2, wherein determining the at least one first area according to the first image comprises:
    acquiring a third lane line according to the first image; and
    determining a plurality of the first areas in the first image according to the third lane line and an integral map constructed from the first image, wherein the abscissa of the integral map is the pixel column index of the image and the ordinate is the number of pixels of the image in the vertical-axis direction.
  5. The method according to claim 4, wherein determining the at least one first area according to the third lane line and the integral map constructed from the first image comprises:
    determining, according to the third lane line, the area where the third lane line is located;
    acquiring a plurality of maxima of the integral map; and
    determining, at positions corresponding to the plurality of maxima, the at least one first area parallel to the area where the third lane line is located.
  6. The method according to claim 5, wherein acquiring the plurality of maxima of the integral map comprises:
    straightening the first image according to the third lane line to obtain a second image, wherein the third lane line in the straightened second image is parallel to the vertical axis;
    generating the integral map according to the second image; and
    acquiring the plurality of maxima of the integral map.
  7. The method according to claim 6, wherein straightening the first image according to the third lane line to obtain the second image comprises:
    taking any pixel of the third lane line as a reference point, straightening the third lane line into a fourth lane line parallel to the vertical axis; and
    straightening, according to the positions and directions in which the other pixels of the third lane line move during the straightening, the pixels in the first image having the same ordinates as the other pixels, to obtain the second image.
  8. The method according to any one of claims 3-7, wherein the third lane line is the lane line with the largest number of pixels in the first image; or the number of pixels of the third lane line is greater than a first threshold.
  9. The method according to any one of claims 1-8, wherein obtaining the first lane line in the at least one first area comprises:
    fitting the pixels in the at least one first area respectively by using a random sample consensus algorithm, to obtain the first lane line in the at least one first area.
  10. The method according to claim 9, wherein fitting the pixels in the at least one first area respectively by using the random sample consensus algorithm comprises:
    fitting the pixels in the at least one first area in parallel by using the random sample consensus algorithm.
  11. The method according to any one of claims 1-10, wherein determining, according to the at least one first lane line, the second lane line satisfying the constraint condition comprises:
    determining lane lines satisfying the constraint condition in the first area N times, to obtain a plurality of lane lines, wherein N is a non-zero natural number; and
    determining, among the plurality of lane lines, the lane line with the largest number of pixels, to obtain the second lane line.
  12. The method according to any one of claims 1-11, wherein the first image is a top view of the lane lines.
  13. A lane line detection apparatus, characterized by comprising:
    a processing unit configured to determine at least one first area according to a first image;
    the processing unit being further configured to obtain at least one first lane line according to the at least one first area; and
    the processing unit being further configured to determine, according to the at least one first lane line, a second lane line satisfying a constraint condition, wherein the constraint condition comprises a law that lane lines follow.
  14. The apparatus according to claim 13, wherein the law that the lane lines follow comprises at least one of the following: the width between pixels with the same ordinate in two adjacent first lane lines satisfies a first range, the curvature of the first lane line satisfies a second range, the distance between two adjacent first lane lines satisfies a third range, and the curvature difference between two adjacent first lane lines satisfies a fourth range.
  15. The apparatus according to claim 13 or 14, wherein the processing unit is specifically configured to: acquire a third lane line according to the first image; and determine the at least one first area according to the third lane line and a first distance, wherein the first distance is related to the width of a lane.
  16. The apparatus according to claim 13 or 14, wherein the processing unit is specifically configured to: acquire a third lane line according to the first image; and determine a plurality of the first areas in the first image according to the third lane line and an integral map constructed from the first image, wherein the abscissa of the integral map is the pixel column index of the image and the ordinate is the number of pixels of the image in the vertical-axis direction.
  17. The apparatus according to claim 16, wherein the processing unit is specifically configured to: determine, according to the third lane line, the area where the third lane line is located; acquire a plurality of maxima of the integral map; and determine, at positions corresponding to the plurality of maxima, the at least one first area parallel to the area where the third lane line is located.
  18. The apparatus according to claim 17, wherein the processing unit is specifically configured to: straighten the first image according to the third lane line to obtain a second image, wherein the third lane line in the straightened second image is parallel to the vertical axis; generate the integral map according to the second image; and acquire the plurality of maxima of the integral map.
  19. The apparatus according to claim 18, wherein the processing unit is specifically configured to: take any pixel of the third lane line as a reference point and straighten the third lane line into a fourth lane line parallel to the vertical axis; and straighten, according to the positions and directions in which the other pixels of the third lane line move during the straightening, the pixels in the first image having the same ordinates as the other pixels, to obtain the second image.
  20. The apparatus according to any one of claims 15-19, wherein the third lane line is the lane line with the largest number of pixels in the first image; or the number of pixels of the third lane line is greater than a first threshold.
  21. The apparatus according to any one of claims 13-20, wherein the processing unit is specifically configured to: fit the pixels in the at least one first area respectively by using a random sample consensus algorithm, to obtain the first lane line in the at least one first area.
  22. The apparatus according to claim 21, wherein the processing unit is specifically configured to: fit the pixels in the at least one first area in parallel by using the random sample consensus algorithm.
  23. The apparatus according to any one of claims 13-22, wherein the processing unit is specifically configured to: determine lane lines satisfying the constraint condition in the first area N times, to obtain a plurality of lane lines, wherein N is a non-zero natural number; and determine, among the plurality of lane lines, the lane line with the largest number of pixels, to obtain the second lane line.
  24. The apparatus according to any one of claims 13-23, wherein the first image is a top view of the lane lines.
  25. A lane line detection apparatus, characterized by comprising: a processor configured to invoke a program in a memory to perform the method according to any one of claims 1-12.
  26. A chip, characterized by comprising: a processor and an interface circuit, wherein the interface circuit is configured to communicate with another apparatus, and the processor is configured to perform the method according to any one of claims 1 to 12.
  27. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions which, when executed, cause a computer to perform the method according to any one of claims 1-12.
PCT/CN2020/122716 2020-10-22 2020-10-22 一种车道线检测方法和装置 WO2022082571A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/122716 WO2022082571A1 (zh) 2020-10-22 2020-10-22 一种车道线检测方法和装置
CN202080004827.3A CN112654998B (zh) 2020-10-22 2020-10-22 一种车道线检测方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/122716 WO2022082571A1 (zh) 2020-10-22 2020-10-22 一种车道线检测方法和装置

Publications (1)

Publication Number Publication Date
WO2022082571A1 true WO2022082571A1 (zh) 2022-04-28

Family

ID=75368435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122716 WO2022082571A1 (zh) 2020-10-22 2020-10-22 一种车道线检测方法和装置

Country Status (2)

Country Link
CN (1) CN112654998B (zh)
WO (1) WO2022082571A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911574A (zh) * 2024-03-18 2024-04-19 腾讯科技(深圳)有限公司 道路拉直数据处理方法、装置及电子设备

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311635B (zh) * 2022-07-26 2023-08-01 阿波罗智能技术(北京)有限公司 车道线处理方法、装置、设备及存储介质
CN117710795B (zh) * 2024-02-06 2024-06-07 成都同步新创科技股份有限公司 一种基于深度学习的机房线路安全性检测方法及***

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217427A (zh) * 2014-08-22 2014-12-17 南京邮电大学 一种交通监控视频中车道线定位方法
CN106529493A (zh) * 2016-11-22 2017-03-22 北京联合大学 一种基于透视图的鲁棒性多车道线检测方法
CN106682646A (zh) * 2017-01-16 2017-05-17 北京新能源汽车股份有限公司 一种车道线的识别方法及装置
JP6384182B2 (ja) * 2013-08-12 2018-09-05 株式会社リコー 道路上の線形指示標識の検出方法及び装置
CN109583365A (zh) * 2018-11-27 2019-04-05 长安大学 基于成像模型约束非均匀b样条曲线拟合车道线检测方法
CN110287779A (zh) * 2019-05-17 2019-09-27 百度在线网络技术(北京)有限公司 车道线的检测方法、装置及设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6384182B2 (ja) * 2013-08-12 2018-09-05 株式会社リコー 道路上の線形指示標識の検出方法及び装置
CN104217427A (zh) * 2014-08-22 2014-12-17 南京邮电大学 一种交通监控视频中车道线定位方法
CN106529493A (zh) * 2016-11-22 2017-03-22 北京联合大学 一种基于透视图的鲁棒性多车道线检测方法
CN106682646A (zh) * 2017-01-16 2017-05-17 北京新能源汽车股份有限公司 一种车道线的识别方法及装置
CN109583365A (zh) * 2018-11-27 2019-04-05 长安大学 基于成像模型约束非均匀b样条曲线拟合车道线检测方法
CN110287779A (zh) * 2019-05-17 2019-09-27 百度在线网络技术(北京)有限公司 车道线的检测方法、装置及设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911574A (zh) * 2024-03-18 2024-04-19 腾讯科技(深圳)有限公司 道路拉直数据处理方法、装置及电子设备
CN117911574B (zh) * 2024-03-18 2024-05-31 腾讯科技(深圳)有限公司 道路拉直数据处理方法、装置及电子设备

Also Published As

Publication number Publication date
CN112654998B (zh) 2022-04-15
CN112654998A (zh) 2021-04-13

Similar Documents

Publication Publication Date Title
CN112417967B (zh) 障碍物检测方法、装置、计算机设备和存储介质
US10599930B2 (en) Method and apparatus of detecting object of interest
CN111666921B (zh) 车辆控制方法、装置、计算机设备和计算机可读存储介质
WO2022082571A1 (zh) 一种车道线检测方法和装置
EP4152204A1 (en) Lane line detection method, and related apparatus
US20210150231A1 (en) 3d auto-labeling with structural and physical constraints
US11531892B2 (en) Systems and methods for detecting and matching keypoints between different views of a scene
CN111860227B (zh) 训练轨迹规划模型的方法、装置和计算机存储介质
WO2022104774A1 (zh) 目标检测方法和装置
US11195064B2 (en) Cross-modal sensor data alignment
US11475628B2 (en) Monocular 3D vehicle modeling and auto-labeling using semantic keypoints
US10891795B2 (en) Localization method and apparatus based on 3D color map
EP4307219A1 (en) Three-dimensional target detection method and apparatus
CN112753038A (zh) 识别车辆变道趋势的方法和装置
WO2023179027A1 (zh) 一种道路障碍物检测方法、装置、设备及存储介质
CN112800822A (zh) 利用结构约束和物理约束进行3d自动标记
WO2022082574A1 (zh) 一种车道线检测方法和装置
WO2022204905A1 (zh) 一种障碍物检测方法及装置
KR20230140654A (ko) 운전자 보조 시스템 및 운전자 보조 방법
CN115797578A (zh) 一种高精地图的处理方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20958154

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20958154

Country of ref document: EP

Kind code of ref document: A1