WO2022179016A1 - Lane detection method, apparatus, device and storage medium - Google Patents

Lane detection method, apparatus, device and storage medium

Info

Publication number
WO2022179016A1
WO2022179016A1 · PCT/CN2021/102639 · CN2021102639W
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
area
tested
lane
Prior art date
Application number
PCT/CN2021/102639
Other languages
English (en)
French (fr)
Inventor
赵永磊
朱铖恺
徐亮
谭发兵
武伟
Original Assignee
上海商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤科技开发有限公司
Publication of WO2022179016A1 publication Critical patent/WO2022179016A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles

Definitions

  • the present disclosure relates to the technical field of computer vision, and in particular, to a lane detection method, device, equipment and storage medium.
  • the present disclosure provides a lane detection method, apparatus, device and storage medium.
  • a lane detection method includes: acquiring an image to be tested, where the image to be tested includes at least one lane area; detecting the image to be tested to determine a vehicle in the image to be tested; determining a vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle; and determining the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested.
  • a lane detection apparatus includes: an image acquisition module, configured to acquire an image to be tested, where the image to be tested includes at least one lane area; a vehicle area determination module, configured to detect the image to be tested, determine a vehicle in the image to be tested, and determine a vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle; and a lane determination module, configured to determine the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested.
  • a computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the lane detection method of the first aspect when executing the program.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, implements the lane detection method according to any one of the first aspects.
  • a computer program product including a computer program, which implements the lane detection method according to any one of the first aspects when the program is executed by a processor.
  • FIG. 1 is a flowchart of a lane detection method according to an exemplary embodiment
  • FIG. 2 is a schematic diagram of an image to be tested according to an exemplary embodiment
  • FIG. 3 is a schematic structural diagram of a vehicle detection network according to an exemplary embodiment
  • FIG. 4 is a schematic diagram of a foreground view of a vehicle according to an exemplary embodiment
  • FIG. 5 is a flowchart of a method for determining a vehicle area according to an exemplary embodiment
  • FIG. 6 is a schematic diagram of a network structure of a key point detection network according to an exemplary embodiment
  • FIG. 7 is a flowchart of a method for detecting key points of a wheel according to an exemplary embodiment
  • FIG. 8 is a flowchart of training a key point detection network according to an exemplary embodiment
  • FIG. 9 is a flow chart of a method for determining a lane according to an exemplary embodiment
  • FIG. 10 is a schematic diagram of a lane detection device according to an exemplary embodiment
  • FIG. 11 is a schematic diagram of yet another lane detection device according to an exemplary embodiment
  • FIG. 12 is a schematic diagram of a key point detection sub-module according to an exemplary embodiment
  • Fig. 13 is a schematic structural diagram of a computer device according to an exemplary embodiment.
  • although the terms first, second, third, etc. may be used in this disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another.
  • for example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when" or "in response to determining".
  • the present disclosure provides a lane detection method, which can detect the road surface area occupied by the vehicle in the image to be tested as the vehicle area, and further determine the lane where the vehicle is located according to the relative position of the vehicle area and the lane area.
  • FIG. 1 is a flowchart of a lane detection method according to an embodiment of the present disclosure.
  • the lane detection method can be applied in any system device suitable for implementation, such as a server, a computing device, an in-vehicle terminal device or other processing device.
  • as shown in FIG. 1, the process includes steps 101 to 104.
  • Step 101: Obtain an image to be tested, where the image to be tested includes at least one lane area.
  • the image to be tested is an image in which the lane of the vehicle needs to be detected, and it includes one or more lane areas. It should be noted that the image to be tested may be acquired in various specific ways.
  • in one possible implementation, the image to be tested can be acquired by reusing an existing image acquisition device.
  • in this way, existing hardware devices can be reused to reduce hardware costs.
  • for example, an existing monitoring device on a highway can be used to obtain a video stream, and a frame can be extracted from the video stream as the image to be tested.
  • in another possible implementation, a specific image acquisition device may be installed at a preset position to acquire images to be tested that meet the requirements.
  • for example, a high-resolution image acquisition device may be installed at a main intersection or road section of a highway to acquire high-resolution images to be tested.
  • Step 102: Detect the image to be tested to determine the vehicle in the image to be tested.
  • the acquired images to be tested may be collected under different conditions, so an image to be tested may contain one vehicle, no vehicle, or multiple vehicles. For example, for monitoring equipment installed at a highway intersection, the collected images to be tested may contain one vehicle, no vehicle, or multiple vehicles.
  • the image to be tested can be detected, and the vehicle in the image to be tested can be determined. It can be understood that, if there is at least one vehicle in the image to be tested, the subsequent steps may be continued; if there is no vehicle in the image to be tested, the subsequent steps do not need to be performed. For example, if there is no vehicle in the image to be tested, the image to be tested can be re-acquired and the vehicle in the image to be tested can be detected.
  • Step 103: Determine a vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle.
  • the road area occupied by the vehicle can best represent the position of the vehicle.
  • in this step, the road surface area occupied by the vehicle in the image to be tested is taken as the vehicle area of the vehicle in the image to be tested.
  • Box1 is the vehicle detection frame determined in the traditional image detection technology, and the vehicle region determined by the detection of the image to be tested in this step is Region1. As shown in FIG. 2 , the area represented by Region1 can be closer to the road area occupied by the actual vehicle, so the vehicle area can more accurately represent the position of the vehicle.
  • Step 104: Determine the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested.
  • this step may further determine the relationship between the vehicle and the lane area according to the relative positions of the lane area and the vehicle area in the image to be tested.
  • referring to the image to be tested shown in FIG. 2, it includes a lane area L1 and a lane area L2. Take L2 as the lane area in the image to be tested, and take Region1 as the vehicle area of the vehicle. In this step, whether the vehicle is located on L2 can be determined according to the relative positions of Region1 and L2.
  • the specific manner of determining the lane where the vehicle is located according to the relative positions of the vehicle area and the lane area can be flexibly implemented according to specific applications, which is not limited in this embodiment. Exemplarily, when the overlapping area between the vehicle area and the lane area is large enough, it may be determined that the vehicle is located on the corresponding lane area.
  • the image to be tested can be detected and the road area occupied by the vehicle in the image to be tested can be determined as the vehicle area, so that the relationship between the vehicle and the lane area can be further determined according to the relative position of the vehicle area and the lane area.
  • since the road surface area occupied by the vehicle is detected from the image to be tested as the vehicle area, the actual position of the vehicle can be represented more accurately, so the relationship between the vehicle and the lane area can be judged more accurately, and the lane where the vehicle is located can be determined more accurately.
  • detecting the image to be tested to determine the vehicle in the image to be tested includes: inputting the image to be tested into a pre-trained vehicle detection network, where the vehicle detection network detects the vehicles contained in the image to be tested.
  • a vehicle detection network that can detect the vehicle in the image to be tested needs to be pre-trained.
  • the vehicle detection network can be obtained by training on corresponding training samples, based on any learnable machine learning model or neural network model.
  • the specific form of the vehicle detection network is not limited.
  • a vehicle detection network can be constructed based on the Faster-RCNN network framework.
  • Figure 3 shows a schematic diagram of the structure of a vehicle detection network based on the Faster-RCNN network framework.
  • in the first stage (the proposal stage), a deep convolutional network is used to extract features from the image to be tested, and features are further extracted through a specific convolutional layer (such as a Region Proposal Layer) to obtain at least one candidate vehicle detection frame.
  • in the second stage (the detection stage), based on the at least one candidate vehicle detection frame obtained in the first stage, class classification and coordinate regression can be performed on the candidate frames, for example through ROI Pooling, to obtain the confidence and position of each candidate vehicle detection frame.
  • finally, candidate vehicle detection frames whose intersection-over-union is greater than a threshold are merged through a non-maximum suppression algorithm, and the vehicle detection frame containing the vehicle is finally obtained.
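  • a minimal sketch of this two-stage flow is shown below, using torchvision's off-the-shelf Faster R-CNN as a stand-in for the patent's vehicle detection network; the model choice, the COCO label id 3 for "car", and the 0.5 score threshold are illustrative assumptions, not details from the patent.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two-stage detector: a region proposal stage followed by a detection stage
# with per-class classification, box regression and non-maximum suppression.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_vehicles(image, score_thresh=0.5, car_label=3):
    """image: float tensor (C, H, W) in [0, 1]; returns vehicle boxes and scores."""
    with torch.no_grad():
        out = model([image])[0]
    # torchvision merges highly overlapping candidates with NMS internally;
    # here we additionally keep only confident detections of the car class.
    keep = (out["scores"] > score_thresh) & (out["labels"] == car_label)
    return out["boxes"][keep], out["scores"][keep]
```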
  • this step can input the image to be tested into the vehicle detection network, and the vehicle detection network detects each vehicle detection frame corresponding to all vehicles included in the image to be tested. Since there may be multiple vehicles in the image to be tested, the vehicle detection network can detect multiple vehicle detection frames corresponding to multiple vehicles from the image to be tested.
  • in some embodiments, before the vehicle area of the vehicle is determined from the image to be tested, the method further includes: cropping a vehicle foreground image of the vehicle from the image to be tested, based on the vehicle detection frame containing the vehicle detected by the vehicle detection network in the image to be tested.
  • in this embodiment, the image to be tested is an original-size image collected by an image acquisition device. Due to differences between image acquisition devices, the sizes of the collected images to be tested may differ. Therefore, in this step, the image to be tested can be cropped into a vehicle foreground image of a preset size, based on the vehicle detection frame containing the vehicle detected from the image to be tested; the vehicle foreground image contains only one vehicle. In this way, the image to be tested can be cropped into vehicle foreground images of uniform size, so that the vehicles in the vehicle foreground images can be further detected more conveniently.
  • the image to be tested as shown in FIG. 2 may be cropped to obtain the vehicle foreground image as shown in FIG. 4 .
  • the vehicle foreground image shown in FIG. 4 may be an image of a preset size, and the vehicle foreground image includes only one vehicle.
  • after the vehicle foreground image is obtained, it can be further detected, and the vehicle area of the vehicle in the vehicle foreground image can be determined.
  • by detecting the vehicle detection frame containing the vehicle in the image to be tested, the image to be tested can be cropped into a vehicle foreground image of a preset size according to the vehicle detection frame of the vehicle. In this way, a vehicle foreground image of a preset size can be obtained, so that the vehicle area of the vehicle can be determined more conveniently and accurately, the relationship between the vehicle and the lane area can be determined more accurately, and the lane where the vehicle is located can be determined.
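  • the cropping step can be sketched as follows, assuming OpenCV-style arrays; the 256×256 target size and the clamping behaviour are illustrative assumptions rather than values specified by the patent.

```python
import cv2

def crop_vehicle_foreground(image, box, out_size=(256, 256)):
    """Crop the region given by box = (x1, y1, x2, y2) and resize it."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    # Clamp the detection frame to the image bounds before cropping.
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, w), min(y2, h)
    crop = image[y1:y2, x1:x2]
    # Resizing gives every vehicle foreground image the same preset size,
    # which simplifies the downstream key point detection step.
    return cv2.resize(crop, out_size)
```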
  • the specific implementation of step 103, as shown in FIG. 5, may include steps 501 to 502.
  • Step 501: Input the vehicle foreground image into a pre-trained key point detection network, where the key point detection network detects the vehicle key points of the vehicle in the vehicle foreground image.
  • a key point detection network that can detect the vehicle key points of the vehicle in the vehicle foreground image needs to be pre-trained.
  • the vehicle key point may be, for example, a wheel key point of the vehicle, and the key point detection network may be obtained by training based on any learnable machine learning model or neural network model.
  • the specific form of the key point detection network is not limited.
  • FIG. 6 shows a schematic diagram of the network structure of a key point detection network.
  • ResNet is used as the backbone network for extracting image features.
  • the input of the backbone network can be a foreground image of a vehicle.
  • the backbone network may include other forms of backbone networks besides ResNet, for example, other types of general convolutional neural network structures such as GoogLeNet, VGGNet, or ShuffleNet.
  • multi-scale features can be extracted using Feature Pyramid Network (FPN).
  • the resolution of the low-resolution feature map can be restored through deconvolution and element-level addition operations.
  • the output of the FPN is a feature map whose resolution is a quarter of the original image, e.g., containing 32×32 pixels.
  • the output of the FPN can be further convolved and used to predict 5 positioning heatmaps.
  • the five positioning heat maps correspond to the left front wheel, left rear wheel, right rear wheel, right front wheel and background of the vehicle, respectively.
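  • as a rough illustration of this architecture, the sketch below wires a ResNet backbone to a deconvolution-based upsampling path and a 5-channel heat map head; the ResNet-18 backbone, the channel widths and the softmax normalization are assumptions for the sketch, not the patent's exact design.

```python
import torch.nn as nn
import torchvision

class WheelKeypointNet(nn.Module):
    """Predicts 5 positioning heat maps: LF, LR, RR, RF wheel + background."""
    def __init__(self, num_maps=5):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        # Backbone: everything up to the last residual stage (stride 32).
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # Deconvolutions restore resolution, mirroring the FPN-style
        # upsample-and-add recovery described above (back to stride 4).
        self.up = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_maps, kernel_size=1)

    def forward(self, x):
        # Softmax over the 5 channels turns the per-pixel scores into the
        # per-pixel probabilities used as positioning heat maps.
        return self.head(self.up(self.backbone(x))).softmax(dim=1)
```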
  • the key points of the wheel in the vehicle foreground image can be further determined.
  • the key point of the wheel includes the position point where the wheel directly contacts the road surface, or includes the center point of the wheel.
  • the wheel key point is used to represent the position of the wheel. It can be understood that different vehicles have different numbers of wheels, so the number of wheel key points can also vary from vehicle to vehicle.
  • the position coordinates of 4 wheel key points can be obtained.
  • the wheel key points include: a left front wheel key point, a left rear wheel key point, a right rear wheel key point, and a right front wheel key point.
  • the wheel key points of the vehicle may include a left front wheel key point S1 , a left rear wheel key point S2 , a right rear wheel key point S3 and a right front wheel key point S4 .
  • Step 502: Determine the vehicle area of the vehicle based on the polygon enclosed by the vehicle key points.
  • the vehicle area composed of the key points of the wheel can more accurately represent the road area occupied by the vehicle.
  • the vehicle area of the vehicle may be determined according to the wheel key points detected in the vehicle foreground image.
  • the vehicle area of the vehicle can also be determined according to the vehicle body key points detected in the vehicle foreground image.
  • the specific manner of determining the vehicle area according to the key points of the vehicle may include various implementations, which is not limited in this embodiment.
  • a polygonal area formed by multiple wheel key points may be used as the vehicle area.
  • the quadrilateral area formed by the 4 wheel key points can be used as the vehicle area of the vehicle, that is, the quadrilateral S1S2S3S4 can be used as the vehicle area of the vehicle.
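  • a small sketch of forming the vehicle area polygon S1-S2-S3-S4 from the four wheel key points is given below, using the shapely library; the point order (left front, left rear, right rear, right front) follows the figure description, and the example coordinates are made up for illustration.

```python
from shapely.geometry import Polygon

def vehicle_area_polygon(lf, lr, rr, rf):
    """Each argument is an (x, y) wheel key point in image coordinates."""
    # Tracing LF -> LR -> RR -> RF walks the quadrilateral's perimeter.
    return Polygon([lf, lr, rr, rf])

region1 = vehicle_area_polygon((120, 300), (110, 380), (260, 390), (250, 305))
print(region1.area)  # road surface area occupied by the vehicle, in pixels^2
```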
  • by detecting the wheel key points of the vehicle and determining the vehicle area of the vehicle from them, the road surface area occupied by the vehicle can be represented more accurately, and the relationship between the vehicle and the lane area can then be determined accurately from the relative positions of the vehicle area and the lane area.
  • on the other hand, in this embodiment the image to be tested is first detected by the vehicle detection network, the vehicle foreground image is obtained based on the vehicle detection frame detected from the image to be tested, and the key point detection network is then applied to the vehicle foreground image to obtain the vehicle key points of the vehicle.
  • the vehicle detection network and the key point detection network are connected in a cascaded manner, which provides strong decoupling: either detection network can be updated or upgraded individually with greater flexibility, which suits rapid upgrade iteration when the algorithm is deployed.
  • the key point detection network can be customized for special scenarios to quickly achieve the expected performance.
  • step 501 may include steps 701 to 702 .
  • Step 701: Input the vehicle foreground image into the key point detection network, where the key point detection network outputs the probability value of each pixel in the vehicle foreground image corresponding to a vehicle key point, as a positioning heat map.
  • the key point detection network can detect the probability value of each pixel in the vehicle foreground image corresponding to the vehicle key point (for example, the wheel key point).
  • taking the left front wheel key point as an example, the vehicle foreground image can be input into the key point detection network, and the network outputs, for example, 32×32 probability values, one for each pixel in the vehicle foreground image, of corresponding to the left front wheel key point; that is, a positioning heat map that can locate the left front wheel key point of the vehicle is obtained.
  • based on the same principle, positioning heat maps that can locate the left rear wheel key point, the right rear wheel key point and the right front wheel key point of the vehicle can also be obtained.
  • Step 702: Based on the pixel position with the largest pixel value in the positioning heat map, determine the pixel corresponding to that pixel position in the vehicle foreground image as a vehicle key point.
  • Different pixel values in the positioning heat map represent the probability values of different pixels in the vehicle foreground image corresponding to the key points of the vehicle.
  • the pixel position with the largest pixel value can be determined from the positioning heat map, and the pixel in the vehicle foreground image corresponding to the pixel position is the vehicle key point.
  • in the above embodiment, the key point detection network can detect the probability value of each pixel in the vehicle foreground image corresponding to a vehicle key point, as a positioning heat map, so that the vehicle key point in the vehicle foreground image can be determined according to the largest pixel value in the positioning heat map.
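  • the decoding step can be sketched as follows; the ×4 scale factor assumes the heat map resolution is a quarter of the vehicle foreground image, as in the FPN example above.

```python
import numpy as np

def decode_keypoints(heatmaps, stride=4):
    """heatmaps: array (4, H, W), one positioning heat map per wheel."""
    points = []
    for hm in heatmaps:
        # The largest pixel value marks the most probable key point position.
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        points.append((x * stride, y * stride))  # back to image coordinates
    return points  # [(x, y) for LF, LR, RR, RF]
```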
  • the key point detection network Before inputting the vehicle foreground image into the pre-trained key point detection network, the key point detection network may be trained in advance.
  • the training process of the key point detection network as shown in FIG. 8 , may include steps 801 to 802 .
  • Step 801: Based on the key point detection network to be trained, obtain a predicted positioning heat map and a predicted background heat map of a sample vehicle map; the predicted positioning heat map is composed of the probability values of each pixel in the sample vehicle map corresponding to a vehicle key point, and the predicted background heat map is composed of the probability values of each pixel in the sample vehicle map corresponding to a non-vehicle key point.
  • a large number of sample vehicle images can be collected in advance as training samples of the key point detection network to be trained.
  • the sample vehicle image can be input into the key point detection network to be trained, and the key point detection network predicts the probability value of each pixel in the sample vehicle image corresponding to the wheel key point, That is, the predicted positioning heat map is obtained.
  • multiple different predicted positioning heat maps can be obtained according to different wheel key points. For example, it is possible to obtain the predicted positioning heat map of the left front wheel key point, the left rear wheel key point, the right rear wheel key point and the right front wheel key point of each pixel in the sample vehicle map, respectively.
  • taking vehicle body key points as the vehicle key points as an example, the sample vehicle image can be input into the key point detection network to be trained, and the network predicts the probability value of each pixel in the sample vehicle image corresponding to a body contour key point (such as a chassis vertex); that is, the predicted positioning heat map is obtained.
  • according to the different body contour key points, multiple different predicted positioning heat maps can be obtained. For example, predicted positioning heat maps in which each pixel of the sample vehicle map corresponds to the left front vertex key point, the right front vertex key point, the left rear vertex key point and the right rear vertex key point of the vehicle, respectively, can be obtained.
  • since the wheels are the concrete locations where the vehicle directly contacts the road surface while travelling, the vehicle area composed of the wheel key points can more accurately represent the road surface area occupied by the vehicle; the relationship between the vehicle and the lane area can then be accurately determined according to the relative position of the vehicle area and the lane area, so as to determine the lane where the vehicle is located.
  • any pixel in the sample vehicle map may be a wheel key point, or may be a background pixel other than a wheel key point. Since the pixels in the sample vehicle map have two classification results, wheel key points and background pixels, the key point detection network needs to predict, during training, the probability value of each pixel in the sample vehicle map being a background pixel, which yields the predicted background heat map.
  • that is, the same pixel in the sample vehicle map has only two possible classification results: wheel key point or background pixel.
  • in one possible implementation, for the same pixel in the same sample vehicle map, the probability predicted by the key point detection network that the pixel is a wheel key point and the predicted probability that it is a background pixel sum to 1.
  • in the application stage, the probabilities of pixels other than the wheel key points are of no interest, so the key point detection network does not need to output the probability value of each pixel in the vehicle foreground image corresponding to the background; that is, no background heat map is output.
  • Step 802: Adjust the network parameters of the key point detection network according to the difference between the predicted positioning heat map and a preset standard positioning heat map, and the difference between the predicted background heat map and a preset standard background heat map.
  • standard localization heatmaps and standard background heatmaps can be preset.
  • the pixel values of different pixels in the standard positioning heat map represent the probability values of different pixels in the sample vehicle map corresponding to the wheel key points.
  • the pixel values of different pixels in the standard background heat map represent the probability values of different pixels in the sample vehicle map corresponding to the background.
  • for example, the pixel value of the pixel corresponding to a wheel key point in the standard positioning heat map can be set to "1", and the pixel values of the other pixels set to "0".
  • likewise, the pixel value of the pixel corresponding to a wheel key point in the standard background heat map can be set to "0", and the pixel values of the other pixels, i.e., the pixels of the background part other than the wheel key points, set to "1".
  • the pixels within a predetermined range of the pixels corresponding to the key points of the wheel in the preset standard positioning heat map may be marked.
  • the contact area between the vehicle and the road is often not a contact point, but corresponds to a contact area. Therefore, if a certain pixel in the standard positioning heat map is determined as the corresponding wheel key point, it is not in line with the actual application scenario and may confuse the learning of the key point detection network.
  • all the pixels in a predetermined range around the pixels corresponding to the key points of the wheel in the standard positioning heat map may be marked.
  • for example, the pixel value of the pixel corresponding to the key point may be set to "1", and the pixel values of the pixels adjacent to it may be set to "0.9".
  • the predetermined range may be determined in a Gaussian blurring manner, as a region centered on the pixel corresponding to the wheel key point.
  • in this way, during training, labelling a region centered on the pixel corresponding to a wheel key point is closer to the situation of wheel key points in real images, so that the trained key point detection network can detect the vehicle key points in an image more accurately.
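  • a sketch of building such a standard positioning heat map with a Gaussian-blurred label region is given below; the 32×32 map size and the sigma value are illustrative assumptions.

```python
import numpy as np

def make_standard_heatmap(center, size=(32, 32), sigma=2.0):
    """center: (x, y) of the wheel key point in heat map coordinates."""
    ys, xs = np.mgrid[0:size[0], 0:size[1]]
    cx, cy = center
    # Pixel values decay with distance from the key point: the key point
    # itself gets 1.0 and its immediate neighbours get roughly 0.88,
    # in the spirit of the "0.9" labelling scheme described above.
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

heatmap = make_standard_heatmap((10, 20))
print(heatmap[20, 10])  # 1.0 at the key point itself
```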
  • the pre-set standard localization heatmap and standard background heatmap are used as the learning targets of the keypoint detection network to be trained. After the keypoint detection network outputs the predicted localization heatmap and the predicted background heatmap, the network parameters of the keypoint detection network can be adjusted according to the difference from the preset learning target.
  • Any pixel point in the sample vehicle map may be a vehicle key point, or may be a background pixel point other than the vehicle key point. That is, the pixel points in the sample vehicle map have two classification results: vehicle key points and background pixel points.
  • all classification results of the pixels of the input image are considered, so that the key point detection model obtained by training can more accurately detect the vehicle key points in the image.
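  • one training step under this scheme might look like the sketch below; pixel-wise mean squared error is an assumed choice of loss, since the text only specifies adjusting parameters according to the differences from the standard heat maps.

```python
import torch.nn.functional as F

def training_step(model, optimizer, sample_image, standard_maps):
    """standard_maps: tensor (1, 5, H, W) of standard positioning and
    background heat maps for one sample vehicle map."""
    optimizer.zero_grad()
    predicted = model(sample_image)        # (1, 5, H, W) predicted heat maps
    loss = F.mse_loss(predicted, standard_maps)
    loss.backward()                        # the difference drives the update
    optimizer.step()
    return loss.item()
```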
  • step 104 may include steps 901 to 903 .
  • Step 901: Determine the overlapping area of the vehicle area and the lane area.
  • Step 902: Determine the ratio of the overlapping area to the vehicle area as the vehicle overlap degree.
  • taking FIG. 2 as an example, it includes a vehicle area Region1 and a lane area L2, where the overlapping area of the vehicle area Region1 and the lane area L2 is R1.
  • the ratio of R1 to the vehicle area Region1 may be determined as the vehicle overlap degree, that is, the calculation method of the vehicle overlap degree is: R1/Region1.
  • the area of the overlapping area may be further calculated, and when the area of the overlapping area is greater than a preset area threshold, it is determined that the vehicle is located in the corresponding lane area.
  • the preset area threshold may be determined according to the total area of the vehicle area. Exemplarily, the preset area threshold may be pre-specified as half of the total area of the vehicle area, so the area threshold may be different according to the total area of the vehicle area.
  • Step 903: When the vehicle overlap degree is greater than a preset threshold, determine that the vehicle is located in the lane area.
  • a threshold may be preset as the comparison value for the vehicle overlap degree, so as to judge whether the vehicle is in the corresponding lane area.
  • the preset threshold may be set to 0.5, and in the case that the vehicle overlap R1/Region1 is greater than 0.5, it can be determined that the vehicle is located in the lane area L2.
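  • the lane assignment rule can be sketched with the shapely polygons from the earlier example; the 0.5 threshold follows the example above, and the dict of named lane polygons is an assumed calling convention.

```python
from shapely.geometry import Polygon

def lane_of_vehicle(vehicle_region, lane_regions, thresh=0.5):
    """vehicle_region: Polygon; lane_regions: dict of lane name -> Polygon."""
    for name, lane in lane_regions.items():
        overlap = vehicle_region.intersection(lane).area  # R1
        if overlap / vehicle_region.area > thresh:        # R1 / Region1
            return name  # the vehicle is located in this lane area
    return None  # no lane exceeds the preset threshold
```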
  • the ratio of the overlap area between the vehicle area and the lane area to the vehicle area is defined as the vehicle overlap degree
  • the vehicle overlap degree is compared with a preset threshold, and when the vehicle overlap degree is greater than the preset threshold, It can be determined that the vehicle is located in the corresponding lane area.
  • the lane area where the vehicle is located can be more accurately determined.
  • a predetermined area in the image to be tested may be determined as a lane area when the image acquisition device used to capture the image to be tested is fixed.
  • the lane area in the acquired image to be tested is fixed.
  • for example, monitoring equipment on a highway is often installed at a fixed intersection or road section, and the position of the corresponding intersection or road section in the image to be tested is fixed. Therefore, a predetermined area in the image to be tested that corresponds to a lane of the actual road can be determined as the lane area. Since the image acquisition device is fixed, the position of the lane area in each image to be tested remains unchanged, so the lane areas in all images to be tested can be determined through a single setting.
  • the image to be tested is input into a pre-trained lane recognition network, and the lane recognition network determines the lane area in the image to be tested.
  • training samples can be used for training in advance to obtain a lane recognition network that can detect the lane area in the image to be tested.
  • the lane recognition network may be obtained by training based on any learnable machine learning model or neural network model, and this embodiment does not limit the specific form of the lane recognition network.
  • the image to be tested can be input into the lane recognition network, and the lane recognition network can determine the lane area in the image to be tested.
  • This method can use the lane recognition network to detect the image to be tested, which does not limit whether the image acquisition device that collects the image to be tested is fixed.
  • "movable monitoring equipment” can be installed at highway intersections, which can collect images of different areas as images to be measured through its own movement or rotation.
  • the lane area in the image to be tested collected in this way is not located in a fixed area, so the lane area in different images to be tested can be determined with the help of the lane recognition network of this embodiment.
  • the lane recognition network in the embodiment of the present disclosure does not rely on the vehicle detection network or the key point detection network, and the lane recognition network can detect the lane area in the image to be tested in a cascaded manner, and the decoupling is stronger.
  • the lane recognition network can be updated more flexibly, and it can identify lane areas located in different regions of the image to be tested, which makes it suitable for images to be tested collected by fixed or non-fixed image acquisition devices.
  • the lane area where the vehicle is located can be more accurately determined according to the relative position of the lane area and the vehicle area.
  • the present disclosure provides a lane detection apparatus, and the apparatus can execute the lane detection method of any embodiment of the present disclosure.
  • the apparatus may include an image acquisition module 1001, a vehicle area determination module 1002 and a lane determination module 1003, wherein:
  • an image acquisition module 1001, configured to acquire an image to be tested, where the image to be tested includes at least one lane area;
  • a vehicle area determination module 1002, configured to detect the image to be tested, determine the vehicle in the image to be tested, and determine the vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle;
  • a lane determination module 1003, configured to determine the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested.
  • the apparatus further includes a vehicle detection module 1101, configured to input the image to be tested into a pre-trained vehicle detection network, where the vehicle detection network detects the vehicles contained in the image to be tested.
  • the apparatus further includes a vehicle foreground image cropping module 1102, configured to crop a vehicle foreground image of the vehicle from the image to be tested, based on the vehicle detection frame containing the vehicle detected by the vehicle detection network in the image to be tested.
  • the vehicle area determination module 1002 includes: a key point detection sub-module 1103, configured to input the vehicle foreground image into a pre-trained key point detection network, where the key point detection network detects the vehicle key points of the vehicle in the vehicle foreground image; and a vehicle area determination sub-module 1104, configured to determine the vehicle area of the vehicle based on the polygon enclosed by the vehicle key points.
  • the key point detection sub-module 1103 includes: a positioning heat map determination sub-module 1201, configured to input the vehicle foreground image into the key point detection network, where the key point detection network outputs the probability value of each pixel in the vehicle foreground image corresponding to a vehicle key point, as a positioning heat map; and a vehicle key point determination sub-module 1202, configured to determine, based on the pixel position with the largest pixel value in the positioning heat map, the pixel corresponding to that pixel position in the vehicle foreground image as the vehicle key point.
  • the vehicle area determination module 1002 further includes: a heat map prediction sub-module 1105, configured to obtain, based on the key point detection network to be trained, a predicted positioning heat map and a predicted background heat map of a sample vehicle map, where the predicted positioning heat map is composed of the probability values of each pixel in the sample vehicle map corresponding to a vehicle key point, and the predicted background heat map is composed of the probability values of each pixel in the sample vehicle map corresponding to a non-vehicle key point; and a network parameter adjustment sub-module 1106, configured to adjust the network parameters of the key point detection network according to the difference between the predicted positioning heat map and a preset standard positioning heat map, and the difference between the predicted background heat map and a preset standard background heat map.
  • the vehicle area determination module 1002 further includes a label value determination sub-module 1107, configured to label, based on Gaussian blurring, the pixels within a predetermined range of the pixel corresponding to a wheel key point in the preset standard positioning heat map.
  • the lane determination module 1003 is further configured to determine the overlapping area of the vehicle area and the lane area; determine the ratio of the overlapping area to the vehicle area as the vehicle overlap degree; and when the vehicle overlap degree is greater than a preset threshold, determine that the vehicle is located in the lane area.
  • the apparatus further includes: a first lane area determination module 1108, configured to determine a predetermined area in the image to be tested as the at least one lane area when the image acquisition device used to collect the image to be tested is fixed; or a second lane area determination module 1109, configured to input the image to be tested into a pre-trained lane recognition network, where the lane recognition network determines the at least one lane area in the image to be tested.
  • the present disclosure also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor can implement the lane of any embodiment of the present disclosure when the processor executes the program Detection method.
  • the device may include: a processor 1010 , a memory 1020 , an input/output interface 1030 , a communication interface 1040 and a bus 1050 .
  • the processor 1010 , the memory 1020 , the input/output interface 1030 and the communication interface 1040 realize the communication connection among each other within the device through the bus 1050 .
  • the processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of this specification.
  • the memory 1020 may be implemented in the form of a ROM (Read Only Memory, read-only memory), a RAM (Random Access Memory, random access memory), a static storage device, a dynamic storage device, and the like.
  • the memory 1020 may store an operating system and other application programs. When implementing the technical solutions provided by the embodiments of this specification through software or firmware, the relevant program codes are stored in the memory 1020 and invoked by the processor 1010 for execution.
  • the input/output interface 1030 is used to connect the input/output module to realize information input and output.
  • the input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 1040 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module may implement communication through wired means (eg, USB, network cable, etc.), or may implement communication through wireless means (eg, mobile network, WIFI, Bluetooth, etc.).
  • Bus 1050 includes a path to transfer information between the various components of the device (eg, processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
  • although the above device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation process the device may also include other components necessary for normal operation.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of the present specification, rather than all the components shown in the figures.
  • the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, can implement the lane detection method of any embodiment of the present disclosure.
  • the present disclosure also provides a computer program product, including a computer program, which, when executed by a processor, can implement the lane detection method of any embodiment of the present disclosure.
  • non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc., which is not limited in the present disclosure.
  • embodiments of the present disclosure provide a computer program product including computer-readable code; when the computer-readable code runs on a device, the processor in the device executes instructions for implementing the lane detection method provided by any of the above embodiments.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

A lane detection method, apparatus, device and storage medium, the method including: acquiring an image to be tested, where the image to be tested includes at least one lane area (101); detecting the image to be tested to determine a vehicle in the image to be tested (102); determining a vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle (103); and determining the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested (104).

Description

Lane detection method, apparatus, device and storage medium
CROSS-REFERENCE TO RELATED APPLICATION
This application is filed on the basis of, and claims priority to, Chinese Patent Application No. 202110221186.3, filed on February 26, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of computer vision, and in particular to a lane detection method, apparatus, device and storage medium.
BACKGROUND
At present, vehicles remain a major participant in the transportation field, and vehicle driving behaviour is a research focus in the traffic domain. In real life, some vehicles drive in ways that violate driving regulations, such as ignoring road surface markings, changing lanes without signalling, or driving in reverse against the traffic direction, which seriously disturbs or endangers normal traffic order. On-site law enforcement and remote manual inspection suffer from low recognition efficiency and limited coverage, so video-based automated analysis and recognition of vehicle behaviour is urgently needed as an aid. The basis of video-based vehicle behaviour analysis is determining the lane in which a vehicle is located, i.e., associating the vehicle with a lane.
SUMMARY
The present disclosure provides a lane detection method, apparatus, device and storage medium.
According to a first aspect of the embodiments of the present disclosure, a lane detection method is provided, the method including: acquiring an image to be tested, where the image to be tested includes at least one lane area; detecting the image to be tested to determine a vehicle in the image to be tested; determining a vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle; and determining the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested.
According to a second aspect of the embodiments of the present disclosure, a lane detection apparatus is provided, the apparatus including: an image acquisition module, configured to acquire an image to be tested, where the image to be tested includes at least one lane area; a vehicle area determination module, configured to detect the image to be tested, determine a vehicle in the image to be tested, and determine a vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle; and a lane determination module, configured to determine the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested.
According to a third aspect of the embodiments of the present disclosure, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the lane detection method of any one of the first aspect when executing the program.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, where the program, when executed by a processor, implements the lane detection method of any one of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided, including a computer program, where the program, when executed by a processor, implements the lane detection method of any one of the first aspect.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
FIG. 1 is a flowchart of a lane detection method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of an image to be tested according to an exemplary embodiment;
FIG. 3 is a schematic structural diagram of a vehicle detection network according to an exemplary embodiment;
FIG. 4 is a schematic diagram of a vehicle foreground image according to an exemplary embodiment;
FIG. 5 is a flowchart of a method for determining a vehicle area according to an exemplary embodiment;
FIG. 6 is a schematic diagram of the network structure of a key point detection network according to an exemplary embodiment;
FIG. 7 is a flowchart of a method for detecting wheel key points according to an exemplary embodiment;
FIG. 8 is a flowchart of training a key point detection network according to an exemplary embodiment;
FIG. 9 is a flowchart of a method for determining a lane according to an exemplary embodiment;
FIG. 10 is a schematic diagram of a lane detection apparatus according to an exemplary embodiment;
FIG. 11 is a schematic diagram of another lane detection apparatus according to an exemplary embodiment;
FIG. 12 is a schematic diagram of a key point detection sub-module according to an exemplary embodiment;
FIG. 13 is a schematic structural diagram of a computer device according to an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all solutions consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. The singular forms "a", "the" and "said" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when" or "in response to determining".
In related techniques that analyse vehicle behaviour based on video, the lane in which a vehicle is located is mostly judged on the basis of a vehicle detection frame. When a three-dimensional vehicle is projected onto a planar image, positional changes occur, so the vehicle detection frame expands outward relative to the actual size of the vehicle. Methods based on either the centre point or the edge points of the vehicle detection frame cannot effectively counteract this outward expansion, and therefore cannot accurately judge the lane in which the vehicle is located.
On this basis, the present disclosure provides a lane detection method that detects the road surface area occupied by a vehicle in an image to be tested as the vehicle area, and further determines the lane where the vehicle is located according to the relative position of the vehicle area and a lane area.
To make the lane detection method provided by the present disclosure clearer, the execution process of the solution provided by the present disclosure is described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to FIG. 1, FIG. 1 is a flowchart of a lane detection method according to an embodiment of the present disclosure. The lane detection method may be applied in any system device suitable for implementation, such as a server, a computing device, an in-vehicle terminal device or other processing device. As shown in FIG. 1, the process includes steps 101 to 104.
Step 101: Acquire an image to be tested, where the image to be tested includes at least one lane area.
In the embodiments of the present disclosure, the image to be tested is an image in which the lane of the vehicle needs to be detected, and it includes one or more lane areas. It should be noted that the image to be tested may be acquired in various specific ways.
In one possible implementation, the image to be tested may be acquired by reusing an existing image acquisition device. In this way, existing hardware devices can be reused to reduce hardware costs. For example, an existing monitoring device on a highway may be used to obtain a video stream, and one frame may be extracted from the video stream as the image to be tested.
In another possible implementation, a specific image acquisition device may be installed at a preset position to acquire images to be tested that meet the requirements. For example, a high-resolution image acquisition device may be installed at a main intersection or road section of a highway to acquire high-resolution images to be tested.
Step 102: Detect the image to be tested to determine the vehicle in the image to be tested.
The acquired images to be tested may be collected under different conditions, so an image to be tested may contain one vehicle, no vehicle, or multiple vehicles. For example, for monitoring equipment installed at a highway intersection, the collected images to be tested may contain one vehicle, no vehicle, or multiple vehicles.
In this step, the image to be tested can be detected to determine the vehicle in it. It can be understood that if at least one vehicle exists in the image to be tested, the subsequent steps may continue to be performed; if no vehicle exists in the image to be tested, the subsequent steps need not be performed. For example, if there is no vehicle in the image to be tested, a new image to be tested may be acquired and the vehicle in it detected.
Step 103: Determine a vehicle area of the vehicle from the image to be tested, where the vehicle area is used to represent the road surface area occupied by the vehicle.
While a vehicle is travelling on a road, the road surface area occupied by the vehicle best represents the position of the vehicle. In this step, the road surface area occupied by the vehicle in the image to be tested is taken as the vehicle area of the vehicle in the image to be tested.
Referring to the image to be tested shown in FIG. 2, Box1 is the vehicle detection frame determined by traditional image detection techniques, while the vehicle area determined by detecting the image to be tested in this step is Region1. As shown in FIG. 2, the area represented by Region1 is much closer to the road surface area actually occupied by the vehicle, so the vehicle area can represent the position of the vehicle more accurately.
Step 104: Determine the lane where the vehicle is located according to the relative position of the vehicle area and the lane area in the image to be tested.
After the vehicle area of the vehicle in the image to be tested is determined, this step may further determine the association between the vehicle and a lane area according to the relative positions of the lane area and the vehicle area in the image to be tested. Referring to the image to be tested shown in FIG. 2, it includes a lane area L1 and a lane area L2. Taking L2 as the lane area in the image to be tested and Region1 as the vehicle area of the vehicle, this step can determine whether the vehicle is located on L2 according to the relative positions of Region1 and L2.
The specific way of determining the lane where the vehicle is located according to the relative positions of the vehicle area and the lane area can be implemented flexibly according to the specific application, and is not limited in this embodiment. For example, when the overlapping area between the vehicle area and the lane area is large enough, it may be determined that the vehicle is located in the corresponding lane area.
In the embodiments of the present disclosure, the image to be tested can be detected and the road surface area occupied by the vehicle in the image to be tested can be determined as the vehicle area, so that the association between the vehicle and the lane area can be further determined according to the relative position of the vehicle area and the lane area. Since the road surface area occupied by the vehicle is detected from the image to be tested as the vehicle area, the actual position of the vehicle can be represented more accurately, so the association between the vehicle and the lane area can be judged more accurately, and the lane where the vehicle is located can be determined more accurately.
In some optional embodiments, detecting the image to be tested to determine the vehicle in the image to be tested includes: inputting the image to be tested into a pre-trained vehicle detection network, where the vehicle detection network detects the vehicles contained in the image to be tested.
In this embodiment, a vehicle detection network that can detect the vehicles in the image to be tested needs to be trained in advance. The vehicle detection network may be obtained by training on corresponding training samples, based on any learnable machine learning model or neural network model; this embodiment does not limit the specific form of the vehicle detection network.
In one possible implementation, the vehicle detection network may be constructed based on the Faster-RCNN network framework. For example, FIG. 3 shows a schematic structural diagram of a vehicle detection network based on the Faster-RCNN framework. In the first stage (the proposal stage), a deep convolutional network is used to extract features from the image to be tested, and features are further extracted through a specific convolutional layer (such as a Region Proposal Layer) to obtain at least one candidate vehicle detection frame. In the second stage (the detection stage), based on the at least one candidate vehicle detection frame obtained in the first stage, class classification and coordinate regression can be performed on the candidate frames, for example through ROI Pooling, to obtain the confidence and position of each candidate vehicle detection frame. Finally, candidate vehicle detection frames whose intersection-over-union is greater than a threshold are merged through a non-maximum suppression algorithm, and the vehicle detection frame containing the vehicle is finally obtained.
After a usable vehicle detection network is obtained by training, the image to be tested can be input into the vehicle detection network in this step, and the vehicle detection network detects the vehicle detection frames corresponding to all vehicles included in the image to be tested. Since there may be multiple vehicles in the image to be tested, the vehicle detection network may detect, from the image to be tested, multiple vehicle detection frames corresponding to multiple vehicles.
In some optional embodiments, before the vehicle area of the vehicle is determined from the image to be tested, the method further includes: cropping a vehicle foreground image of the vehicle from the image to be tested, based on the vehicle detection frame containing the vehicle detected by the vehicle detection network in the image to be tested.
In this embodiment, the image to be tested is an original-size image collected by an image acquisition device. Due to differences between image acquisition devices, the sizes of the collected images to be tested may differ. Therefore, in this step, the image to be tested can be cropped into a vehicle foreground image of a preset size, based on the vehicle detection frame containing the vehicle detected from the image to be tested; the vehicle foreground image contains only one vehicle. In this way, the image to be tested can be cropped into vehicle foreground images of uniform size, so that the vehicles in the vehicle foreground images can be further detected more conveniently.
For example, in this step the image to be tested shown in FIG. 2 may be cropped to obtain the vehicle foreground image shown in FIG. 4, which may be an image of a preset size containing only one vehicle.
After the vehicle foreground image is obtained, it can be further detected, and the vehicle area of the vehicle in the vehicle foreground image can be determined.
In the embodiments of the present disclosure, by detecting the vehicle detection frame containing the vehicle in the image to be tested, the image to be tested can be cropped into a vehicle foreground image of a preset size according to the vehicle detection frame of the vehicle. In this way, a vehicle foreground image of a preset size can be obtained, so that the vehicle area of the vehicle can be determined more conveniently and accurately, the association between the vehicle and the lane area can be determined more accurately, and the lane where the vehicle is located can be determined.
In some optional embodiments, the specific implementation of step 103, as shown in FIG. 5, may include steps 501 to 502.
Step 501: Input the vehicle foreground image into a pre-trained key point detection network, where the key point detection network detects the vehicle key points of the vehicle in the vehicle foreground image.
In this embodiment, a key point detection network that can detect the vehicle key points of the vehicle in the vehicle foreground image needs to be trained in advance. The vehicle key points may be, for example, the wheel key points of the vehicle, and the key point detection network may be obtained by training based on any learnable machine learning model or neural network model; this embodiment does not limit the specific form of the key point detection network.
For example, FIG. 6 shows a schematic diagram of the network structure of a key point detection network, in which ResNet is used as the backbone network for extracting image features. The input of the backbone network may be a vehicle foreground image; after the convolution operations of the backbone network, the spatial resolution of the features gradually decreases while the semantic features become increasingly apparent. It can be understood that, besides ResNet, the backbone network may take many other forms, for example other types of general convolutional neural network structures such as GoogLeNet, VGGNet or ShuffleNet.
Further, a Feature Pyramid Network (FPN) may be used to extract multi-scale features. Specifically, the resolution of low-resolution feature maps can be restored through deconvolution and element-wise addition operations; the output of the FPN is a feature map whose resolution is a quarter of the original image, e.g., containing 32×32 pixels.
Still further, the output of the FPN may undergo further convolution operations and then be used to predict 5 positioning heat maps, which correspond to the left front wheel, left rear wheel, right rear wheel, right front wheel and background of the vehicle, respectively. The wheel key points in the vehicle foreground image can be further determined from the positioning heat maps.
In one possible implementation, a wheel key point includes the position point where the wheel directly contacts the road surface, or includes the center point of the wheel; the wheel key point is used to represent the position of the wheel. It can be understood that different vehicles have different numbers of wheels, so the number of wheel key points may also vary from vehicle to vehicle.
For example, the position coordinates of 4 wheel key points can be obtained. Exemplarily, the wheel key points include a left front wheel key point, a left rear wheel key point, a right rear wheel key point and a right front wheel key point. As shown in FIG. 4, the wheel key points of the vehicle may include a left front wheel key point S1, a left rear wheel key point S2, a right rear wheel key point S3 and a right front wheel key point S4.
Step 502: Determine the vehicle area of the vehicle based on the polygon enclosed by the vehicle key points.
Since the wheels are the concrete locations where a vehicle directly contacts the road surface while travelling, a vehicle area formed from the wheel key points can represent the road surface area occupied by the vehicle more accurately. In this step, the vehicle area of the vehicle may be determined according to the wheel key points detected in the vehicle foreground image. The vehicle area of the vehicle may also be determined according to vehicle body key points detected in the vehicle foreground image. The specific way of determining the vehicle area according to the vehicle key points may include various implementations and is not limited in this embodiment.
In one possible implementation, the polygonal area formed by multiple wheel key points may be used as the vehicle area. As shown in FIG. 4, when the vehicle includes 4 wheel key points, the quadrilateral area formed by the 4 wheel key points may be used as the vehicle area of the vehicle, i.e., the quadrilateral S1S2S3S4 is the vehicle area of the vehicle.
In the embodiments of the present disclosure, by detecting the wheel key points of the vehicle and determining the vehicle area of the vehicle according to the wheel key points, the road surface area occupied by the vehicle can be represented more accurately; the association between the vehicle and the lane area can then be accurately determined according to the relative position of the vehicle area and the lane area, thereby determining the lane where the vehicle is located.
On the other hand, in this embodiment the image to be tested is first detected by the vehicle detection network, the vehicle foreground image is obtained based on the vehicle detection frame detected from the image to be tested, and the key point detection network is then applied to the vehicle foreground image to obtain the vehicle key points of the vehicle. The vehicle detection network and the key point detection network are connected in a cascaded manner, which provides strong decoupling: either detection network can be updated or upgraded individually with greater flexibility, which suits rapid upgrade iteration when the algorithm is deployed. During deployment, the key point detection network can be customized for special scenarios to quickly reach the expected performance.
In some optional embodiments, the specific implementation of step 501, as shown in FIG. 7, may include steps 701 to 702.
Step 701: Input the vehicle foreground image into the key point detection network, where the key point detection network outputs the probability value of each pixel in the vehicle foreground image corresponding to a vehicle key point, as a positioning heat map.
After the vehicle foreground image is input into the key point detection network, the network can detect the probability value of each pixel in the vehicle foreground image corresponding to a vehicle key point (for example, a wheel key point). Taking the left front wheel key point as an example, the vehicle foreground image can be input into the key point detection network, and the network outputs, for example, 32×32 probability values, one for each pixel in the vehicle foreground image, of corresponding to the left front wheel key point; that is, a positioning heat map that can locate the left front wheel key point of the vehicle is obtained.
Based on the same principle, positioning heat maps that can locate the left rear wheel key point, the right rear wheel key point and the right front wheel key point of the vehicle can also be obtained.
Step 702: Based on the pixel position with the largest pixel value in the positioning heat map, determine the pixel corresponding to that pixel position in the vehicle foreground image as a vehicle key point.
Different pixel values in the positioning heat map represent the probability values of different pixels in the vehicle foreground image corresponding to the vehicle key point. In this step, the pixel position with the largest pixel value can be determined from the positioning heat map, and the pixel in the vehicle foreground image corresponding to that pixel position is the vehicle key point.
In the above embodiment, the key point detection network can detect the probability value of each pixel in the vehicle foreground image corresponding to a vehicle key point, as a positioning heat map, so that the vehicle key point in the vehicle foreground image can be determined from the largest pixel value in the positioning heat map.
Before the vehicle foreground image is input into the pre-trained key point detection network, the key point detection network may be trained in advance. The training process of the key point detection network, as shown in FIG. 8, may include steps 801 to 802.
Step 801: Based on the key point detection network to be trained, obtain a predicted positioning heat map and a predicted background heat map of a sample vehicle map, where the predicted positioning heat map is composed of the probability values of each pixel in the sample vehicle map corresponding to a vehicle key point, and the predicted background heat map is composed of the probability values of each pixel in the sample vehicle map corresponding to a non-vehicle key point.
In this embodiment, a large number of sample vehicle maps can be collected in advance as training samples for the key point detection network to be trained. Taking wheel key points as the vehicle key points as an example, during training, a sample vehicle map can be input into the key point detection network to be trained, and the network predicts the probability value of each pixel in the sample vehicle map corresponding to a wheel key point, yielding the predicted positioning heat map. Multiple different predicted positioning heat maps can be obtained for the different wheel key points; for example, predicted positioning heat maps in which each pixel of the sample vehicle map corresponds to the left front wheel key point, the left rear wheel key point, the right rear wheel key point and the right front wheel key point of the vehicle, respectively, can be obtained. Taking vehicle body key points as the vehicle key points as an example, during training, a sample vehicle map can be input into the key point detection network to be trained, and the network predicts the probability value of each pixel in the sample vehicle map corresponding to a body contour key point (such as a chassis vertex), yielding the predicted positioning heat map. Multiple different predicted positioning heat maps can be obtained for the different body contour key points; for example, predicted positioning heat maps in which each pixel of the sample vehicle map corresponds to the left front vertex key point, the right front vertex key point, the left rear vertex key point and the right rear vertex key point of the vehicle, respectively, can be obtained. Since the wheels are the concrete locations where the vehicle directly contacts the road surface while travelling, the vehicle area composed of the wheel key points can more accurately represent the road surface area occupied by the vehicle. Determining the vehicle area of the vehicle according to the wheel key points can more accurately represent the road surface area occupied by the vehicle, so that the association between the vehicle and the lane area can be accurately determined according to the relative position of the vehicle area and the lane area, thereby determining the lane where the vehicle is located.
Taking wheel key points as the vehicle key points as an example, any pixel in the sample vehicle map may be a wheel key point, or may be a background pixel other than a wheel key point. Since the pixels in the sample vehicle map have two classification results, wheel key points and background pixels, the key point detection network needs to predict, during training, the probability value of each pixel in the sample vehicle map being a background pixel, which yields the predicted background heat map.
Any pixel in the sample vehicle map may be a wheel key point or a background pixel, i.e., the same pixel in the sample vehicle map has only two possible classification results. In one possible implementation, for the same pixel in the same sample vehicle map, the probability predicted by the key point detection network that the pixel is a wheel key point and the predicted probability that it is a background pixel sum to 1.
In the application stage of the key point detection network, the probabilities of the pixels other than the wheel key points in the vehicle foreground image are of no interest, so in the application stage the network need not output the probability value of each pixel in the vehicle foreground image corresponding to the background; that is, no background heat map is output.
Step 802: Adjust the network parameters of the key point detection network according to the difference between the predicted positioning heat map and a preset standard positioning heat map, and the difference between the predicted background heat map and a preset standard background heat map.
Before the key point detection network is trained, a standard positioning heat map and a standard background heat map can be preset. The pixel values of different pixels in the standard positioning heat map represent the probability values of different pixels in the sample vehicle map corresponding to a wheel key point; the pixel values of different pixels in the standard background heat map represent the probability values of different pixels in the sample vehicle map corresponding to the background.
For example, the pixel value of the pixel corresponding to a wheel key point in the standard positioning heat map can be set to "1", and the pixel values of the other pixels set to "0". Likewise, the pixel value of the pixel corresponding to a wheel key point in the standard background heat map can be set to "0", and the pixel values of the other pixels, i.e., the pixels of the background part other than the wheel key points, set to "1".
In one possible implementation, the pixels within a predetermined range of the pixel corresponding to a wheel key point in the preset standard positioning heat map may be labelled.
The contact between a vehicle and the road is usually not a single contact point but a contact region. Therefore, determining a single pixel in the standard positioning heat map as the corresponding wheel key point does not match the actual application scenario and may confuse the learning of the key point detection network.
In this embodiment, all the pixels within a predetermined range around the pixel corresponding to a wheel key point in the standard positioning heat map may be labelled. For example, the pixel value of the pixel corresponding to the key point may be set to "1", and the pixel values of the pixels adjacent to it set to "0.9". The predetermined range may be determined in a Gaussian blurring manner, as a region centered on the pixel corresponding to the wheel key point.
In this way, during the training of the key point detection network, labelling a region centered on the pixel corresponding to a wheel key point is closer to the situation of wheel key points in real images, so that the trained key point detection network can detect the vehicle key points in an image more accurately.
The preset standard positioning heat map and standard background heat map serve as the learning targets of the key point detection network to be trained. After the key point detection network outputs the predicted positioning heat map and the predicted background heat map, the network parameters of the key point detection network can be adjusted according to the differences from the preset learning targets.
Any pixel in the sample vehicle map may be a vehicle key point, or may be a background pixel other than a vehicle key point; that is, the pixels in the sample vehicle map have two classification results: vehicle key points and background pixels. In the above embodiment, the actual training process of the key point detection network considers all classification results of the pixels of the input image, so the trained key point detection model can detect the vehicle key points in an image more accurately.
In some optional embodiments, the specific implementation of step 104, as shown in FIG. 9, may include steps 901 to 903.
Step 901: Determine the overlapping area of the vehicle area and the lane area.
Step 902: Determine the ratio of the overlapping area to the vehicle area as the vehicle overlap degree.
Taking FIG. 2 as an example, it includes a vehicle area Region1 and a lane area L2, where the overlapping area of the vehicle area Region1 and the lane area L2 is R1. In this embodiment, the ratio of R1 to the vehicle area Region1 may be determined as the vehicle overlap degree, i.e., the vehicle overlap degree is calculated as R1/Region1.
In one possible implementation, after the overlapping area of the vehicle area and the lane area is obtained, the area of the overlapping region may be further calculated, and when it is greater than a preset area threshold, it is determined that the vehicle is located in the corresponding lane area. The preset area threshold may be determined according to the total area of the vehicle area; for example, it may be pre-specified as half of the total area of the vehicle area, so the area threshold may differ according to the total area of the vehicle area.
Step 903: When the vehicle overlap degree is greater than a preset threshold, determine that the vehicle is located in the lane area.
In this embodiment, a threshold may be preset as the comparison value for the vehicle overlap degree, so as to judge whether the vehicle is in the corresponding lane area. For example, the preset threshold may be set to 0.5; when the vehicle overlap degree R1/Region1 is greater than 0.5, it can be determined that the vehicle is located in the lane area L2.
In the embodiments of the present disclosure, the ratio of the overlapping area between the vehicle area and the lane area to the vehicle area is defined as the vehicle overlap degree, the vehicle overlap degree is compared with a preset threshold, and when the vehicle overlap degree is greater than the preset threshold, it can be determined that the vehicle is located in the corresponding lane area. In this way of determining the association between the vehicle and the lane area, since the vehicle area itself is the road surface area occupied by the vehicle, using the proportion of the vehicle area covered by the overlapping region as the basis of judgment allows the lane area in which the vehicle is located to be determined more accurately.
After the vehicle area of the vehicle is detected from the image to be tested, and before the association is determined from the vehicle area and the lane area, the lane area in the image to be tested needs to be determined in advance. In some optional embodiments, when the image acquisition device used to collect the image to be tested is fixed, a predetermined area in the image to be tested may be determined as the lane area.
In practical applications, the image acquisition devices that collect the images to be tested are mostly fixed at a certain position, so the lane area in the collected images to be tested is fixed. For example, monitoring equipment on a highway is often installed at a fixed intersection or road section, and the position of the corresponding intersection or road section in the collected images to be tested is fixed in the image. Therefore, a predetermined area in the image to be tested that corresponds to a lane of the actual road can be determined as the lane area. Since the image acquisition device is fixed, the position of the lane area in each image to be tested remains unchanged, so the lane areas in all images to be tested can be determined through a single setting.
In some optional embodiments, the image to be tested is input into a pre-trained lane recognition network, and the lane recognition network determines the lane area in the image to be tested.
In the above optional embodiments, training samples can be used in advance to train a lane recognition network that can detect the lane area in the image to be tested. The lane recognition network may be obtained by training based on any learnable machine learning model or neural network model, and this embodiment does not limit its specific form.
After the lane recognition network is trained, the image to be tested can be input into the lane recognition network, and the network determines the lane area in the image to be tested. This approach uses the lane recognition network to detect the image to be tested, and does not restrict whether the image acquisition device that collects the image to be tested is fixed. For example, "movable monitoring equipment" may be installed at a highway intersection, which can collect images of different areas as images to be tested through its own movement or rotation. The lane areas in images to be tested collected in this way are not located in a fixed region, so the lane areas in different images to be tested can be determined with the help of the lane recognition network of this embodiment.
The lane recognition network in the embodiments of the present disclosure does not depend on the vehicle detection network or the key point detection network; it can detect the lane area in the image to be tested in a cascaded manner, offering stronger decoupling. The lane recognition network can be updated more flexibly and can recognize lane areas located in different regions of the image to be tested, making it suitable for images to be tested collected by both fixed and non-fixed image acquisition devices. On the basis of accurately determining the lane area in the image to be tested, the lane area where the vehicle is located can be determined more accurately according to the relative position of the lane area and the vehicle area.
As shown in FIG. 10, the present disclosure provides a lane detection apparatus that can perform the lane detection method of any embodiment of the present disclosure. The apparatus may include an image acquisition module 1001, a vehicle area determination module 1002 and a lane determination module 1003. Wherein:
the image acquisition module 1001 is configured to acquire an image to be tested, wherein the image to be tested includes at least one lane area;
the vehicle area determination module 1002 is configured to detect the image to be tested to determine a vehicle in the image to be tested, and to determine a vehicle area of the vehicle from the image to be tested, wherein the vehicle area represents the road area occupied by the vehicle;
the lane determination module 1003 is configured to determine, according to the relative position of the vehicle area and the lane area in the image to be tested, the lane in which the vehicle is located.
Optionally, as shown in FIG. 11, the apparatus further includes: a vehicle detection module 1101, configured to input the image to be tested into a pre-trained vehicle detection network, which detects the vehicles contained in the image to be tested.
Optionally, as shown in FIG. 11, the apparatus further includes: a vehicle foreground image cropping module 1102, configured to crop a vehicle foreground image of the vehicle from the image to be tested based on the vehicle detection box containing the vehicle detected by the vehicle detection network in the image to be tested.
The vehicle area determination module 1002 includes: a keypoint detection sub-module 1103, configured to input the vehicle foreground image into a pre-trained keypoint detection network, which detects the vehicle keypoints of the vehicle in the vehicle foreground image; and a vehicle area determination sub-module 1104, configured to determine the vehicle area of the vehicle based on the polygon enclosed by the vehicle keypoints.
Optionally, as shown in FIG. 12, the keypoint detection sub-module 1103 includes: a localization heatmap determination sub-module 1201, configured to input the vehicle foreground image into the keypoint detection network, which outputs, as a localization heatmap, the probability value of each pixel of the vehicle foreground image corresponding to a vehicle keypoint; and a vehicle keypoint determination sub-module 1202, configured to determine, based on the pixel position with the largest pixel value in the localization heatmap, the pixel of the vehicle foreground image at that position as the vehicle keypoint.
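As a sketch of what the vehicle keypoint determination sub-module 1202 computes (NumPy assumed), the keypoint is simply the position of the maximum of the localization heatmap:

```python
import numpy as np

def keypoint_from_heatmap(heatmap):
    """Return the (row, col) of the largest value in a localization heatmap;
    the pixel of the vehicle foreground image at this position is taken as
    the vehicle keypoint."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)
```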
Optionally, as shown in FIG. 11, the vehicle area determination module 1002 further includes: a heatmap prediction sub-module 1105, configured to obtain, based on the keypoint detection network to be trained, a predicted localization heatmap and a predicted background heatmap of a sample vehicle image, wherein the predicted localization heatmap consists of the probability values of the pixels of the sample vehicle image corresponding to vehicle keypoints, and the predicted background heatmap consists of the probability values of the pixels of the sample vehicle image corresponding to non-vehicle-keypoints; and a network parameter adjustment sub-module 1106, configured to adjust the network parameters of the keypoint detection network according to the difference between the predicted localization heatmap and a preset standard localization heatmap and the difference between the predicted background heatmap and a preset standard background heatmap.
Optionally, as shown in FIG. 11, the vehicle area determination module 1002 further includes: a label value determination sub-module 1107, configured to label, based on Gaussian blurring, the pixels within a predetermined range of the pixel corresponding to a wheel keypoint in the preset standard localization heatmap.
Optionally, the lane determination module 1003 is further configured to: determine the overlap region of the vehicle area and the lane area; determine the proportion of the overlap region within the vehicle area as the vehicle overlap degree; and determine that the vehicle is located in the lane area if the vehicle overlap degree is greater than a preset threshold.
Optionally, as shown in FIG. 11, the apparatus further includes: a first lane area determination module 1108, configured to determine a predetermined region of the image to be tested as the at least one lane area when the image acquisition device used to acquire the image to be tested is fixed; or a second lane area determination module 1109, configured to input the image to be tested into a pre-trained lane recognition network, which determines the at least one lane area in the image to be tested.
Since the apparatus embodiments essentially correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments. The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected as actually needed to achieve the purpose of the solution of at least one embodiment of the present disclosure, which those of ordinary skill in the art can understand and implement without inventive effort.
The present disclosure further provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, is able to implement the lane detection method of any embodiment of the present disclosure.
FIG. 13 shows a schematic diagram of a more specific computer device hardware structure provided by an embodiment of the present disclosure. The device may include a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040 and a bus 1050, wherein the processor 1010, the memory 1020, the input/output interface 1030 and the communication interface 1040 communicate with one another inside the device via the bus 1050.
The processor 1010 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), one or more integrated circuits, or the like, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of this specification.
The memory 1020 may be implemented as a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented in software or firmware, the relevant program code is stored in the memory 1020 and invoked and executed by the processor 1010.
The input/output interface 1030 is configured to connect input/output modules for information input and output. An input/output module may be configured in the device as a component (not shown in the figure) or externally connected to the device to provide the corresponding functions. Input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors and the like; output devices may include a display, a speaker, a vibrator, indicator lights and the like.
The communication interface 1040 is configured to connect a communication module (not shown in the figure) to enable the device to interact with other devices, wherein the communication module may communicate in a wired manner (e.g. USB, network cable) or wirelessly (e.g. mobile network, WIFI, Bluetooth).
The bus 1050 includes a path that carries information between the components of the device (e.g. the processor 1010, the memory 1020, the input/output interface 1030 and the communication interface 1040).
It should be noted that although only the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050 are shown for the above device, in specific implementations the device may also include other components necessary for normal operation. Moreover, those skilled in the art will understand that the above device may contain only the components necessary to implement the solutions of the embodiments of this specification, rather than all the components shown in the figure.
The present disclosure further provides a computer-readable storage medium on which a computer program is stored, and the program, when executed by a processor, is able to implement the lane detection method of any embodiment of the present disclosure.
The present disclosure further provides a computer program product including a computer program, and the program, when executed by a processor, is able to implement the lane detection method of any embodiment of the present disclosure.
The non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like, which the present disclosure does not limit.
In some optional embodiments, an embodiment of the present disclosure provides a computer program product including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the lane detection method provided by any of the above embodiments. The computer program product may be implemented specifically in hardware, software, or a combination thereof.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of what is disclosed herein. The present disclosure is intended to cover any variations, uses or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
The above are merely preferred embodiments of the present disclosure and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (19)

  1. A lane detection method, comprising:
    acquiring an image to be tested, wherein the image to be tested includes at least one lane area;
    detecting the image to be tested to determine a vehicle in the image to be tested;
    determining a vehicle area of the vehicle from the image to be tested, wherein the vehicle area represents the road area occupied by the vehicle;
    determining, according to a relative position of the vehicle area and the lane area in the image to be tested, a lane in which the vehicle is located.
  2. The method according to claim 1, wherein detecting the image to be tested to determine the vehicle in the image to be tested comprises:
    inputting the image to be tested into a pre-trained vehicle detection network, and detecting, by the vehicle detection network, the vehicle contained in the image to be tested;
    cropping a vehicle foreground image of the vehicle from the image to be tested based on a vehicle detection box containing the vehicle detected by the vehicle detection network in the image to be tested.
  3. The method according to claim 1 or 2, wherein determining the vehicle area of the vehicle from the image to be tested comprises:
    inputting the vehicle foreground image into a pre-trained keypoint detection network, and detecting, by the keypoint detection network, vehicle keypoints of the vehicle in the vehicle foreground image;
    determining the vehicle area of the vehicle based on a polygon enclosed by the vehicle keypoints.
  4. The method according to claim 3, wherein inputting the vehicle foreground image into the pre-trained keypoint detection network, and detecting, by the keypoint detection network, the vehicle keypoints of the vehicle in the vehicle foreground image comprises:
    inputting the vehicle foreground image into the keypoint detection network, and outputting, by the keypoint detection network, the probability value of each pixel of the vehicle foreground image corresponding to a vehicle keypoint, as a localization heatmap;
    determining, based on the pixel position with the largest pixel value in the localization heatmap, the pixel of the vehicle foreground image corresponding to the pixel position as the vehicle keypoint.
  5. The method according to claim 3 or 4, wherein the keypoint detection network is trained as follows:
    obtaining, based on a keypoint detection network to be trained, a predicted localization heatmap and a predicted background heatmap of a sample vehicle image, wherein the predicted localization heatmap consists of probability values of pixels of the sample vehicle image corresponding to vehicle keypoints, and the predicted background heatmap consists of probability values of the pixels of the sample vehicle image corresponding to non-vehicle-keypoints;
    adjusting network parameters of the keypoint detection network according to a difference between the predicted localization heatmap and a preset standard localization heatmap and a difference between the predicted background heatmap and a preset standard background heatmap.
  6. The method according to claim 5, wherein pixels within a predetermined range of the pixel corresponding to a vehicle keypoint in the preset standard localization heatmap are labeled based on Gaussian blurring.
  7. The method according to any one of claims 1 to 6, wherein determining, according to the relative position of the vehicle area and the lane area in the image to be tested, the lane in which the vehicle is located comprises:
    determining an overlap region of the vehicle area and the lane area;
    determining a proportion of the overlap region within the vehicle area as a vehicle overlap degree;
    determining that the vehicle is located in the lane area if the vehicle overlap degree is greater than a preset threshold.
  8. The method according to any one of claims 1 to 7, further comprising, before determining, according to the relative position of the vehicle area and the lane area in the image to be tested, the lane in which the vehicle is located, any one of the following:
    determining a predetermined region of the image to be tested as the at least one lane area when an image acquisition device used to acquire the image to be tested is fixed;
    inputting the image to be tested into a pre-trained lane recognition network, and determining, by the lane recognition network, the at least one lane area in the image to be tested.
  9. A lane detection apparatus, comprising:
    an image acquisition module, configured to acquire an image to be tested, wherein the image to be tested includes at least one lane area;
    a vehicle area determination module, configured to detect the image to be tested to determine a vehicle in the image to be tested, and to determine a vehicle area of the vehicle from the image to be tested, wherein the vehicle area represents the road area occupied by the vehicle;
    a lane determination module, configured to determine, according to a relative position of the vehicle area and the lane area in the image to be tested, a lane in which the vehicle is located.
  10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements:
    acquiring an image to be tested, wherein the image to be tested includes at least one lane area;
    detecting the image to be tested to determine a vehicle in the image to be tested;
    determining a vehicle area of the vehicle from the image to be tested, wherein the vehicle area represents the road area occupied by the vehicle;
    determining, according to a relative position of the vehicle area and the lane area in the image to be tested, a lane in which the vehicle is located.
  11. The computer device according to claim 10, wherein, when detecting the image to be tested to determine the vehicle in the image to be tested, the processor, when executing the program, implements:
    inputting the image to be tested into a pre-trained vehicle detection network, and detecting, by the vehicle detection network, the vehicle contained in the image to be tested;
    cropping a vehicle foreground image of the vehicle from the image to be tested based on a vehicle detection box containing the vehicle detected by the vehicle detection network in the image to be tested.
  12. The computer device according to claim 10 or 11, wherein, when determining the vehicle area of the vehicle from the image to be tested, the processor, when executing the program, implements:
    inputting the vehicle foreground image into a pre-trained keypoint detection network, and detecting, by the keypoint detection network, vehicle keypoints of the vehicle in the vehicle foreground image;
    determining the vehicle area of the vehicle based on a polygon enclosed by the vehicle keypoints.
  13. The computer device according to claim 12, wherein, when inputting the vehicle foreground image into the pre-trained keypoint detection network and detecting, by the keypoint detection network, the vehicle keypoints of the vehicle in the vehicle foreground image, the processor, when executing the program, implements:
    inputting the vehicle foreground image into the keypoint detection network, and outputting, by the keypoint detection network, the probability value of each pixel of the vehicle foreground image corresponding to a vehicle keypoint, as a localization heatmap;
    determining, based on the pixel position with the largest pixel value in the localization heatmap, the pixel of the vehicle foreground image corresponding to the pixel position as the vehicle keypoint.
  14. The computer device according to claim 12 or 13, wherein the processor, when executing the program, further implements:
    obtaining, based on a keypoint detection network to be trained, a predicted localization heatmap and a predicted background heatmap of a sample vehicle image, wherein the predicted localization heatmap consists of probability values of pixels of the sample vehicle image corresponding to vehicle keypoints, and the predicted background heatmap consists of probability values of the pixels of the sample vehicle image corresponding to non-vehicle-keypoints;
    adjusting network parameters of the keypoint detection network according to a difference between the predicted localization heatmap and a preset standard localization heatmap and a difference between the predicted background heatmap and a preset standard background heatmap.
  15. The computer device according to claim 14, wherein the processor, when executing the program, further implements:
    labeling, based on Gaussian blurring, pixels within a predetermined range of the pixel corresponding to a vehicle keypoint in the preset standard localization heatmap.
  16. The computer device according to any one of claims 10 to 15, wherein, when determining, according to the relative position of the vehicle area and the lane area in the image to be tested, that the vehicle is located in the lane area, the processor, when executing the program, implements:
    determining an overlap region of the vehicle area and the lane area;
    determining a proportion of the overlap region within the vehicle area as a vehicle overlap degree;
    determining that the vehicle is located in the lane area if the vehicle overlap degree is greater than a preset threshold.
  17. The computer device according to any one of claims 10 to 16, wherein the processor, when executing the program, further implements:
    determining a predetermined region of the image to be tested as the at least one lane area when an image acquisition device used to acquire the image to be tested is fixed; or,
    inputting the image to be tested into a pre-trained lane recognition network, and determining, by the lane recognition network, the at least one lane area in the image to be tested.
  18. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 8.
  19. A computer program product comprising a computer program, wherein the program, when executed by a processor, is able to implement the method according to any one of claims 1 to 8.
PCT/CN2021/102639 2021-02-26 2021-06-28 Lane detection method, apparatus, device and storage medium WO2022179016A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110221186.3A CN112784817B (zh) 2021-02-26 2021-02-26 Method, apparatus, device and storage medium for detecting the lane in which a vehicle is located
CN202110221186.3 2021-02-26

Publications (1)

Publication Number Publication Date
WO2022179016A1 (zh)

Family

ID=75762011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102639 WO2022179016A1 (zh) 2021-02-26 2021-06-28 Lane detection method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN112784817B (zh)
WO (1) WO2022179016A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784817B (zh) * 2021-02-26 2023-01-31 上海商汤科技开发有限公司 车辆所在车道检测方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190050993A1 (en) * 2017-08-09 2019-02-14 Samsung Electronics Co., Ltd. Lane detection method and apparatus
CN109711264A (zh) * 2018-11-30 2019-05-03 武汉烽火众智智慧之星科技有限公司 Bus lane occupancy detection method and device
CN110909626A (zh) * 2019-11-04 2020-03-24 上海眼控科技股份有限公司 Vehicle line-pressing detection method, apparatus, mobile terminal and storage medium
CN111259706A (zh) * 2018-12-03 2020-06-09 初速度(苏州)科技有限公司 Method and system for judging whether a vehicle presses a lane line
CN111340877A (zh) * 2020-03-25 2020-06-26 北京爱笔科技有限公司 Vehicle positioning method and device
CN111368639A (zh) * 2020-02-10 2020-07-03 浙江大华技术股份有限公司 Vehicle line-crossing determination method, apparatus, computer device and storage medium
CN112784817A (zh) * 2021-02-26 2021-05-11 上海商汤科技开发有限公司 Method, apparatus, device and storage medium for detecting the lane in which a vehicle is located

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652245B (zh) * 2020-04-28 2024-04-30 中国平安财产保险股份有限公司 Vehicle contour detection method, apparatus, computer device and storage medium
CN112052807B (zh) * 2020-09-10 2022-06-10 讯飞智元信息科技有限公司 Vehicle position detection method, apparatus, electronic device and storage medium
CN112348035B (zh) * 2020-11-11 2024-05-24 东软睿驰汽车技术(沈阳)有限公司 Vehicle keypoint detection method, apparatus and electronic device

Also Published As

Publication number Publication date
CN112784817B (zh) 2023-01-31
CN112784817A (zh) 2021-05-11

Similar Documents

Publication Publication Date Title
JP7052663B2 Object detection device, object detection method, and computer program for object detection
JP6821762B2 System and method for detecting POI changes using a convolutional neural network
JP6866440B2 Object identification method, apparatus, device, vehicle and medium
JP6144656B2 System and method for warning a driver that visual recognition of a pedestrian may be difficult
JP5453538B2 Cost-effective system and method for detecting, classifying and tracking pedestrians using a near-infrared camera
JP6230751B1 Object detection device and object detection method
JP2015514278A Method, system, product, and computer program for multi-cue object detection and analysis
CN111274926B Image data screening method, apparatus, computer device and storage medium
CN112036385B Parking space position correction method, apparatus, electronic device and readable storage medium
CN111507204A Countdown signal light detection method, apparatus, electronic device and storage medium
JP2021149863A Object state identification device, object state identification method, computer program for object state identification, and control device
CN112634368A Method, apparatus and electronic device for generating a spatial AND-OR graph model of scene targets
CN111079621A Object detection method, apparatus, electronic device and storage medium
JP7226368B2 Object state identification device
CN108629225B Vehicle detection method based on multiple sub-images and image saliency analysis
WO2022179016A1 Lane detection method, apparatus, device and storage medium
EP3376438A1 A system and method for detecting change using ontology based saliency
CN113743163A Traffic target recognition model training method, traffic target positioning method, and apparatus
CN112598743B Pose estimation method for monocular vision images and related apparatus
CN113012215A Spatial positioning method, system and device
CN113870322A Multi-target tracking method, apparatus and computer device based on an event camera
Song et al. Exploring vision-based techniques for outdoor positioning systems: A feasibility study
CN112287905A Vehicle damage recognition method, apparatus, device and storage medium
US20220164584A1 Method and system for detecting lane pattern
CN115841660A Distance prediction method, apparatus, device, storage medium and vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21927448; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21927448; Country of ref document: EP; Kind code of ref document: A1)