CN113297939A - Obstacle detection method, system, terminal device and storage medium - Google Patents

Obstacle detection method, system, terminal device and storage medium

Info

Publication number
CN113297939A
CN113297939A (application CN202110534201.XA)
Authority
CN
China
Prior art keywords
image
obstacle
lane
free
barrier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110534201.XA
Other languages
Chinese (zh)
Other versions
CN113297939B (en)
Inventor
顾在旺
程骏
庞建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN202110534201.XA priority Critical patent/CN113297939B/en
Publication of CN113297939A publication Critical patent/CN113297939A/en
Application granted granted Critical
Publication of CN113297939B publication Critical patent/CN113297939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an obstacle detection method, system, terminal device and storage medium, wherein the method comprises the following steps: performing lane line detection on an image to be detected to obtain position information of the lane lines; determining a lane driving image in the image to be detected according to the position information of the lane lines, and performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image; and comparing the lane driving image with the obstacle-free image to obtain obstacle information. By performing lane line detection on the image to be detected, the position information of each lane line can be determined; based on that position information, the lane driving image in the image to be detected can be determined; performing obstacle-free prediction on the lane driving image yields the corresponding obstacle-free image; and comparing the lane driving image with the obstacle-free image reveals the obstacle information on the lane driving image.

Description

Obstacle detection method, system, terminal device and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a method, a system, a terminal device, and a storage medium for detecting an obstacle.
Background
With continuous economic and social development, the number of automobiles has grown rapidly, and social problems such as urban traffic congestion, driving safety, energy supply and environmental pollution have become prominent. These problems stem from the contradiction between the existing traffic infrastructure and the vehicles that use it, which manifests not only as traffic congestion but also as pollution caused by congested traffic, relatively backward road conditions, and the safety hazards that vehicles pose to people's lives and property. The losses of life and property caused by traffic accidents, which mainly involve vehicle collisions, are increasingly serious, so the detection of obstacles in lanes during driving has received more and more attention.
In existing obstacle detection processes, whether obstacles exist in a lane is detected by a target detection algorithm based on deep learning. However, because the categories of obstacles are not fixed, such an algorithm cannot detect obstacles of all categories, which reduces the accuracy of obstacle detection.
Disclosure of Invention
The embodiments of the application provide an obstacle detection method, system, terminal device and storage medium, aiming to solve the problem that the accuracy of existing obstacle detection is low because a target detection algorithm based on deep learning cannot detect obstacles of all categories.
In a first aspect, an embodiment of the present application provides an obstacle detection method, where the method includes:
responding to the received image to be detected, and carrying out lane line detection on the image to be detected to obtain the position information of the lane line;
determining a lane driving image in the image to be detected according to the position information of the lane line;
carrying out obstacle-free prediction on the lane driving image to obtain an obstacle-free image;
and comparing the lane driving image with the obstacle-free image to obtain obstacle information.
Compared with the prior art, the embodiments of the application have the following advantages: by performing lane line detection on the image to be detected, the position information of each lane line can be effectively determined; based on that position information, the lane driving image in the image to be detected can be effectively determined; performing obstacle-free prediction on the lane driving image yields the corresponding obstacle-free image; and comparing the lane driving image with the obstacle-free image effectively determines the obstacle information on the lane driving image.
Further, the image comparison of the lane driving image and the obstacle-free image to obtain the obstacle information includes:
respectively obtaining pixel values of all pixel points on the lane driving image and the obstacle-free image to obtain a first pixel value set and a second pixel value set;
determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generating the obstacle information according to the obstacle image.
Further, the generating the obstacle information from the obstacle image includes:
performing image filtering on the obstacle image according to a preset parameter range to obtain a filtered image, and extracting an image contour in the filtered image;
performing image extraction on the obstacle image according to the image contour to obtain an obstacle extraction image, and extracting image features in the obstacle extraction image;
determining the type of the obstacle in the obstacle extraction image according to the image characteristics and the image contour, and acquiring the image coordinates of the obstacle image in the image to be detected;
determining the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected to obtain the obstacle coordinates;
generating the obstacle information according to the obstacle coordinates and the type of the obstacle.
Further, said determining an obstacle image on the lane driving image from the first set of pixel values and the second set of pixel values comprises:
according to the first pixel value set and the second pixel value set, respectively determining pixel difference values of the lane driving image and the obstacle-free image at the same pixel points;
if the pixel difference value of any pixel point is larger than a preset threshold value, marking the pixel point on the lane driving image;
and determining an image formed by the marked pixel points as the obstacle image on the lane driving image.
Further, the extracting the image feature in the obstacle extraction image includes:
carrying out gray level processing on the obstacle extraction image to obtain a gray level image, and carrying out normalization processing on the gray level image;
and respectively extracting the gradient of each pixel point in the gray level image after normalization processing to obtain the image characteristics.
Further, the performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image includes:
and inputting the lane driving image into a pre-trained generation type countermeasure network for image generation to obtain the barrier-free image.
Further, before inputting the lane driving image into the pre-trained generative adversarial network for image generation, the method further comprises:
inputting a lane sample image into a generator in the generative adversarial network for image generation to obtain a lane generated image;
inputting the generated image and the obstacle-free image corresponding to the lane sample image into a discriminator in the generative adversarial network for image discrimination to obtain an image discrimination result;
and performing loss calculation according to the image discrimination result to obtain a model loss value, and updating the parameters of the generator and the discriminator according to the model loss value until the generator and the discriminator converge, thereby obtaining the pre-trained generative adversarial network.
In a second aspect, an embodiment of the present application provides an obstacle detection system, including:
the lane line detection module is used for responding to the received image to be detected and carrying out lane line detection on the image to be detected to obtain the position information of the lane line;
the obstacle-free prediction module is used for determining a lane driving image in the image to be detected according to the position information of the lane line, wherein the lane driving image is an area image formed by the lane lines in the image to be detected, and performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image;
and the image comparison module is used for comparing the lane driving image with the obstacle-free image to obtain obstacle information.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the method described above.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method as described above.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the obstacle detection method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
Fig. 1 is a flowchart of an obstacle detection method according to a first embodiment of the present application;
fig. 2 is a flowchart of an obstacle detection method according to a second embodiment of the present application;
fig. 3 is a schematic structural diagram of an obstacle detection system according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Example one
Please refer to fig. 1, which is a flowchart of an obstacle detection method according to a first embodiment of the present application, including the steps of:
and step S10, responding to the received image to be detected, and carrying out lane line detection on the image to be detected to obtain the position information of the lane line.
Specifically, lane line detection is performed on the image to be detected according to a preset lane line detection algorithm to obtain the position information of the lane lines in the image. The preset algorithm can be chosen as required; for example, it can be a Gaussian blur algorithm, a Canny edge detection algorithm or a Hough transform algorithm, and it is used to extract the positions of the lane lines in the image to be detected.
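As a sketch of one of the named options, a minimal Hough transform for straight-line detection can be written with NumPy alone. The synthetic edge map, function name and parameters below are illustrative and not taken from the patent:

```python
import numpy as np

def hough_lines(edges, num_thetas=180):
    """Vote each edge pixel into (rho, theta) accumulator bins."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))  # largest possible |rho|
    thetas = np.deg2rad(np.arange(num_thetas))
    acc = np.zeros((2 * diag, num_thetas), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # line model: x*cos(theta) + y*sin(theta) = rho
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(num_thetas)] += 1
    return acc, diag

# synthetic edge map with a single vertical "lane line" at x = 20
edges = np.zeros((50, 50), dtype=np.uint8)
edges[:, 20] = 1
acc, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
rho, theta_deg = rho_idx - diag, theta_idx  # peak bin recovers the line
```

The accumulator peak lands at theta = 0° and rho = 20, i.e. the vertical line that was drawn; a real pipeline would run this on a Canny edge map and keep every bin above a vote threshold.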
In this step, if there are a plurality of different lane lines in the image to be detected, the position information corresponding to each lane line is obtained after lane line detection. Optionally, before lane line detection is performed, the method further includes: performing image erosion on the image to be detected according to an erosion operator, and determining the regions in the image that can contain the erosion operator. Image erosion is a process of eliminating image boundary points so that the image boundary shrinks inwards; it can eliminate small, meaningless pixel points in the image to be detected and thereby improve the accuracy of lane line detection.
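A minimal sketch of that erosion step, assuming a binary image and a square erosion operator (both assumptions, since the patent does not fix them):

```python
import numpy as np

def erode(binary, k=3):
    """Binary erosion: a pixel survives only if its whole k x k
    neighbourhood is foreground, so boundaries shrink inwards and
    isolated specks disappear."""
    pad = k // 2
    padded = np.pad(binary, pad, constant_values=0)
    out = np.ones_like(binary)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + binary.shape[0], dx:dx + binary.shape[1]]
    return out

img = np.zeros((7, 7), dtype=np.uint8)
img[1:6, 1:6] = 1   # a 5x5 blob that can "contain" the 3x3 operator
img[0, 6] = 1       # a meaningless isolated pixel
eroded = erode(img)
```

After erosion the 5×5 blob shrinks to its 3×3 interior and the isolated pixel vanishes, which is exactly the noise-removal effect described above.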
And step S20, determining a lane driving image in the image to be detected according to the position information of the lane line, and carrying out obstacle-free prediction on the lane driving image to obtain an obstacle-free image.
The lane driving image is an area image formed by the lane lines in the image to be detected, i.e., it corresponds to a lane on the road shown in the image to be detected.
In this step, reverse image selection is performed on the image to be detected according to the position information of the lane lines to obtain a background image; reverse selection picks out the part of the image to be detected outside the region delimited by the lane line positions. The background image is then filled with a preset fill colour, which can be set as required (for example, black or red), so that the lane driving image in the image to be detected is determined.
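The reverse selection and filling can be sketched as a mask operation; `frame`, `lane_mask` and the fill colour are illustrative assumptions, not names from the patent:

```python
import numpy as np

# `frame` stands in for the image to be detected; `lane_mask` marks the
# region enclosed by the detected lane lines.
frame = np.full((4, 6, 3), 200, dtype=np.uint8)
lane_mask = np.zeros((4, 6), dtype=bool)
lane_mask[:, 2:4] = True                          # columns between the lane lines

fill_color = np.array([0, 0, 0], dtype=np.uint8)  # preset fill colour: black
lane_image = frame.copy()
lane_image[~lane_mask] = fill_color               # reverse-select the background and fill it
```

Everything outside the lane region is blanked while the lane pixels keep their original values, yielding the lane driving image.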
In this step, the obstacle-free image is obtained by predicting what the lane corresponding to the lane driving image would look like without any obstacles. For example, if the obtained lane driving image is lane driving image a1 and, based on the position information of the lane lines, the lane corresponding to a1 in the image to be detected is lane b1, then performing obstacle-free prediction on a1 yields the obstacle-free image c1 corresponding to lane b1 without obstacles.
Further, the performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image includes: inputting the lane driving image into a pre-trained generative adversarial network for image generation to obtain the obstacle-free image. The pre-trained network performs obstacle-free prediction on the input lane driving image, predicting the obstacle-free image that corresponds to the lane driving image when no obstacle is present.
Further, before the inputting the lane driving image into the pre-trained generative adversarial network for image generation, the method further includes:
inputting a lane sample image into a generator in the generative adversarial network for image generation to obtain a lane generated image;
the discriminator judges whether the image produced by the generator is a real image, creating an adversarial game over the image data that effectively improves the accuracy of the lane images produced by the generator in the generative adversarial network;
inputting the generated image and the obstacle-free image corresponding to the lane sample image into a discriminator in the generative adversarial network for image discrimination to obtain an image discrimination result;
the discriminator checks the generated image output by the generator against the obstacle-free image corresponding to the input lane sample image to judge whether the generated image is a real image; when the discriminator judges that it is, the generated image is taken as the obstacle-free image corresponding to the lane sample image;
performing loss calculation according to the image discrimination result to obtain a model loss value, and updating the parameters of the generator and the discriminator according to the model loss value until the generator and the discriminator converge, thereby obtaining the pre-trained generative adversarial network.
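The alternating training described above can be sketched with a generic GAN objective. The tiny linear networks, tensor sizes and binary cross-entropy loss below are stand-ins, not the patent's actual architecture (PyTorch assumed):

```python
import torch
from torch import nn

# Minimal stand-ins: real models would be convolutional image-to-image networks.
G = nn.Linear(16, 16)                              # generator: lane image -> obstacle-free image
D = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())  # discriminator: real-vs-generated score
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

lane_samples = torch.randn(4, 16)     # flattened lane sample images (illustrative)
obstacle_free = torch.randn(4, 16)    # their obstacle-free ground-truth images

for _ in range(3):
    # Discriminator step: score real obstacle-free images as 1, generated ones as 0.
    fake = G(lane_samples).detach()
    d_loss = (bce(D(obstacle_free), torch.ones(4, 1))
              + bce(D(fake), torch.zeros(4, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: update G so the discriminator scores its output as real.
    g_loss = bce(D(G(lane_samples)), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In practice the loop runs until both losses stabilise (the convergence condition in the text), after which only the generator is kept for inference.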
optionally, in the step, when the image to be detected is transmitted in a video stream manner, continuous three-frame images in the video stream are input into the pre-trained generative countermeasure network, so as to obtain an obstacle-free image corresponding to the image to be detected of a first frame image in the continuous three-frame images. For example, when the video stream includes the first frame image d1, the second frame image d2, the third frame image d3 and the fourth frame image d4, if the first frame image d1 is an image to be detected, the first frame image d1, the second frame image d2 and the third frame image d3 are input into the pre-trained generated countermeasure network, and an obstacle-free image corresponding to the first frame image d1 is obtained.
And step S30, comparing the lane driving image with the non-obstacle image to obtain obstacle information.
By comparing the lane driving image with the obstacle-free image, the obstacle information on the lane driving image can be effectively recognized.
In this embodiment, by performing lane line detection on the image to be detected, the position information of each lane line can be effectively determined; based on that position information, the lane driving image in the image to be detected can be effectively determined; performing obstacle-free prediction on the lane driving image yields the corresponding obstacle-free image; and comparing the lane driving image with the obstacle-free image effectively determines the obstacle information on the lane driving image.
Example two
Please refer to fig. 2, which is a flowchart of an obstacle detection method according to a second embodiment of the present application, where the second embodiment is used to refine step S30, and includes:
step S31, obtaining pixel values of each pixel point on the lane driving image and the obstacle-free image, respectively, to obtain a first pixel value set and a second pixel value set.
Because the lane driving image and the obstacle-free image have the same size, the pixel values of the pixel points on both images are obtained according to the same preset ordering rule, so that pixel points at the same position in the first pixel value set and the second pixel value set share the same pixel coordinates.
For example, when both the lane driving image and the obstacle-free image are 2 × 2 pixels, the first pixel value set includes a pixel point e1, a pixel point e2, a pixel point e3, and a pixel point e4, the second pixel value set includes a pixel point e5, a pixel point e6, a pixel point e7, and a pixel point e8, pixel coordinates of the pixel point e1 are the same as those of the pixel point e5, pixel coordinates of the pixel point e2 are the same as those of the pixel point e6, pixel coordinates of the pixel point e3 are the same as those of the pixel point e7, and pixel coordinates of the pixel point e4 are the same as those of the pixel point e 8.
Step S32, determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generating the obstacle information according to the obstacle image.
Optionally, in this step, the determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set includes:
according to the first pixel value set and the second pixel value set, pixel difference values of the lane driving image and the barrier-free image on the same pixel point are respectively determined;
for example, when the lane driving image and the barrier-free image are both 2x2 pixel images, the first pixel value set includes a pixel point e1, a pixel point e2, a pixel point e3, and a pixel point e4, and the second pixel value set includes a pixel point e5, a pixel point e6, a pixel point e7, and a pixel point e8, the pixel values between a pixel point e1 and a pixel point e5, between a pixel point e2 and a pixel point e6, between a pixel point e3 and a pixel point e7, and between a pixel point e4 and a pixel point e8 are respectively calculated.
If the pixel difference value of any pixel point is larger than a preset threshold value, marking the pixel point on the lane driving image;
the preset threshold value can be set according to requirements, if the pixel difference value of any pixel point is larger than the preset threshold value, it is judged that an obstacle exists at the pixel point on the lane driving image, for example, when the pixel difference values between the pixel point e1 and the pixel point e5 and between the pixel point e2 and the pixel point e6 are larger than the preset threshold value, it is judged that an obstacle exists at the pixel point e1 and the pixel point e2 of the lane driving image, and the pixel point e1 and the pixel point e2 are marked on the lane driving image, so that the subsequent obstacle image can be effectively determined conveniently.
And determining an image formed by the marked pixel points as the obstacle image on the lane driving image.
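The three marking steps above can be sketched on the 2 × 2 example from the description; the pixel values and the threshold here are illustrative:

```python
import numpy as np

# 2x2 example mirroring pixel points e1..e4 (lane driving image) and
# e5..e8 (obstacle-free image); values are made-up grey levels.
lane = np.array([[50, 200], [120, 121]], dtype=np.int16)
free = np.array([[48, 90], [119, 121]], dtype=np.int16)

threshold = 30                      # preset threshold, chosen for the example
diff = np.abs(lane - free)          # pixel difference at identical coordinates
obstacle_mask = diff > threshold    # marked pixels form the obstacle image
```

Only the pixel whose difference (110) exceeds the threshold is marked, so the obstacle image here is the single top-right pixel.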
Further, the generating the obstacle information according to the obstacle image in this step includes:
performing image filtering on the obstacle image according to a preset parameter range to obtain a filtered image, and extracting an image contour in the filtered image;
the preset parameter range can be set according to requirements, the preset parameter range comprises a pixel brightness range and a pixel color range, and the image filtering is carried out on the obstacle image according to the preset parameter range, so that the accuracy of image contour extraction in the filtered image is effectively improved.
Performing image extraction on the obstacle image according to the image contour to obtain an obstacle extraction image, and extracting image features in the obstacle extraction image;
the image contour is a contour of a corresponding obstacle in the obstacle image, so that the obstacle image is subjected to image extraction through the image contour to obtain an obstacle extraction image corresponding to the obstacle in the obstacle image, and image features in the image are extracted through the obstacle, so that the accuracy of determining the type of the subsequent obstacle is improved.
Determining the type of the obstacle in the obstacle extraction image according to the image characteristics and the image contour, and acquiring the image coordinates of the obstacle image in the image to be detected;
the method includes the steps of obtaining feature similarity and contour similarity by respectively calculating similarity between image features and image contours and preset features and preset contours of preset types, determining the preset type as the type of an obstacle in an obstacle extraction image if the feature similarity and the contour similarity between the image features and the image contours and any preset type are larger than the corresponding preset similarity, and setting the preset similarity according to requirements, wherein the preset similarity can be set to be 80%, 75% or 80% and the like.
For example, when the feature similarity is greater than a first preset similarity and the contour similarity is greater than a second preset similarity with respect to the preset vehicle type, the vehicle type is determined as the type of the obstacle in the obstacle extraction image.
Determining the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected to obtain the obstacle coordinates, and generating the obstacle information according to the obstacle coordinates and the type of the obstacle.
Specifically, the image coordinates of the obstacle image in the image to be detected and the image coordinates of the obstacle extraction image in the obstacle image are obtained as a first coordinate and a second coordinate respectively; the coordinate mapping relation between the obstacle extraction image and the image to be detected is determined from the first and second coordinates; and the image coordinates of the obstacle extraction image are mapped according to this relation to obtain the obstacle coordinates.
Further, in this step, extracting the image features in the obstacle extraction image includes: performing gray-scale processing on the obstacle extraction image to obtain a gray-scale image, and normalizing the gray-scale image; then extracting the gradient of each pixel point in the normalized gray-scale image to obtain the image features. Converting the obstacle extraction image to gray scale and normalizing it effectively screens the pixel points in the obstacle extraction image and improves the accuracy of feature extraction from the gray-scale image.
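The gray-scale, normalization and per-pixel-gradient steps can be sketched with NumPy as below. The BT.601 luma weights, the BGR channel order, and min-max normalization are assumptions for illustration; the patent does not specify the exact conversion or normalization scheme.

```python
import numpy as np

def extract_gradient_features(obstacle_bgr):
    """Gray-scale the obstacle extraction image, normalize it, then take
    per-pixel gradients as the image features (magnitude and orientation)."""
    img = np.asarray(obstacle_bgr, dtype=np.float64)
    # Weighted gray-scale conversion (ITU-R BT.601 luma, assuming BGR order).
    gray = 0.114 * img[..., 0] + 0.587 * img[..., 1] + 0.299 * img[..., 2]
    # Min-max normalization to [0, 1] so gradients are comparable across images.
    rng = gray.max() - gray.min()
    gray = (gray - gray.min()) / (rng + 1e-12)
    # Central-difference gradients along rows (gy) and columns (gx).
    gy, gx = np.gradient(gray)
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return magnitude, orientation
```

On a horizontal intensity ramp, the magnitude is constant and the orientation is zero, since all the change runs along the columns.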
In this embodiment, the first pixel value set and the second pixel value set are obtained by respectively acquiring the pixel values of the pixel points in the lane driving image and the obstacle-free image, which makes it convenient to determine the pixel difference between the two images at the same pixel point.
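That comparison can be sketched as marking every pixel where the lane driving image deviates from the predicted obstacle-free image by more than a preset threshold. The threshold value (25 gray levels) and the per-channel maximum reduction for color images are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def obstacle_mask(lane_image, obstacle_free_image, threshold=25):
    """Boolean mask of pixels whose difference between the lane driving image
    and the obstacle-free image exceeds the preset threshold."""
    diff = np.abs(lane_image.astype(np.int32) - obstacle_free_image.astype(np.int32))
    if diff.ndim == 3:
        # For color images, take the largest per-channel deviation.
        diff = diff.max(axis=-1)
    return diff > threshold
```

The image formed by the marked pixel points is then treated as the obstacle image on the lane driving image.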
EXAMPLE III
Fig. 3 shows a schematic structural diagram of an obstacle detection system 100 provided in the third embodiment of the present application. The system corresponds to the obstacle detection method described in the above embodiments; for convenience of description, only the portions related to the embodiments of the present application are shown.
Referring to fig. 3, the system includes a lane line detection module 10, an obstacle-free prediction module 11 and an image comparison module 12, wherein:

The lane line detection module 10 is used for, in response to a received image to be detected, performing lane line detection on the image to be detected to obtain position information of the lane line.

The obstacle-free prediction module 11 is configured to determine a lane driving image in the image to be detected according to the position information of the lane line, where the lane driving image is the area image formed by the lane lines in the image to be detected, and to perform obstacle-free prediction on the lane driving image to obtain an obstacle-free image.
The obstacle-free prediction module 11 is further configured to input the lane driving image into a pre-trained generative adversarial network for image generation to obtain the obstacle-free image.

Optionally, the obstacle-free prediction module 11 is further configured to: input a lane sample image into the generator of the generative adversarial network for image generation to obtain a lane generation image;

input the lane generation image and the obstacle-free image corresponding to the lane sample image into the discriminator of the generative adversarial network for image discrimination to obtain an image discrimination result; and

perform a loss calculation according to the image discrimination result to obtain a model loss value, and update the parameters of the generator and the discriminator respectively according to the model loss value until the generator and the discriminator converge, thereby obtaining the pre-trained generative adversarial network.
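The alternating generator/discriminator updates described above can be illustrated on a toy 1-D problem. This is a didactic sketch of adversarial training with hand-derived logistic gradients, not the image-to-image network of the patent; the toy data distribution, learning rate, and model forms are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    x = np.clip(x, -60.0, 60.0)  # numerical safety
    return 1.0 / (1.0 + np.exp(-x))

def train_tiny_gan(steps=2000, lr=0.05):
    """Alternate discriminator and generator updates: the generator g(z) = w*z + b
    learns to move uniform noise toward real samples from N(3, 0.5), while the
    discriminator d(x) = sigmoid(u*x + v) learns to tell real from generated."""
    w, b = 0.1, 0.0  # generator parameters
    u, v = 0.1, 0.0  # discriminator parameters
    for _ in range(steps):
        z = rng.uniform(-1.0, 1.0)
        real = rng.normal(3.0, 0.5)
        fake = w * z + b
        # Discriminator step: ascend log d(real) + log(1 - d(fake)).
        dr, df = sigmoid(u * real + v), sigmoid(u * fake + v)
        u += lr * ((1.0 - dr) * real - df * fake)
        v += lr * ((1.0 - dr) - df)
        # Generator step: ascend log d(fake) (non-saturating generator loss).
        df = sigmoid(u * fake + v)
        w += lr * (1.0 - df) * u * z
        b += lr * (1.0 - df) * u
    return (w, b), (u, v)

(gen_w, gen_b), (disc_u, disc_v) = train_tiny_gan()
```

Training stops here after a fixed number of steps; the patent instead trains until generator and discriminator converge, with the discriminator judging generated lane images against obstacle-free ground-truth images.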
The image comparison module 12 is configured to compare the lane driving image with the obstacle-free image to obtain obstacle information.

The image comparison module 12 is further configured to: respectively obtain the pixel values of the pixel points in the lane driving image and the obstacle-free image to obtain a first pixel value set and a second pixel value set; and

determine an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generate the obstacle information according to the obstacle image.
Optionally, the image comparison module 12 is further configured to: perform image filtering on the obstacle image according to a preset parameter range to obtain a filtered image, and extract an image contour from the filtered image;

perform image extraction on the obstacle image according to the image contour to obtain an obstacle extraction image, and extract the image features from the obstacle extraction image;

determine the type of the obstacle in the obstacle extraction image according to the image features and the image contour, and acquire the image coordinates of the obstacle image in the image to be detected;

determine the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected, so as to obtain the obstacle coordinates; and

generate the obstacle information according to the obstacle coordinates and the type of the obstacle.
Further, the image comparison module 12 is further configured to: determine, according to the first pixel value set and the second pixel value set, the pixel difference between the lane driving image and the obstacle-free image at each same pixel point;

if the pixel difference at any pixel point is greater than a preset threshold, mark that pixel point on the lane driving image; and

determine the image formed by the marked pixel points as the obstacle image on the lane driving image.

Further, the image comparison module 12 is further configured to: perform gray-scale processing on the obstacle extraction image to obtain a gray-scale image, and normalize the gray-scale image; and

extract the gradient of each pixel point in the normalized gray-scale image to obtain the image features.
In this embodiment, performing lane line detection on the image to be detected effectively determines the position information of the lane lines in that image. Based on this position information, the lane driving image in the image to be detected can be determined, and performing obstacle-free prediction on the lane driving image yields the corresponding obstacle-free image. Comparing the lane driving image with the obstacle-free image then effectively determines the obstacle information on the lane driving image.
It should be noted that the information interaction and execution processes between the above devices/modules, and their specific functions and technical effects, are based on the same concept as the method embodiments of the present application; reference may be made to the method embodiments for details, which are not repeated here.
Fig. 4 is a schematic structural diagram of a terminal device 2 according to a fourth embodiment of the present application. As shown in fig. 4, the terminal device 2 of this embodiment includes: at least one processor 20 (only one processor is shown in fig. 4), a memory 21, and a computer program 22 stored in the memory 21 and executable on the at least one processor 20, the steps of any of the various method embodiments described above being implemented when the computer program 22 is executed by the processor 20.
The terminal device 2 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 20 and the memory 21. Those skilled in the art will appreciate that fig. 4 is merely an example of the terminal device 2 and does not constitute a limitation of it; the device may include more or fewer components than those shown, combine some components, or use different components, such as an input-output device, a network access device, and the like.
The processor 20 may be a Central Processing Unit (CPU), or another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 21 may, in some embodiments, be an internal storage unit of the terminal device 2, such as a hard disk or memory of the terminal device 2. In other embodiments, the memory 21 may be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the terminal device 2. Further, the memory 21 may include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 21 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative; the division into modules or units is only one logical division, and other division manners are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An obstacle detection method, characterized in that the method comprises:
in response to receiving an image to be detected, performing lane line detection on the image to be detected to obtain position information of a lane line;
determining a lane driving image in the image to be detected according to the position information of the lane line, wherein the lane driving image is an area image formed by the lane line in the image to be detected;
performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image; and
comparing the lane driving image with the obstacle-free image to obtain obstacle information.
2. The obstacle detection method according to claim 1, wherein the comparing the lane driving image with the obstacle-free image to obtain obstacle information comprises:
respectively obtaining pixel values of the pixel points on the lane driving image and the obstacle-free image to obtain a first pixel value set and a second pixel value set; and
determining an obstacle image on the lane driving image according to the first pixel value set and the second pixel value set, and generating the obstacle information according to the obstacle image.
3. The obstacle detection method according to claim 2, wherein the generating the obstacle information from the obstacle image includes:
performing image filtering on the obstacle image according to a preset parameter range to obtain a filtered image, and extracting an image contour in the filtered image;
performing image extraction on the obstacle image according to the image contour to obtain an obstacle extraction image, and extracting image features in the obstacle extraction image;
determining the type of the obstacle in the obstacle extraction image according to the image features and the image contour, and acquiring the image coordinates of the obstacle image in the image to be detected;
determining the image coordinates of the obstacle extraction image in the image to be detected according to the image coordinates of the obstacle image in the image to be detected to obtain the obstacle coordinates;
generating the obstacle information according to the obstacle coordinates and the type of the obstacle.
4. The obstacle detection method according to claim 2, wherein said determining an obstacle image on the lane travel image from the first set of pixel values and the second set of pixel values includes:
according to the first pixel value set and the second pixel value set, respectively determining pixel difference values of the lane driving image and the obstacle-free image at the same pixel points;
if the pixel difference value of any pixel point is larger than a preset threshold value, marking the pixel point on the lane driving image;
and determining an image formed by the marked pixel points as the obstacle image on the lane driving image.
5. The obstacle detection method according to claim 3, wherein the extracting image features in the obstacle extraction image includes:
carrying out gray level processing on the obstacle extraction image to obtain a gray level image, and carrying out normalization processing on the gray level image;
and respectively extracting the gradient of each pixel point in the gray level image after normalization processing to obtain the image characteristics.
6. The obstacle detection method according to claim 1, wherein the performing obstacle-free prediction on the lane travel image to obtain an obstacle-free image includes:
inputting the lane driving image into a pre-trained generative adversarial network for image generation to obtain the obstacle-free image.
7. The obstacle detection method according to claim 6, wherein before inputting the lane driving image into the pre-trained generative adversarial network for image generation, the method further comprises:
inputting a lane sample image into a generator in the generative adversarial network for image generation to obtain a lane generation image;
inputting the lane generation image and the obstacle-free image corresponding to the lane sample image into a discriminator in the generative adversarial network for image discrimination to obtain an image discrimination result; and
performing loss calculation according to the image discrimination result to obtain a model loss value, and updating parameters of the generator and the discriminator respectively according to the model loss value until the generator and the discriminator converge, thereby obtaining the pre-trained generative adversarial network.
8. An obstacle detection system, comprising:
the lane line detection module is used for responding to the received image to be detected and carrying out lane line detection on the image to be detected to obtain the position information of the lane line;
the obstacle-free prediction module is used for determining a lane driving image in the image to be detected according to the position information of the lane line, wherein the lane driving image is an area image formed by the lane line in the image to be detected, and performing obstacle-free prediction on the lane driving image to obtain an obstacle-free image; and
the image comparison module is used for comparing the lane driving image with the obstacle-free image to obtain obstacle information.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202110534201.XA 2021-05-17 2021-05-17 Obstacle detection method, obstacle detection system, terminal device and storage medium Active CN113297939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110534201.XA CN113297939B (en) 2021-05-17 2021-05-17 Obstacle detection method, obstacle detection system, terminal device and storage medium


Publications (2)

Publication Number Publication Date
CN113297939A true CN113297939A (en) 2021-08-24
CN113297939B CN113297939B (en) 2024-04-16

Family

ID=77322386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110534201.XA Active CN113297939B (en) 2021-05-17 2021-05-17 Obstacle detection method, obstacle detection system, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN113297939B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797783A (en) * 2023-02-01 2023-03-14 北京有竹居网络技术有限公司 Method and device for generating barrier-free information, electronic equipment and storage medium
WO2023179027A1 (en) * 2022-03-24 2023-09-28 商汤集团有限公司 Road obstacle detection method and apparatus, and device and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006004188A (en) * 2004-06-17 2006-01-05 Daihatsu Motor Co Ltd Obstacle recognition method and obstacle recognition device
CN106228110A (en) * 2016-07-07 2016-12-14 浙江零跑科技有限公司 A kind of barrier based on vehicle-mounted binocular camera and drivable region detection method
CN108460760A (en) * 2018-03-06 2018-08-28 陕西师范大学 A kind of Bridge Crack image discriminating restorative procedure fighting network based on production
CN109188460A (en) * 2018-09-25 2019-01-11 北京华开领航科技有限责任公司 Unmanned foreign matter detection system and method
CN109993074A (en) * 2019-03-14 2019-07-09 杭州飞步科技有限公司 Assist processing method, device, equipment and the storage medium driven
CN110239592A (en) * 2019-07-03 2019-09-17 中铁轨道交通装备有限公司 A kind of active barrier of rail vehicle and derailing detection system
CN110765922A (en) * 2019-10-18 2020-02-07 华南理工大学 AGV is with two mesh vision object detection barrier systems
CN110929655A (en) * 2019-11-27 2020-03-27 厦门金龙联合汽车工业有限公司 Lane line identification method in driving process, terminal device and storage medium
CN111507145A (en) * 2019-01-31 2020-08-07 上海欧菲智能车联科技有限公司 Method, system and device for detecting barrier at storage position of embedded vehicle-mounted all-round looking system
CN112329552A (en) * 2020-10-16 2021-02-05 爱驰汽车(上海)有限公司 Obstacle detection method and device based on automobile



Also Published As

Publication number Publication date
CN113297939B (en) 2024-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant