CN113158976B - Ground arrow identification method, system, terminal and computer readable storage medium - Google Patents

Ground arrow identification method, system, terminal and computer readable storage medium

Info

Publication number
CN113158976B
CN113158976B (application CN202110523788.4A)
Authority
CN
China
Prior art keywords
arrow
ground
steering
matching
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110523788.4A
Other languages
Chinese (zh)
Other versions
CN113158976A (en)
Inventor
宋京
吴子章
王凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zongmu Anchi Intelligent Technology Co ltd
Zongmu Technology Shanghai Co Ltd
Original Assignee
Beijing Zongmu Anchi Intelligent Technology Co ltd
Zongmu Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zongmu Anchi Intelligent Technology Co ltd, Zongmu Technology Shanghai Co Ltd filed Critical Beijing Zongmu Anchi Intelligent Technology Co ltd
Priority to CN202110523788.4A
Publication of CN113158976A
Application granted
Publication of CN113158976B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a ground arrow identification method, system, terminal and computer readable storage medium. The ground arrow identification method comprises the following steps: extracting ground arrows and lane lines, and taking a steering arrow among the ground arrows as a steering arrow initial template; performing pixel-level row traversal on the picture from which the ground arrows and lane lines have been extracted, and detecting the straight arrows among the ground arrows; rotating the steering arrow initial template by a fixed angle to form steering arrow matching templates; performing pixel-level column traversal on the picture from which the ground arrows and lane lines have been extracted, scaling the steering arrow to generate candidate frames of different sizes, and performing template matching, position clustering and steering arrow type recognition between the candidate frames and the steering arrow matching templates so as to recognize the steering arrow; and combining the identified straight arrows and steering arrows to output the ground arrows. The invention requires neither a large training set nor pixel-level labels, which removes a great deal of labelling work, improves the arrow recognition rate and greatly reduces the CPU occupancy.

Description

Ground arrow identification method, system, terminal and computer readable storage medium
Technical Field
The invention belongs to the technical field of image processing, relates to a recognition method and a recognition system, and particularly relates to a recognition method, a recognition system, a terminal and a computer readable storage medium for ground arrows.
Background
Road markings are among the traffic rules to be observed while driving: they provide key information for road users, help drivers drive correctly and safely, and keep road traffic flowing smoothly. During driving, however, a marking on the road surface may go unnoticed for various reasons, or the driver may not know its specific meaning, which disturbs the normal traffic order and can easily cause accidents. Automatically extracting and recognizing road markings with existing technology can therefore better assist the driver in driving correctly.
Urban traffic accidents mostly occur near intersections. Road traffic sign recognition technology, an important research branch of advanced driver assistance systems, is mainly used to provide road information and therefore plays an irreplaceable role in driving safety. Conventional automatic segmentation of road-surface traffic signs is mostly based on a variety of image preprocessing techniques; for example, Foucher et al. proposed a method capable of recognizing crosswalks, arrow signs and other markings, which is mainly divided into two steps: extracting the road-surface traffic sign elements, and then connecting the sign components based on a single pattern or a repeating rectangular pattern. Neural network models have also been used, but most of them require training, verification and optimization on large amounts of data. This process needs a large amount of ground arrow data for supervised training of the model, and these data must be labeled at the pixel level, a labeling process that consumes a great deal of manpower. Moreover, a neural network behaves like a black box when the network model is optimized: the optimization may improve results on the current test data but may also degrade detection on earlier data, i.e., it is only a local optimization, not a global one. In terms of CPU occupancy, recognizing ground arrows with a neural network model occupies 3 to 7 times more CPU than a conventional feature matching algorithm.
Therefore, providing a ground arrow identification method, system, terminal and computer readable storage medium that avoid the heavy manual work of pixel-level labeling of large amounts of ground arrow data, and that overcome the low arrow recognition rate and excessive CPU occupancy of existing ground arrow recognition, is a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present invention is to provide a ground arrow identification method, system, terminal and computer readable storage medium, which are used to solve the problems in the prior art that a large amount of ground arrow data needs to be labeled at the pixel level, the labeling process consumes a great deal of manpower, and, in addition, the arrow recognition rate is low and the CPU occupancy is too high during ground arrow recognition.
To achieve the above and other related objects, an aspect of the present invention provides a ground arrow identification method, including: acquiring a ground ring view; extracting a ground arrow and a lane line from the ground ring view, and taking a steering arrow of the ground arrow as a steering arrow initial template; performing pixel level traversal on the picture after the ground arrow and the lane line are extracted, and detecting all straight arrows in the ground arrow; rotating the steering arrow initial template by a fixed angle to form a steering arrow matching template; performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, scaling the steering arrow to generate candidate frames with different sizes, and performing template matching, position clustering and steering arrow type recognition on the candidate frames and the steering arrow matching templates to recognize the steering arrow; the identified straight and turn arrows are combined to output a ground arrow in the ground ring view.
In an embodiment of the present invention, the step of performing pixel-level row traversal on the picture after the ground arrow and the lane line are extracted and detecting all the straight arrows in the ground arrow includes: performing pixel-level row traversal on the picture after the ground arrow and the lane lines are extracted, and saving the center position and line width of each row of the arrow; if it is determined that a continuous gradient of the arrow line width exists, determining whether the center points of the rows of the arrow can be fitted to a straight line; if yes, determining the arrow to be a straight arrow; if not, determining the arrow to be a non-straight arrow.
In an embodiment of the present invention, the step of determining whether the center point where each row of arrows is located can perform straight line fitting includes: calculating an included angle between straight lines formed by every two points in a plurality of continuous center points on the basis of continuous gradient of the line width of each line of arrows; judging whether the included angle is larger than an included angle threshold value or not; if yes, a plurality of continuous center points cannot be fitted into a straight line, and the arrow is a non-straight arrow; if not, a plurality of continuous center points can be fitted into a straight line, and the arrow is a straight arrow.
In an embodiment of the present invention, before rotating the initial template of the turning arrow by a fixed angle to form the matching template of the turning arrow, the ground arrow identification method further includes: pixel-level traversal is carried out on the pictures after the ground arrows and the lane lines are extracted, and slope included angles of the lane lines are calculated for central points of two continuous lines; removing the maximum included angle and the minimum included angle of all slope included angles, and calculating the average included angle of the rest slope included angles; calculating a car body course angle according to the average included angle; the body heading angle is defined as the fixed angle of rotation of the steering arrow initial template.
In an embodiment of the present invention, the vehicle body heading angle is calculated as θ′ = π/2 − θ, where θ′ is the vehicle body heading angle and θ is the average included angle.
In an embodiment of the present invention, the step of performing pixel-level column traversal on the image after extracting the ground arrow and the lane line, performing template matching, position clustering and turning arrow type recognition on the turning arrow after scaling the turning arrow to generate candidate frames with different sizes, and identifying the turning arrow includes: performing pixel-level column traversal on the picture after the ground arrow and the lane lines are extracted, and storing the center point position of each column of the arrow; scaling the center point position to generate a plurality of candidate frames with different sizes; interpolation is carried out on different candidate frames, so that the size of the candidate frames is the same as the size of the initial template of the steering arrow; and calculating the template matching degree between the interpolated candidate frame and the steering arrow initial template, removing the candidate frame with the matching degree smaller than the matching degree threshold, and storing the candidate frame with the maximum matching degree, the arrow type and the current position of the candidate frame.
In an embodiment of the present invention, the step of performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, scaling the steering arrow to generate candidate frames of different sizes, and performing template matching, position clustering and steering arrow type recognition to recognize the steering arrow further includes: classifying the positions of the steering arrows, for the results obtained after all the templates in each column are matched, by using the distances between the center point positions; traversing the center point positions of all template matching results, and calculating the distance between every two of them; if the distance is smaller than the distance threshold, considering the two template matching results to be matching results under the same turning arrow and attributing them to one tuple; if the distance is greater than or equal to the distance threshold, considering the two template matching results to be matching results under another turning arrow and attributing them to another tuple; and selecting the arrow type corresponding to the maximum matching degree in each tuple as the type of the steering arrow at that position.
In another aspect, the present invention provides a ground arrow recognition system, including: the acquisition module is used for acquiring the ground ring view; the extraction module is used for extracting a ground arrow and a lane line from the ground annular view, and taking a steering arrow of the ground arrow as a steering arrow initial template; the detection module is used for performing pixel level row traversal on the picture after the ground arrow and the lane line are extracted, and detecting all the straight arrows in the ground arrow; the rotating module is used for rotating the steering arrow initial template by a fixed angle to form a steering arrow matching template; the recognition module is used for performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, performing template matching, position clustering and turning arrow type recognition on the turning arrow after scaling the turning arrow to generate candidate frames with different sizes, so as to recognize the turning arrow; and the output module is used for combining the identified straight arrow and the identified turning arrow so as to output the ground arrow in the ground ring view.
Yet another aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of identifying a ground arrow.
In a final aspect, the present invention provides a ground arrow identification terminal, including: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory so as to enable the identification terminal to execute the ground arrow identification method.
In an embodiment of the invention, the ground arrow identification terminal includes a vehicle-mounted terminal.
As described above, the ground arrow identification method, system, terminal and computer readable storage medium of the present invention have the following beneficial effects:
First, the invention identifies arrows using triangle feature detection and template matching rather than a neural network model, so that no large training set or pixel-level labels are needed, greatly reducing the workload.
Second, the invention improves the arrow recognition rate; the algorithm is linear in space-time complexity, and the CPU occupancy is greatly reduced.
Third, the invention maintains a good recognition rate when handling multiple arrows and worn or deformed arrows.
Drawings
Fig. 1 is a schematic flow chart of a ground arrow recognition method according to an embodiment of the invention.
Fig. 2 shows an exemplary diagram of a turning arrow initial template of the present invention.
Fig. 3 is a schematic diagram of the ground ring view of the invention after semantic segmentation.
Fig. 4 shows an exemplary diagram of the ground arrows and lane lines extracted by the invention.
Fig. 5 shows an exemplary view of the turning arrow initial template rotated by a fixed angle in accordance with the present invention.
Fig. 6 shows a flowchart of the implementation of S15 of the present invention.
Fig. 7A shows a schematic diagram of the recognition effect of the final ground arrow of the present invention.
Fig. 7B shows a schematic view of the recognition effect of the final ground arrow of the present invention.
Fig. 7C shows a schematic view of the recognition effect of the final ground arrow of the present invention.
FIG. 8 is a schematic flow chart of a ground arrow recognition system according to an embodiment of the invention.
Description of element reference numerals
8 Ground arrow identification system
81 Acquisition module
82 Extraction module
83 Detection module
84 Rotary module
85 Identification module
86 Output module
S11~S16 Steps
S151~S159 Steps
Detailed Description
Other advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure of this specification, which describes embodiments of the present invention with reference to specific examples. The invention may also be practiced or applied through other, different embodiments, and the details in this specification may be modified or varied, based on different viewpoints and applications, without departing from the spirit of the present invention. It should be noted that the following embodiments and the features in the embodiments may be combined with each other without conflict.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention by way of illustration, and only the components related to the present invention are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
Example 1
The embodiment provides a ground arrow identification method, which is characterized by comprising the following steps:
acquiring a ground ring view;
extracting a ground arrow and a lane line from the ground ring view, and taking a steering arrow of the ground arrow as a steering arrow initial template;
performing pixel level traversal on the picture after the ground arrow and the lane line are extracted, and detecting all straight arrows in the ground arrow; rotating the steering arrow initial template by a fixed angle to form a steering arrow matching template;
performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, scaling the steering arrow to generate candidate frames with different sizes, and performing template matching, position clustering and steering arrow type recognition on the candidate frames and the steering arrow matching templates to recognize the steering arrow;
the identified straight and turn arrows are combined to output a ground arrow in the ground ring view.
The ground arrow recognition method provided by the present embodiment will be described in detail below with reference to the drawings. Referring to fig. 1, a flow chart of an identification method of a ground arrow in an embodiment is shown. As shown in fig. 1, the ground arrow identification method specifically includes the following steps:
s11, acquiring a ground ring view.
Specifically, the step S11 includes acquiring four ground pictures acquired by four (front, rear, left, right) fisheye cameras, and stitching the four ground pictures to form a ground ring view.
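For illustration only, the following Python sketch shows one way this stitching step could be coded. It assumes, which is not specified in the patent, that each fisheye image has already been undistorted and warped to a common bird's-eye plane, so that stitching reduces to pasting the four views into one canvas; the function name and the layout are hypothetical.

import cv2
import numpy as np

def stitch_surround_view(front, rear, left, right, size=800):
    # Hypothetical layout: front/rear fill the top/bottom halves of the canvas,
    # left/right overwrite narrow side strips.  All inputs are assumed to be
    # bird's-eye BGR images already expressed in the vehicle coordinate frame.
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    half = size // 2
    strip = size // 4
    canvas[:half, :] = cv2.resize(front, (size, half))
    canvas[half:, :] = cv2.resize(rear, (size, half))
    canvas[:, :strip] = cv2.resize(left, (strip, size))
    canvas[:, size - strip:] = cv2.resize(right, (strip, size))
    return canvas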
And S12, extracting a ground arrow and a lane line from the ground ring view, and extracting a steering arrow of the ground arrow to serve as an initial template of the steering arrow. Referring to FIG. 2, an exemplary diagram of a turn arrow initial template is shown. A plurality of turn arrow initial templates are included as in fig. 2.
In this embodiment, the ground ring view is semantically segmented by an FCN network (see fig. 3, a schematic diagram of the semantically segmented ground ring view), and the ground arrows and the lane lines in the semantically segmented ground ring view are extracted (see fig. 4, a schematic diagram of the extracted ground arrows and lane lines). Any method of segmenting the ground ring view that can produce a segmentation result like the one shown in fig. 3 is applicable to the present invention.
Specifically, the RGB value of the extracted ground arrows is (255, 255, 255) and the RGB value of the lane lines is (0, 0, 255).
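A minimal Python sketch of this color-based extraction is given below, assuming the segmentation output is an H×W×3 RGB array that uses exactly the two colors stated above; the function name is illustrative.

import numpy as np

def extract_masks(seg_rgb):
    # Binary mask of ground-arrow pixels, RGB value (255, 255, 255)
    arrow_mask = np.all(seg_rgb == (255, 255, 255), axis=-1).astype(np.uint8)
    # Binary mask of lane-line pixels, RGB value (0, 0, 255)
    lane_mask = np.all(seg_rgb == (0, 0, 255), axis=-1).astype(np.uint8)
    return arrow_mask, lane_mask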
S13, performing pixel level row traversal on the picture after the ground arrow and the lane lines are extracted, and detecting all straight arrows in the ground arrow.
In this embodiment, the S13 includes:
and performing pixel-level line traversal (for example, performing line traversal with 3 steps) on the picture after the ground arrow and the lane lines are extracted, and storing the central position and the line width of each line of arrow. Specifically, this step is to save the coordinates of all points RGB values (255 ) of each line, calculate the average value of the points, and count the line width of the pixel of each line arrow.
Judging whether a continuous gradient of the arrow line width exists; if so, determining whether the center points of the rows of the arrow can be fitted to a straight line. If straight line fitting is possible, the arrow is determined to be a straight arrow; if not, the arrow is determined to be a non-straight arrow.
In this embodiment, a continuous gradient of the arrow line width means that the line width increases continuously over t rows or decreases continuously over t rows.
Theoretically, the center points of the traversed rows of a straight arrow lie on one straight line, but in practice there is some deviation. Therefore, in this embodiment, whether straight line fitting is possible is judged by taking three consecutive center points in the continuously tapering region and calculating the difference between the angles of the two straight lines formed by consecutive pairs of these points. If the angle difference Δθ = |θ(Lᵢ) − θ(Lᵢ₊₁)| is greater than the angle threshold θ, it is considered that no straight line can be fitted and the arrow is determined to be a non-straight arrow; if Δθ is less than or equal to the angle threshold θ, the center points are considered to lie on one straight line and the arrow is determined to be a straight arrow.
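The following Python sketch illustrates this row-wise test for a single connected arrow region. The step of 3 follows the text above, while the window length t, the angle threshold and the function name are illustrative assumptions rather than values from the patent.

import numpy as np

def is_straight_arrow(arrow_mask, row_step=3, t=5, angle_thresh=np.deg2rad(10)):
    rows, centers, widths = [], [], []
    for r in range(0, arrow_mask.shape[0], row_step):
        cols = np.flatnonzero(arrow_mask[r])
        if cols.size:                          # save centre position and line width of this row
            rows.append(r)
            centers.append(cols.mean())
            widths.append(cols.size)
    # look for a window of t+1 rows whose width increases (or decreases) continuously
    for s in range(max(len(widths) - t, 0)):
        d = np.diff(widths[s:s + t + 1])
        if np.all(d > 0) or np.all(d < 0):
            # angles of the segments joining consecutive centre points in the window
            ang = [np.arctan2(rows[i + 1] - rows[i], centers[i + 1] - centers[i])
                   for i in range(s, s + t)]
            if np.all(np.abs(np.diff(ang)) <= angle_thresh):
                return True                    # small delta-theta: the centres fit one line
    return False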
S14, calculating a fixed angle required to rotate the steering arrow initial template, and rotating the steering arrow initial template according to the fixed angle to form a steering arrow matching template.
In this embodiment, the step of calculating the fixed angle at which the steering arrow initial template needs to rotate includes:
pixel level line traversal (e.g., line traversal in 17 steps) of the picture after the ground arrow and lane lines are extracted, save the coordinates of each lane line RGB value (0,0,255), and calculate the center position of these points, i.e., p shown in fig. 5 1 (x,y)…p n (x,y)。
Calculating the slope included angle of the lane lines for the central points of two continuous lines;
removing the maximum included angle and the minimum included angle of all slope included angles, and calculating the average included angle of the rest slope included angles;
and calculating a car body course angle according to the average included angle, and defining the car body course angle as a fixed angle of rotation of the steering arrow initial template.
In this embodiment, the vehicle body heading angle is calculated as θ′ = π/2 − θ, where θ′ is the vehicle body heading angle and θ is the average included angle.
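The Python sketch below illustrates the heading-angle computation and the template rotation of S14. The step of 17 and the formula θ′ = π/2 − θ follow the text; the use of OpenCV's getRotationMatrix2D/warpAffine for the rotation and the function names are assumptions made for illustration.

import cv2
import numpy as np

def body_heading_angle(lane_mask, row_step=17):
    pts = []
    for r in range(0, lane_mask.shape[0], row_step):
        cols = np.flatnonzero(lane_mask[r])
        if cols.size:                                   # centre of the lane-line pixels in this row
            pts.append((cols.mean(), float(r)))         # p_i(x, y)
    if len(pts) < 2:
        return 0.0                                      # not enough lane-line evidence
    # slope angle between each pair of consecutive centre points
    angles = sorted(np.arctan2(pts[i + 1][1] - pts[i][1], pts[i + 1][0] - pts[i][0])
                    for i in range(len(pts) - 1))
    kept = angles[1:-1] if len(angles) > 2 else angles  # drop the max and min angles
    theta = float(np.mean(kept))                        # average included angle
    return np.pi / 2 - theta                            # theta' = pi/2 - theta

def rotate_template(template, theta_prime):
    h, w = template.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), np.degrees(theta_prime), 1.0)
    return cv2.warpAffine(template, M, (w, h))          # steering arrow matching template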
In the present embodiment, S13 and S14 are parallel operations as shown in fig. 1. In practical applications, S13 and S14 may be performed in parallel or sequentially.
S15, performing pixel-level column traversal on the picture after the ground arrow and the lane lines are extracted, scaling the turning arrow to generate candidate frames of different sizes, and performing template matching, position clustering and turning arrow type recognition on the candidate frames and the turning arrow matching templates to recognize the turning arrow. Referring to fig. 6, a flowchart of the implementation of S15 is shown. As shown in fig. 6, the step S15 specifically includes the following steps:
and S151, performing pixel-level column traversal (for example, performing column traversal with 3 steps) on the picture after the ground arrow and the lane lines are extracted, and storing the center point position of each column of the arrow.
Specifically, the coordinates of all points of the column whose RGB value is (255, 255, 255) are saved, and the center point position of these points is calculated.
And S152, scaling and generating a plurality of candidate frames with different sizes at the center point position.
In this embodiment, since the size of the steering arrow obtained after segmentation may vary, and considering the influence of different environments and of the distance between the arrow and the vehicle, 5 candidate frames of different sizes are generated.
And S153, interpolating the different candidate frames to enable the size of the candidate frames to be the same as the size of the initial template of the steering arrow.
In this embodiment, bilinear interpolation is used for different candidate boxes.
S154, calculating the template matching degree between the interpolated candidate frame and the steering arrow initial template, eliminating the candidate frame with matching degree smaller than the matching degree threshold, and storing the candidate frame with the largest matching degree in the 5 candidate frames in the column, the arrow type and the current position thereof, namely confidence, class, (x, y).
In practical application, any matching algorithm capable of calculating the matching degree can be applied to the invention. For example, the present embodiment calculates the template matching degree between the interpolated candidate frame and the steering arrow initial templates (e.g., each initial template in fig. 2) using a correlation coefficient template matching algorithm.
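The sketch below illustrates S151 to S154 for one column-wise center point, using OpenCV's normalized correlation-coefficient matching (cv2.TM_CCOEFF_NORMED) as one possible realization of the correlation-coefficient matching mentioned above. The base size of 64 pixels, the five scales, the 0.7 threshold and the function and parameter names are illustrative assumptions; templates is assumed to map an arrow class to a grayscale matching template of fixed size.

import cv2
import numpy as np

def best_match_at(arrow_gray, templates, center, base=64,
                  scales=(0.6, 0.8, 1.0, 1.2, 1.4), match_thresh=0.7):
    cx, cy = int(center[0]), int(center[1])
    h_img, w_img = arrow_gray.shape[:2]
    best = None                                          # (confidence, class, (x, y))
    for s in scales:                                     # candidate frames of different sizes
        half = max(int(base * s / 2), 1)
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        x1, y1 = min(cx + half, w_img), min(cy + half, h_img)
        patch = arrow_gray[y0:y1, x0:x1]
        if patch.size == 0:
            continue
        for cls, tmpl in templates.items():
            th, tw = tmpl.shape[:2]
            # bilinear interpolation to the size of the steering arrow template
            resized = cv2.resize(patch, (tw, th), interpolation=cv2.INTER_LINEAR)
            score = float(cv2.matchTemplate(resized, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0])
            if score >= match_thresh and (best is None or score > best[0]):
                best = (score, cls, (cx, cy))
    return best                                          # None if every score is below the threshold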
S155, classifying the positions of the steering arrows of the results after all the templates in each column are matched by utilizing the distances between the positions of the central points.
S156, traversing the positions of the center points of the result of matching all templates, and calculating the distance between every two templates;
the distance between every two center point positions is calculated as follows:
d = (xᵢ − xᵢ₊₁)² + (yᵢ − yᵢ₊₁)²
S157, if the distance is smaller than the distance threshold, the two template matching results are considered to be matching results under the same turning arrow, and the two template matching results are attributed to one tuple.
S158, if the distance is greater than or equal to the distance threshold, the matching result of the two templates is considered to be the matching result under the other turning arrow, and the matching result of the two templates is classified as another tuple;
S159, selecting the arrow type corresponding to the maximum matching degree from each tuple as the type of the steering arrow at that position.
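A Python sketch of the clustering in S155 to S159 follows. Each entry of matches is assumed to be a (confidence, class, (x, y)) tuple saved in S154; the 40-pixel distance threshold and the function name are illustrative assumptions.

def cluster_turn_arrows(matches, dist_thresh=40):
    clusters = []                                            # each cluster collects matches of one arrow
    thresh_sq = dist_thresh ** 2
    for m in matches:
        x, y = m[2]
        for c in clusters:
            cx, cy = c[0][2]
            if (x - cx) ** 2 + (y - cy) ** 2 < thresh_sq:    # d smaller than the distance threshold
                c.append(m)                                  # same turning arrow: same tuple
                break
        else:
            clusters.append([m])                             # another turning arrow: new tuple
    # for each tuple keep the class with the maximum matching degree and its position
    return [max(c, key=lambda m: m[0])[1:] for c in clusters]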
S16, combining the identified straight arrow and the identified turning arrow to output the ground arrow in the ground ring view. Referring to fig. 7A, fig. 7B and fig. 7C are schematic diagrams showing the recognition effect of the final ground arrow, respectively.
The ground arrow identification method of the embodiment has the following beneficial effects:
First, the ground arrow identification method of this embodiment identifies arrows using triangle feature detection and template matching rather than a neural network model, so that no large training set or pixel-level labels are needed, greatly reducing the workload.
Second, the ground arrow identification method improves the arrow recognition rate; the algorithm is linear in space-time complexity, and the CPU occupancy is greatly reduced.
Third, the ground arrow identification method of this embodiment also maintains a good recognition rate when handling multiple arrows and worn or deformed arrows.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the identification method as described in fig. 1.
The present application may be a system, method, and/or computer program product at any possible level of technical detail. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer readable program described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device. Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, integrated circuit configuration data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, c++ or the like and a procedural programming language such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information for computer readable program instructions, which may execute the computer readable program instructions.
Example two
The present embodiment provides a ground arrow recognition system, including:
the acquisition module is used for acquiring the ground ring view;
the extraction module is used for extracting a ground arrow and a lane line from the ground annular view, and taking a steering arrow of the ground arrow as a steering arrow initial template;
the detection module is used for performing pixel level row traversal on the picture after the ground arrow and the lane line are extracted, and detecting all the straight arrows in the ground arrow;
the rotating module is used for rotating the steering arrow initial template by a fixed angle to form a steering arrow matching template;
the recognition module is used for performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, performing template matching, position clustering and turning arrow type recognition on the turning arrow after scaling the turning arrow to generate candidate frames with different sizes, so as to recognize the turning arrow;
and the output module is used for combining the identified straight arrow and the identified turning arrow so as to output the ground arrow in the ground ring view.
The ground arrow recognition system provided in this embodiment will be described in detail with reference to drawings. Referring to fig. 8, a schematic structural diagram of an identification system of a ground arrow in an embodiment is shown. As shown in fig. 8, the ground arrow recognition system 8 includes an acquisition module 81, an extraction module 82, a detection module 83, a rotation module 84, a recognition module 85, and an output module 86.
The acquisition module 81 is configured to acquire a ground ring view.
Specifically, the obtaining module 81 obtains four ground pictures collected by four (front, back, left and right) fisheye cameras, and splices the four ground pictures to form a ground ring view.
The extraction module 82 is configured to extract a ground arrow and a lane line from the ground annular view, and extract a turn arrow of the ground arrow as an initial template of the turn arrow.
In this embodiment, the extracting module 82 performs semantic segmentation on the ground ring view through the FCN network, and extracts the ground arrows and the lane lines in the ground ring view after the semantic segmentation.
Specifically, the RGB value of the ground arrows extracted by the extracting module 82 is (255, 255, 255) and the RGB value of the lane lines is (0, 0, 255).
The detection module 83 is configured to perform pixel-level row traversal on the picture after the ground arrow and the lane line are extracted, and detect all the straight arrows in the ground arrow.
In this embodiment, the detection process of the detection module 83 is as follows:
and performing pixel-level line traversal (for example, performing line traversal with 3 steps) on the picture after the ground arrow and the lane lines are extracted, and storing the central position and the line width of each line of arrow. Specifically, this step is to save the coordinates of all points RGB values (255 ) of each line, calculate the average value of the points, and count the line width of the pixel of each line arrow.
Judging whether a continuous gradient of the arrow line width exists; if so, determining whether the center points of the rows of the arrow can be fitted to a straight line. If straight line fitting is possible, the arrow is determined to be a straight arrow; if not, the arrow is determined to be a non-straight arrow.
In this embodiment, a continuous gradient of the arrow line width means that the line width increases continuously over t rows or decreases continuously over t rows.
Theoretically, the center points of the traversed rows of a straight arrow lie on one straight line, but in practice there is some deviation. Therefore, in this embodiment, whether straight line fitting is possible is judged by taking three consecutive center points in the continuously tapering region and calculating the difference between the angles of the two straight lines formed by consecutive pairs of these points. If the angle difference Δθ = |θ(Lᵢ) − θ(Lᵢ₊₁)| is greater than the angle threshold θ, it is considered that no straight line can be fitted and the arrow is determined to be a non-straight arrow; if Δθ is less than or equal to the angle threshold θ, the center points are considered to lie on one straight line and the arrow is determined to be a straight arrow.
The rotation module 84 is configured to calculate a fixed angle at which the steering arrow initial template needs to rotate, and rotate the steering arrow initial template by the fixed angle to form a steering arrow matching template.
In this embodiment, the process of calculating the fixed angle by the rotation module 84 for the initial template of the turning arrow is as follows:
pixel level line traversal (e.g., line traversal in 17 steps) of the picture after the ground arrow and lane lines are extracted, save the coordinates of each lane line RGB value (0,0,255), and calculate the center position of these points, i.e., p shown in fig. 5 1 (x,y)…p n (x,y)。
Calculating the slope included angle of the lane lines for the central points of two continuous lines;
removing the maximum included angle and the minimum included angle of all slope included angles, and calculating the average included angle of the rest slope included angles;
and calculating a car body course angle according to the average included angle, and defining the car body course angle as a fixed angle of rotation of the steering arrow initial template.
In this embodiment, the vehicle body heading angle is calculated as θ′ = π/2 − θ, where θ′ is the vehicle body heading angle and θ is the average included angle.
The recognition module 85 is configured to perform pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, scale the turning arrow to generate candidate frames of different sizes, and perform template matching, position clustering and turning arrow type recognition on these candidate frames and the turning arrow matching templates, so as to recognize the turning arrow.
Specifically, the identification process of the identification module 85 is as follows:
Performing pixel-level column traversal on the picture after the ground arrow and the lane lines are extracted (for example, column traversal with a step of 3), and saving the center point position of each column of the arrow; scaling around the center point position to generate a plurality of candidate frames of different sizes; interpolating the different candidate frames so that their size is the same as the size of the steering arrow initial template; calculating the template matching degree between the interpolated candidate frames and the steering arrow initial templates, removing the candidate frames whose matching degree is smaller than the matching degree threshold, and saving, among the 5 candidate frames in the column, the candidate frame with the largest matching degree together with its arrow type and current position, i.e., confidence, class, (x, y). The positions of the steering arrows in the results after all templates in each column are matched are then classified by using the distances between the center point positions: the center point positions of all template matching results are traversed, and the distance between every two of them is calculated. If the distance is smaller than the distance threshold, the two template matching results are considered to belong to the same turning arrow and are attributed to one tuple; if the distance is greater than or equal to the distance threshold, the two template matching results are considered to belong to another turning arrow and are attributed to another tuple. Finally, the arrow type corresponding to the maximum matching degree in each tuple is selected as the type of the steering arrow at that position.
The output module 86 is configured to combine the identified straight and turn arrows to output a ground arrow in the ground ring view.
It should be noted that the division of the modules of the above system is merely a division by logical function; in actual implementation they may be fully or partially integrated into one physical entity or may be physically separate. These modules may all be implemented in the form of software called by a processing element, may all be implemented in the form of hardware, or some modules may be implemented in the form of software called by a processing element and some in the form of hardware. For example, the x module may be a separately established processing element, or may be integrated in a chip of the above system; it may also be stored in the memory of the system in the form of program code and be called and executed by a certain processing element of the system to perform the functions of the x module. The implementation of the other modules is similar. All or part of these modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software. The above modules may be one or more integrated circuits configured to implement the above method, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), one or more digital signal processors (Digital Signal Processor, DSP for short), one or more field programmable gate arrays (Field Programmable Gate Array, FPGA for short), and the like. When a module is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU for short) or another processor that can call program code. These modules may also be integrated together and implemented in the form of a system-on-a-chip (SOC for short).
Example III
The present embodiment provides an identification terminal of a ground arrow, the identification terminal including: a processor, memory, transceiver, communication interface, or/and system bus; the memory and the communication interface are connected with the processor and the transceiver through the system bus and complete the communication with each other, the memory is used for storing a computer program, the communication interface is used for communicating with other devices, and the processor and the transceiver are used for running the computer program to enable the identification terminal to execute the steps of the identification method of the ground arrow. In this embodiment, the ground arrow identification terminal includes a vehicle-mounted terminal.
The system bus mentioned above may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration the bus is represented by a single bold line in the figures, but this does not mean that there is only one bus or one type of bus. The communication interface is used for realizing communication between the database access device and other devices (such as a client, a read-write library and a read-only library). The memory may comprise a random access memory (Random Access Memory, RAM) and may also comprise a non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field programmable gate arrays (Field Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The protection scope of the ground arrow recognition method of the present invention is not limited to the execution sequence of the steps listed in this embodiment; any scheme in which, based on the principles of the present invention, steps of the prior art are added, removed or replaced is also included in the protection scope of the present invention.
The invention also provides a ground arrow recognition system, which can realize the ground arrow recognition method, but the device for realizing the ground arrow recognition method comprises but is not limited to the structure of the ground arrow recognition system listed in the embodiment, and all the structural modifications and substitutions of the prior art according to the principles of the invention are included in the protection scope of the invention.
In summary, the ground arrow recognition method, the ground arrow recognition system, the terminal and the computer readable storage medium have the following beneficial effects:
First, the invention identifies arrows using triangle feature detection and template matching rather than a neural network model, so that no large training set or pixel-level labels are needed, greatly reducing the workload.
Second, the invention improves the arrow recognition rate; the algorithm is linear in space-time complexity, and the CPU occupancy is greatly reduced.
Third, the invention maintains a good recognition rate when handling multiple arrows and worn or deformed arrows. The invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (9)

1. A method for identifying a ground arrow, comprising:
acquiring a ground ring view;
extracting a ground arrow and a lane line from the ground ring view, and taking a steering arrow of the ground arrow as a steering arrow initial template;
performing pixel level traversal on the picture after the ground arrow and the lane line are extracted, and detecting all straight arrows in the ground arrow; performing pixel level line traversal on the picture after the ground arrow and the lane lines are extracted, and storing the central position and the line width of each line of arrow; if it is determined that continuous gradient of the line width of the arrow exists, determining whether straight line fitting can be performed on the center point of each line of the arrow; if yes, determining the arrow as a straight arrow; if not, determining that the arrow is not a straight arrow; rotating the steering arrow initial template by a fixed angle to form a steering arrow matching template;
performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, scaling the steering arrow to generate candidate frames with different sizes, and performing template matching, position clustering and steering arrow type recognition on the candidate frames and the steering arrow matching templates to recognize the steering arrow; the positions of the steering arrows are classified according to the distances among the positions of the center points, and the results of all the templates in each column after matching; traversing the positions of the center points of the results of all template matching, and calculating the distance between every two templates; if the distance is smaller than the distance threshold, the matching result of the two templates is considered to be the matching result under the same turning arrow, and the matching result of the two templates is classified as a tuple; if the distance is greater than or equal to the distance threshold, the matching result of the two templates is considered to be the matching result under the other turning arrow, and the matching result of the two templates is classified as another tuple; selecting an arrow type corresponding to the maximum matching degree from the tuples as a type of a steering arrow at the position;
the identified straight and turn arrows are combined to output a ground arrow in the ground ring view.
2. The method for recognizing a ground arrow according to claim 1, wherein the step of determining whether the center point of each row of arrows is linearly fit comprises:
calculating an included angle between straight lines formed by every two points in a plurality of continuous center points on the basis of continuous gradient of the line width of each line of arrows;
judging whether the included angle is larger than an included angle threshold value or not; if yes, a plurality of continuous center points cannot be fitted into a straight line, and the arrow is a non-straight arrow; if not, a plurality of continuous center points can be fitted into a straight line, and the arrow is a straight arrow.
3. The method of claim 2, wherein the method of identifying a ground arrow further comprises, prior to rotating the turn arrow initial template by a fixed angle to form a turn arrow match template:
pixel-level traversal is carried out on the pictures after the ground arrows and the lane lines are extracted, and slope included angles of the lane lines are calculated for central points of two continuous lines;
removing the maximum included angle and the minimum included angle of all slope included angles, and calculating the average included angle of the rest slope included angles;
calculating a car body course angle according to the average included angle;
the body heading angle is defined as the fixed angle of rotation of the steering arrow initial template.
4. The method for recognizing a ground arrow according to claim 3, wherein the vehicle body heading angle is calculated by:
θ'=π/2-θ;
wherein θ' is the heading angle of the vehicle body, and θ is the average included angle.
5. The ground arrow identification method according to claim 3, wherein the step of performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, scaling the steering arrow to generate candidate frames with different sizes, and performing template matching, position clustering and steering arrow type recognition to recognize the steering arrow comprises:
performing pixel-level column traversal on the picture after the ground arrow and the lane lines are extracted, and storing the center-point position of each column of the arrow;
scaling at the center-point position to generate a plurality of candidate frames with different sizes;
interpolating the different candidate frames so that the size of each candidate frame is the same as the size of the steering arrow initial template; and
calculating the template matching degree between each interpolated candidate frame and the steering arrow initial template, discarding the candidate frames whose matching degree is smaller than a matching degree threshold, and storing the candidate frame with the maximum matching degree together with its arrow type and current position.
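A sketch of the candidate-frame matching step of claim 5, using OpenCV resizing and normalized cross-correlation as a stand-in for the unspecified "template matching degree"; the threshold value and the decision to keep only the single best hit are assumptions.

```python
import cv2
import numpy as np

def best_candidate(candidates, template, score_threshold=0.6):
    """For each candidate frame (a cropped, single-channel uint8 patch), resize
    it to the template size and compute a matching degree; discard low scores
    and keep the best hit as (score, index), or None if nothing passes."""
    th, tw = template.shape[:2]
    best = None
    for i, patch in enumerate(candidates):
        resized = cv2.resize(patch, (tw, th), interpolation=cv2.INTER_LINEAR)
        score = float(cv2.matchTemplate(resized, template,
                                        cv2.TM_CCOEFF_NORMED).max())
        if score < score_threshold:
            continue  # matching degree below the threshold -> drop the candidate
        if best is None or score > best[0]:
            best = (score, i)
    return best
```

cv2.TM_CCOEFF_NORMED yields scores in [-1, 1], so the matching degree threshold would need tuning against real surround-view data.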
6. A ground arrow identification system, comprising:
the acquisition module is used for acquiring the ground ring view;
the extraction module is used for extracting a ground arrow and a lane line from the ground ring view, and taking a steering arrow of the ground arrow as a steering arrow initial template;
the detection module is used for performing pixel-level row traversal on the picture after the ground arrow and the lane line are extracted, so as to detect all the straight arrows among the ground arrows; specifically, performing pixel-level row traversal on the picture after the ground arrow and the lane lines are extracted, and storing the center position and the line width of each row of the arrow; if the line width of the arrow is determined to change with a continuous gradient, determining whether the center points of each row of the arrow can be fitted to a straight line; if yes, determining the arrow to be a straight arrow; if not, determining the arrow not to be a straight arrow;
the rotating module is used for rotating the steering arrow initial template by a fixed angle to form a steering arrow matching template;
the recognition module is used for performing pixel-level column traversal on the picture after the ground arrow and the lane line are extracted, scaling the steering arrow to generate candidate frames with different sizes, and performing template matching, position clustering and steering arrow type recognition, so as to recognize the steering arrow; wherein the steering arrow positions are clustered according to the distances between the center-point positions of all template matching results in each column: traversing the center-point positions of all template matching results and calculating the distance between every two of them; if the distance is smaller than a distance threshold, the two matching results are considered to belong to the same steering arrow and are grouped into one tuple; if the distance is greater than or equal to the distance threshold, the two matching results are considered to belong to different steering arrows and are grouped into separate tuples; and selecting, within each tuple, the arrow type with the maximum matching degree as the type of the steering arrow at that position;
and the output module is used for combining the identified straight arrows and steering arrows so as to output the ground arrows in the ground ring view.
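Purely as an architectural illustration of the module breakdown in claim 6, the skeleton below wires the six modules into one pipeline; every interface name is a hypothetical placeholder, not the patented implementation.

```python
class GroundArrowRecognitionSystem:
    """Illustrative composition of the modules enumerated in claim 6."""

    def __init__(self, acquisition, extraction, detection, rotation, recognition, output):
        self.acquisition = acquisition    # acquires the ground ring view
        self.extraction = extraction      # extracts ground arrows, lane lines, initial template
        self.detection = detection        # row traversal -> straight arrows
        self.rotation = rotation          # rotates the steering arrow initial template
        self.recognition = recognition    # column traversal + template matching -> steering arrows
        self.output = output              # merges both arrow sets

    def run(self):
        view = self.acquisition.get_ground_ring_view()
        arrows_img, initial_template = self.extraction.extract(view)
        straight = self.detection.detect_straight(arrows_img)
        matching_template = self.rotation.rotate(initial_template)
        steering = self.recognition.recognize(arrows_img, matching_template)
        return self.output.combine(straight, steering)
```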
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the ground arrow identification method according to any one of claims 1 to 5.
8. A ground arrow identification terminal, comprising: a processor and a memory;
the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory, so as to cause the ground arrow identification terminal to execute the ground arrow identification method according to any one of claims 1 to 5.
9. The ground arrow identification terminal of claim 8, wherein the ground arrow identification terminal comprises a vehicle-mounted terminal.
CN202110523788.4A 2021-05-13 2021-05-13 Ground arrow identification method, system, terminal and computer readable storage medium Active CN113158976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110523788.4A CN113158976B (en) 2021-05-13 2021-05-13 Ground arrow identification method, system, terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN113158976A CN113158976A (en) 2021-07-23
CN113158976B true CN113158976B (en) 2024-04-02

Family

ID=76875258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110523788.4A Active CN113158976B (en) 2021-05-13 2021-05-13 Ground arrow identification method, system, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113158976B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013113011A1 (en) * 2012-01-26 2013-08-01 Telecommunication Systems, Inc. Natural navigational guidance
CN105825203A (en) * 2016-03-30 2016-08-03 大连理工大学 Ground arrowhead sign detection and identification method based on dotted pair matching and geometric structure matching
CN105913041A (en) * 2016-04-27 2016-08-31 浙江工业大学 Pre-marked signal lights based identification method
CN109948630A (en) * 2019-03-19 2019-06-28 深圳初影科技有限公司 Recognition methods, device, system and the storage medium of target sheet image
CN111414826A (en) * 2020-03-13 2020-07-14 腾讯科技(深圳)有限公司 Method, device and storage medium for identifying landmark arrow
CN111476157A (en) * 2020-04-07 2020-07-31 南京慧视领航信息技术有限公司 Lane guide arrow recognition method under intersection monitoring environment
CN112131963A (en) * 2020-08-31 2020-12-25 青岛秀山移动测量有限公司 Road marking line extraction method based on driving direction structural feature constraint
CN112183427A (en) * 2020-10-10 2021-01-05 厦门理工学院 Rapid extraction method for arrow-shaped traffic signal lamp candidate image area


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"道路箭头标志检测与识别算法研究";魏瑾瑜;《中国优秀硕士学位论文全文数据库工程科技Ⅱ辑》;第C034-967页 *

Also Published As

Publication number Publication date
CN113158976A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
Marzougui et al. A lane tracking method based on progressive probabilistic Hough transform
Ghanem et al. Lane detection under artificial colored light in tunnels and on highways: an IoT-based framework for smart city infrastructure
WO2019228211A1 (en) Lane-line-based intelligent driving control method and apparatus, and electronic device
KR101596299B1 (en) Apparatus and Method for recognizing traffic sign board
Yuan et al. Robust lane detection for complicated road environment based on normal map
An et al. Real-time lane departure warning system based on a single FPGA
CN106951830B (en) Image scene multi-object marking method based on prior condition constraint
CN112997190B (en) License plate recognition method and device and electronic equipment
Ye et al. A two-stage real-time YOLOv2-based road marking detector with lightweight spatial transformation-invariant classification
Ding et al. Fast lane detection based on bird’s eye view and improved random sample consensus algorithm
Li et al. A robust lane detection method based on hyperbolic model
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
Wang et al. Fast vanishing point detection method based on road border region estimation
CN117218622A (en) Road condition detection method, electronic equipment and storage medium
Li et al. Multi‐lane detection based on omnidirectional camera using anisotropic steerable filters
Fan et al. Road vanishing point detection using weber adaptive local filter and salient‐block‐wise weighted soft voting
CN111753858B (en) Point cloud matching method, device and repositioning system
Annamalai et al. An optimized computer vision and image processing algorithm for unmarked road edge detection
Al Mamun et al. Efficient lane marking detection using deep learning technique with differential and cross-entropy loss.
CN117315263A (en) Target contour segmentation device, training method, segmentation method and electronic equipment
CN113158976B (en) Ground arrow identification method, system, terminal and computer readable storage medium
Hwang et al. Optimized clustering scheme-based robust vanishing point detection
CN112101139B (en) Human shape detection method, device, equipment and storage medium
CN113449647A (en) Method, system, device and computer-readable storage medium for fitting curved lane line
CN114118188A (en) Processing system, method and storage medium for moving objects in an image to be detected

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Song Jing; Wu Zizhang; Wang Fan
Inventor before: Song Jing; Xiang Weixing; Wu Zizhang; Wang Fan
GR01 Patent grant