CN112084988A - Lane line instance clustering method and device, electronic equipment and storage medium - Google Patents

Lane line instance clustering method and device, electronic equipment and storage medium

Info

Publication number
CN112084988A
CN112084988A (application CN202010972488.XA)
Authority
CN
China
Prior art keywords
clustering
lane line
feature vector
center
radius
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010972488.XA
Other languages
Chinese (zh)
Other versions
CN112084988B (en)
Inventor
李宇明
刘国清
郑伟
杨广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Youjia Innovation Technology Co ltd
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd
Publication of CN112084988A
Application granted
Publication of CN112084988B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a lane line instance clustering method and device, electronic equipment, and a storage medium. The method comprises the following steps: acquiring a lane line binary segmentation result and a lane line feature vector; obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector; inputting the lane line feature vector histogram into a trained clustering network to obtain cluster centers and cluster radii; performing distance judgment on the lane line feature vectors based on the cluster centers and cluster radii to obtain the cluster identifier corresponding to each lane line feature vector; and mapping the lane line feature vectors and their cluster identifiers onto the lane line binary segmentation result to obtain a lane line instance segmentation result. Throughout the process, the clustering network replaces the traditional clustering algorithm for obtaining cluster centers and cluster radii, so the clustering step can be moved to the GPU, which saves CPU computation and improves the efficiency of lane line instance segmentation.

Description

Lane line instance clustering method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a lane line instance clustering method, apparatus, electronic device, and storage medium.
Background
With the development of artificial intelligence technology and the improvement of sensor precision, automatic driving has become a popular research field that attracts widespread attention. Among its tasks, lane line detection is one of the fundamental and important ones, playing a critical role in both driver-assistance systems and automatic driving systems.
To address the poor stability and time-consuming detection of traditional lane line detection methods, researchers have adopted deep neural networks in their place, which significantly improves the accuracy and robustness of lane line detection. The most representative deep-learning-based lane line instance segmentation approach applies an instance segmentation algorithm to lane line detection: a lane line feature vector branch is output alongside the lane line binary segmentation result, and the output of lane line semantic segmentation is then converted into instance segmentation by combining it with the result of the feature vector branch.
However, the above method still has problems in practical engineering applications: the lane line feature vector branch requires a complex post-processing clustering algorithm to obtain the final clustering result and convert lane line semantic segmentation into instance segmentation. This process is complex and time-consuming, and the Central Processing Unit (CPU) on vehicle-mounted embedded devices has limited computing power and generally cannot support such frequent and complex clustering operations. Therefore, current deep-learning-based lane line instance segmentation methods suffer from low segmentation efficiency.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a lane line instance clustering method, a lane line instance clustering device, an electronic device, and a storage medium, which can improve the lane line instance segmentation efficiency.
A lane line instance clustering method, the method comprising:
acquiring a lane line binary segmentation result and a lane line feature vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius;
based on the clustering center and the clustering radius, performing distance judgment on the lane line characteristic vector to obtain a clustering mark corresponding to the lane line characteristic vector;
correspondingly mapping the lane line characteristic vector and the clustering identification corresponding to the lane line characteristic vector with a lane line binary segmentation result to obtain a lane line example segmentation result;
the distance judgment is used for distinguishing the clustering center to which each lane line feature vector belongs, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram and the clustering centers and clustering radii obtained by a preset clustering algorithm.
In one embodiment, inputting the lane line feature vector histogram into a trained clustering network, and obtaining the clustering center and the clustering radius includes:
inputting the lane line feature vector histogram into the trained clustering network to obtain a clustering center classification result;
extracting the coordinates of the clustering centers by adopting a connected domain calibration algorithm based on the clustering center classification result;
and indexing the clustering radius corresponding to each clustering center according to the coordinates of the clustering centers.
In one embodiment, the trained clustering network comprises a clustering center classification branch and a clustering radius regression branch;
inputting the lane line feature vector histogram into the trained clustering network, and obtaining a clustering center and a clustering radius comprises the following steps:
inputting the lane line feature vector histogram into a trained clustering network, and extracting clustering center classification results from the clustering center classification branches;
extracting a clustering center by adopting a connected domain calibration algorithm based on a clustering center classification result;
and according to the extracted clustering centers, indexing the clustering radii corresponding to the clustering centers through clustering radius regression branches.
In one embodiment, the distance judgment of the lane line feature vector based on the clustering center and the clustering radius to obtain the clustering identifier corresponding to the lane line feature vector comprises:
acquiring a cluster identifier corresponding to a cluster center;
and when the pixel point corresponding to the current lane line characteristic vector is in a target range, adding a cluster identifier corresponding to the current cluster center for the current lane line characteristic vector, wherein the target range is an area range formed by the current cluster center and the cluster radius corresponding to the current cluster center.
In one embodiment, before inputting the lane line feature vector histogram into the trained clustering network and obtaining the clustering center and the clustering radius, the method further includes:
obtaining a binary segmentation result of a historical lane line and a characteristic vector of the historical lane line;
clustering the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; performing histogram statistics on the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines to obtain a histogram of the characteristic vectors of the historical lane lines;
constructing a training data set according to the historical lane line feature vector histogram, a clustering center and a clustering radius obtained by a preset clustering algorithm;
and combining preset loss functions based on the training data set, training the initial clustering network, and obtaining the trained clustering network.
In one embodiment, the lane line binary segmentation result and the lane line feature vector are obtained by performing semantic segmentation on the lane driving scene image by a trained lane line detection network.
In one embodiment, the trained clustering network includes a backbone network and a multitask output network.
A lane line instance clustering apparatus, the apparatus comprising:
the data acquisition module is used for acquiring a lane line binary segmentation result and a lane line feature vector;
the histogram data acquisition module is used for obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
the clustering feature data extraction module is used for inputting the lane line feature vector histogram into the trained clustering network to obtain a clustering center and a clustering radius;
the clustering processing module is used for judging the distance of the lane line characteristic vector based on the clustering center and the clustering radius to obtain a clustering mark corresponding to the lane line characteristic vector;
the instance segmentation module is used for mapping the lane line characteristic vector, the clustering identification corresponding to the lane line characteristic vector and the lane line binary segmentation result correspondingly to obtain a lane line instance segmentation result;
the distance judgment is used for distinguishing clustering centers to which the lane line feature vectors belong, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram, the clustering centers obtained by a preset clustering algorithm and the clustering radius.
In one embodiment, the apparatus further comprises:
the network training module is used for acquiring historical lane driving scene image data, inputting the historical lane driving scene image data into a trained lane line detection network for semantic segmentation to obtain a historical lane line binary segmentation result and a historical lane line characteristic vector, and clustering the historical lane line binary segmentation result and the historical lane line characteristic vector by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; and performing histogram statistics on the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines to obtain a histogram of the characteristic vectors of the historical lane lines, constructing a training data set according to the histogram of the characteristic vectors of the historical lane lines, a clustering center and a clustering radius obtained by a preset clustering algorithm, and training an initial clustering network based on a preset loss function of the training data set to obtain a trained clustering network.
An electronic device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a lane line binary segmentation result and a lane line feature vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius;
based on the clustering center and the clustering radius, performing distance judgment on the lane line characteristic vector to obtain a clustering mark corresponding to the lane line characteristic vector;
correspondingly mapping the lane line characteristic vector and the clustering identification corresponding to the lane line characteristic vector with a lane line binary segmentation result to obtain a lane line example segmentation result;
the distance judgment is used for distinguishing clustering centers to which the lane line feature vectors belong, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram, the clustering centers obtained by a preset clustering algorithm and the clustering radius.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a lane line binary segmentation result and a lane line feature vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius;
based on the clustering center and the clustering radius, performing distance judgment on the lane line characteristic vector to obtain a clustering mark corresponding to the lane line characteristic vector;
correspondingly mapping the lane line characteristic vector and the clustering identification corresponding to the lane line characteristic vector with a lane line binary segmentation result to obtain a lane line example segmentation result;
the distance judgment is used for distinguishing clustering centers to which the lane line feature vectors belong, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram, the clustering centers obtained by a preset clustering algorithm and the clustering radius.
The lane line instance clustering method and device, electronic equipment, and storage medium acquire a lane line binary segmentation result and a lane line feature vector, obtain a lane line feature vector histogram from them, input the histogram into a trained clustering network to obtain cluster centers and cluster radii, perform distance judgment on the lane line feature vectors based on the cluster centers and cluster radii to obtain the cluster identifier corresponding to each feature vector, and map the lane line feature vectors and their cluster identifiers onto the lane line binary segmentation result to obtain a lane line instance segmentation result. Throughout the process, the clustering network replaces the traditional complex clustering algorithm for obtaining cluster centers and cluster radii, so the clustering step can be moved to a Graphics Processing Unit (GPU) and a large amount of CPU (Central Processing Unit) computation is saved; meanwhile, the output of the preset clustering algorithm serves as the training data of the clustering network, so no manually labeled data is needed, and the efficiency of lane line instance segmentation is improved.
Drawings
FIG. 1 is a diagram of an exemplary environment in which the method for clustering lane lines is applied in one embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a method for clustering lane line instances in one embodiment;
FIG. 3(a) is a schematic diagram of a two-dimensional histogram of lane line feature vectors in one embodiment;
FIG. 3(b) is a diagram illustrating an output result of the clustering network in one embodiment;
FIG. 3(c) is a diagram illustrating binary segmentation results of lane lines in one embodiment;
FIG. 3(d) is a diagram illustrating an example lane line segmentation result in one embodiment;
FIG. 4 is a schematic flow chart diagram illustrating a method for clustering lane line instances in another embodiment;
FIG. 5 is a schematic flow chart diagram illustrating the steps for training a clustering network in one embodiment;
FIG. 6 is a block diagram showing the structure of a lane line clustering device according to an embodiment;
FIG. 7 is a block diagram showing the structure of a lane line clustering apparatus according to another embodiment;
FIG. 8 is a diagram illustrating the internal architecture of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The lane line instance clustering method provided by the application can be applied to the application environment shown in fig. 1. The vehicle 102 is provided with an on-board embedded device that includes a camera and a processor. Specifically, the camera of the vehicle 102 collects images of the lane driving scene in real time and uploads them to the processor. The processor performs semantic segmentation on the lane driving scene images through a trained lane line detection network to obtain lane line binary segmentation results and lane line feature vectors; obtains a lane line feature vector histogram from the binary segmentation result and the feature vector; inputs the histogram into a trained clustering network to obtain cluster centers and cluster radii; performs distance judgment on the lane line feature vectors based on the cluster centers and cluster radii to obtain the cluster identifier corresponding to each feature vector; and maps the lane line feature vectors and their cluster identifiers onto the lane line binary segmentation result to obtain a lane line instance segmentation result. The processor may include a GPU and/or a CPU.
In one embodiment, as shown in fig. 2, a lane line example clustering method is provided, which is described by taking the method as an example applied to the vehicle 102 in fig. 1, and includes the following steps:
step 202, obtaining a lane line binary segmentation result and a lane line feature vector.
The lane line binary segmentation result is a lane line binary segmentation image, i.e., the output of the lane line detection network, as shown in fig. 3(c). In practical application, a camera of the vehicle acquires lane driving scene images in real time and uploads them to the processor; the processor receives an image, inputs it to the trained lane line detection network, and performs semantic segmentation on the lane driving scene image to obtain the lane line binary segmentation result and the lane line feature vector. In this embodiment, the lane line binary segmentation result is an H × W two-dimensional image, where H and W are the same as the height and width of the input image. Points on a lane line take the value 1 in the binary segmentation result, and background points take the value 0. The lane line feature vector is a two-dimensional lane line feature vector, specifically a 2 × H × W matrix. The values in the feature vectors ensure that the L2 distance between the feature vectors of points on the same lane line is less than v, while the L2 distance between the feature vectors of points on different lane lines is greater than 2d, where v and d respectively denote the intra-class distance margin and the inter-class distance margin, both specified during training.
And step 204, obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector.
After the lane line binary segmentation result and the lane line feature vector are obtained, histogram statistics can be performed on them to obtain the lane line feature vector histogram. Histogram statistics is one of the basic and common algorithms in image processing; its main principle is to count the number of pixels at each gray level in an image. In this embodiment, the lane line feature vector histogram is a two-dimensional histogram of the lane line feature vectors, shown in fig. 3(a), and serves as the input data of the clustering network. The two-dimensional histogram is constrained to a small h × w two-dimensional space, which reduces the amount of computation.
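As a concrete illustration of this step, the following is a minimal NumPy sketch of building such a two-dimensional feature vector histogram; the function name, bin count, and value range are assumptions for illustration and are not taken from the patent.

```python
import numpy as np

def lane_feature_histogram(binary_seg, embedding, h=56, w=56, value_range=(-3.0, 3.0)):
    """Build a 2-D histogram of lane line feature vectors.

    binary_seg: (H, W) array, 1 on lane line pixels, 0 on background.
    embedding:  (2, H, W) array, the two-channel lane line feature vector.
    Returns an (h, w) histogram counting lane pixels per embedding bin.
    """
    mask = binary_seg.astype(bool)
    feats = embedding[:, mask]  # (2, N) feature vectors of lane pixels only
    hist, _, _ = np.histogram2d(
        feats[0], feats[1],
        bins=(h, w),
        range=(value_range, value_range),
    )
    return hist

# Usage with random stand-in data (real inputs come from the detection network).
binary_seg = (np.random.rand(256, 512) > 0.95).astype(np.uint8)
embedding = np.random.randn(2, 256, 512).astype(np.float32)
print(lane_feature_histogram(binary_seg, embedding).shape)  # (56, 56)
```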
And step 206, inputting the lane line feature vector histogram into the trained clustering network to obtain a clustering center and a clustering radius.
In specific implementation, the trained clustering network may adopt a structure consisting of a shared-weight CNN (Convolutional Neural Network) backbone followed by a multi-task output. The multi-task output comprises two branches: a cluster center classification branch and a cluster radius regression branch. The cluster center classification branch outputs an h × w two-dimensional image, where h and w are the same as the height and width of the input histogram, a small size (e.g., 56 × 56). In this classification result, points at cluster centers take the value 1 and background points take the value 0. The cluster radius regression branch also outputs an h × w two-dimensional image, in which the point corresponding to a cluster center outputs the cluster radius and background points output arbitrary values. The clustering network can be trained based on historical lane driving scene images, a preset lane line detection network, and so on. After the clustering network is trained, inputting the lane line feature vector histogram yields the cluster radii and cluster centers; fig. 3(b) shows an output of the clustering network, including the cluster centers and cluster radii.
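The following PyTorch sketch illustrates one possible realization of such a structure, with a shared convolutional backbone and two heads; the layer sizes and names are assumptions, not the patent's exact network.

```python
import torch
import torch.nn as nn

class ClusterNet(nn.Module):
    """Illustrative clustering network: shared CNN backbone plus two task heads."""

    def __init__(self, channels=32):
        super().__init__()
        # Shared backbone over the 1-channel h x w feature vector histogram.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Cluster center classification branch: per-pixel center/background logits.
        self.center_head = nn.Conv2d(channels, 2, 1)
        # Cluster radius regression branch: per-pixel radius value.
        self.radius_head = nn.Conv2d(channels, 1, 1)

    def forward(self, hist):
        feat = self.backbone(hist)
        return self.center_head(feat), self.radius_head(feat)

# Usage with a dummy 56 x 56 histogram.
net = ClusterNet()
center_logits, radius_map = net(torch.randn(1, 1, 56, 56))
print(center_logits.shape, radius_map.shape)  # (1, 2, 56, 56) (1, 1, 56, 56)
```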
And step 208, based on the clustering centers and the clustering radii, performing distance judgment on the lane line characteristic vectors to obtain clustering identifications corresponding to the lane line characteristic vectors.
In this embodiment, after the cluster centers and cluster radii are obtained, distance judgment can be performed between the lane line feature vectors and each cluster center: the L2 distance between the two-dimensional feature vector of every point on the lane lines and each obtained cluster center is calculated, and if a feature point falls within the circle defined by a certain cluster center and its cluster radius, the feature point is considered to belong to that cluster center. Through this distance judgment, the feature vector of every point on the lane lines is uniquely attributed to one cluster center, yielding a clustering result of the lane line two-dimensional feature vector space, i.e., a unique lane line ID (Identity) is assigned to each lane line two-dimensional feature vector.
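A minimal sketch of this distance judgment is shown below, assuming the lane line feature vectors, cluster centers, and cluster radii are available as NumPy arrays; the helper name and the -1 value for unassigned points are illustrative assumptions.

```python
import numpy as np

def assign_cluster_ids(lane_feats, centers, radii):
    """Assign each lane line feature vector to a cluster center by L2 distance.

    lane_feats: (N, 2) feature vectors of lane line pixels.
    centers:    (K, 2) cluster centers in the feature vector space.
    radii:      (K,)   cluster radius of each center.
    Returns an (N,) array of cluster IDs (index of the center, -1 if none).
    """
    # Pairwise L2 distances between every feature vector and every center.
    dists = np.linalg.norm(lane_feats[:, None, :] - centers[None, :, :], axis=2)  # (N, K)
    nearest = np.argmin(dists, axis=1)
    inside = dists[np.arange(len(lane_feats)), nearest] <= radii[nearest]
    return np.where(inside, nearest, -1)

# Usage with toy data: two centers, unit radii.
ids = assign_cluster_ids(np.random.randn(100, 2),
                         np.array([[0.0, 0.0], [2.5, 2.5]]),
                         np.array([1.0, 1.0]))
```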
And step 210, correspondingly mapping the lane line characteristic vectors, the cluster identifications corresponding to the lane line characteristic vectors and the lane line binary segmentation results to obtain lane line example segmentation results.
After the clustering result of the lane line two-dimensional feature vectors is obtained, because the lane line binary segmentation result cannot distinguish the points belonging to each individual lane line, the lane line feature vectors and their corresponding cluster identifiers can be back-projected into the real space of the lane line binary segmentation result and mapped one-to-one onto it to obtain the lane line instance segmentation result (for example, the result shown in fig. 3(d)). In the lane line instance segmentation result, points on the same lane line have the same ID and points on different lane lines have different IDs.
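The back-projection can be sketched as follows, assuming the pixel coordinates of the lane line points were recorded when the feature vectors were gathered; this is only an illustration of the mapping, not the patent's implementation.

```python
import numpy as np

def build_instance_mask(binary_seg, lane_coords, cluster_ids):
    """Map per-pixel cluster IDs back onto the binary segmentation image.

    binary_seg:  (H, W) lane line binary segmentation (1 = lane pixel).
    lane_coords: (N, 2) (row, col) coordinates of lane line pixels, in the same
                 order as their feature vectors / cluster_ids.
    cluster_ids: (N,)   cluster ID per lane line pixel (-1 = unassigned).
    Returns an (H, W) instance map: 0 = background, k + 1 = lane line with ID k.
    """
    instance = np.zeros_like(binary_seg, dtype=np.int32)
    for (r, c), k in zip(lane_coords, cluster_ids):
        if k >= 0:
            instance[r, c] = k + 1  # same ID for all points of one lane line
    return instance
```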
In this method, a lane line binary segmentation result and a lane line feature vector are acquired; a lane line feature vector histogram is obtained from them; the histogram is input into a trained clustering network to obtain cluster centers and cluster radii; distance judgment is performed on the lane line feature vectors based on the cluster centers and cluster radii to obtain the cluster identifier corresponding to each feature vector; and the lane line feature vectors and their cluster identifiers are mapped onto the lane line binary segmentation result to obtain the lane line instance segmentation result. Throughout the process, the clustering network clusters the lane line feature vectors in place of the traditional clustering algorithm, so the post-processing clustering step can be moved to the GPU, saving a large amount of CPU computation; meanwhile, the output of the preset clustering algorithm is used directly as the training data of the clustering network, so no manual data labeling is needed, and the efficiency of lane line instance segmentation is improved.
In one embodiment, step 206 comprises: inputting the lane line feature vector histogram into the trained clustering network to obtain a clustering center classification result, extracting the coordinates of the clustering centers by adopting a connected domain calibration algorithm based on the classification result of the clustering centers, and indexing the clustering radius corresponding to each clustering center according to the coordinates of the clustering centers.
In one embodiment, as shown in fig. 4, the trained clustering network includes a cluster center classification branch and a cluster radius regression branch;
step 206 comprises: step 226, inputting the lane line feature vector histogram into the trained clustering network, extracting clustering center classification results from the clustering center classification branches, extracting clustering centers by adopting a connected domain calibration algorithm based on the clustering center classification results, and indexing the clustering radii corresponding to the clustering centers through clustering radius regression branches according to the extracted clustering centers.
In specific implementation, the clustering network includes a cluster center classification branch and a cluster radius regression branch. The cluster centers can be obtained by inputting the lane line feature vector histogram into the trained clustering network and extracting the cluster center classification result from the cluster center classification branch. In this result, the pixel value of a point at a cluster center is 1 and the pixel value of a background point is 0. Then a connected domain calibration (connected component labeling) algorithm is used to label the cluster center classification result and extract the cluster centers. Once the cluster centers and their coordinates are known, the cluster radius corresponding to each cluster center can be indexed from the cluster radius regression branch at those coordinates, so that the cluster centers and their corresponding cluster radii are output. In this embodiment, the connected domain calibration algorithm quickly extracts the cluster centers, and the cluster radius corresponding to each center is quickly indexed through the cluster radius regression branch.
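One way to realize this extraction is sketched below, using SciPy's connected-component labeling as a stand-in for the connected domain calibration algorithm, taking the center of mass of each connected component as the cluster center coordinate and indexing the radius map at that coordinate; this is an assumed realization, not the patent's exact procedure.

```python
import numpy as np
from scipy import ndimage

def extract_centers_and_radii(center_map, radius_map):
    """Extract cluster centers from the classification map and index their radii.

    center_map: (h, w) binary map, 1 at predicted cluster center points.
    radius_map: (h, w) output of the cluster radius regression branch.
    Returns (centers, radii): (K, 2) integer coordinates and (K,) radii.
    """
    labeled, num = ndimage.label(center_map)  # connected-component labeling
    coms = ndimage.center_of_mass(center_map, labeled, range(1, num + 1))
    centers = np.rint(np.array(coms)).astype(int).reshape(-1, 2)
    radii = radius_map[centers[:, 0], centers[:, 1]]  # index radius at each center
    return centers, radii
```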
In one embodiment, as shown in fig. 4, performing distance judgment on the lane line feature vector based on the cluster center and the cluster radius to obtain the cluster identifier corresponding to the lane line feature vector includes: step 228, acquiring the cluster identifier corresponding to each cluster center, and, when the pixel point corresponding to the current lane line feature vector is within a target range, adding the cluster identifier of the current cluster center to the current lane line feature vector, where the target range is the area formed by the current cluster center and the cluster radius corresponding to that center.
In specific implementation, a cluster identifier such as a lane line ID may be added to each cluster center. It is then judged whether the pixel point corresponding to the current lane line feature vector is within the target range, i.e., whether the pixel point corresponding to the lane line two-dimensional feature vector falls within the area (a circle) formed by a certain cluster center and its cluster radius. If the pixel point falls within such a circle, the lane line ID of the cluster center of that circle is assigned to it, so that each lane line two-dimensional feature vector corresponds to a unique lane line ID and the clustering result of the lane line two-dimensional feature vector space is obtained. In this embodiment, comparing the lane line two-dimensional feature vectors with the cluster centers and cluster radii and adding the corresponding identifier facilitates the later instance segmentation.
In one embodiment, as shown in fig. 5, before inputting the lane line feature vector histogram into the trained clustering network and obtaining the cluster center and the cluster radius, the method further includes:
step 205, obtaining a binary segmentation result of a historical lane line and a characteristic vector of the historical lane line;
step 225, clustering the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; performing histogram statistics on the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines to obtain a histogram of the characteristic vectors of the historical lane lines;
step 245, constructing a training data set according to the historical lane line feature vector histogram, the clustering center and the clustering radius obtained by a preset clustering algorithm;
and 265, combining preset loss functions based on the training data set, training an initial clustering network, and obtaining the trained clustering network.
In practical application, in order to train the clustering network, the camera needs to collect a large amount of driving image data, i.e., historical lane driving scene images, under different illumination conditions, in different scenes, and from different viewpoints, including both truck-viewpoint and car-viewpoint data. After the historical lane driving scene image data is collected, it can be input into a preset trained lane line detection network for semantic segmentation to obtain historical lane line binary segmentation results and historical lane line two-dimensional feature vectors. Then the training of the clustering network starts, and the training process can be as follows: the historical lane line binary segmentation result and the historical lane line two-dimensional feature vector are input into a designed post-processing clustering algorithm to obtain the cluster centers and cluster radii, where the post-processing clustering algorithm can be DBSCAN (Density-Based Spatial Clustering of Applications with Noise), Mean-Shift (mean shift algorithm), or similar algorithms. Then, two-dimensional histogram statistics are performed on the historical lane line binary segmentation result and the historical lane line two-dimensional feature vectors to obtain the corresponding historical lane line feature vector histogram. A large number of historical lane driving scene images are input into the lane line detection network and the above processing is repeated to obtain the historical lane line feature vector histogram, cluster centers, and cluster radii corresponding to each image, and the training data set of the clustering network is constructed from these data. The cluster centers and cluster radii output by the post-processing clustering algorithm serve as the ground-truth values of the training data, and the clustering network is trained with this training data set. Specifically, the clustering network may be trained using the loss function of the following formula (1):
[Equation (1), rendered as an image in the original: the combined multi-task loss that weights the radius regression loss L1(W) and the center classification loss L2(W) with learnable coefficients σ1 and σ2.]
L1(W) = ‖y1 − fW(x)‖²  (2)
L2(W) = −log(softmax(y2, fW(x)))  (3)
in the above formula, L1(W) is the Euclidean distance between the clustering radius of the clustering radius regression branch output and the real value of the clustering radius, y1Actual value, f, representing the cluster radiusW(x) Representing the parameters of the clustering network to be trained. And only the point corresponding to the clustering center is considered when the distance is calculated, and other points do not participate in training. L is2(W) is a cross-entropy loss function of the cluster center classification branch, where y2Representing the true value of the cluster center. The optimization of equation (1) aims to find the optimal network parameter W, weighting coefficient sigma1And σ2The final goal can be seen as learning the relative weights of each subtask output. Where the numerical value is large σ2Will reduce L2Influence of (W), small value of σ2Will increase L2And (W), the weight of each task can be automatically determined by the training process without manual setting, and the output result of the post-processing clustering algorithm is used as the true values of the clustering radius and the clustering center without manual marking of data, thereby being beneficial to the industrialization of the algorithm.
It should be understood that although the steps in the flowcharts of fig. 2, 4 and 5 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2, 4 and 5 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a lane line example clustering device including: a data acquisition module 510, a histogram data acquisition module 520, a cluster feature data extraction module 530, a cluster processing module 540, and an instance segmentation module 550, wherein:
and a data obtaining module 510, configured to obtain a lane line binary segmentation result and a lane line feature vector.
The histogram data obtaining module 520 is configured to obtain a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector.
The clustering feature data extracting module 530 is configured to input the lane line feature vector histogram to a trained clustering network to obtain a clustering center and a clustering radius, where the trained clustering network is obtained by training based on the historical lane line feature vector histogram and the clustering center and the clustering radius obtained by a preset clustering algorithm.
And the clustering processing module 540 is configured to perform distance judgment on the lane line feature vectors based on the clustering centers and the clustering radii to obtain clustering identifications corresponding to the lane line feature vectors, where the distance judgment is used to distinguish the clustering centers to which the lane line feature vectors belong.
And the example segmentation module 550 is configured to map the lane line feature vector and the cluster identifier corresponding to the lane line feature vector with the lane line binary segmentation result to obtain a lane line example segmentation result.
In one embodiment, the clustering feature data extracting module 530 is further configured to extract coordinates of clustering centers by using a connected domain calibration algorithm based on the lane line feature vector histogram, and index a clustering radius corresponding to each clustering center according to the coordinates of the clustering centers.
In one embodiment, the cluster feature data extraction module 530 is further configured to input the lane line feature vector histogram to a trained cluster network, extract a cluster center classification result from the cluster center classification branch, extract a cluster center by using a connected domain calibration algorithm based on the cluster center classification result, and index a cluster radius corresponding to each cluster center through a cluster radius regression branch according to the extracted cluster center.
In one embodiment, the cluster processing module 540 is further configured to obtain a cluster identifier corresponding to a cluster center, and add the cluster identifier corresponding to the current cluster center to the current lane line feature vector when a pixel point corresponding to the current lane line feature vector is within a target range, where the target range is an area range formed by the current cluster center and a cluster radius corresponding to the current cluster center.
In one embodiment, as shown in fig. 7, the apparatus further includes a network training module 560, configured to obtain a historical lane binary segmentation result and a historical lane feature vector, and perform clustering processing on the historical lane binary segmentation result and the historical lane feature vector by using a preset clustering algorithm to obtain a clustering center and a clustering radius; and performing histogram statistics on the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines to obtain a histogram of the characteristic vectors of the historical lane lines, constructing a training data set according to the histogram of the characteristic vectors of the historical lane lines, a clustering center and a clustering radius obtained by a preset clustering algorithm, and training an initial clustering network based on a preset loss function of the training data set to obtain a trained clustering network.
For specific limitations of the lane line example clustering device, reference may be made to the above limitations on the lane line example clustering method, which are not described herein again. The modules in the lane line example clustering device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the electronic device, or can be stored in a memory in the electronic device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, an electronic device is provided, which may be a vehicle-mounted embedded device, and the internal structure thereof may be as shown in fig. 8. The electronic device comprises a processor, a memory, a communication interface, a display screen and an input device which are connected through a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the electronic device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a lane line instance clustering method. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 8 is a block diagram of only a portion of the structure relevant to the present disclosure, and does not constitute a limitation on the electronic device to which the present disclosure may be applied, and that a particular electronic device may include more or less components than those shown, or combine certain components, or have a different arrangement of components.
In one embodiment, an electronic device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program: acquiring a lane line binary segmentation result and a lane line feature vector, obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector, inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius, based on the clustering center and the clustering radius, judging the distance of the lane line characteristic vector to obtain a cluster identifier corresponding to the lane line characteristic vector, mapping the lane line characteristic vector and the cluster identifier corresponding to the lane line characteristic vector with a lane line binary segmentation result to obtain a lane line example segmentation result, the distance judgment is used for distinguishing which clustering center the two-dimensional characteristic vector of the lane line belongs to, and the trained clustering network is obtained by training based on the historical lane line characteristic vector histogram, the clustering center obtained by a preset clustering algorithm and the clustering radius.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and extracting the coordinates of the clustering centers by adopting a connected domain calibration algorithm based on the lane line feature vector histogram, and indexing the clustering radius corresponding to each clustering center according to the coordinates of the clustering centers.
In one embodiment, the processor, when executing the computer program, further performs the steps of: inputting the lane line feature vector histogram into a trained clustering network, extracting clustering center classification results from clustering center classification branches, extracting clustering centers by adopting a connected domain calibration algorithm based on the clustering center classification results, and indexing clustering radii corresponding to the clustering centers through clustering radius regression branches according to the extracted clustering centers.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and acquiring a cluster identifier corresponding to the cluster center, and when the pixel point corresponding to the current lane line characteristic vector is in a target range, adding the cluster identifier corresponding to the current cluster center to the current lane line characteristic vector, wherein the target range is an area range formed by the current cluster center and the cluster radius corresponding to the current cluster center.
In one embodiment, the processor, when executing the computer program, further performs the steps of: obtaining historical lane driving scene image data, inputting the historical lane driving scene image data into a trained lane line detection network for semantic segmentation to obtain a historical lane line binary segmentation result and a historical lane line characteristic vector, and clustering the historical lane line binary segmentation result and the historical lane line characteristic vector by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; and performing histogram statistics on the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines to obtain a histogram of the characteristic vectors of the historical lane lines, constructing a training data set according to the histogram of the characteristic vectors of the historical lane lines, a clustering center and a clustering radius obtained by a preset clustering algorithm, and training an initial clustering network based on a preset loss function of the training data set to obtain a trained clustering network.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor performs the steps of: acquiring a lane line binary segmentation result and a lane line feature vector, obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector, inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius, based on the clustering center and the clustering radius, judging the distance of the lane line characteristic vector to obtain a cluster identifier corresponding to the lane line characteristic vector, mapping the lane line characteristic vector and the cluster identifier corresponding to the lane line characteristic vector with a lane line binary segmentation result to obtain a lane line example segmentation result, the distance judgment is used for distinguishing clustering centers to which the two-dimensional feature vectors of the lane lines belong, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram, clustering centers obtained by a preset clustering algorithm and clustering radii.
In one embodiment, the computer program when executed by the processor further performs the steps of: and extracting the coordinates of the clustering centers by adopting a connected domain calibration algorithm based on the lane line feature vector histogram, and indexing the clustering radius corresponding to each clustering center according to the coordinates of the clustering centers.
In one embodiment, the computer program when executed by the processor further performs the steps of: inputting the lane line feature vector histogram into a trained clustering network, extracting clustering center classification results from clustering center classification branches, extracting clustering centers by adopting a connected domain calibration algorithm based on the clustering center classification results, and indexing clustering radii corresponding to the clustering centers through clustering radius regression branches according to the extracted clustering centers.
In one embodiment, the computer program when executed by the processor further performs the steps of: and acquiring a cluster identifier corresponding to the cluster center, and when the pixel point corresponding to the current lane line characteristic vector is in a target range, adding the cluster identifier corresponding to the current cluster center to the current lane line characteristic vector, wherein the target range is an area range formed by the current cluster center and the cluster radius corresponding to the current cluster center.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining a historical lane binary segmentation result and a historical lane characteristic vector, and clustering the historical lane binary segmentation result and the historical lane characteristic vector by adopting a preset clustering algorithm to obtain a clustering center and a clustering radius; and performing histogram statistics on the binary segmentation result of the historical lane lines and the characteristic vectors of the historical lane lines to obtain a histogram of the characteristic vectors of the historical lane lines, constructing a training data set according to the histogram of the characteristic vectors of the historical lane lines, a clustering center and a clustering radius obtained by a preset clustering algorithm, and training an initial clustering network based on a preset loss function of the training data set to obtain a trained clustering network.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the above method embodiments. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method for clustering lane line instances, the method comprising:
acquiring a lane line binary segmentation result and a lane line feature vector;
obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius;
performing distance judgment on the lane line feature vector based on the clustering center and the clustering radius to obtain a cluster identifier corresponding to the lane line feature vector;
mapping the lane line feature vector and the cluster identifier corresponding to the lane line feature vector onto the lane line binary segmentation result to obtain a lane line instance segmentation result;
wherein the distance judgment is used to distinguish the clustering center to which each lane line feature vector belongs, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram and on clustering centers and clustering radii obtained by a preset clustering algorithm.
2. The method of claim 1, wherein inputting the lane line feature vector histogram into the trained clustering network to obtain the clustering center and the clustering radius comprises:
inputting the lane line feature vector histogram into the trained clustering network to obtain a clustering center classification result;
extracting a clustering center from the clustering center classification result by means of a connected component labeling algorithm;
and indexing the clustering radius corresponding to the clustering center according to the clustering center.
3. The method of claim 1, wherein the trained clustering network comprises a clustering center classification branch and a clustering radius regression branch;
wherein inputting the lane line feature vector histogram into the trained clustering network to obtain the clustering center and the clustering radius comprises:
inputting the lane line feature vector histogram into the trained clustering network, and extracting a clustering center classification result from the clustering center classification branch;
extracting a clustering center from the clustering center classification result by means of a connected component labeling algorithm;
and indexing, through the clustering radius regression branch, the clustering radii corresponding to the extracted clustering centers.
4. The method according to claim 1, wherein performing the distance judgment on the lane line feature vector based on the clustering center and the clustering radius to obtain the cluster identifier corresponding to the lane line feature vector comprises:
acquiring a cluster identifier corresponding to the clustering center;
and, when the pixel point corresponding to the current lane line feature vector falls within a target range, adding the cluster identifier corresponding to the current clustering center to the current lane line feature vector, wherein the target range is the region defined by the current clustering center and the clustering radius corresponding to the current clustering center.
5. The method of claim 1, wherein before inputting the lane line feature vector histogram into the trained clustering network to obtain the clustering center and the clustering radius, the method further comprises:
obtaining a historical lane line binary segmentation result and a historical lane line feature vector;
clustering the historical lane line binary segmentation result and the historical lane line feature vector by using the preset clustering algorithm to obtain a clustering center and a clustering radius, and performing histogram statistics on the historical lane line binary segmentation result and the historical lane line feature vector to obtain a historical lane line feature vector histogram;
constructing a training data set according to the historical lane line feature vector histogram and the clustering center and the clustering radius obtained by the preset clustering algorithm;
and training an initial clustering network based on the training data set in combination with a preset loss function to obtain the trained clustering network.
6. The method of claim 1, wherein the lane line binary segmentation result and the lane line feature vector are obtained by performing semantic segmentation on a lane driving scene image with a trained lane line detection network.
7. The method of any of claims 1 to 5, wherein the trained clustering network comprises a backbone network and a multitask output network.
8. An apparatus for clustering lane line instances, the apparatus comprising:
a data acquisition module for acquiring a lane line binary segmentation result and a lane line feature vector;
a histogram data acquisition module for obtaining a lane line feature vector histogram according to the lane line binary segmentation result and the lane line feature vector;
a clustering feature data extraction module for inputting the lane line feature vector histogram into a trained clustering network to obtain a clustering center and a clustering radius;
a clustering processing module for performing distance judgment on the lane line feature vector based on the clustering center and the clustering radius to obtain a cluster identifier corresponding to the lane line feature vector;
an instance segmentation module for mapping the lane line feature vector and the cluster identifier corresponding to the lane line feature vector onto the lane line binary segmentation result to obtain a lane line instance segmentation result;
wherein the distance judgment is used to distinguish the clustering center to which each lane line feature vector belongs, and the trained clustering network is obtained by training based on a historical lane line feature vector histogram and on clustering centers and clustering radii obtained by a preset clustering algorithm.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
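Read together, claims 1, 2 and 4 describe the inference path: extract clustering centers from the center classification result by connected component labeling, index the paired clustering radii, perform the distance judgment, and map the labels back onto the binary segmentation result. The sketch below assumes the network outputs are already available as arrays on the histogram grid and that the feature space is two-dimensional and normalised to [0, 1]; every helper name is hypothetical and not taken from the patent.

import numpy as np
from scipy import ndimage

def lane_instance_segmentation(binary_mask, embedding_map, center_map, radius_map, bins=64):
    """Map clustering-network outputs back to a per-pixel lane line instance mask.

    binary_mask   : (H, W) bool lane line binary segmentation result
    embedding_map : (H, W, 2) per-pixel lane line feature vectors in [0, 1]
    center_map    : (bins, bins) center classification output of the network
    radius_map    : (bins, bins) radius regression output on the same grid
    Returns an (H, W) int mask: 0 = background, 1..K = lane line instances.
    """
    cell = 1.0 / bins

    # Connected component labeling over the center classification result:
    # each connected blob is taken as one clustering center.
    labeled, num = ndimage.label(center_map > 0.5)
    centers, radii = [], []
    for k in range(1, num + 1):
        i0, i1 = ndimage.center_of_mass(labeled == k)
        # Convert grid coordinates back to feature-space values (same axis
        # convention as the histogram fed to the network).
        centers.append(((i0 + 0.5) * cell, (i1 + 0.5) * cell))
        # Index the clustering radius at the center cell of the same grid.
        radii.append(float(radius_map[int(round(i0)), int(round(i1))]))

    # Distance judgment for every lane line pixel, then map the cluster
    # identifiers back onto the binary segmentation result.
    instance_mask = np.zeros(binary_mask.shape, dtype=np.int32)
    ys, xs = np.nonzero(binary_mask)
    feats = embedding_map[ys, xs]                          # (N, 2)
    for k, (c, r) in enumerate(zip(centers, radii), start=1):
        dist = np.linalg.norm(feats - np.asarray(c), axis=1)
        hit = (dist <= r) & (instance_mask[ys, xs] == 0)
        instance_mask[ys[hit], xs[hit]] = k
    return instance_mask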
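Claims 3 and 7 describe a clustering network with a shared backbone and two task branches. A toy PyTorch sketch of such a layout is given below; the layer sizes, the 64x64 histogram grid and every identifier are assumptions made for illustration and do not reflect the actual network of the application.

import torch
import torch.nn as nn

class ClusteringNet(nn.Module):
    """Toy two-branch clustering network: a shared backbone over the lane line
    feature vector histogram, a clustering center classification branch and a
    clustering radius regression branch defined on the same grid."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared backbone network
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.center_branch = nn.Conv2d(32, 1, 1)       # per-cell center logit
        self.radius_branch = nn.Conv2d(32, 1, 1)       # per-cell radius value

    def forward(self, hist):                           # hist: (B, 1, 64, 64)
        feat = self.backbone(hist)
        center_logits = self.center_branch(feat)       # classification branch output
        radius = torch.relu(self.radius_branch(feat))  # regression branch output
        return center_logits, radius

A training step could, for instance, apply a binary cross-entropy loss to the center classification output and an L1 loss to the radius output at labelled center cells; the actual preset loss function is not specified in the claims.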
CN202010972488.XA 2020-06-08 2020-09-16 Lane line instance clustering method and device, electronic equipment and storage medium Active CN112084988B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020105133575 2020-06-08
CN202010513357 2020-06-08

Publications (2)

Publication Number Publication Date
CN112084988A true CN112084988A (en) 2020-12-15
CN112084988B CN112084988B (en) 2024-01-05

Family

ID=73738004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010972488.XA Active CN112084988B (en) 2020-06-08 2020-09-16 Lane line instance clustering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112084988B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030044061A1 (en) * 2001-08-31 2003-03-06 Pradya Prempraneerach Color image segmentation in an object recognition system
CN101770577A (en) * 2010-01-18 2010-07-07 浙江林学院 Method for extracting information on expansion and elimination of dead wood of pine wilt disease in air photos of unmanned aerial vehicle
CN109214428A (en) * 2018-08-13 2019-01-15 平安科技(深圳)有限公司 Image partition method, device, computer equipment and computer storage medium
CN110866527A (en) * 2018-12-28 2020-03-06 北京安天网络安全技术有限公司 Image segmentation method and device, electronic equipment and readable storage medium
CN109740609A (en) * 2019-01-09 2019-05-10 银河水滴科技(北京)有限公司 A kind of gauge detection method and device
CN110400322A (en) * 2019-07-30 2019-11-01 江南大学 Fruit point cloud segmentation method based on color and three-dimensional geometric information
CN111178245A (en) * 2019-12-27 2020-05-19 深圳佑驾创新科技有限公司 Lane line detection method, lane line detection device, computer device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Agus Zainal Arifin, Akira Asano: "Image segmentation by histogram thresholding using hierarchical cluster analysis", Pattern Recognition Letters *
Shi Zhengang et al.: "Improved FCM clustering algorithm for medical ultrasound image segmentation", Journal of Shenyang Ligong University *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507977A (en) * 2021-01-21 2021-03-16 国汽智控(北京)科技有限公司 Lane line positioning method and device and electronic equipment
CN112507977B (en) * 2021-01-21 2021-12-07 国汽智控(北京)科技有限公司 Lane line positioning method and device and electronic equipment
CN112906551A (en) * 2021-02-09 2021-06-04 北京有竹居网络技术有限公司 Video processing method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN112084988B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
Yi et al. ASSD: Attentive single shot multibox detector
CN112232293B (en) Image processing model training method, image processing method and related equipment
Xie et al. Multilevel cloud detection in remote sensing images based on deep learning
CN108229509B (en) Method and device for identifying object class and electronic equipment
CN106228185B (en) A kind of general image classifying and identifying system neural network based and method
CN111353512B (en) Obstacle classification method, obstacle classification device, storage medium and computer equipment
CN109902548B (en) Object attribute identification method and device, computing equipment and system
CN108830196A (en) Pedestrian detection method based on feature pyramid network
CN113255915B (en) Knowledge distillation method, device, equipment and medium based on structured instance graph
CN110633708A (en) Deep network significance detection method based on global model and local optimization
CN112528845B (en) Physical circuit diagram identification method based on deep learning and application thereof
CN113807399A (en) Neural network training method, neural network detection method and neural network detection device
CN112084988B (en) Lane line instance clustering method and device, electronic equipment and storage medium
CN113537180B (en) Tree obstacle identification method and device, computer equipment and storage medium
CN113034514A (en) Sky region segmentation method and device, computer equipment and storage medium
CN111539910B (en) Rust area detection method and terminal equipment
CN112633294A (en) Significance region detection method and device based on perceptual hash and storage device
CN111179270A (en) Image co-segmentation method and device based on attention mechanism
JP2015036939A (en) Feature extraction program and information processing apparatus
CN112001378A (en) Lane line processing method and device based on feature space, vehicle-mounted terminal and medium
Alsanad et al. Real-time fuel truck detection algorithm based on deep convolutional neural network
CN104050674B (en) Salient region detection method and device
CN112883827B (en) Method and device for identifying specified target in image, electronic equipment and storage medium
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
KR20110037184A (en) Pipelining computer system combining neuro-fuzzy system and parallel processor, method and apparatus for recognizing objects using the computer system in images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230428

Address after: No. 103-63, Xiaojunshan Community Commercial Building, Junshan Street, Wuhan Economic and Technological Development Zone, Wuhan City, Hubei Province, 430119

Applicant after: Wuhan Youjia Innovation Technology Co.,Ltd.

Address before: 518051 1101, west block, Skyworth semiconductor design building, 18 Gaoxin South 4th Road, Gaoxin community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant