CN108345875A - Drivable region detection model training method, detection method and device - Google Patents

Drivable region detection model training method, detection method and device

Info

Publication number
CN108345875A
Authority
CN
China
Prior art keywords
drivable
drivable region
road
neural network
model
Prior art date
Legal status
Granted
Application number
CN201810308028.XA
Other languages
Chinese (zh)
Other versions
CN108345875B (en)
Inventor
梁继
夏炎
Current Assignee
Beijing Momenta Technology Co Ltd
Original Assignee
Beijing Initial Speed Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Initial Speed Technology Co Ltd filed Critical Beijing Initial Speed Technology Co Ltd
Priority to CN201810308028.XA
Publication of CN108345875A
Application granted
Publication of CN108345875B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the present application discloses a training method for a drivable region detection model: a road sample image in which the drivable region is annotated is obtained, the road sample image is input into a pre-established initial neural network model, and the initial neural network model is trained on the road sample image by supervised learning to obtain a drivable region detection model. Based on the drivable region detection model, the present application also provides a drivable region detection method: a current road image is input into the drivable region detection model, which learns directly from the current road image and outputs structured drivable region information. Compared with conventional image segmentation techniques, the present application makes full use of the learning ability of the model and improves both the efficiency and the accuracy of drivable region detection.

Description

Drivable region detection model training method, detection method and device
Technical field
The present application relates to the field of image processing, and in particular to a training method for a drivable region detection model, a drivable region detection method, and corresponding devices.
Background art
As intelligent systems are applied to the field of vehicle driving, more and more vehicles are equipped with intelligent systems capable of automated driving or driver assistance. To realize automated driving or driver-assistance functions, the on-board intelligent system usually needs to identify the drivable region from road images around the vehicle in order to guide the vehicle.
Existing drivable region detection techniques mainly rely on conventional image segmentation to obtain a pixel-level contour of the drivable region. Specifically, an image is input and a result map of the same size is output, in which the value of each pixel represents the class of the pixel at the same location in the input image.
However, in the prior art the result map output by the vehicle's perception module through image segmentation is a pixel-wise classification map. It cannot be used directly when it is sent to the downstream planning and control module; image processing is still required to extract structured contour information. The perception module and the control module therefore do not communicate directly, the learning ability of the model is not fully exploited, and both detection efficiency and detection accuracy suffer.
Summary of the invention
In view of this, a first aspect of the present application provides a training method for a drivable region detection model. The method is applied to the field of automated driving and includes:
obtaining a road sample image in which the drivable region is annotated;
inputting the road sample image into a pre-established initial neural network model;
training the initial neural network model with the road sample image to obtain a drivable region detection model.
Optionally, the drivable region of the road sample image is annotated with N equally spaced line segments extending vertically upward from the bottom edge of the road sample image, where N is a positive integer and the end position of each line segment is a boundary position of the drivable region;
in this case, inputting the road sample image into the pre-established initial neural network model includes:
inputting the road sample image annotated with the N equally spaced line segments into the pre-established initial neural network model;
and training the initial neural network model with the road sample image to obtain the drivable region detection model includes:
training the initial neural network model with the road sample image annotated with the N equally spaced line segments to obtain the drivable region detection model.
Optionally, training the initial neural network model with the road sample image to obtain the drivable region detection model includes:
the initial neural network model extracting features of the road sample image;
the initial neural network model mapping the extracted features to a 1*N region vector, where N is a positive integer, the region vector characterizes the prediction of the drivable region by the initial neural network model, and each element of the region vector characterizes the length of one of the line segments rising from the bottom edge of the road sample image;
the initial neural network model comparing the region vector with a label vector, the label vector being the 1*N vector formed by the lengths of the line segments that annotate the drivable region of the road sample image;
updating the parameters of the initial neural network model according to the comparison result;
and, when the loss function of the initial neural network model satisfies a preset condition, taking the current model parameters of the initial neural network model as the parameters of the drivable region detection model and obtaining the drivable region detection model from those parameters.
Optionally, updating the parameters of the initial neural network model according to the comparison result includes:
determining the loss function of the initial neural network model according to the comparison result;
and updating the parameters of the initial neural network model according to the loss function.
Optionally, the initial neural network model mapping the extracted features to a 1*N region vector includes:
applying bilinear interpolation and asymmetric convolution to the extracted features to obtain the 1*N region vector.
Optionally, the initial neural network model is a deep residual network model.
Optionally, the loss function of the initial neural network model is the Smooth L1 loss function.
A second aspect of the present application provides a drivable region detection method. The method is applied to the field of automated driving and includes:
obtaining a current road image;
inputting the current road image into a drivable region detection model and determining the drivable region in the current road image based on the output of the drivable region detection model, the drivable region detection model being a model generated by training with the training method for a drivable region detection model provided by the first aspect of the embodiments of the present application.
Optionally, inputting the current road image into the drivable region detection model and determining the drivable region in the current road image based on the output of the drivable region detection model includes:
the drivable region detection model extracting features of the current road image;
the drivable region detection model mapping the extracted features to a 1*N region vector, where N is an integer, the region vector characterizes the prediction of the drivable region by the drivable region detection model, and each element of the region vector characterizes the length of one of the line segments rising from the bottom edge of the image;
and the drivable region detection model determining the drivable region in the current road image according to the region vector.
A third aspect of the embodiments of the present application provides a training device for a drivable region detection model, the device including:
a sample obtaining unit, configured to obtain a road sample image in which the drivable region is annotated;
a training unit, configured to input the road sample image into a pre-established initial neural network model and to train the initial neural network model with the road sample image to obtain a drivable region detection model.
Optionally, the drivable region of the road sample image is annotated with N equally spaced line segments extending vertically upward from the bottom edge of the road sample image, where N is a positive integer and the end position of each line segment is a boundary position of the drivable region;
in this case, the training unit is specifically configured to:
input the road sample image annotated with the N equally spaced line segments into the pre-established initial neural network model;
and train the initial neural network model with the road sample image annotated with the N equally spaced line segments to obtain the drivable region detection model.
Optionally, the training unit includes:
an extraction subunit, configured to extract features of the road sample image;
a mapping subunit, configured to map the extracted features to a 1*N region vector, where N is a positive integer, the region vector characterizes the prediction of the drivable region by the initial neural network model, and each element of the region vector characterizes the length of one of the line segments rising from the bottom edge of the road sample image;
a comparison subunit, configured to compare the region vector with a label vector, the label vector being the 1*N vector formed by the lengths of the line segments that annotate the drivable region of the road sample image;
an update subunit, configured to update the parameters of the initial neural network model according to the comparison result;
and a determination subunit, configured to, when the loss function satisfies a preset condition, take the current model parameters as the parameters of the drivable region detection model and obtain the drivable region detection model from those parameters.
Optionally, the comparison subunit is specifically configured to:
determine the loss function of the initial neural network model according to the comparison result;
and update the parameters of the initial neural network model according to the loss function.
Optionally, the mapping subunit is specifically configured to:
apply bilinear interpolation and asymmetric convolution to the extracted features to obtain the 1*N region vector.
Optionally, the initial neural network model is a deep residual network model.
Optionally, the loss function of the initial neural network model is the Smooth L1 loss function.
A fourth aspect of the present application provides a drivable region detection device, the device including:
a current road image obtaining unit, configured to obtain a current road image;
a drivable region detection unit, configured to input the current road image into a drivable region detection model and to determine the drivable region in the current road image based on the output of the drivable region detection model.
The drivable region detection model is a model generated by training with the training method for a drivable region detection model provided by the embodiments of the present application.
Optionally, the drivable region detection unit includes:
an extraction subunit, configured to extract features of the current road image;
a mapping subunit, configured to map the extracted features to a 1*N region vector, where N is an integer, the region vector characterizes the prediction of the drivable region by the drivable region detection model, and each element of the region vector characterizes the length of one of the line segments rising from the bottom edge of the image;
and a determination subunit, configured for the drivable region detection model to determine the drivable region in the current road image according to the region vector.
It can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
The embodiments of the present application provide a training method for a drivable region detection model: a road sample image in which the drivable region is annotated is obtained, the road sample image is input into a pre-established initial neural network model, and the initial neural network model is trained on the road sample image by supervised learning to obtain a drivable region detection model. Based on the drivable region detection model, the present application also provides a drivable region detection method: a current road image is input into the drivable region detection model, which learns directly from the current road image and outputs structured drivable region information. Compared with conventional image segmentation techniques, the present application makes full use of the learning ability of the model and improves both the efficiency and the accuracy of drivable region detection.
Brief description of the drawings
Fig. 1 is a flowchart of a training method for a drivable region detection model in an embodiment of the present application;
Fig. 2 is a schematic diagram of annotating the drivable region in a road sample image with equally spaced line segments in an embodiment of the present application;
Fig. 3 is a flowchart of training an initial neural network model with a road sample image annotated with N equally spaced line segments to obtain a drivable region detection model in an embodiment of the present application;
Fig. 4 is a flowchart of a drivable region detection method based on a drivable region detection model in an embodiment of the present application;
Fig. 5 is a flowchart of inputting a current road image into a drivable region detection model and determining the drivable region in the current road image based on the output of the drivable region detection model in an embodiment of the present application;
Fig. 6A and Fig. 6B respectively show a current road image and the road image with the drivable region marked that is obtained by inputting the current road image into the drivable region detection model;
Fig. 7 is a schematic structural diagram of a training device for a drivable region detection model in an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a drivable region detection device in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a server in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a server in an embodiment of the present application.
Detailed description
With the development of science and technology, emerging concepts such as automated driving and driverless vehicles have arisen. Traditional driving requires a human driver to control the vehicle according to the drivable region, whereas in automated driving the vehicle can be equipped with an intelligent system that automatically identifies the region in which the vehicle can currently travel. Specifically, a camera on the vehicle captures the conditions of the road on which the vehicle is currently located, and the captured road image is analyzed with image processing techniques to obtain the drivable region, which is then used to guide the vehicle.
With conventional image segmentation, processing a road image yields a pixel-level contour of the drivable region. However, when the perception module processes the road image with image segmentation, the output is a pixel-wise classification map. It cannot be used directly by the downstream planning and control module; image processing is still required to extract structured contour information. The perception module and the control module therefore do not communicate directly, the learning ability of the model is not fully exploited, and both detection efficiency and detection accuracy suffer.
In view of this, the present application provides an end-to-end drivable region detection method in which a pre-trained drivable region detection model recognizes the current road image captured by the vehicle. Because the model can extract and learn the features of the current road image and then directly output a structured region vector, the drivable region detection model determines the drivable region information in the current road image from the region vector; in this way, the structured drivable region can be used directly in subsequent route planning. The method makes full use of the learning ability of the model and improves both detection efficiency and detection accuracy.
The embodiments of the present application provide a training method for a drivable region detection model and a drivable region detection method based on the drivable region detection model. Both methods can be applied to a terminal, to a server, or to a combination of the two. The terminal may be any user device, existing or developed in the future, that can interact with a server through any form of wired and/or wireless connection (for example Wi-Fi, LAN, cellular, coaxial cable), including but not limited to smartphones, feature phones, tablet computers, laptop personal computers, desktop personal computers, minicomputers, midrange computers and mainframe computers, whether existing or developed in the future.
It should also be noted that in the embodiments of the present application the server may be any device, existing or developed in the future, that can provide users with an application service such as information recommendation. The embodiments of the present application are not limited in this respect.
Specific implementations of the embodiments of the present application are described below with reference to the accompanying drawings.
First, a specific implementation of the training method for a drivable region detection model provided by the embodiments of the present application is introduced.
Fig. 1 shows a flowchart of a training method for a drivable region detection model provided by an embodiment of the present application, applied to the field of automated driving. Referring to Fig. 1, the method includes:
Step 101: Obtain a road sample image in which the drivable region is annotated.
A road sample image is a sample image used to train the drivable region detection model. In the embodiments of the present application the model is trained by supervised learning, so the drivable region is annotated in the road sample image; the annotation speeds up training and improves detection accuracy.
The drivable region is the region in which the vehicle can currently travel. Mathematically, the drivable region can be understood as a range that can be characterized in many forms; a range is usually represented by its boundary or contour, for example as a function or as coordinates. Considering the driving scene, the image captured by the vehicle camera is generally a road image taken from the current position of the vehicle, with the vehicle located at the bottom of the image, so the drivable region can be annotated with a number of line segments that start from the bottom of the image.
In some possible implementations of the embodiments of the present application, the drivable region of the vehicle in the road sample image is annotated with N equally spaced line segments extending vertically upward from the bottom edge of the road sample image, where the end position of each line segment is a boundary position of the drivable region and N is a positive integer. For ease of understanding, Fig. 2 shows a specific example of annotating the drivable region with line segments: the end positions of the line segments form the boundary of the drivable region, and the region covered by the line segments is the drivable region. Note that the larger N is, the denser the line segments are and the more accurately the contour of the drivable region is captured. In some possible implementations, the value of N can be set reasonably according to the size of the road sample image.
The above uses line segments to annotate the drivable region in a structured way. In some possible implementations of the embodiments of the present application, other forms of annotation can also be used, for example a dot matrix. Compared with other forms, line segments can be converted into lengths, which yields a vector characterizing the drivable region and thus a structured representation; annotation with line segments also requires relatively little data, which helps reduce the amount of computation.
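As an illustration of this conversion, the sketch below (assuming Python with NumPy; the function and variable names are hypothetical, and normalizing the lengths by the image height is an added convention not specified in the description) turns a binary drivable-region mask into the 1*N label vector of segment lengths measured upward from the bottom edge:

```python
import numpy as np

def mask_to_label_vector(drivable_mask: np.ndarray, n_segments: int = 224) -> np.ndarray:
    """Convert a binary drivable-region mask (H x W, 1 = drivable) into a 1 x N
    label vector of segment lengths measured upward from the image bottom."""
    h, w = drivable_mask.shape
    # x-coordinates of the N equally spaced vertical segments
    xs = np.linspace(0, w - 1, n_segments).round().astype(int)
    lengths = np.zeros(n_segments, dtype=np.float32)
    for i, x in enumerate(xs):
        column = drivable_mask[::-1, x]          # read the column bottom-up
        blocked = np.flatnonzero(column == 0)    # first non-drivable pixel
        lengths[i] = blocked[0] if blocked.size else h
    return lengths / h                            # normalize by image height

# Example: a toy 8x8 mask whose lower half is drivable gives lengths of 0.5.
toy = np.zeros((8, 8), dtype=np.uint8)
toy[4:, :] = 1
print(mask_to_label_vector(toy, n_segments=4))
```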
In this embodiment, a sample database can be established in advance and road sample images obtained from it. The sample database can be built by collecting relevant data from the Internet, for example with a web crawler, or by obtaining images captured by the vehicle camera from the vehicle's storage device and annotating the drivable region in them. In some cases road sample images can also be acquired directly; for example, images captured in real time by the vehicle camera can be annotated with the drivable region and the annotated images used as road sample images.
Step 102: Input the road sample image into a pre-established initial neural network model.
After the road sample image is obtained, it can be input into the pre-established initial neural network model so that the model can be trained with it.
In some possible implementations of the embodiments of the present application, the road sample image can be scaled to a preset size before it is input into the pre-established initial neural network model. In this way the initial neural network model learns from road sample images of a uniform size, which allows the images to be processed faster and more accurately and improves training efficiency.
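A minimal sketch of this preprocessing step, assuming OpenCV and NumPy are available (the 448*448 preset size is taken from the worked example later in the description; the function name is hypothetical):

```python
import cv2
import numpy as np

PRESET_SIZE = (448, 448)  # preset size used in the worked example later on

def load_and_resize(image_path: str) -> np.ndarray:
    """Read a road sample image and scale it to the preset size before it is
    fed to the initial neural network model."""
    image = cv2.imread(image_path)
    return cv2.resize(image, PRESET_SIZE, interpolation=cv2.INTER_LINEAR)
```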
Step 103: Train the initial neural network model with the road sample image to obtain the drivable region detection model.
For ease of understanding, the concept of a neural network model is first introduced briefly. A neural network is a network system formed by a large number of simple processing units that are widely interconnected. It is a highly complex nonlinear dynamic learning system with massive parallelism, distributed storage and processing, self-organization, adaptivity and the ability to learn. A neural network model is a mathematical model based on a neural network; thanks to its powerful learning ability, it is widely used in many fields.
In image processing and pattern recognition, convolutional neural network models are usually used. Because the convolutional layers of a convolutional neural network use local connections and weight sharing, the number of parameters to be trained is greatly reduced, which simplifies the network model and improves training efficiency.
In this embodiment, a convolutional neural network model can be used as the initial neural network model. Training the initial neural network model with the road sample image means that the convolutional layers of the initial neural network model fully learn the features of the drivable region in the road sample image; based on the learned features, the fully connected layer of the initial neural network model maps them to a recognition result for the drivable region. By comparing this recognition result with the drivable region annotated in advance in the road sample image, the parameters of the initial neural network model can be optimized. After the initial neural network model has been trained iteratively on many training samples, the drivable region detection model is obtained.
In this embodiment, when the drivable region is annotated with equally spaced line segments, the structured information of the drivable region can be represented directly. Specifically, the length of each line segment that annotates the drivable region is determined, so a vector related to the segment lengths can be obtained; with N line segments, the vector is a 1*N vector. In the road sample image, this 1*N vector can therefore be used to annotate the drivable region; for convenience, the 1*N vector that annotates the drivable region in the road sample image is called the label vector.
Furthermore, the initial neural network model can be trained with the road sample image annotated with the N equally spaced line segments to obtain the drivable region detection model. Because the road sample image is annotated with a 1*N label vector, the initial neural network model learns the features of the label vector during training, and the drivable region detection model obtained from it also produces a 1*N vector when predicting the drivable region, characterizing its prediction of the drivable region. For convenience, the 1*N vector predicted by the drivable region detection model is called the region vector.
In this embodiment, the drivable region is characterized by the region vector, which structures the drivable region information. For automated driving applications, this reduces the complexity of the structured information while retaining the information that is valuable for automated driving.
It can be seen from the above that the present application provides a training method for a drivable region detection model: a road sample image in which the drivable region is annotated is obtained, the road sample image is input into a pre-established initial neural network model, and the initial neural network model is trained on the road sample image by supervised learning to obtain a drivable region detection model. Compared with conventional image segmentation techniques, the training method provided by the present application learns end to end and directly outputs structured drivable region information, so that image processing is no longer needed to extract structured contour information and the different modules can communicate directly.
Moreover, because the initial neural network model is trained with road sample images annotated with the drivable region, a large number of road sample images allows the trained drivable region detection model to predict the drivable region with higher accuracy and efficiency.
To make the technical solution of the present application clearer, the process of training the initial neural network model with a road sample image annotated with N equally spaced line segments to obtain the drivable region detection model is described in detail below with reference to a specific embodiment.
Fig. 3 shows a flowchart of training an initial neural network model with a road sample image annotated with N equally spaced line segments to obtain a drivable region detection model. Referring to Fig. 3, the method includes:
Step 301: The initial neural network model extracts features of the road sample image.
A road sample image contains a great deal of information, some of which is related to the drivable region and some of which is not. To detect the drivable region in the road sample image, feature extraction can be performed on it.
It will be appreciated that the initial neural network model can extract features from the road sample image. As one possible implementation, the convolutional layers of the initial neural network model can be used to extract the features of the road sample image. The spatial relationships in an image are local: each neuron only needs to perceive a local image region rather than the whole image, and the information from neurons perceiving different local regions can then be combined at higher layers to obtain global information, which reduces the number of weights the neural network needs to train. Specifically, in one convolution of a convolutional layer, the same convolution kernel is applied to different regions of the image to extract one kind of feature, for example edges along one direction, so the weights are shared across regions and the number of trained parameters is greatly reduced. Furthermore, several different convolution kernels can be applied to the image to extract a variety of features. By setting convolutional layers with different kernels, different types of features are extracted, so that more effective information related to the drivable region is retained.
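For illustration, the following sketch (assuming PyTorch; the channel counts are arbitrary choices rather than values from the description) shows a single convolutional layer whose kernels are shared across all image positions, each kernel extracting one kind of local feature from a road image:

```python
import torch
import torch.nn as nn

# One convolutional layer: 16 different 3x3 kernels share their weights across
# the whole image, each extracting one kind of local feature (e.g. an edge
# orientation) from the RGB road sample image.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

image = torch.randn(1, 3, 448, 448)   # a batch with one 448x448 road image
features = conv(image)
print(features.shape)                  # torch.Size([1, 16, 448, 448])
```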
Step 302: The initial neural network model maps the extracted features to a 1*N region vector.
Here N is a positive integer, the region vector characterizes the prediction of the drivable region by the initial neural network model, and each element of the region vector characterizes the length of one of the line segments rising from the bottom edge of the road sample image.
After extracting the features of the road sample image, the initial neural network model can map them to the region vector that characterizes the predicted drivable region. In some possible implementations of the embodiments of the present application, when the initial neural network model is a convolutional neural network model, the fully connected layer of the convolutional neural network model can map the extracted features to the region vector.
In some possible implementations, bilinear interpolation and asymmetric convolution can be applied to the extracted features to obtain the 1*N region vector. Bilinear interpolation performs linear interpolation once in each of two directions. For example, to estimate the value of an unknown function f at a point P(x, y) given its values at the four points Q11=(x1, y1), Q12=(x1, y2), Q21=(x2, y1) and Q22=(x2, y2), linear interpolation is first performed in the x direction to obtain the values of f at two points R1 and R2, and then linear interpolation in the y direction based on R1 and R2 gives the value of f at P.
Asymmetric convolution replaces an n*n convolution kernel with a 1*n kernel and an n*1 kernel. Compared with an n*n convolution, convolving with 1*n and n*1 kernels greatly reduces the amount of computation and the number of network parameters while increasing the depth of the network.
In this embodiment, bilinear interpolation and asymmetric convolution are used to enlarge and flatten the feature map, thereby obtaining the 1*N region vector.
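The description does not spell out the layer configuration, so the following is only an illustrative sketch of such a mapping head, assuming PyTorch; the intermediate channel counts and spatial sizes are invented for the example, and N is set to 224 as in the worked example later on:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionVectorHead(nn.Module):
    """Illustrative head: enlarge the backbone feature map with bilinear
    interpolation, apply an asymmetric (1xn then nx1) convolution, and flatten
    the result into a 1 x N region vector of predicted segment lengths."""

    def __init__(self, in_channels: int = 512, n_segments: int = 224):
        super().__init__()
        self.asym = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=(1, 3), padding=(0, 1)),
            nn.Conv2d(64, 64, kernel_size=(3, 1), padding=(1, 0)),
        )
        self.fc = nn.Linear(64 * 14 * 14, n_segments)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        x = F.interpolate(feats, size=(14, 14), mode="bilinear", align_corners=False)
        x = self.asym(x)
        return self.fc(x.flatten(1))      # shape (batch, N)

head = RegionVectorHead()
region_vector = head(torch.randn(2, 512, 7, 7))
print(region_vector.shape)               # torch.Size([2, 224])
```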
Step 303: The initial neural network model compares the region vector with the label vector.
The region vector characterizes the model's prediction of the drivable region, while the label vector characterizes the annotated drivable region. The purpose of training is to make the model's prediction of the drivable region close to the true value, and comparing the region vector with the label vector gives the deviation of the model's prediction.
In this embodiment, both the region vector and the label vector are 1*N vectors, and comparing them can be done dimension by dimension, which gives the deviation of the model's prediction in each dimension.
Step 304: Update the parameters of the initial neural network model according to the comparison result.
Comparing the region vector with the label vector yields a comparison result that reflects the deviation of the model's prediction, including the deviation in each dimension. Using the per-dimension deviation, the parameters of the initial neural network model can be updated so as to reduce the prediction deviation and give the drivable region detection model higher detection accuracy.
In this embodiment, a loss function can be used to measure the deviation of the model's prediction. As one possible implementation, the loss function of the initial neural network model is determined from the comparison result, and the parameters of the initial neural network model are updated according to the loss function.
Different types of loss function can be set according to the type of the initial neural network model, for example the logarithmic loss, hinge loss, exponential loss or quadratic loss, and the appropriate loss function can be chosen as required. In some possible implementations of the embodiments of the present application, the Smooth L1 loss function can be chosen as the loss function of the initial neural network model. When the loss function is used to update the parameters of the neural network model, methods such as gradient descent are typically used, that is, the gradient of the loss function is computed to update the model parameters.
Traditional loss functions that include an L2 term are prone to exploding gradients when the prediction differs greatly from the true value, which seriously affects training efficiency. The Smooth L1 loss replaces the gradient term x - t of the L2 loss with plus or minus 1 when the error is large, which avoids gradient explosion and gives the model better robustness.
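A minimal sketch of one such supervised update with the Smooth L1 loss, assuming PyTorch; the stand-in model, the optimizer choice and the tensor shapes are illustrative assumptions, not details from the description:

```python
import torch
import torch.nn as nn

# Stand-in model: any module that maps a feature map to a 1 x N region vector
# (for example the RegionVectorHead sketched earlier) would be used here.
model = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, 224))

criterion = nn.SmoothL1Loss()                      # Smooth L1 loss, as in the description
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

features = torch.randn(2, 512, 7, 7)               # backbone features for two samples
label_vector = torch.rand(2, 224)                  # annotated segment lengths (ground truth)

# One supervised update: compare the predicted region vector with the label
# vector, compute the Smooth L1 loss, and update the parameters by gradient descent.
region_vector = model(features)
loss = criterion(region_vector, label_vector)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```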
Step 305: When the loss function of the initial neural network model satisfies a preset condition, take the current model parameters of the initial neural network model as the parameters of the drivable region detection model and obtain the drivable region detection model from those parameters.
After the initial neural network model has been trained on a large number of road sample images, its loss function drops to a small value; the smaller the loss, the higher the prediction accuracy of the model. When the loss function of the initial neural network model satisfies the preset condition, the current model parameters of the initial neural network model can be taken as the parameters of the drivable region detection model, and the drivable region detection model can be obtained from them.
The preset condition is a pre-set condition for measuring the mathematical behavior of the loss function. In some possible implementations of the embodiments of the present application, the preset condition may be that the loss function converges or that the loss function reaches its minimum.
It can be seen from the above that the present application provides a training method for a drivable region detection model: the initial neural network model extracts features of the road sample image, maps the extracted features to a region vector characterizing the prediction of the drivable region, and compares the region vector with the label vector. Because the comparison result reflects the deviation of the model's prediction, the parameters of the initial neural network model can be updated according to the deviation; when the loss function of the initial neural network model satisfies the preset condition, the current model parameters are taken as the parameters of the drivable region detection model and the drivable region detection model is obtained from them. Compared with conventional image segmentation techniques, the drivable region detection model trained by deep learning in the present application learns end to end and outputs structured drivable region information; the structured drivable region can be used directly to assist vehicle driving without any further conversion, which benefits the adoption of automated driving.
Based on the specific implementation of the training method for a drivable region detection model provided by the above embodiment, an embodiment of the present application further provides a drivable region detection method based on the drivable region detection model.
Next, a drivable region detection method provided by an embodiment of the present application is described in detail with reference to the accompanying drawings.
Fig. 4 is a flowchart of a drivable region detection method provided by an embodiment of the present application, applied to the field of automated driving. Referring to Fig. 4, the method includes:
Step 401: Obtain a current road image.
The current road image is an image of the road on which the vehicle is currently located. In this embodiment, the current road image is the image in which the drivable region is to be detected.
It will be appreciated that the current road image can be a road image obtained in real time. In some possible implementations of the embodiments of the present application, the current road image can be captured by the vehicle's forward-facing camera. As an extension of the above embodiment, the current road image can also be an image captured by the vehicle's rear-view camera; for example, when the vehicle needs to reverse, an image behind the vehicle can be captured by the rear-view camera and used as the current road image. Similarly, when the vehicle turns left or right, the road image in the corresponding direction can be captured by the corresponding on-board camera to obtain the current road image. In some possible implementations, the current road image can also be obtained by the vehicle's surround-view cameras capturing the road around the vehicle.
In some cases, the current road image can also be a road image received from another device. For example, when the vehicle camera fails, a passenger can use a device such as a mobile phone to capture an image of the road on which the vehicle is currently located, and the current road image can then be obtained from the user's device so that drivable region detection can be carried out on that image.
These are only some specific examples of obtaining the current road image; the present application does not limit how the current road image is acquired, and different implementations can be used as required.
Step 402: Input the current road image into the drivable region detection model and determine the drivable region in the current road image based on the output of the drivable region detection model.
The drivable region detection model is a model generated by training with the training method for a drivable region detection model provided by the above embodiment.
After the current road image is input into the drivable region detection model, the model extracts features from the current road image and maps the extracted features to a region vector characterizing the drivable region; this region vector is the output of the drivable region detection model. Because each element of the region vector is the length of one of the line segments in the drivable region and the end of each line segment is the boundary of the drivable region, the polyline connecting the segment ends is a partial contour of the drivable region, and the area enclosed by this polyline together with the two sides and the bottom of the current road image is the drivable region.
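As an illustration of this reconstruction, the sketch below (assuming NumPy; the names and the convention that segment lengths are normalized by image height are assumptions) turns a predicted region vector into the polygon bounded by the segment end points and the bottom corners of the image:

```python
import numpy as np

def region_vector_to_polygon(region_vector: np.ndarray, width: int, height: int) -> np.ndarray:
    """Turn a predicted 1 x N vector of normalized segment lengths into the
    polygon enclosing the drivable region: the segment end points, closed off
    by the bottom corners of the image."""
    n = region_vector.shape[0]
    xs = np.linspace(0, width - 1, n)
    ys = height - 1 - region_vector * (height - 1)   # segment ends, measured from the bottom
    boundary = np.stack([xs, ys], axis=1)
    corners = np.array([[width - 1, height - 1], [0, height - 1]])
    return np.concatenate([boundary, corners], axis=0)  # (N + 2) x 2 polygon vertices

poly = region_vector_to_polygon(np.full(224, 0.4), width=448, height=448)
print(poly.shape)   # (226, 2)
```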
It can be seen from the above that an embodiment of the present application provides a drivable region detection method: the current road image is input into a pre-trained drivable region detection model, and the drivable region in the current road image is determined from the output of the drivable region detection model. The drivable region detection method provided by the embodiments of the present application learns end to end and directly outputs structured drivable region information, so that image processing is no longer needed to extract structured contour information and the different modules can communicate directly. Moreover, because the model is trained by deep learning on a massive number of road sample images, it predicts the drivable region with higher accuracy and efficiency.
To make the specific implementation of drivable region detection clearer, step 402 is described in detail below with reference to a specific embodiment.
Fig. 5 is a flowchart of a method, provided by an embodiment of the present application, of inputting the current road image into the drivable region detection model and determining the drivable region in the current road image based on the output of the drivable region detection model. Referring to Fig. 5, the method includes:
Step 501: The drivable region detection model extracts features of the current road image.
The drivable region detection model is obtained by training on a large number of road sample images; during training, the model parameters are optimized so that features more relevant to the drivable region are extracted. Consequently, when the drivable region detection model detects the drivable region in the current road image, it extracts from the current road image the features relevant to the drivable region. The extracted features can be represented as a feature map.
Step 502: The drivable region detection model maps the extracted features to a 1*N region vector.
Here N is an integer, the region vector characterizes the prediction of the drivable region by the drivable region detection model, and each element of the region vector characterizes the length of one of the line segments rising from the bottom edge of the image.
The extracted features retain a large amount of effective information related to the drivable region; by means of bilinear interpolation, asymmetric convolution and similar operations, the feature map is mapped to the region vector characterizing the drivable region.
Step 503: The drivable region detection model determines the drivable region in the current road image according to the region vector.
After obtaining the region vector, the drivable region detection model can determine the drivable region in the current road image from it. Each element of the region vector corresponds to the length of one of the line segments in the drivable region, the ends of these line segments are boundary positions of the drivable region, and the area they enclose together with the bottom and the two sides of the current road image is the drivable region of the current road image.
It should be noted that if the current road image was scaled before being input into the drivable region detection model, for example scaled to the preset size, then after the region vector corresponding to the scaled image is obtained it can be transformed back into the region vector corresponding to the current road image before scaling, so that the drivable region of the current road image can be determined from the transformed region vector.
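A minimal sketch of this back-transformation, assuming the region vector stores segment lengths in pixels of the resized image (an assumed convention) so that only the height ratio matters for the lengths:

```python
import numpy as np

def rescale_region_vector(region_vector: np.ndarray,
                          resized_height: int,
                          original_height: int) -> np.ndarray:
    """Map segment lengths predicted on the resized image (e.g. 448 pixels high)
    back to pixel lengths in the original current road image."""
    return region_vector * (original_height / resized_height)

lengths_448 = np.full(224, 180.0)                      # predicted on a 448x448 input
lengths_orig = rescale_region_vector(lengths_448, 448, 720)
print(lengths_orig[0])                                 # about 289.3 pixels in a 720-pixel-high image
```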
It can be seen from the above that an embodiment of the present application provides a drivable region detection method. The method includes: the drivable region detection model extracts features of the current road image and maps the extracted features to a 1*N region vector characterizing the prediction of the drivable region in the current road image; the drivable region in the current road image can then be determined from the region vector. Compared with conventional image segmentation methods, the method provided by the embodiments of the present application outputs the structured information of the drivable region end to end, so the planning and control module can use the structured information directly for route planning and the like, which benefits automated driving and its adoption.
The above embodiments mainly use a convolutional neural network model as the initial neural network model, train it to obtain the drivable region detection model, and detect the drivable region in the current road image with that model. With the continuous development of machine learning, convolutional neural network models are also evolving. Depending on the function of the model to be trained and the data it has to handle, different types of convolutional neural network can be used as the initial neural network. Convolutional neural networks include VGG Net (Visual Geometry Group), AlexNet, Network in Network, and deep residual network models such as ResNet. In some possible implementations, ResNet can be used as the initial neural network model and trained to obtain the drivable region detection model.
The convolutional or fully connected layers of models such as VGG Net and AlexNet more or less lose or degrade information as it is passed through the network. ResNet alleviates this problem to some extent by routing the input directly to the output, which protects the integrity of the information; the network then only needs to learn the difference between input and output, which simplifies the learning objective and its difficulty. In this embodiment, because the drivable region is represented by many equally spaced line segments and the model has to regress the length of each segment to obtain the region vector, using ResNet reduces the transmission overhead and the information loss during propagation and thus improves the prediction accuracy of the model.
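For illustration, a ResNet-18 trunk can be used as a fully convolutional backbone roughly as sketched below (assuming a recent torchvision; note that the standard stride-32 trunk yields a 14*14 feature map from a 448*448 input, so the 7*7 map in the worked example below would require an additional downsampling stage that the description does not detail):

```python
import torch
import torch.nn as nn
from torchvision import models

# Use a ResNet-18 trunk (all layers before global pooling) as a fully
# convolutional backbone; a head such as the RegionVectorHead sketched earlier
# can sit on top of it to produce the 1 x N region vector.
resnet = models.resnet18(weights=None)
backbone = nn.Sequential(*list(resnet.children())[:-2])

image = torch.randn(1, 3, 448, 448)
feature_map = backbone(image)
print(feature_map.shape)   # torch.Size([1, 512, 14, 14]) with the standard stride-32 trunk
```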
Next, the training method for a drivable region detection model and the drivable region detection method provided by the embodiments of the present application are described in detail with reference to a concrete application scenario.
First, an initial neural network model is initialized. In this example, a fully convolutional network with a ResNet18 structure is used as the initial neural network model, and Smooth L1 is used as its loss function.
Then, road sample images are obtained from a pre-established training set. The training set contains at least one million road sample images annotated with the drivable region. The drivable region in each road sample image is annotated with 224 equally spaced line segments starting from the bottom edge of the image, so the drivable region is represented by a 1*224 vector. Before training, the road sample images can also be scaled to a preset size, for example 448*448.
Then the road sample images are input in batches into the fully convolutional network with the ResNet18 structure, which yields a 7*7 feature map. Using bilinear interpolation combined with asymmetric convolution, the feature map is repeatedly enlarged and flattened: its size changes from 7*7 to 14*14, then to 28*56, is then flattened to 7*224 and further flattened to 1*224, so the structured information finally output is a 1*224 vector.
In this embodiment, the road sample images in the sample set need to be trained for about 16 epochs; the learning rate of the first 8 epochs can be set to 1e-5 and the learning rate of the last 8 epochs to 1e-6.
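A rough sketch of that training schedule, assuming PyTorch; the stand-in model, the Adam optimizer and the toy data are illustrative assumptions, and only the epoch count and the two learning rates come from the description:

```python
import torch
import torch.nn as nn

# Stand-in model: in the worked example this would be the ResNet18 fully
# convolutional network with the region-vector head producing 1 x 224 outputs.
model = nn.Sequential(nn.AdaptiveAvgPool2d(7), nn.Flatten(), nn.Linear(3 * 7 * 7, 224))
criterion = nn.SmoothL1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Toy stand-in for the annotated training set of road sample images.
dataset = [(torch.randn(3, 448, 448), torch.rand(224)) for _ in range(4)]

for epoch in range(16):                               # about 16 epochs in the example
    if epoch == 8:                                    # drop the learning rate for the last 8 epochs
        for group in optimizer.param_groups:
            group["lr"] = 1e-6
    for image, label_vector in dataset:
        prediction = model(image.unsqueeze(0))
        loss = criterion(prediction, label_vector.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```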
It should be understood that the above network structure and training parameters are set empirically; in other possible implementations of the embodiments of the present application, other network structures and training parameters can also be used.
To make the beneficial effects of the present application more prominent, an embodiment of the present application also shows a current road image and the image of the drivable region obtained by detecting the current road image with the model trained in this embodiment. Referring to Fig. 6A and Fig. 6B, Fig. 6A is the current road image; inputting Fig. 6A into the trained drivable region detection model yields the image with the drivable region marked shown in Fig. 6B.
It can be seen that the embodiments of the present application provide a training method for a drivable region detection model and a drivable region detection method. The initial neural network model extracts features of the road sample image, maps the extracted features to a region vector characterizing the prediction of the drivable region, and compares the region vector with the label vector; because the comparison result reflects the deviation of the model's prediction, the parameters of the initial neural network model can be updated according to the deviation, and when the loss function of the initial neural network model satisfies the preset condition, the current model parameters are taken as the parameters of the drivable region detection model and the drivable region detection model is obtained from them. Then the drivable region detection model extracts features of the current road image and maps them to a 1*N region vector characterizing its prediction of the drivable region in the current road image, from which the drivable region in the current road image can be determined. Compared with conventional image segmentation methods, the method provided by the embodiments of the present application outputs the structured information of the drivable region end to end; the planning and control module can use the structured information directly for route planning and the like, which benefits automated driving and its adoption.
Training method and drivable region detection method based on the wheeled region detection model that above-described embodiment provides, The embodiment of the present application also provides a kind of training device of wheeled region detection model and wheeled regional detection devices.Below Device provided by the embodiments of the present application will be introduced from the angle of function modoularization in conjunction with attached drawing.
First, it is situated between to the training device of wheeled region detection model provided by the embodiments of the present application in conjunction with attached drawing It continues.
Fig. 7 is a kind of structural schematic diagram of the training device of wheeled region detection model provided by the embodiments of the present application, Referring to Fig. 7, which includes:
Sample acquisition unit 710, for obtaining road sample image, the road sample image is labeled with wheeled area Domain;
Training unit 720 is utilized for the road sample image to be inputted the initial neural network model pre-established The road sample image trains the initial neural network model, obtains wheeled region detection model.
Optionally, the wheeled region of the road sample image is set out vertically upward using road sample image bottom edge N items line segment is labeled at equal intervals, N is positive integer, wherein the end position of every line segment is the boundary in wheeled region Position;
Then the training unit 720 is specifically used for:
The initial neural network model pre-established will be inputted using the N items road sample image that line segment marks at equal intervals;
The initial neural network model is trained using the N items road sample image that line segment marks at equal intervals using described, Obtain wheeled region detection model.
Optionally, the training unit 720 includes:
an extraction subunit, configured to extract features of the road sample image;
a mapping subunit, configured to map the extracted features to a 1*N region vector, N being a positive integer, the region vector characterizing the predicted value of the wheeled region by the initial neural network model, and each element value of the region vector characterizing the length of one of the line segments on the bottom edge of the road sample image;
a comparing subunit, configured to compare the region vector with a label vector, the label vector being a 1*N vector constituted by the lengths of the line segments labeled in the wheeled region of the road sample image;
an updating subunit, configured to update the parameters of the initial neural network model according to the comparison result;
a determination subunit, configured to, when the loss function meets a preset condition, take the current model parameters as the parameters of the wheeled region detection model and obtain the wheeled region detection model according to the parameters.
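A minimal training-loop sketch corresponding to these subunits is given below (PyTorch; the data loader, optimizer settings and stopping threshold are illustrative assumptions, and the Smooth L1 loss of the optional embodiment is used as the loss function).

    import torch
    import torch.nn as nn

    def train(model, data_loader, max_epochs=50, loss_threshold=1e-3, lr=1e-3):
        criterion = nn.SmoothL1Loss()                            # loss function of the network
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for epoch in range(max_epochs):
            for images, label_vectors in data_loader:            # label_vectors: (B, 1, N)
                region_vectors = model(images)                   # predicted region vectors (B, 1, N)
                loss = criterion(region_vectors, label_vectors)  # compare prediction with labels
                optimizer.zero_grad()
                loss.backward()                                  # deviation drives the parameter update
                optimizer.step()
            if loss.item() < loss_threshold:                     # preset condition met:
                break                                            # keep the current parameters
        return model                                             # wheeled region detection model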
Optionally, the comparing subunit is specifically configured to:
determine the loss function of the initial neural network model according to the comparison result;
update the parameters of the initial neural network model according to the loss function.
Optionally, the mapping subunit is specifically configured to:
perform bilinear interpolation and asymmetric convolution on the extracted features to obtain the 1*N region vector.
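One plausible way to realize this mapping is sketched below (PyTorch): the backbone feature map is resized to a single row by bilinear interpolation and then passed through an asymmetric 1*k convolution to produce the 1*N region vector. The channel count, kernel width, value of N and the use of a ResNet-18 style backbone are assumptions made for illustration; the application only states that bilinear interpolation and asymmetric convolution are applied to the extracted features and that the initial model may be a deep residual network.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision

    class RegionVectorHead(nn.Module):
        def __init__(self, in_channels=512, n=64, kernel_width=9):
            super().__init__()
            self.n = n
            # asymmetric convolution: a 1 x k kernel operating only along the width
            self.asym_conv = nn.Conv2d(in_channels, 1, kernel_size=(1, kernel_width),
                                       padding=(0, kernel_width // 2))

        def forward(self, feature_map):                          # (B, C, H, W) backbone features
            x = F.interpolate(feature_map, size=(1, self.n),     # bilinear interpolation to 1 x N
                              mode='bilinear', align_corners=False)
            x = self.asym_conv(x)                                # (B, 1, 1, N)
            return x.squeeze(2)                                  # region vector of shape (B, 1, N)

    class WheeledRegionNet(nn.Module):
        def __init__(self, n=64):
            super().__init__()
            resnet = torchvision.models.resnet18()               # deep residual backbone (assumed variant)
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])
            self.head = RegionVectorHead(in_channels=512, n=n)

        def forward(self, image):                                # (B, 3, H, W) road image
            return self.head(self.backbone(image))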
Optionally, the initial neural network model is a deep residual network model.
Optionally, the loss function of the initial neural network model is a Smooth L1 loss function.
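For reference, the Smooth L1 loss here can be taken as its standard element-wise definition (the application does not specify a different variant): for a difference d between a predicted segment length and the corresponding labeled length, loss(d) = 0.5 * d^2 when |d| < 1, and loss(d) = |d| - 0.5 otherwise, summed or averaged over the N elements of the region vector. Compared with a plain L2 loss, it is less sensitive to a few badly predicted segments.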
From the foregoing, the present application provides a training device for a wheeled region detection model. The device includes a sample acquisition unit and a training unit. The sample acquisition unit obtains a road sample image labeled with a wheeled region, and the training unit inputs the road sample image into a pre-established initial neural network model and trains the initial neural network model with the road sample image in a supervised manner to obtain the wheeled region detection model. Compared with traditional image segmentation techniques, the training method for the wheeled region detection model provided by the present application supports direct end-to-end learning and outputs structured wheeled-region information, so there is no need to additionally extract contour structure information with image processing techniques, which facilitates intercommunication between different modules. Moreover, since the initial neural network model is trained with a large number of road sample images labeled with wheeled regions, the trained wheeled region detection model achieves higher accuracy and efficiency when predicting the wheeled region.
Next, the wheeled region detection device provided by the embodiments of the present application is introduced with reference to the accompanying drawings.
Fig. 8 is a structural schematic diagram of a wheeled region detection device provided by an embodiment of the present application. Referring to Fig. 8, the device includes:
a current road image acquisition unit 810, configured to obtain a current road image;
a wheeled region detection unit 820, configured to input the current road image into a wheeled region detection model and determine the wheeled region in the current road image based on the output result of the wheeled region detection model.
The wheeled region detection model is a wheeled region detection model generated by training according to the training method for a wheeled region detection model provided by the embodiments of the present application.
Optionally, the wheeled region detection unit 820 includes:
an extraction subunit, configured to extract features of the current road image;
a mapping subunit, configured to map the extracted features to a 1*N region vector, N being an integer, the region vector characterizing the predicted value of the wheeled region by the wheeled region detection model, and each element value of the region vector characterizing the length of one of the line segments on the bottom edge of the road sample image;
a determination subunit, configured such that the wheeled region detection model determines the wheeled region in the current road image according to the region vector.
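Putting the detection side together, a hedged usage sketch might look as follows (PyTorch; the preprocessing and tensor shapes are assumptions, and the model is assumed to be a trained network of the kind sketched above).

    import torch

    def detect_wheeled_region(model, current_road_image):
        # current_road_image: preprocessed tensor of shape (1, 3, H, W)
        model.eval()
        with torch.no_grad():
            region_vector = model(current_road_image)        # (1, 1, N) predicted segment lengths
        # each element is the length of one equally spaced bottom-edge line segment,
        # i.e. how far the wheeled region extends upward in that column
        return region_vector.squeeze(0).squeeze(0)           # 1-D tensor of N lengths

The returned lengths can be converted into boundary coordinates with a helper such as region_vector_to_boundary sketched earlier.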
From the foregoing, the embodiments of the present application provide a wheeled region detection device. The device includes a current road image acquisition unit, which obtains a current road image, and a wheeled region detection unit, which inputs the current road image into the wheeled region detection model and determines the wheeled region in the current road image based on the output result of the wheeled region detection model. The wheeled region detection device provided by the embodiments of the present application supports direct end-to-end learning and outputs structured wheeled-region information, so there is no need to additionally extract contour structure information with image processing techniques, which facilitates intercommunication between different modules. Moreover, since the model is obtained by training with massive road sample images combined with deep learning, it achieves higher accuracy and efficiency when predicting the wheeled region.
The above has introduced the training device for the wheeled region detection model and the wheeled region detection device provided by the embodiments of the present application from the perspective of functional modularization. Next, these devices are introduced from the perspective of device hardware.
Fig. 9 is a schematic structural diagram of a server provided by an embodiment of the present application. The server 900 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 922 (for example, one or more processors), a memory 932, and one or more storage media 930 (such as one or more mass storage devices) storing application programs 942 or data 944. The memory 932 and the storage medium 930 may provide transient storage or persistent storage. The program stored in the storage medium 930 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Further, the central processing unit 922 may be configured to communicate with the storage medium 930 and execute, on the server 900, the series of instruction operations in the storage medium 930.
The server 900 may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input/output interfaces 958, and/or one or more operating systems 941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in Fig. 9.
The CPU 922 is configured to perform the following steps:
obtaining a road sample image, the road sample image being labeled with a wheeled region;
inputting the road sample image into a pre-established initial neural network model;
training the initial neural network model with the road sample image to obtain a wheeled region detection model.
The embodiments of the present application also provide another wheeled region detection device, which may be a server. Fig. 10 is a schematic structural diagram of a server provided by an embodiment of the present application. The server 1000 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 1022 (for example, one or more processors), a memory 1032, and one or more storage media 1030 (such as one or more mass storage devices) storing application programs 1042 or data 1044. The memory 1032 and the storage medium 1030 may provide transient storage or persistent storage. The program stored in the storage medium 1030 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Further, the central processing unit 1022 may be configured to communicate with the storage medium 1030 and execute, on the server 1000, the series of instruction operations in the storage medium 1030.
The server 1000 may also include one or more power supplies 1026, one or more wired or wireless network interfaces 1050, one or more input/output interfaces 1058, and/or one or more operating systems 1041, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in Fig. 10.
The CPU 1022 is configured to perform the following steps:
obtaining a current road image;
inputting the current road image into a wheeled region detection model, and determining the wheeled region in the current road image based on the output result of the wheeled region detection model; the wheeled region detection model being a wheeled region detection model generated by training according to the training method for a wheeled region detection model provided by the embodiments of the present application.
The embodiments of the present application also provide a computer-readable storage medium for storing program code, the program code being used to execute any one of the implementations of the training method for a wheeled region detection model described in the foregoing embodiments.
The embodiments of the present application also provide a computer-readable storage medium for storing program code, the program code being used to execute any one of the implementations of the drivable region detection method described in the foregoing embodiments.
The embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to execute any one of the implementations of the training method for a wheeled region detection model described in the foregoing embodiments.
The embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to execute any one of the implementations of the drivable region detection method described in the foregoing embodiments.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a logical functional division, and there may be other division manners in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
It should be understood that, in the present application, "at least one (item)" means one or more and "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate the three cases of A alone, B alone, and both A and B, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following (items)" or similar expressions refers to any combination of these items, including a single item or any combination of multiple items. For example, "at least one of a, b or c" may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be single or multiple.

Claims (11)

1. A training method for a wheeled region detection model, characterized in that the method is applied to the field of automatic driving and comprises:
obtaining a road sample image, the road sample image being labeled with a wheeled region;
inputting the road sample image into a pre-established initial neural network model;
training the initial neural network model with the road sample image to obtain the wheeled region detection model.
2. The method according to claim 1, characterized in that the wheeled region of the road sample image is labeled with N line segments drawn vertically upward from the bottom edge of the road sample image at equal intervals, N being a positive integer, wherein the end position of each line segment is the boundary position of the wheeled region;
inputting the road sample image into the pre-established initial neural network model comprises:
inputting the road sample image labeled with the N equally spaced line segments into the pre-established initial neural network model;
training the initial neural network model with the road sample image to obtain the wheeled region detection model comprises:
training the initial neural network model with the road sample image labeled with the N equally spaced line segments to obtain the wheeled region detection model.
3. The method according to claim 2, characterized in that training the initial neural network model with the road sample image to obtain the wheeled region detection model comprises:
extracting, by the initial neural network model, features of the road sample image;
mapping, by the initial neural network model, the extracted features to a 1*N region vector, N being a positive integer, the region vector characterizing the predicted value of the wheeled region by the initial neural network model, and each element value of the region vector characterizing the length of one of the line segments on the bottom edge of the road sample image;
comparing, by the initial neural network model, the region vector with a label vector, the label vector being a 1*N vector constituted by the lengths of the line segments labeled in the wheeled region of the road sample image;
updating the parameters of the initial neural network model according to the comparison result;
when the loss function of the initial neural network model meets a preset condition, taking the current model parameters of the initial neural network model as the parameters of the wheeled region detection model, and obtaining the wheeled region detection model according to the parameters.
4. The method according to claim 3, characterized in that updating the parameters of the initial neural network model according to the comparison result comprises:
determining the loss function of the initial neural network model according to the comparison result;
updating the parameters of the initial neural network model according to the loss function.
5. The method according to claim 3, characterized in that mapping, by the initial neural network model, the extracted features to the 1*N region vector comprises:
performing bilinear interpolation and asymmetric convolution on the extracted features to obtain the 1*N region vector.
6. The method according to any one of claims 1-5, characterized in that the initial neural network model is a deep residual network model.
7. The method according to any one of claims 1-5, characterized in that the loss function of the initial neural network model is a Smooth L1 loss function.
8. A drivable region detection method, characterized in that the method is applied to the field of automatic driving and comprises:
obtaining a current road image;
inputting the current road image into a wheeled region detection model, and determining the wheeled region in the current road image based on the output result of the wheeled region detection model; the wheeled region detection model being a wheeled region detection model generated by training according to the training method for a wheeled region detection model of any one of claims 1-7.
9. The method according to claim 8, characterized in that inputting the current road image into the wheeled region detection model and determining the wheeled region in the current road image based on the output result of the wheeled region detection model comprises:
extracting, by the wheeled region detection model, features of the current road image;
mapping, by the wheeled region detection model, the extracted features to a 1*N region vector, N being an integer, the region vector characterizing the predicted value of the wheeled region by the wheeled region detection model, and each element value of the region vector characterizing the length of one of the line segments on the bottom edge of the road sample image;
determining, by the wheeled region detection model, the wheeled region in the current road image according to the region vector.
10. A training device for a wheeled region detection model, characterized in that the device comprises:
a sample acquisition unit, configured to obtain a road sample image, the road sample image being labeled with a wheeled region;
a training unit, configured to input the road sample image into a pre-established initial neural network model and train the initial neural network model with the road sample image to obtain the wheeled region detection model.
11. A wheeled region detection device, characterized in that the device comprises:
a current road image acquisition unit, configured to obtain a current road image;
a wheeled region detection unit, configured to input the current road image into a wheeled region detection model and determine the wheeled region in the current road image based on the output result of the wheeled region detection model; the wheeled region detection model being a wheeled region detection model generated by training according to the training method for a wheeled region detection model of any one of claims 1-7.
CN201810308028.XA 2018-04-08 2018-04-08 Driving region detection model training method, detection method and device Active CN108345875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810308028.XA CN108345875B (en) 2018-04-08 2018-04-08 Driving region detection model training method, detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810308028.XA CN108345875B (en) 2018-04-08 2018-04-08 Driving region detection model training method, detection method and device

Publications (2)

Publication Number Publication Date
CN108345875A true CN108345875A (en) 2018-07-31
CN108345875B CN108345875B (en) 2020-08-18

Family

ID=62957750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810308028.XA Active CN108345875B (en) 2018-04-08 2018-04-08 Driving region detection model training method, detection method and device

Country Status (1)

Country Link
CN (1) CN108345875B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6678394B1 (en) * 1999-11-30 2004-01-13 Cognex Technology And Investment Corporation Obstacle detection system
EP1275936A2 (en) * 2001-07-09 2003-01-15 Nissan Motor Company, Limited Information display system for vehicle
US20150332097A1 (en) * 2014-05-15 2015-11-19 Xerox Corporation Short-time stopping detection from red light camera videos
CN105488534A (en) * 2015-12-04 2016-04-13 中国科学院深圳先进技术研究院 Method, device and system for deeply analyzing traffic scene
CN106372618A (en) * 2016-09-20 2017-02-01 哈尔滨工业大学深圳研究生院 Road extraction method and system based on SVM and genetic algorithm
CN106446914A (en) * 2016-09-28 2017-02-22 天津工业大学 Road detection based on superpixels and convolution neural network
CN106558058A (en) * 2016-11-29 2017-04-05 北京图森未来科技有限公司 Parted pattern training method, lane segmentation method, control method for vehicle and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JOSÉ M. ÁLVAREZ等: "Vision-based road detection using road models", 《2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)》 *
刘丹等: "融合超像素3D与Appearance特征的可行驶区域检测", 《计算机工程》 *
宋怀波等: "基于机器视觉的非结构化道路检测与障碍物识别方法", 《农业工程学报》 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726627A (en) * 2018-09-29 2019-05-07 初速度(苏州)科技有限公司 A kind of detection method of neural network model training and common ground line
CN110969664B (en) * 2018-09-30 2023-10-24 北京魔门塔科技有限公司 Dynamic calibration method for external parameters of camera
CN110969664A (en) * 2018-09-30 2020-04-07 北京初速度科技有限公司 Dynamic calibration method for external parameters of camera
CN111209779A (en) * 2018-11-21 2020-05-29 北京市商汤科技开发有限公司 Method, device and system for detecting drivable area and controlling intelligent driving
CN111259704B (en) * 2018-12-03 2022-06-10 魔门塔(苏州)科技有限公司 Training method of dotted lane line endpoint detection model
CN111256693A (en) * 2018-12-03 2020-06-09 北京初速度科技有限公司 Pose change calculation method and vehicle-mounted terminal
CN111259705A (en) * 2018-12-03 2020-06-09 初速度(苏州)科技有限公司 Special linear lane line detection method and system
CN111259704A (en) * 2018-12-03 2020-06-09 初速度(苏州)科技有限公司 Training method of dotted lane line endpoint detection model
CN111259705B (en) * 2018-12-03 2022-06-10 魔门塔(苏州)科技有限公司 Special linear lane line detection method and system
CN111256693B (en) * 2018-12-03 2022-05-13 北京魔门塔科技有限公司 Pose change calculation method and vehicle-mounted terminal
CN111435537A (en) * 2019-01-13 2020-07-21 北京初速度科技有限公司 Model training method and device and pose optimization method and device based on splicing map
CN111435537B (en) * 2019-01-13 2024-01-23 北京魔门塔科技有限公司 Model training method and device and pose optimization method and device based on mosaic
CN111435425A (en) * 2019-01-14 2020-07-21 广州汽车集团股份有限公司 Method and system for detecting travelable area, electronic device and readable storage medium
CN111435425B (en) * 2019-01-14 2023-08-18 广州汽车集团股份有限公司 Method and system for detecting drivable region, electronic device, and readable storage medium
CN111434553A (en) * 2019-01-15 2020-07-21 初速度(苏州)科技有限公司 Brake system, method and device, and fatigue driving model training method and device
CN109977845B (en) * 2019-03-21 2021-08-17 百度在线网络技术(北京)有限公司 Driving region detection method and vehicle-mounted terminal
CN109977845A (en) * 2019-03-21 2019-07-05 百度在线网络技术(北京)有限公司 A kind of drivable region detection method and car-mounted terminal
CN111832368A (en) * 2019-04-23 2020-10-27 长沙智能驾驶研究院有限公司 Training method and device for travelable region detection model and application
CN111126204A (en) * 2019-12-09 2020-05-08 上海博泰悦臻电子设备制造有限公司 Drivable region detection method and related device
CN111680730A (en) * 2020-06-01 2020-09-18 中国第一汽车股份有限公司 Method and device for generating geographic fence, computer equipment and storage medium
CN113108794A (en) * 2021-03-30 2021-07-13 北京深睿博联科技有限责任公司 Position identification method, device, equipment and computer readable storage medium
CN113518425A (en) * 2021-09-14 2021-10-19 武汉依迅北斗时空技术股份有限公司 Equipment positioning method and system
CN113518425B (en) * 2021-09-14 2022-01-07 武汉依迅北斗时空技术股份有限公司 Equipment positioning method and system
CN114674338A (en) * 2022-04-08 2022-06-28 石家庄铁道大学 Road travelable area fine recommendation method based on hierarchical input and output and double-attention jump connection
CN114674338B (en) * 2022-04-08 2024-05-07 石家庄铁道大学 Fine recommendation method for road drivable area based on layered input and output and double-attention jump
CN116863429A (en) * 2023-07-26 2023-10-10 小米汽车科技有限公司 Training method of detection model, and determination method and device of exercisable area
CN116863429B (en) * 2023-07-26 2024-05-31 小米汽车科技有限公司 Training method of detection model, and determination method and device of exercisable area

Also Published As

Publication number Publication date
CN108345875B (en) 2020-08-18

Similar Documents

Publication Publication Date Title
CN108345875A (en) Wheeled region detection model training method, detection method and device
CN111797893B (en) Neural network training method, image classification system and related equipment
CN110633745B (en) Image classification training method and device based on artificial intelligence and storage medium
CN112990211B (en) Training method, image processing method and device for neural network
CN112183577A (en) Training method of semi-supervised learning model, image processing method and equipment
CN113807399B (en) Neural network training method, neural network detection method and neural network training device
KR102279376B1 (en) Learning method, learning device for detecting lane using cnn and test method, test device using the same
CN111079602A (en) Vehicle fine granularity identification method and device based on multi-scale regional feature constraint
CN111931764B (en) Target detection method, target detection frame and related equipment
CN110135562B (en) Distillation learning method, system and device based on characteristic space change
Li et al. A novel spatial-temporal graph for skeleton-based driver action recognition
EP4086818A1 (en) Method of optimizing neural network model that is pre-trained, method of providing a graphical user interface related to optimizing neural network model, and neural network model processing system performing the same
CN111401517A (en) Method and device for searching perception network structure
Chen et al. Unsupervised segmentation in real-world images via spelke object inference
CN111797970B (en) Method and device for training neural network
CN113191241A (en) Model training method and related equipment
CN110059646A (en) The method and Target Searching Method of training action plan model
US20230004816A1 (en) Method of optimizing neural network model and neural network model processing system performing the same
Ou et al. Design of an end-to-end dual mode driver distraction detection system
CN116310318A (en) Interactive image segmentation method, device, computer equipment and storage medium
CN113342029B (en) Maximum sensor data acquisition path planning method and system based on unmanned aerial vehicle cluster
CN112668421B (en) Attention mechanism-based rapid classification method for hyperspectral crops of unmanned aerial vehicle
CN115620122A (en) Training method of neural network model, image re-recognition method and related equipment
Saha et al. Federated learning–based global road damage detection
Mensah et al. EFedDNN: Ensemble based federated deep neural networks for trajectory mode inference

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220422

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Patentee after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: Room 301, floor 3, block C, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing 100083

Patentee before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right