CN111046897A - Method for defining fuzzy event probability measure spanning different spaces - Google Patents

Method for defining fuzzy event probability measure spanning different spaces

Info

Publication number
CN111046897A
CN111046897A (application CN201811213228.3A)
Authority
CN
China
Prior art keywords
probability
space
value
distribution
fuzzy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811213228.3A
Other languages
Chinese (zh)
Inventor
顾泽苍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201811213228.3A priority Critical patent/CN111046897A/en
Publication of CN111046897A publication Critical patent/CN111046897A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method, in the field of information processing, for defining a fuzzy event probability measure spanning different spaces, characterized in that: both the spatial distance between data and the probability values of the data's probability distributions in probability space are taken into account, and the membership function of the fuzzy event probability measure may be any fuzzy value that maps the objective function to a result between 0 and 1 according to a humanly given rule. The effect of the method is: it can solve the problem of classifying data whose probability distributions are interwoven, and can integrate microscopic uncertain information at the macroscopic level into definite, stable, and valuable information, thereby achieving unexpected application effects.

Description

Method for defining fuzzy event probability measure spanning different spaces
[ technical field ]
The invention belongs to the field of artificial intelligence and relates to a method for defining a fuzzy event probability measure spanning different spaces.
[ background of the invention ]
Autonomous driving is the primary battlefield of artificial intelligence, but unfortunately research on machine learning dedicated to autonomous-driving applications is scarce and has not yet drawn wide attention.
The well-known Japanese Toyota corporation has issued a patent, "Driving direction estimating device" (Patent Document 1), which proposes that, during automatic driving, even when the driver fails to react to an unexpected situation, a machine learning algorithm based on an artificial-intelligence back-propagation neural network automatically selects a driving state so as to avoid a driving accident.
On 9 October 2016, Japan's NHK broadcast "How to cross the barriers to automatic driving", presented by a member of its commentary committee (Non-patent Document 1). On this topic, the commentator also raised several problems not yet solved in automatic driving systems:
Judgment conflicts between human and machine: in an autopilot experiment conducted by *** in February of that year, when the *** car turned right, there was a sand-pile obstacle ahead, and the *** car suddenly dodged into the right lane just as a large bus came up behind in that lane. The bus driver assumed the *** car would brake quickly and did not expect it to dodge into the right lane, so a serious collision occurred; Uber has also had a string of similar accidents.
The difficulty of fusing human and machine perception in following-distance selection: according to one company's survey, when driving on the road, 41% of drivers think the farther the leading vehicle the better, while others think a moderate distance is better, or even prefer to follow closely out of curiosity to catch up with the car ahead. How an autonomous car should fuse human and machine perception and choose the driving style closest to a human's thus becomes a complicated automatic control problem.
The problem of transferring authority between human and machine: in the transition phase between machine and human operation, situational awareness is not handed over. For example, in an emergency, automatic driving must be switched to manual driving immediately, but the emergency maneuver chosen by the automatic system may differ from the human's, easily causing accidents or losing precious time.
The trolley problem: how to minimize the number of victims in an emergency? This is the famous trolley problem, which involves not only thorny ethical difficulties but also technical ones. In the machine learning theory of automatic driving, no one has yet proposed a valuable solution.
A paper published by researchers at Hitachi, "Realizing automatic train operation by fuzzy predictive control" (Non-patent Document 2), proposes that automatic driving of a train can be realized by a rule base of fuzzy inference.
In "An automatic train operation system using fuzzy predictive control" (Non-patent Document 3), it is pointed out that although conventional PID control can accurately control train operation, smooth running is the key to automatic driving and to passenger comfort, and that the core problem of automatic driving is a multi-objective control problem that must simultaneously consider safety, running speed, running time between stations, comfort, power consumption, and stopping accuracy.
[ patent document ]
[ patent document 1 ] Japanese Patent Laid-Open No. 2008-225923
[ non-patent document 1 ]http://www.nhk.or.jp/kaisetsu-blog/100/255089.html
[ non-patent document 2 ]
https://www.jstage.jst.go.jp/article/ieejeiss1987/109/5/109_5_337/_pdf
[ non-patent document 3 ]
http: ac, coocan, jp, yansunobu, edu, intconttms, text, Sic07a _ train ato. pdf # search 27% is visible to the control フアジイ a message システム% 27
In the above (Patent Document 1), an artificial-intelligence neural network algorithm is adopted. In a neural network, the information of the objective function is mainly loaded, through training, onto a massive number of weight parameters. To obtain the best solution during learning, the weight values W and threshold values T must be searched, and the total number of combinations is {(W × T)^n}^P, where n is the number of nodes in one layer and P is the number of layers of the neural network. Such exponential computational complexity makes the computation, and the required hardware overhead, enormous. The stochastic gradient descent method (SGD) adopted by the loss function in deep learning yields only a locally optimal training value, so the "black box" problem inevitably arises. Furthermore, the threshold in the neural network model is artificially defined and unrelated to the mechanism of the human nervous system: the principle of cranial-nerve stimulation signals cannot be embodied in the traditional neural network model at all, nor can the mechanism by which the brain makes different judgments according to the different degrees of excitation generated by neuronal signals. The current neural network model can therefore only be academic and directional in character, and the gap between its theory and practical application is very large. In the deep learning stage, compared with the traditional neural network, the number of hidden layers is increased, which increases computational complexity without solving the fatal black-box problem of the traditional neural network; this leaves potential safety hazards in automatic driving, and the application prospects are hard to predict.
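The combination count {(W × T)^n}^P quoted above grows doubly exponentially, which is easy to see numerically. A small Python illustration (the function and parameter names are our own):

```python
def combination_count(w_levels, t_levels, nodes, layers):
    """Number of weight/threshold combinations {(W * T)^n}^P from the estimate above.

    w_levels: candidate weight values W, t_levels: candidate thresholds T,
    nodes: nodes per layer n, layers: layer count P.
    """
    return ((w_levels * t_levels) ** nodes) ** layers


# Even tiny networks produce astronomically many combinations:
# 10 weight levels, 10 threshold levels, 3 nodes, 2 layers -> (100^3)^2 = 10^12.
print(combination_count(10, 10, 3, 2))
```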
The "human-machine judgment conflict", the human-machine perception fusion problem of following-distance selection, the "human-machine authority transfer problem", and the "trolley problem" raised in (Non-patent Document 1) should be the problems to be solved in artificial-intelligence automatic driving, and indeed core problems of artificial intelligence, yet at present they have attracted no wide attention.
The above (Non-patent Document 2) mainly addresses automatic train operation: the proposed rule base based on fuzzy inference can realize automatic train driving, but establishing such a huge knowledge base requires the study of big data, and the approach can only handle two or three objective functions, so its application in automatic driving is difficult.
Although the above (Non-patent Document 3) proposes multi-objective fuzzy control, the fuzzy control employed struggles with a large number of objective functions, so each specific objective function is still controlled individually. In particular, for the fusion of human-machine perception with objective functions such as safety, energy saving, and comfort in automatic driving, simultaneous control of multiple objectives cannot be achieved: different objective functions do not lie in the same space, so a common optimal control point cannot be found, and even if the different objective functions are mapped into the same space, a common optimal intersection cannot be obtained by traditional methods. Therefore, the redundancy among the optimal controls of multiple objectives must be found so that multi-objective optimal control within an optimal range is truly realized; hence a machine learning model for multi-objective optimal control needs to be established.
[ summary of the invention ]
The first object of the present invention is: to provide a method, suitable for automatic driving, of performing structured feature extraction on unstructured 3-dimensional images, thereby achieving understanding and recognition of the automobile's environment image.
The second object of the present invention is: to provide a more powerful unsupervised machine learning model with a composite structure, improving the computational power of machine learning so that image approximation can be realized without training.
The third object of the invention is: to enable automatic driving to learn the high driving skill of a good driver, improving the control level of automatic driving, reducing the complexity of automatic driving control, and overcoming the NP problem in traditional automatic driving control.
The fourth object of the invention is: to provide a multi-objective optimal-control machine learning model and system device suitable for automatic driving, so that the optimal machine learning model can be controlled by multiple objective functions such as safe driving, quick arrival, comfortable riding, and energy saving.
The fifth object of the present invention is: to provide a method of extracting images by introducing the SDL model, opening a new way of processing machine learning images so as to improve the accuracy of image processing and image recognition.
The sixth object of the present invention is: to provide a method of computing distance spanning Euclidean space and probability space, whose distance formula possesses the metric properties of non-negativity, non-degeneracy, symmetry, and the triangle inequality.
The seventh object of the present invention is: to provide a method of computing a fuzzy event probability measure spanning Euclidean space and probability space, which can solve the problem of classifying data whose probability distributions are interwoven, and can integrate microscopic uncertain information at the macroscopic level into definite, stable, and valuable information, achieving unexpected application effects.
In order to achieve at least one of the above purposes, the invention provides the following technical scheme:
A method of defining a fuzzy event probability measure across different spaces, characterized in that:
both the spatial distance between data and the probability values of the probability distributions of the data in probability space are considered;
the formula for the fuzzy event probability measure, with the set R belonging to the set V of probability distributions, can be derived from the following formulas:
Figure BSA0000172245180000031
Figure BSA0000172245180000041
Figure BSA0000172245180000042
here,
Figure BSA0000172245180000043
Figure BSA0000172245180000044
in addition,
βj^(vj) = (1 + pfj^(vj))
αj = (1 + phj^(vj) + phj^(wj))
Here, Δj^(wj) is the distance error, presented in probability space wj, of the characteristic value wj ∈ W (j = 1, 2, …, n) having a probability distribution; mj^(wj) is the number of discrete probability distributions in probability space wj; Dij^(wj) is the length of the discrete probability distribution in probability space wj; Pij^(wj) is the probability value of the discrete probability distribution in probability space;
In addition, Δj^(vj) is the distance error, presented in probability space vj, of the characteristic value vj ∈ V (j = 1, 2, …, n) having a probability distribution; mj^(vj) is the number of discrete probability distributions in probability space vj; Dij^(vj) is the length of the discrete probability distribution in probability space vj; Pij^(vj) is the probability value of the discrete probability distribution in probability space vj;
Furthermore, pfj^(vj) is the probability distribution value of set element rj ∈ R (j = 1, 2, …, n), or of probability-distribution-set element wj ∈ W (j = 1, 2, …, n), at the position of probability-distribution-set element vj ∈ V (j = 1, 2, …, n) in probability space;
Similarly, phj^(wj) is the probability distribution value of set element rj ∈ R (j = 1, 2, …, n), or of probability-distribution-set element vj ∈ V (j = 1, 2, …, n), at the position of probability-distribution-set element wj ∈ W (j = 1, 2, …, n) in probability space;
Moreover, the membership-function formula of the fuzzy event probability measure includes any fuzzy value that maps the objective function to a result between 0 and 1 according to a humanly given rule; or a formula that jointly expresses fuzzy information and probability information; or a formula that jointly expresses spatial information and probability information.
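As a purely illustrative sketch of such a membership function (not the patent's formula, whose exact form is given only as figure images above), the following Python function jointly maps a spatial distance and a probability value into a fuzzy grade between 0 and 1; the function name and the prob/(1 + α·d) form are our own assumptions:

```python
def membership(distance, prob, alpha=1.0):
    """Illustrative membership function: combines a spatial distance (>= 0) and a
    probability value in [0, 1] into a single fuzzy grade in [0, 1].
    A larger distance lowers the grade; a larger probability raises it."""
    return prob / (1.0 + alpha * distance)
```

For example, a point at zero distance with probability 1 receives grade 1, and the grade decays monotonically as the spatial distance grows, which is the qualitative behavior the text describes.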
Moreover, the spatial information of the fuzzy event probability measure is based on a distance unifying Euclidean space and probability space, satisfying the following distance conditions:
(1) Non-negativity: d(w, v) ≥ 0;
(2) Non-degeneracy: if d(w, v) = 0, then w = v;
(3) Symmetry: d(w, v) = d(v, w);
(4) Triangle inequality: d(w, v) ≤ d(w, u) + d(u, v).
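As a sanity check, the four distance conditions can be verified numerically on sample points. A minimal Python sketch, using the ordinary Euclidean distance as a stand-in (the patent's unified Euclidean/probability-space distance formula appears above only as figure images); the function names are our own:

```python
import math


def euclidean(w, v):
    """Stand-in distance; the patent's unified distance is assumed to obey the same axioms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(w, v)))


def satisfies_metric_axioms(d, points, tol=1e-9):
    """Check conditions (1)-(4) on every pair and triple of sample points."""
    for w in points:
        for v in points:
            if d(w, v) < -tol:                         # (1) non-negativity
                return False
            if (d(w, v) < tol) != (w == v):            # (2) non-degeneracy
                return False
            if abs(d(w, v) - d(v, w)) > tol:           # (3) symmetry
                return False
            for u in points:
                if d(w, v) > d(w, u) + d(u, v) + tol:  # (4) triangle inequality
                    return False
    return True
```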
the invention provides a method for defining fuzzy event probability measure spanning different spaces, which has the implementation effects that: the problem of classification between data interwoven by probability distribution can be solved, microscopic uncertain information can be generated through macroscopic integration, and definite and stable valuable information can be generated, so that unexpected application effects can be realized.
Drawings
FIG. 1 is a schematic diagram of a method for approximating a road lane by automatic machine learning
FIG. 2 is a flow chart of an approach to a roadway lane through automatic machine learning
FIG. 3 is a schematic diagram of the definition of distances across different spaces including a probability space
FIG. 4 is a flow chart of the extraction of the overall characteristics of the lane line environment
FIG. 5 is a flow chart of a calculation for obtaining a maximum probability gray scale value
FIG. 6 is a flow chart of image feature machine learning for lane line environment
FIG. 7 is a flowchart of lane line image extraction
FIG. 8 is an effect diagram of lane line approximation using automated machine learning
FIG. 9 is an effect diagram of extracting a lane line image by introducing an SDL model
FIG. 10 is a schematic diagram of a method of solving for a derivative value of a maximum probability
FIG. 11 is an effect diagram of marginalizing an image
FIG. 12 is a graph of four characteristics of "decision of consciousness" regularized membership functions
FIG. 13 is a customized model for automated driving "awareness determination"
FIG. 14 is a schematic view of a machine-dependent block configuration
FIG. 15 is a schematic diagram of "intelligence gain" of process control for an autonomous vehicle
FIG. 16 is a schematic illustration of a situation that may be encountered during autonomous driving
FIG. 17 is a schematic diagram of the fusion method of "intelligence acquisition" and "consciousness determination"
Description of the symbols
300 is a discrete lattice of points near the location of a lane on an edge image
301 is the center line of the discrete dot matrix located near the lane position
302 and 303 are straight lines on either side of the center line 301
301 is a euclidean space covering a probability space
302 is the center point of probability distribution 320
303 is the scale of the first probability distribution value of the probability distribution 320
304 is a scale of the second probability distribution value of probability distribution 320
305 is a scale of the third probability distribution value of probability distribution 320
306 is the domain to which the scale of the first probability distribution of probability distributions 320 belongs
307 is the domain to which the scale of the second probability distribution of probability distributions 320 belongs
308 is the domain to which the scale of the third probability distribution of probability distributions 320 belongs
309 is a point of Euclidean space
310 is the center point of the probability distribution 330
311 is a scale of the first probability distribution value of the probability distribution 330
312 is a scale of the second probability distribution value of probability distribution 330
313 is the scale of the third probability distribution value of the probability distribution 330
314 is the domain to which the scale of the first probability distribution of probability distributions 330 belongs
315 is the domain of the scale of the second probability distribution of probability distributions 330
316 is the field of scale of the third probability distribution of probability distributions 330
320 and 330 are probability spaces
Detailed Description
The embodiments of the present invention will be described in further detail below with reference to the attached drawings, but the embodiments of the present invention are illustrative rather than restrictive.
Fig. 1 is a schematic diagram of an approach method for realizing a road lane by automatic machine learning.
The SDL (Super Deep Learning) model proposed in this application refers to an artificial-intelligence system formed from several probability-scale self-organizing machine learning models, or several automatic machine learning models, together with all or part of a distance formula that unifies Euclidean space and probability space, or a fuzzy event probability measure formula that unifies Euclidean space and probability space.
As shown in FIG. 1(a): this is an unsupervised machine learning model of probability-scale self-organization as defined above. Its iterative approach is: in a given space, applying a given scale necessarily generates a new space, and the new space generates a new scale, so that after several iterations a space converging on the scale is necessarily obtained. For example, if the scale is the maximum-probability scale, then after several iterations a maximum-probability space, a maximum-probability scale, and a maximum-probability distribution can be obtained.
As shown in FIG. 1(b): this is a more powerful automatic machine learning model with a composite structure. Its principle is: in a given space, based on an optimized scale, or the scale of maximum information, or the scale of maximum probability, iteration generates an optimized space, or a space of maximum information, maximum density, or maximum probability; a function-approximation model is added in the new space, where the approximation effect is better than in the previous space; the new space then yields a new optimized scale, which by iteration generates a further optimized space, and so on. The function-approximation model thus keeps performing optimal approximation in successive spaces, the approximation effect keeps improving, and after several iterations it reaches the optimal state. Machine learning with this composite model can reach the optimal function-approximation effect without training, and may therefore be called automatic machine learning.
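The iterate-scale-space loop described above can be sketched in one dimension: compute a scale from the current space, keep only the samples within the scale, and repeat until the space stops shrinking. A hedged Python sketch; using mean ± k·σ as the "probability scale" is our simplification of the patent's idea, not its exact formula:

```python
import statistics


def self_organize(samples, k=2.0, max_iter=20):
    """Iterate: scale (mean, std) -> new space (samples within k*std of the mean)
    -> new scale, until the space stops shrinking, i.e. converges on the
    maximum-probability region of the data."""
    space = list(samples)
    mu = statistics.fmean(space)
    sigma = statistics.pstdev(space)
    for _ in range(max_iter):
        mu = statistics.fmean(space)
        sigma = statistics.pstdev(space)
        if sigma == 0:
            break
        new_space = [x for x in space if abs(x - mu) <= k * sigma]
        if len(new_space) == len(space):
            break  # converged: the scale no longer generates a smaller space
        space = new_space
    return mu, sigma, space
```

Running it on a cluster with one outlier, the outlier is discarded in the first iteration and the scale converges on the cluster's center.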
The optimized scale includes: fractal operations, or genetic operations simulating the inheritance and evolution of organisms in a natural environment, or one of a maximum fuzzy value, a maximum density value, a maximum approximation value, or a maximum similarity-relation value.
For non-probability spaces it may also be extended to a Euclidean Distance scale, a Manhattan Distance scale, a Chebyshev Distance scale, a Minkowski Distance scale, a Mahalanobis Distance scale, or a Cosine (included-angle) similarity scale; or a distance scale unifying Euclidean space and probability space; or a fuzzy event probability measure unifying Euclidean space and probability space.
It may also be one of a Jaccard Similarity Coefficient scale or a Hamming Distance scale.
The maximum information amount refers to: the maximum information entropy.
The above-mentioned scale of maximum probability refers to: the maximum probability value of one of the normal distribution, multivariate normal distribution, lognormal distribution, exponential distribution, t distribution, F distribution, χ² distribution, binomial distribution, negative binomial distribution, multinomial distribution, Poisson distribution, Erlang distribution, hypergeometric distribution, traffic distribution, Weibull distribution, triangular distribution, Beta distribution, and Gamma distribution.
The above-mentioned model for performing the function approximation may be linear regression approximation, best square approximation, least-squares approximation, Chebyshev polynomial approximation, spline-function approximation, interpolation-polynomial approximation, trigonometric-polynomial approximation, rational approximation, Padé approximation, or the like.
Fig. 1(c) is a schematic diagram of lane line approximation implemented by automatic learning.
As shown in FIG. 1(c): (300) is the lane-line image, a diagonal line composed of a discrete lattice, and (301) is the center line of the discrete lattice near the lane-line position; in the initial state this line is obtained by the approximation function from the initially given space. Within a given space there necessarily exists a scale that brings the lattice within the scale closest to the lane-line approximation line (301); the lattice within the scale is retained and the lattice outside the scale is removed, so that a new space is generated. Here this scale is the variance of the probability distribution, and may also be the lattice density of the region enclosed by (302) and (301), the lattice density of the region enclosed by (303) and (301), or the two-dimensional probability distribution of the lattice in either of those regions.
When density is taken as the scale: if the lattice density of the region enclosed by (302) and (301), or by (303) and (301), increases, the distance between (302) and (301), or between (303) and (301), is reduced; conversely, if that lattice density decreases, the distance is increased.
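The density scale in question can be computed as a point count per unit area of the band between the center line (301) and an offset line such as (302). A hedged Python sketch; the function name and the per-unit-area normalization are our own assumptions:

```python
import math


def band_density(points, a, b, width, x_min, x_max):
    """Lattice density of the band on one side of the center line y = a + b*x,
    up to perpendicular distance `width` (the region between lines 301 and 302).
    Density = points in the band / band area."""
    norm = math.sqrt(1.0 + b * b)
    count = sum(1 for x, y in points
                if x_min <= x <= x_max and 0.0 <= (y - a - b * x) / norm <= width)
    length = (x_max - x_min) * norm  # arc length of the center line over [x_min, x_max]
    return count / (length * width)
```

An iteration step would then widen or narrow `width` according to whether this density falls or rises, as described above.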
When the maximum probability-distribution value of a two-dimensional probability distribution is used as the scale, a two-dimensional probability distribution over rectangular regions should be used.
FIG. 2 is a flow chart for achieving an approximation of a roadway lane by automatic machine learning.
As shown in FIG. 1 and FIG. 2, the processing steps for recognizing the lane by automatic machine learning are as follows:
S1 is the initialization step: lane-line identification operates on a binary image transformed from the environment image of the autonomous vehicle, or on a lane-line image extracted from the environment image by machine learning and then binarized.
In initialization, an approximate initial range in which the lane exists is given; the initial iteration range may be about half of the image, and at least part of the processed lane-line image must be included in it. Let the positions of the lattice points with 256-level gray values within the given range be (xij, yij) (i = 1, 2, …, n; j = 1, 2, …, m), where i is the index of the i-th iteration and j is the order of the lattice points, which is independent of the processing result.
S2 is the step of generating the center line: within the range where the lane exists generated in the initialization step S1, or the range obtained during iteration, the center line (301) is determined by the following formulas.
Let Pi (i = 1, 2, …, n) be the set of lattice points in the i-th given range, containing m lattice points pij ∈ Pi (i = 1, 2, …, n; j = 1, 2, …, m), each lattice point pij having coordinates (xij, yij); the following calculations are performed:
[ Equation 1 ]
x̄i = (1/m) Σj=1..m xij
[ Equation 2 ]
ȳi = (1/m) Σj=1..m yij
[ Equation 3 ]
bi = Σj=1..m (xij − x̄i)(yij − ȳi) / Σj=1..m (xij − x̄i)²
[ Equation 4 ]
ai = ȳi − bi x̄i
[ Equation 5 ]
yi = ai + bi xi
Using these formulas, the straight line (301) closest to all the lattice points pij ∈ Pi of the lattice set Pi within the given range can be obtained.
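Equations 1 to 5 are the standard least-squares line fit and can be sketched directly in Python (the function name is our own):

```python
def fit_center_line(points):
    """Least-squares line y = a + b*x through a lattice of (x, y) points.

    Implements Equations 1-5: the coordinate means, the slope, and the intercept."""
    m = len(points)
    x_bar = sum(x for x, _ in points) / m                        # Equation 1
    y_bar = sum(y for _, y in points) / m                        # Equation 2
    sxx = sum((x - x_bar) ** 2 for x, _ in points)
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in points)
    b = sxy / sxx                                                # Equation 3
    a = y_bar - b * x_bar                                        # Equation 4
    return a, b                                                  # line: y = a + b*x (Equation 5)
```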
S3 is the step of calculating the distance from the lattice points to the center line: for the set Pi of lattice points in the i-th given range and all lattice points pij ∈ Pi (i = 1, 2, …, n; j = 1, 2, …, m), the distance to the straight line formed in S2 is calculated as follows:
[ Equation 6 ]
dij = (yij − ai − bi xij) / √(1 + bi²)
By judging the sign of dij, one can tell on which side of the linear regression line the pixels in the given region lie; after classification, negatively signed distances are taken as positive values.
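Equation 6 and the sign-based side classification can be sketched as follows (function names are our own):

```python
import math


def signed_distance(a, b, x, y):
    """Signed perpendicular distance from lattice point (x, y) to the line
    y = a + b*x (Equation 6); the sign tells which side of the line the pixel is on."""
    return (y - a - b * x) / math.sqrt(1.0 + b * b)


def side(a, b, x, y):
    """Classify a pixel as above (+1), on (0), or below (-1) the center line."""
    d = signed_distance(a, b, x, y)
    return (d > 0) - (d < 0)
```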
S4 is the step of finding the scale: that is, solving the variance σi² and the mean d̄i of the distances from the center line to all the lattice points in the given i-th range.
[ Equation 7 ]
σi² = (1/m) Σj=1..m (dij − d̄i)²
[ Equation 8 ]
d̄i = (1/m) Σj=1..m dij
S5 is the new space generation step: taking the scale S_i solved in S4 as a reference, a new space is obtained. Since equations 7 and 8 describe a one-dimensional probability distribution, a distortion problem of function approximation can occur; therefore the density of the lattice points in the area enclosed between the linear regression line (301) and the scale lines (302) or (303) on either side of S_i can be used as the iteration scale for generating the new space: when the density increases, the distance between (302) and (301), or between (303) and (301), is reduced, and conversely it is increased.
The other method is to directly calculate the iteration scale by using a two-dimensional rectangular probability distribution formula, thereby generating a new iteration space.
S6 is the iteration completion judgment step: if "no", jump to S2 and continue the iterative process; if "yes", go to S7 and end the iteration. The judgment basis is: has the number of iterations reached the maximum, or has the iteration reached the best approximation? If yes, the iterative processing step ends; otherwise jump to S2 and continue the iterative process.
S7 is the end step.
Through the iterative processing, the position of the lane can be obtained, and the lane recognition function is achieved.
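The iterative procedure of steps S2 through S6 can be sketched as follows; the range-shrinking rule used here (keeping points within mean + k × scale of the fitted line) is an illustrative stand-in for the density-based rule of step S5, and the sample lattice is hypothetical:

```python
import math

def _fit_line(points):
    # Least-squares center line (equations 1-5): y = a + b*x.
    m = len(points)
    xm = sum(x for x, _ in points) / m
    ym = sum(y for _, y in points) / m
    b = sum((x - xm) * (y - ym) for x, y in points) / \
        sum((x - xm) ** 2 for x, _ in points)
    return ym - b * xm, b

def iterate_lane_fit(points, max_iter=10, k=2.0):
    # Sketch of steps S2-S6: fit the line, measure the distance scale of the
    # surviving lattice points (equations 6-8), shrink the range, and stop
    # when no more points are removed or too few points remain.
    pts = list(points)
    for _ in range(max_iter):
        a, b = _fit_line(pts)
        d = [abs(y - a - b * x) / math.sqrt(1 + b * b) for x, y in pts]
        dm = sum(d) / len(d)                                   # equation 7
        s = math.sqrt(sum((v - dm) ** 2 for v in d) / len(d))  # equation 8
        kept = [p for p, v in zip(pts, d) if v <= dm + k * s]
        if len(kept) == len(pts) or len(kept) < 3:
            break
        pts = kept
    return a, b, pts

# Ten points on y = x plus two off-line outliers.
lattice = [(float(x), float(x)) for x in range(10)] + [(2.0, 9.0), (7.0, 0.0)]
a, b, kept = iterate_lane_fit(lattice)
```

In this sketch the two outliers are discarded on the first pass and the remaining points fit the line y = x exactly, mirroring how the iteration approaches the lane line.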
FIG. 3 is a diagram illustrating the definition of distances across different spaces, including a probability space.
As shown in fig. 3: (301) is a Euclidean space that covers the probability spaces. There are two probability spaces, (320) and (330), in the Euclidean space. (302) is the center point of the probability distribution (320). (303) is the scale of the first probability distribution value of the probability distribution (320), (304) is the scale of the second probability distribution value, and (305) is the scale of the third probability distribution value. In addition, (306) is the domain of the first probability distribution scale of (320); the scale spacing between (302) and (303) is D_1j^(320), and the value of the probability distribution in this domain is p_1j^(320). (307) is the domain of the second probability distribution scale of (320); the scale spacing between (303) and (304) is D_2j^(320), and the value of the probability distribution in this domain is p_2j^(320). (308) is the domain of the third probability distribution scale of (320); the scale spacing between (304) and (305) is D_3j^(320), and the value of the probability distribution in this domain is p_3j^(320).
Likewise, (310) is the center point of the probability distribution (330). (311) is the scale of the first probability distribution value of the probability distribution (330), (312) is the scale of the second probability distribution value, and (313) is the scale of the third probability distribution value. Further, (314) is the domain of the first probability distribution scale of (330); the scale spacing between (310) and (311) is D_1j^(330), and the value of the probability distribution in this domain is p_1j^(330). (315) is the domain of the second probability distribution scale of (330); the scale spacing between (311) and (312) is D_2j^(330), and the value of the probability distribution in this domain is p_2j^(330). (316) is the domain of the third probability distribution scale of (330); the scale spacing between (312) and (313) is D_3j^(330), and the value of the probability distribution in this domain is p_3j^(330).
Further, let the centers (302) and (310) of the probability distributions of the probability spaces (320) and (330) be elements w_j ∈ W and v_j ∈ V of two data sets. Then the probability distribution centers (302) and (310) are connected by a straight line, an arbitrary point r_j ∈ R is taken on this line, and it is determined whether the point r_j ∈ R belongs to the probability space (320) or to the probability space (330).
Then, let m_j^(wj) be the number of probability distribution scales between r_j ∈ R and the probability distribution center w_j ∈ W, and m_j^(vj) the number of probability distribution scales between r_j ∈ R and the probability distribution center v_j ∈ V. For example, in FIG. 3, m_j^(wj) = 3, p_ij^(wj) = p_ij^(320) and p_ij^(vj) = p_ij^(330) [i = 1, 2, …, (m_j^(wj) = m_j^(vj))].
The distance G (V, W) between the set V of probability spaces (330) and the set W of probability spaces (320) can be uniformly calculated by the following equation.
[Equation 9]

G(V, W) = Σ_j G_j(v_j, r_j, w_j)

G_j(v_j, r_j, w_j) = Σ_{i=1…m_j^(vj)} p_ij^(vj) · D_ij^(vj) + Σ_{i=1…m_j^(wj)} p_ij^(wj) · D_ij^(wj) − (Δ_j^(vj) + Δ_j^(wj))

Here, Δ_j^(vj) is the error term on the side of the probability space (330); in addition, Δ_j^(wj) is the error term on the side of the probability space (320).
The basis for the above equation is as follows: the distance from the set V of probability spaces to the set W of probability spaces can be expressed, through a set R introduced between V and W, as the distance from R to V plus the distance from R to W. In this case the distance from V to W holds regardless of whether the probability spaces (330) and (320) are simultaneous, and the symmetry of the distance scale and the triangle inequality are satisfied.
In the above formula, Δ_j^(vj) and Δ_j^(wj) are the error values between the Euclidean distance and the probability-space distance in the probability spaces (320) and (330): since the distance across a region whose probability distribution is "1" should be "0", eliminating these two error values makes it possible to obtain a strictly unified distance spanning the Euclidean space and the probability spaces (320) and (330).
Summarizing the method proposed in fig. 3 for obtaining a distance spanning a Euclidean space and a probability space: it is characterized in that at least one probability space exists in the Euclidean space, and when a line segment crosses a region of the probability space, the probability distance of that segment is related to the probability value of the region passed through.
The Euclidean space above may be extended to one of the following: Manhattan space; Chebyshev space; Minkowski space; Mahalanobis space; or the Cosine space of the angle.
The above equation 9 can unify the distance between the Euclidean space and the probability space and satisfies the following distance conditions:
(1) Non-negativity: d(w, v) ≥ 0;
(2) Non-degeneracy: if d(w, v) = 0, then w = v;
(3) Symmetry: d(w, v) = d(v, w);
(4) The triangle inequality: d(w, v) ≤ d(w, r) + d(r, v).
based on the equation 9 that can unify the distance between euclidean space and probability space and all the conditions of distance scale, a more rigorous scale formula for fuzzy event probability measure can be derived as follows.
As mentioned above, consider the fuzzy event probability measure between r_j ∈ R and the center value v_j ∈ V of the probability distribution of the probability space (330). If r_j ∈ R falls exactly in a certain domain of the probability distribution of the probability space (330), the probability distribution value in this domain can be set as p_fj^(vj); in addition, if the center value w_j ∈ W of the probability distribution of the other probability space (320) also falls exactly in a certain domain of the probability distribution of the probability space (330), the probability distribution value in that domain can be set as p_fj^(wj). This amounts to the two probability distributions almost coinciding.
From equation 9, the formula for the probability measure of a fuzzy event that the set R belongs to the set V can be derived from the following formula:
[Equation 10]

F(V) = [formula image in the original]

Here,

p_hj^(vj), p_hj^(wj): [formula images in the original]

In addition,

β_j^(vj) = (1 + p_fj^(vj))

α_j = (1 + p_hj^(vj) + p_hj^(wj))
Referring to the above equation 9 and equation 10, D_ij^(wj) and D_ij^(vj), p_ij^(wj) and p_ij^(vj), m_j^(wj) and m_j^(vj), p_fj^(vj) and p_fj^(wj), p_hj^(vj) and p_hj^(wj) can all be calculated, and the fuzzy event probability measure formula for the set R belonging to the set W can be calculated by the following formula.
[Equation 11]

F(W) = [formula image in the original]

Here,

p_hj^(vj), p_hj^(wj): [formula images in the original]

In addition,

β_j^(wj) = (1 + p_fj^(wj))

α_j = (1 + p_hj^(vj) + p_hj^(wj))
Finally, according to equation 10 and equation 11, the result of the ultra-deep adversarial learning can be obtained by the following formula:
[ EQUATION 12 ]
F=(F(W)/F(V))
From equation 12, the best classification can be obtained for any set R between the two probability distributions. Like equation 9, equation 10 and equation 11 also satisfy all the conditions of the distance scale.
The adversarial learning of the above equation 12 can also start the adversarial process at the microscopic level, as in the following equation 13.
[Equation 13]

F = [formula image in the original]

Here,

[formula images in the original]

In addition,

β_j^(wj) = (1 + p_fj^(wj))

β_j^(vj) = (1 + p_fj^(vj))
Equation 13 is a formulaic adversarial learning model: through adversarial learning it integrates microscopically uncertain spatial information and stochastic probability information, and macroscopically generates deterministic, stable, and valuable information. This is the superiority of the adversarial learning of the fuzzy event probability measure.
The above formula for the membership function of the fuzzy event probability measure is only an example. Any formula that, according to a rule given by people, turns the objective function into a fuzzy value between 0 and 1, or any formula that considers both fuzzy information and probability information, or both spatial information and probability information, is within the scope of the present invention.
Fig. 4 is a flowchart of extracting the overall characteristics of the lane line environment.
As shown in fig. 4: the lane line environment overall characteristic learning program reads the lane line environment overall RGB image, or converts the RGB image into Lab image, and the lane line environment overall characteristic can be obtained through the following 7 steps. The lane line environment overall image mainly represents environmental features such as daytime, night, cloudy day, sunny day, when the road surface illumination is good, when the road surface illumination is bad, and the like.
S1 is the step of reading the RGB image: in this step, the RGB image of the entire lane line environment is read, or the RGB image is converted into a Lab image in which only the four colors +a, -a, +b and -b are used, removing the lightness in order to prevent the brightness of the image from affecting the accuracy of recognition.
S2 is the color image selection step: one color is selected for processing from the three colors R, G and B, or from the four Lab colors +a, -a, +b and -b.
S3 is the maximum probability gray value calculation step: in this step, the "maximum probability gray value calculation" subroutine is called to find the maximum-probability maximum gray feature value of the given color image.
S4Is an image gray level inversion step: considering that two feature values, the maximum gray-scale value feature value, and the minimum gray-scale value feature value are extracted for the lane line environment whole image of one color, the R color, the G color, and the B color image may extract 6 feature values, or the + a color, -a color, + B color, -B color four colors, 8 feature values.
S5 is the maximum probability gray value calculation step: as in the third step S3, the "maximum probability gray value calculation" subroutine is called, here to find the maximum-probability minimum gray feature value of the given color image.
S6 is the feature extraction completion judgment step: have the two image features, the maximum-probability maximum gray value and the maximum-probability minimum gray value, been extracted for each image of the three colors R, G and B, or of the four Lab colors +a, -a, +b and -b? If images of other colors have not yet been processed, jump back to the second step S2; if extraction is complete, proceed to the next step.
S7 is the return step: return to the main program.
The above calculation of the maximum probability gray scale value is performed by the following method.
Fig. 5 is a flowchart of the calculation for obtaining the maximum probability gray scale value.
As shown in fig. 5: the maximum probability gray value calculation for a given color image is done by the following 5 steps.
S1 is the initialization step: in this step, the maximum iteration number MN is set, for which 5 to 10 can generally be selected, and then an iteration progress constant v is set, which is mainly used to judge whether the iteration is still having an effect.
S2 is the step of solving the mean and variance of the gray values:
Let the a image consist of n × m pixels a_ij (i = 1, 2, …, n; j = 1, 2, …, m); the average gray value g^(k) of the a image at the k-th iteration is:
[Equation 14]

g^(k) = (1/(n × m)) Σ_{i=1…n} Σ_{j=1…m} a_ij
The dispersion of the probability distribution of the gray values of the image a is:
[Equation 15]

S2^(k) = √( (1/(n × m)) Σ_{i=1…n} Σ_{j=1…m} (a_ij − g^(k))² )
S3 is the self-organizing processing step: with g^(k) as the center and S2^(k) as the two-sided maximum probability scale, pixels falling within the boundary are retained and pixels outside the boundary are removed, forming a new pixel set, that is, a new maximum probability space.
S4 is the iteration completion judgment step: has the number of iterations reached the maximum number MN, or is |S2^(k+1) − S2^(k)| ≤ v? If yes, the iteration is finished and the process goes to the fifth step S5; if not, it jumps to the second step S2 and the iterative process continues.
S5 is the iteration return step: return to the main program.
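The five steps of fig. 5 can be sketched as follows; the iteration progress constant v and the sample gray values are illustrative:

```python
import math

def max_probability_gray(pixels, max_iter=10, v=0.1):
    # Probability-scale self-organizing iteration of fig. 5 (a sketch):
    # repeatedly compute the mean g and scale S2 of the surviving gray values
    # (equations 14-15), keep only pixels inside [g - S2, g + S2], and stop
    # when S2 changes by no more than the progress constant v.
    cur = list(pixels)
    g, s, prev = 0.0, 0.0, None
    for _ in range(max_iter):
        g = sum(cur) / len(cur)                                   # equation 14
        s = math.sqrt(sum((p - g) ** 2 for p in cur) / len(cur))  # equation 15
        if prev is not None and abs(prev - s) <= v:
            break
        prev = s
        kept = [p for p in cur if g - s <= p <= g + s]            # step S3
        if len(kept) < 2 or len(kept) == len(cur):
            break
        cur = kept
    return g, s   # maximum probability value and maximum probability scale

# A cluster of gray values around 100 plus a few bright outliers.
pixels = [98, 99, 100, 101, 102] * 4 + [200, 210, 220]
g, s = max_probability_gray(pixels)
```

The outliers are shed in the first pass and the returned maximum probability value settles on the dominant cluster.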
Here, a lane line local feature extraction method used for extracting the lane line image is also provided: within a given labeling range of the lane line, the feature value of the grayscale image of the lane line and the feature value of the grayscale difference between the lane line and its background are extracted.
Following the above method for extracting the overall features of the lane line environment, first, using the machine learning method for the maximum probability gray value of fig. 5, the maximum probability gray values of the R, G and B images of the lane line and of the corresponding non-lane-line background are obtained, 6 maximum probability gray values in total; or, for the four colors +a, -a, +b and -b, 8 maximum probability gray values in total. The maximum probability gray value of each non-lane-line color is then subtracted from that of the corresponding lane line color, taking the absolute value of each result. This yields 3 lane line difference feature values for the three RGB colors, which, together with the 3 maximum probability gray value features of the lane line itself and the 6 overall features of the lane line environment, form a feature vector of 12 feature values reflecting the lane line image. Alternatively, for the four colors +a, -a, +b and -b there are 4 difference feature values per lane, i.e. 8 for the two lanes, and 4 feature values of the lane line itself per lane, i.e. 8 for the two lanes, which, together with the 6 overall environment features, form a feature vector of 22 feature values reflecting the lane line image.
The specific lane line image extraction is performed in two steps. One is the lane line labeling learning step, in which "what is a lane line image" is taught to the machine through learning from human labeling, generating feature vector query data that expresses the features of the lane line image and storing it in a database. The data labeling technique adopted here performs machine learning on the labeled data to obtain its probability distribution, so that the effect of the large-scale data labeling required by traditional deep learning can be achieved with only small-scale data labeling.
The other step is lane line image extraction: using the lane line environment image read online, the 22 sample feature values are calculated by the above method, the distances between these sample feature values and the feature query data stored in the database are computed, and the lane line gray value in the feature vector with the closest distance is taken as the lane line gray value of the sample image; the gray values of the three RGB images, or of the +a, -a, +b and -b images, are then extracted to obtain the lane line image.
Fig. 6 is a flow chart of machine learning of lane line environment image features.
As shown in fig. 6: the learning of the overall image characteristics of the lane line environment is completed by the following nine steps.
S1 is the initialization step: the number of learning passes is set, with one learning pass performed for each whole image of the lane line environment. Let the number of learning passes be g.
S2 is the step of reading the video image to be learned: the video image that needs to be learned is read.
S3 is the function switching step: timely switching between the machine learning function for lane line environment image features and the lane line identification function is performed. Lane line labeling can be carried out directly by manual work during automatic driving.
S4 is the lane line labeling judgment step: judge whether the video image carries a labeling flag; if so, go to the next step S5, and if not, return to step S2 and continue reading video images.
S5 is the lane line image feature extraction step: in this step, the above "environment overall image feature extraction" subroutine is called first, and then the above lane line local feature extraction is performed according to the labeled position, obtaining a feature vector composed of 12 feature values.
S6 is the machine learning data registration step: the environment overall image feature values obtained by the "environment overall image feature extraction" subroutine, together with the feature vector composed of the 12 lane line local feature values, are registered in the database.
S7 is the learning completion judgment step: judge whether all g passes of machine learning have been completed; if yes, go to the next step, and if no, jump back to the second step S2.
S8 is the machine learning data processing step: after g learning passes over the same lane line image, the g feature vectors, each composed of 12 feature values, are machine-learned to obtain a maximum probability feature vector of 12 maximum probability values and 12 maximum probability scales; according to the statistical probability distribution characteristics, the maximum probability values and maximum probability scales constitute the maximum probability distribution information of the lane line image.
S9 is the end step: the program processing is completed.
Fig. 7 is a flow chart of lane line image extraction.
As shown in fig. 7: the extraction of the lane line image can be obtained by the following steps:
S1 is the initialization step: the probability scale is set to 2 times the variance corresponding to the minimum distance, i.e. the value in the record to which the minimum distance corresponds.
S2 is the step of reading a sample image: one image is read from the lane line environment image video.
S3 is the feature value calculation step: 6 feature values of the background image are obtained from the RGB three-color images of the sample image, and the lane line difference features comprise 12 features for the left and right lanes, 18 features in total. Note that the learned feature value data correspond to each of the three RGB colors. Alternatively, 8 feature values of the background image are obtained from the Lab color images of the sample image, and the lane line difference features comprise 16 features for the left and right lanes, 24 features in total.
S4 is the sample feature value distance calculation step:
Let the lane line environment images in different states be F_z (z = 1, 2, …, g). Each image F_z can generate h features, giving h × g machine-learned maximum probability feature value data L_ij ∈ L_j (i = 1, 2, …, h; j = 1, 2, …, g) and maximum probability scale data M_ij ∈ M_j (i = 1, 2, …, h; j = 1, 2, …, g), i.e.:
[Equation 16]

L_j = (L_1j, L_2j, …, L_hj) (j = 1, 2, …, g)

[Equation 17]

M_j = (M_1j, M_2j, …, M_hj) (j = 1, 2, …, g)
The known sample feature data S_j ∈ S (j = 1, 2, …, h) are:
(S_1, S_2, …, S_h)
The distance between the sample feature data S_j ∈ S (j = 1, 2, …, h) and the i-th registered machine learning data L_i is:
[Equation 18]

G(i) = Σ_{j=1…h} D_ij

Here,

D_ij = |S_j − L_ij| − M_ij when |S_j − L_ij| − M_ij > 0, otherwise D_ij = 0
In the above formula, M_ij is the maximum probability scale of the j-th feature value of the i-th feature vector. The probability distance within the maximum probability space formed around the maximum probability value L_ij by the maximum probability scale M_ij is "0", so the distance spanning the Euclidean space and the probability space can be regarded as the Euclidean distance minus this error, which gives the simplified unified Euclidean-and-probability-space distance formula of equation 18.
S5 is the minimum distance calculation step: for the g maximum probability feature vectors of equation 16 and the g maximum probability scale feature vectors of equation 17, the distances to the calculated sample feature vector can be derived by equation 18, giving h distances:
G(1), G(2), …, G(h)
The minimum distance min G(i) is then found.
S6 is the lane line extraction step: the maximum probability gray values of the three RGB colors, or of the four Lab colors, of the lane line image corresponding to the i-th feature vector are used as the gray values for extracting the lane line image.
S7 is the end step: the program processing ends.
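The distance of equation 18 and the minimum-distance search of step S5 can be sketched as follows; the function names and the sample records are illustrative:

```python
def probability_distance(sample, L_i, M_i):
    # Equation 18 (a sketch): within the maximum probability scale M_ij the
    # probability distance is "0", so that error is subtracted from the
    # Euclidean distance; negative contributions are clipped to 0.
    total = 0.0
    for s, l, m in zip(sample, L_i, M_i):
        d = abs(s - l) - m
        total += d if d > 0 else 0.0
    return total

def nearest_record(sample, records):
    # Step S5: index of the registered record with the minimum distance G(i).
    dists = [probability_distance(sample, L, M) for L, M in records]
    return dists.index(min(dists))

# Illustrative learned records: (maximum probability values, maximum probability scales).
records = [([10.0, 20.0], [1.0, 1.0]), ([50.0, 60.0], [5.0, 5.0])]
```

A sample such as [12.0, 21.0] falls within the scale of the second feature of the first record, so only the first feature contributes to its distance, and the first record is selected as the closest.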
Fig. 8 is an effect diagram of lane line approximation by automatic machine learning.
As shown in fig. 8: (a) shows the iterative process of approximating the lane line by automatic machine learning; it can be clearly seen that each iteration approaches the lane line more closely than the previous one. (b) shows the lane line recognition result, which is clearly the best approximation of the lane line.
Fig. 9 is an effect diagram of extracting a lane line image by introducing the SDL model.
As shown in fig. 9: compared with the traditional binarized image, the lane line image extracted by the SDL model is clear and has no interfering images around it, so the lane line identification accuracy is higher than with the traditional method.
In order to prevent the lane line image from failing to be extracted and to reduce the number of learning passes as much as possible, each color of the lane line may be associated with each color of the background image, for example the ratio of the gray value of a certain lane line color to the maximum gray value of the background image, or the ratio of the gray value of a certain lane line color to the minimum gray value of the background image.
Alternatively, through adversarial learning, the gray value of the lane line color, the nearest non-lane-line maximum probability value below the lane line gray value, and the nearest non-lane-line maximum probability value above it can be found; the lane line undergoes adversarial learning between these two values, yielding the probability distribution of the differences between the lane line and the two values, together with the characteristics of the lane line relative to adjacent images (for example, the lane line lies on the road, the images on both sides are road images, and their gray values are generally lower). The feature vector extracted from the lane line in this way serves as the basis for lane line extraction.
The image extraction method using the SDL model is composed of a feature vector generation unit and an image extraction unit.
The feature vector generation section: takes the gray value of each color of the image to be extracted as the main feature values; or establishes several feature values from the differences between the gray value of each color of the image to be extracted and the gray values of the colors of other images; the feature vector corresponds to the gray values of the respective colors of the image to be extracted; each feature value of the feature vector is trained several times on different images; the maximum probability value of each feature value in the feature vector, the maximum probability space, and the maximum probability distribution are obtained; and the results are registered in a database.
The image extraction section: calculates the feature vector of the sample image data according to the above method; calculates, between the feature vector of the sample image and each feature vector registered in the database, a distance scale that can unify the Euclidean space and the probability space; and takes the gray value of each color of the image to be extracted from the learned feature vector with the minimum distance.
The above maximum probability space, as shown in fig. 5, is the space enclosed by the maximum probability scale S2^(k) obtained from the iterative result of the probability-scale self-organizing machine learning, that is, the space centered on the maximum probability value g^(k) and bounded by the maximum probability scale S2^(k). The maximum probability distribution is likewise obtained from the iterative result of the probability-scale self-organizing machine learning, and is composed of the maximum probability value g^(k) and the maximum probability scale S2^(k).
The invention further provides an edge image processing method.
In an autonomous vehicle, a binocular camera is needed for optically identifying the environment image, and an FPGA chip is generally adopted to read the binocular camera's video data rapidly; however, an FPGA can only start processing from a binary image. Therefore, a high-precision edge image conversion method is additionally provided here, which converts the binary image of the video image read by the binocular camera into an edge image for recognizing the vehicle environment image with high precision and at high speed.
First, the method for calculating the first derivative of the image is as follows: assuming that the functional expression of the two-dimensional image is F(x, y), the first derivatives are calculated as:
[Equation 19]

∂F(x, y)/∂x ≈ F(x + 1, y) − F(x, y)

∂F(x, y)/∂y ≈ F(x, y + 1) − F(x, y)
The method for calculating the second derivative of the image is as follows:
[Equation 20]

∂²F(x, y)/∂x² ≈ [F(x + 1, y) − F(x, y)] − [F(x, y) − F(x − 1, y)]

= F(x + 1, y) − 2F(x, y) + F(x − 1, y)

[Equation 21]

∂²F(x, y)/∂y² ≈ [F(x, y + 1) − F(x, y)] − [F(x, y) − F(x, y − 1)]

= F(x, y + 1) − 2F(x, y) + F(x, y − 1)
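A plain finite-difference sketch of these image derivatives follows; the difference forms and the sample image F = x² + y² are illustrative:

```python
def first_derivatives(img, x, y):
    # Forward differences for dF/dx and dF/dy (equation 19 style).
    fx = img[y][x + 1] - img[y][x]
    fy = img[y + 1][x] - img[y][x]
    return fx, fy

def second_derivatives(img, x, y):
    # Central second differences (equations 20-21 style).
    fxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    fyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    return fxx, fyy

# Illustrative image F(x, y) = x*x + y*y on a 3x3 grid.
img = [[x * x + y * y for x in range(3)] for y in range(3)]
```

For F = x² + y², the true second derivatives are 2 in both directions, which the central differences reproduce exactly at the interior point.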
Next, a method of introducing machine learning into image differentiation is described. Differentiating an image strongly amplifies noise; to suppress this noise, the Prewitt and Sobel operators are conventionally used. Here, the probability-scale self-organizing machine learning algorithm is introduced instead: a method of finding the maximum-probability derivative value of the central pixel from the derivatives of a plurality of surrounding pixels.
FIG. 10 is a schematic diagram of the method of solving for the maximum-probability derivative value.
As shown in fig. 10: with pixel F(x, y) as the center, the first derivative values of the 25 pixels of the 5 × 5 pixel matrix are computed, the probability-scale self-organizing machine learning of fig. 5 is used to find the maximum probability value among the 25 derivative values, and this maximum-probability derivative value is taken as the formal derivative value of the center point F(x, y). By translating the window point by point horizontally and vertically, the derivative value of the whole image is finally calculated.
For the maximum probability value of the first derivative obtained by the maximum-probability self-organizing model, pixels below this value are set to 0 and pixels above it are set to 256, giving the edge image; alternatively, the pixels belonging to the maximum probability space within the maximum probability scale of the first derivative value are set to 256 and all other pixels to 0.
On this result, following the above differentiation method, the first derivatives of the 25 pixels of the 5 × 5 matrix can be solved again to obtain the second derivative result. The second derivative can also be computed directly by the above equations 20 and 21: after the second derivative is calculated for each image pixel, each second-derivative gray value is obtained; then the maximum probability value of the second-derivative gray values is found with the maximum-probability self-organizing model of fig. 5, pixels below the maximum probability value are set to "0" gray and the others to "256"; or the pixels belonging to the maximum probability space within the maximum probability scale of that maximum probability value are set to "256" and the others to "0". An edge image of the second derivative is thus obtained.
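The maximum-probability edge extraction of fig. 10 can be sketched as follows; the border handling by clamping and the reuse of the fig. 5 self-organization on the derivative values are illustrative choices:

```python
import math

def _max_probability_value(values, max_iter=10):
    # Probability-scale self-organization of fig. 5 applied to a value list.
    cur = list(values)
    g = sum(cur) / len(cur)
    for _ in range(max_iter):
        g = sum(cur) / len(cur)
        s = math.sqrt(sum((v - g) ** 2 for v in cur) / len(cur))
        kept = [v for v in cur if g - s <= v <= g + s]
        if len(kept) < 2 or len(kept) == len(cur):
            break
        cur = kept
    return g

def edge_image(img):
    # For each pixel, the maximum-probability value of the 25 first-derivative
    # values of its 5x5 neighborhood becomes its formal derivative value;
    # pixels above the global maximum-probability derivative are set to 256,
    # the others to 0, following the thresholding described in the text.
    h, w = len(img), len(img[0])
    deriv = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in range(-2, 3):
                for dx in range(-2, 3):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 2)  # keep xx + 1 in range
                    window.append(abs(img[yy][xx + 1] - img[yy][xx]))
            deriv[y][x] = _max_probability_value(window)
    t = _max_probability_value([d for row in deriv for d in row])
    return [[256 if d > t else 0 for d in row] for row in deriv]

# Illustrative 8x8 ramp image.
img = [[10 * x for x in range(8)] for _ in range(8)]
edges = edge_image(img)
```

The output is a binary 0/256 image of the same size as the input, as in the description above.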
Fig. 11 is an effect diagram of performing marginalization processing on an image.
It can be seen from fig. 11 that the marginalization processing performed by the machine learning of the probability scale self-organization has a significant effect.
To formulate the "consciousness decision", membership functions are introduced here.
FIG. 12 is a graph of the four characteristic forms of the regularized membership functions for the "consciousness decision".
As shown in fig. 12 (a): the smaller the argument value, the larger the value of the membership function MF; conversely, the larger the argument value, the smaller the value of MF. For example, the closer the speed of the autonomous vehicle is to the safe speed, the smaller the argument and the larger the MF value, indicating that the autonomous vehicle is safer; conversely, the autonomous vehicle is more dangerous. The state of the autonomous vehicle can be described simply with such a formula. Here, T is the threshold value of the dangerous state.
As shown in fig. 12 (b): the larger the argument value, the larger the value of the membership function MF; conversely, the smaller the argument value, the smaller the value of MF. For example, the greater the distance between the autonomous vehicle and a vehicle travelling in the same lane, the greater the argument and the greater the MF value, indicating that the autonomous vehicle is safer; conversely, it is more dangerous. With this fixed form, the driving state of the autonomous vehicle can be described simply from the distance to the co-travelling vehicle. Here, T is the threshold value of the dangerous state.
As shown in fig. 12 (c): this is a function of the automatic driving consciousness decision reflected by the front-to-back inter-vehicle distance to a co-travelling vehicle in a nearby lane. Initially, when there is a co-travelling vehicle in a nearby lane in front of the autonomous vehicle, the farther the autonomous vehicle is from it, the greater the MF value and the safer the autonomous vehicle. However, since the speed of the autonomous vehicle is greater than that of the co-travelling vehicle, the two vehicles gradually approach each other, and when T1 is reached the autonomous vehicle enters a dangerous state. As the autonomous vehicle continues to overtake the co-travelling vehicle, once T2 is reached the autonomous vehicle leaves the dangerous state, and the farther away it gets, the larger the MF value and the safer the autonomous vehicle.
As shown in fig. 12 (d): given an optimum value, as the argument approaches this value from above, the MF value gradually approaches its maximum range, and as the argument gradually decreases below the optimum value, the MF value gradually decreases again. For example, when the speed of the autonomous vehicle is above the safe value and gradually approaches it, the MF value changes from small to large, and when it falls below the threshold T1, the autonomous vehicle approaches a safe state. When the speed of the autonomous vehicle is below the safe value range, the lower the speed relative to the safe value, the more dangerous the autonomous vehicle.
Since the change between the safe state and the dangerous state is an exponentially proportional change, the membership function should be a non-linear function.
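The four characteristic shapes can be sketched as follows; since the text specifies them only qualitatively (and requires non-linearity), exponential forms are assumed here, and the rate constants and thresholds are illustrative, not values from the patent.

```python
import math

def mf_a(x, k=1.0):
    # (a): MF large for a small argument, small for a large argument
    # (e.g. deviation of the vehicle speed from the safe speed).
    return math.exp(-k * x)

def mf_b(x, k=1.0):
    # (b): MF large for a large argument (e.g. inter-vehicle distance).
    return 1.0 - math.exp(-k * x)

def mf_c(x, t1=2.0, t2=4.0, k=2.0):
    # (c): safe before reaching T1, dangerous while overtaking
    # (t1 <= x <= t2), safe again after passing T2.
    if x < t1:
        return 1.0 - math.exp(-k * (t1 - x))
    if x > t2:
        return 1.0 - math.exp(-k * (x - t2))
    return 0.0

def mf_d(x, opt=1.0, k=1.0):
    # (d): maximal at the optimum value, falling off on either side.
    return math.exp(-k * abs(x - opt))
```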
Fig. 13 is a regularization model of the automatic driving "consciousness decision".
As shown in fig. 13: when the autonomous vehicle C1 travels on a straight lane, a co-travelling vehicle C2 is encountered in front; let the distance between the position p1 of C1 and the position p2 of C2 be d0. A co-travelling vehicle C2' may also be encountered in front in the left lane; let the distance between the position p1 of C1 and the position p2' of C2' be d0'. In addition, a co-travelling vehicle C2'' may be encountered in front in the right lane; let the distance between the position p1 of C1 and the position p2'' of C2'' be d0''.
Then, let a1 be the danger zone that is absolutely impermissible for the autonomous vehicle C1; like each control unit of any autonomous decentralized system, the vehicle automatically takes measures when this situation is encountered and eliminates the dangerous state by emergency braking. Let a2 be a second danger zone for the autonomous vehicle C1, in which the dangerous state can still be excluded by emergency braking. a3 is a third danger zone for the autonomous vehicle C1; the autonomous vehicle has to enter it and cannot change lanes there, but it should leave this zone as quickly as possible.
Let the speed of the autonomous vehicle C1 be s1, and the speeds of the co-travelling vehicles C2, C2', C2'' be s2, s2', s2''. Let the initial distances between the autonomous vehicle C1 and C2, C2', C2'' be d0, d0', d0''. Then the dynamic distance between the autonomous vehicle C1 and C2, C2', or C2'' is:
[ equation 22 ]
d = d0 - (s1 - s2)t
The formula of the dynamic membership function with respect to the inter-vehicle distance is then as follows:
[ equation 23 ]
(Equation image not reproduced in the source text.)
With this formula, all the consciousness decisions of the autonomous vehicle during straight-line travel can be reflected by Equation 23, which is far simpler than describing consciousness decisions by stacking rules.
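A minimal sketch of the dynamic membership over Equation 22's distance follows; because the image of Equation 23 is not reproduced, a saturating exponential of the (non-negative) dynamic distance is assumed, with an illustrative rate constant k.

```python
import math

def dynamic_distance(d0, s1, s2, t):
    # Equation 22: the gap shrinks over time when the autonomous
    # vehicle (speed s1) is faster than the co-travelling vehicle (s2).
    return d0 - (s1 - s2) * t

def mf_distance(d0, s1, s2, t, k=0.1):
    # Assumed membership form: approaches 1 for a large (safe) gap,
    # approaches 0 as the gap closes.
    d = max(dynamic_distance(d0, s1, s2, t), 0.0)
    return 1.0 - math.exp(-k * d)
```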
Still further, when the co-travelling vehicle is outside the a3 region, the probability of a traffic accident is 0; in the a1 region, if the co-travelling vehicle is the preceding vehicle in the same lane, the probability of a traffic accident is 0.62; in the a2 region, if the co-travelling vehicle is the preceding vehicle in the same lane, the probability of a traffic accident is 0.34; and in the a3 region, if the co-travelling vehicle is the preceding vehicle in the same lane, the probability of a traffic accident is 0.04.
Let the probability value of a traffic accident occurring, given the presence of the co-travelling vehicle, be PWD. Then the formula of the fuzzy event probability measure WDF concerning the inter-vehicle distance that takes this probability information into account is as follows:
[ equation 24 ]
(Equation image not reproduced in the source text.)
In this way, one state of the autonomous vehicle during driving is described dynamically by a formula, so that driving conforming to the consciousness decision can be obtained; many road conditions are summarized into one formula, which serves to simplify the complexity of the system control.
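The image of Equation 24 is likewise not reproduced; as a stand-in, the sketch below combines the per-zone accident probabilities quoted above with assumed membership grades in the classical form of a fuzzy event's probability, a sum of membership grade times ordinary probability.

```python
def fuzzy_event_probability(memberships, probabilities):
    # Classical probability of a fuzzy event: sum over zones of the
    # membership grade times the ordinary probability.
    return sum(m * p for m, p in zip(memberships, probabilities))

# Accident probabilities for a co-travelling vehicle directly ahead,
# per zone a1, a2, a3 (values from the text above).
p_accident = [0.62, 0.34, 0.04]
# Assumed membership grades of "dangerously close" for each zone.
mu_close = [1.0, 0.6, 0.2]

wd_f = fuzzy_event_probability(mu_close, p_accident)
```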
Fuzzy inference needs to be introduced for the specific control of the autonomous vehicle. The format of the fuzzy inference is as follows:
if the automobile is automatically driven C1Is less than the optimum speed value OS, AND autonomous vehicle C1With the same vehicle C2Is less than threshold value T, AND the right lane of AND is corresponding to the co-driving vehicle C2If the "WD" value is equal to or greater than the threshold value T, the autonomous vehicle can change lanes to the right lane.
[ equation 25 ]
(Equation image not reproduced in the source text.)
Likewise, if the speed of the autonomous vehicle C1 is below the optimum speed value OS, AND the value WD between the autonomous vehicle C1 and the co-travelling vehicle C2 is less than the threshold value T, AND the value WD' corresponding to the co-travelling vehicle C2' in the left lane is equal to or greater than the threshold value T, then the autonomous vehicle can change lanes to the left lane.
Fuzzy inference can also be expressed as: if the distance WD between the autonomous vehicle C1 and the co-travelling vehicle C2 in front on the same lane approaches the a3 danger zone, and the distance of the co-travelling vehicle C3 behind C1 on the same lane is also close to the a3 danger zone, and the value WD' for the co-travelling vehicle C2' in the left lane is equal to or greater than the threshold value T, then the autonomous vehicle can change lanes to the left lane; or, if the value WD'' for the co-travelling vehicle C2'' in the right lane is equal to or greater than the threshold value T, then the autonomous vehicle can change lanes to the right lane.
Although this control method is similar to a knowledge base, each condition corresponds to a membership function and each formula can cover many road conditions, so the number of rules can be greatly reduced.
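The IF-AND-THEN rules above can be sketched with min as the AND operator, a common (assumed) realization in fuzzy inference; the membership grades and the firing threshold below are illustrative, not values from the patent.

```python
def fuzzy_and(*grades):
    # A common realization of AND in fuzzy inference: the minimum grade.
    return min(grades)

def can_change_right(mf_speed_below_os, mf_wd_below_t, mf_wd_right_above_t,
                     threshold=0.5):
    # IF speed < OS AND WD < T AND WD'' >= T
    # THEN the vehicle can change lanes to the right lane.
    return fuzzy_and(mf_speed_below_os, mf_wd_below_t,
                     mf_wd_right_above_t) >= threshold

# Illustrative grades: slow vehicle, close leader, clear right lane.
decision = can_change_right(0.9, 0.8, 0.7)
```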
The autonomous vehicle mainly embodies two kinds of consciousness decision. The above describes how the automatic driving process is described by membership functions according to the traffic rules, and how the relations of the complex road conditions around the vehicle are generated through fuzzy inference, so that the consciousness decision forms an optimal state instruction that can be provided for controlling the automatic driving according to the complex road condition relations.
The present application also proposes controlling the "consciousness decision" through antagonistic learning. Here, suppose the autonomous vehicle C1 has a fuzzy event probability measure FPf of needing to accelerate forward to approach the co-travelling vehicle in front, and, conversely, a fuzzy event probability measure FP-f of moving away from the preceding co-travelling vehicle; likewise a fuzzy event probability measure FPb of needing to decelerate to approach the co-travelling vehicle behind, and, conversely, a fuzzy event probability measure FP-b of moving away from the following co-travelling vehicle.
Further, let FPl be the fuzzy event probability measure that C1 needs to change lanes to the left lane, and FP-l the fuzzy event probability measure that it cannot change lanes to the left lane; similarly, let FPr be the fuzzy event probability measure that C1 needs to change lanes to the right lane, and FP-r the fuzzy event probability measure that it cannot change lanes to the right lane.
Referring now to FIG. 13, the fuzzy event probability measure FPf that the autonomous vehicle C1 needs to accelerate forward to approach the co-travelling vehicle in front depends on: the fuzzy event probability measure WDF of the inter-vehicle distance to the preceding co-travelling vehicle; the vehicle speed s1 of the autonomous vehicle C1 being lower than the requested vehicle speed Ss; and the distance to the following co-travelling vehicle being too close to the minimum distance DS13 and remaining in a close state for a certain time.
[ equation 26 ]
(Equation image not reproduced in the source text.)
Here, ω71~ω75 are the weights of the respective elements and need to be selected through practice. In addition, the fuzzy event probability measure of moving away from the preceding co-travelling vehicle is FP-f = 1 - FPf.
Referring now to FIG. 13, the fuzzy event probability measure FPb of needing to decelerate and approach the co-travelling vehicle behind is expressed in a fixed form. The value of FPb depends on: the distance DS12 between the autonomous vehicle C1 and the preceding co-travelling vehicle being too close and needing to be widened; the vehicle speed s1 of the autonomous vehicle C1 being higher than the requested vehicle speed Ss; and the speed of the following co-travelling vehicle C3 having become slower than the requested vehicle speed Ss.
[ equation 27 ]
(Equation images not reproduced in the source text.)
Here, ω81~ω86 are the weights of the respective elements and need to be selected through practice. In addition, the fuzzy event probability measure of the autonomous vehicle C1 moving away from the following co-travelling vehicle is FP-b = 1 - FPb.
Referring again to FIG. 13, the fuzzy event probability measure FPl that C1 needs to change lanes to the left lane is expressed in a fixed form. The value of FPl depends on: the autonomous vehicle C1 keeping a certain inter-vehicle measure WDF2' with the co-travelling vehicle C2' on the left; the vehicle speed s1 of the autonomous vehicle C1 being lower than the requested vehicle speed Ss while the distance [d0-(s1-s2)T] to the co-travelling vehicle C2 in front is too close and the distance [d0''-(s1-s2'')T] to the co-travelling vehicle C2'' in the right lane is also too close. Furthermore, FPl depends on: the autonomous vehicle C1 keeping a fixed inter-vehicle distance with the co-travelling vehicle C2' on the left while the distance [d0-(s1-s2)T] to C2 in front is too close and, at the same time, the distance [d0-(s1-s3)T] to the vehicle C3 behind is too close.
[ equation 28 ]
(Equation image not reproduced in the source text.)
Here, ω91~ω102 are the weights of the respective elements and need to be selected through practice. In addition, the fuzzy event probability measure of not changing lanes to the left lane is FP-l = 1 - FPl.
Finally, the formula of the fuzzy event probability measure FPr that C1 needs to change lanes to the right lane is set. The value of FPr depends on: the autonomous vehicle C1 keeping a certain inter-vehicle distance with the co-travelling vehicle C2'' on the right; the vehicle speed s1 of the autonomous vehicle C1 being lower than the requested vehicle speed Ss while the distance to the co-travelling vehicle C2 in front is too close. Furthermore, FPr depends on: the autonomous vehicle C1 keeping a certain inter-vehicle distance with C2'' on the right while the distance to C2 in front is too close and the distance to the co-travelling vehicle C3 behind is too close.
[ equation 29 ]
(Equation image not reproduced in the source text.)
Here, ω111~ω120 are the weights of the respective elements and need to be selected through practice. In addition, the fuzzy event probability measure of not changing lanes to the right lane is FP-r = 1 - FPr.
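Because the images of Equations 26-29 are not reproduced, the sketch below assumes each fuzzy event probability measure is a normalized weighted sum of its condition grades, with the opposing measure defined as 1 - FP as stated in the text; the weights (the ω values) and grades are illustrative.

```python
def fuzzy_measure(grades, weights):
    # Assumed form: weighted combination of the condition grades,
    # normalized so the result stays in [0, 1] when the grades do.
    s = sum(w * g for w, g in zip(weights, grades))
    return s / sum(weights)

# FPf sketch: grades for, e.g., (WDF to the leader, speed below the
# requested speed, follower too close) -- illustrative values.
fp_f = fuzzy_measure([0.8, 0.9, 0.6], [1.0, 1.0, 1.0])
fp_minus_f = 1.0 - fp_f  # the opposing (move-away) measure
```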
Equations 13-38 actually describe the dynamic safe driving conditions of an autonomous vehicle travelling in a straight lane, with the various conditions varying with the speeds of the vehicles in front and behind and with the distance travelled. The present application proposes deciding through antagonistic learning whether the autonomous vehicle C1 accelerates forward to approach the co-travelling vehicle in front, decelerates to approach the co-travelling vehicle behind, or changes lanes to the left or right lane.
FIG. 14 is a schematic diagram of a machine resolution mechanism.
A machine decision machine is proposed here, as shown in fig. 14, and a method of "consciousness decision" is constructed using it. In a straight lane, within the complex relationships among the co-travelling vehicles, deciding whether the autonomous vehicle accelerates forward, decelerates to approach the following vehicle, changes lanes to the left, or changes lanes to the right requires a linear, decisive, and optimal decision; for this purpose, the machine decision machine is introduced to perform the process of "consciousness decision".
As shown in fig. 14: whether the autonomous vehicle C1 accelerates forward is related to FPf, to the fuzzy event probability value FP-b of not decelerating backward to approach, to the fuzzy event probability value FP-l of not changing lanes to the left, and to the fuzzy event probability value FP-r of not changing lanes to the right. When (FPf+FP-b+FP-l+FP-r) ≥ (FPb+FP-f+FP-l+FP-r), let the fuzzy probability measure FPf' of the vehicle C1 accelerating forward be "1"; then the fuzzy probability measure FPb' of the autonomous vehicle C1 decelerating backward is:
[ equation 30 ]
(Equation image not reproduced in the source text.)
In this way, the information of both the positive and negative directions can be utilized; the result is a strong antagonistic learning between the positive and negative directions, an optimal, accurate, and most decisive decision can be realized, and the concept of the "machine decision machine" is thus produced.
In the same way, when (FPf+FP-b+FP-l+FP-r) < (FPb+FP-f+FP-l+FP-r), let the fuzzy probability measure FPb' of the vehicle C1 decelerating backward be "1"; then the fuzzy probability measure FPf' of the autonomous vehicle C1 accelerating forward is:
[ equation 31 ]
(Equation image not reproduced in the source text.)
It is clear from the traffic control rule that vehicles should not change lanes frequently. Whether the autonomous vehicle C1 changes lanes to the left depends not only on the fuzzy event probability measure relations between C1 and the vehicles in the left lane, but also on the relations with the vehicles in the right lane and in the straight lane. Following the formulation method described above, when (FPl+FP-f+FP-b+FP-r) ≥ (FPf+FPb+FP-l+FPr), let the fuzzy probability measure FPl' of the vehicle C1 changing lanes to the left lane be "1"; then the fuzzy probability measure FP-l' of C1 not changing lanes to the left lane is:
[ equation 32 ]
(Equation image not reproduced in the source text.)
In the same way, for whether the autonomous vehicle C1 changes lanes to the right lane, following the formulation method described above, when (FPr+FP-f+FP-b+FP-l) ≥ (FPf+FPb+FP-r+FPl), let the fuzzy probability measure FPr' of the vehicle C1 changing lanes to the right lane be "1"; then the fuzzy probability measure FP-r' of C1 not changing lanes to the right lane is:
[ equation 33 ]
(Equation image not reproduced in the source text.)
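The antagonistic comparison of the machine decision machine can be sketched as follows; the aggregation mirrors the sums quoted above (an action's own measure plus the complements of the others), while the renormalization of the losing measures in Equations 30-33, whose images are not reproduced, is omitted. The measure values are illustrative.

```python
def machine_decision(fp, fp_neg):
    # fp / fp_neg: dicts of the four fuzzy measures and their complements,
    # keys 'f' (forward), 'b' (backward), 'l' (left), 'r' (right).
    # Score of an action: its own measure plus the complements of the
    # three competing actions, as in the comparisons above.
    def score(a):
        return fp[a] + sum(fp_neg[o] for o in fp if o != a)
    scores = {a: score(a) for a in fp}
    winner = max(scores, key=scores.get)  # antagonistic comparison
    return winner, scores

fp = {'f': 0.7, 'b': 0.2, 'l': 0.4, 'r': 0.3}
fp_neg = {a: 1.0 - v for a, v in fp.items()}
action, scores = machine_decision(fp, fp_neg)
```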
Equations 30-33 are for the autonomous vehicle driving on a straight road. Since the functions FPb, FPf, FPl, FPr all contain speed variables, they can be regarded as functions with the time τ as the argument: FPb(τ), FPf(τ), FPl(τ), FPr(τ). Through this machine decision model, FPf', FPb', FPl', FPr' can also constitute function formulas with the time argument:
automatic driving automobile C1Fuzzy probability measure function formula FP for forward acceleration drivingf' (τ) is:
[ equation 34 ]
(Equation image not reproduced in the source text.)
Then the fuzzy probability measure function formula FPb'(τ) of the autonomous vehicle C1 decelerating backward is:
[ equation 35 ]
(Equation image not reproduced in the source text.)
Then the fuzzy probability measure function formula FPl'(τ) of the autonomous vehicle C1 changing lanes to the left lane is:
[ equation 36 ]
(Equation image not reproduced in the source text.)
Then the fuzzy probability measure function formula FPr'(τ) of the autonomous vehicle C1 changing lanes to the right lane is:
[ equation 37 ]
(Equation image not reproduced in the source text.)
This is a model for the "consciousness decision" of the dynamic process control of automatic driving, since it can predict which segment of the autonomous driving is a safe driving area and in which segment danger will begin to appear.
The consciousness decision model can predict the travelling situation and call the "intelligence acquisition" data, so that a mutually harmonious control process is formed, which also conforms to the three-element mechanism of the biological nervous system: a sensing layer, a judgment layer, and an execution layer.
The automatic driving consciousness decision model is formed according to the safe driving rules: a dynamic fuzzy event probability measure relation, or fuzzy relation, or probability relation is established between the autonomous vehicle and the surrounding co-travelling vehicles; the traffic rules, danger prediction rules, and danger avoidance rules are absorbed through the membership functions; and the model is realized by the antagonistic result of the fuzzy event probability measures, or fuzzy relations, or probability relations of the positive and negative directions.
The consciousness decision model is a machine decision machine that divides the driving process of the autonomous vehicle into a number of different road conditions.
In view of the control characteristics of the automatic driving system proposed in the present application, it is first considered how to circumvent the complicated NP problem of automatic driving in control. The conventional control method sets a threshold value for each control point; at least dozens of road conditions are needed, and each road condition requires dozens of control points to be adjusted. This is an NP problem in typical combinatorics and a difficult problem that a Turing machine cannot solve.
In order to solve the NP problem of the complex control of the autonomous vehicle, the method of humans teaching the machine through machine learning is used to circumvent it: machine learning generates various automatic driving knowledge, the machine realizes "machine intelligence acquisition" from humans, and the machine generates intelligence, so that the autonomous vehicle can realize the control result of machine intelligence acquisition closest to humans. This greatly simplifies the complexity of the autonomous vehicle, frees its control from the troubles of the complex NP problem, and gives hope for an autonomous driving system that can achieve the effect of the Turing test.
Fig. 15 is a schematic diagram of "intellectual acquisition" of process control for automatic driving.
As shown in fig. 15: firstly, the driving distance (DD), the initial speed (IV), the target speed (TS), and the current travelled distance Dis (the position of a monitoring point during driving) belong to the retrieval items, and the control items include conditions such as the steering wheel angle (Sw), throttle opening (Tv), brake state (BS), driving direction (P/N), turning control (Tc), turning lamp (T1), control interval (Ci), and road condition type (RC).
The probability scale self-organizing machine learning DL1 is responsible for obtaining the maximum probability value from the training data of multiple automatic driving runs and inputting it into the perception layer (P1). Between the perception layer and the neural layer (P2), probability scale self-organizing machine learning is responsible for solving the maximum probability distribution of the training data multiple times, eliminating incorrect training data, and identifying and establishing new training data. When the calculation of a datum, through the distance formula (10) that can unify the Euclidean space and the probability space and the fuzzy event probability measure formula (11) for different spaces, exceeds a certain range, the datum is put into a storage space for observation; if subsequent training also produces some training results similar to it, probability scale self-organizing machine learning can be performed on these data to form a new "intelligence acquisition" result; otherwise the datum is rejected.
The EPD is a storage space for data retrieval; it retrieves the contents of the EPD database according to the state instruction of the consciousness decision and takes out the data of the control items to control the running of the autonomous vehicle. The specific retrieval method is as follows: through the distance formula (10) that can unify the Euclidean space and the probability space, and the fuzzy event probability measure formula (11) for different spaces, the distance, or the fuzzy event probability measure, between the driving requirement required by the state instruction of the consciousness decision and the probability distribution of the data in the EPD database is calculated; the database entry of machine-learned intelligence acquisition closest to the process control of automatic driving is thus obtained, and each datum of its control items can be used to control the autonomous vehicle. In addition, there are control of the attitude of the autonomous vehicle by the gyroscope, positioning control, lane line control, and the like.
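Since the distance formula (10) and the fuzzy event probability measure formula (11) are defined earlier in the patent and not reproduced here, the retrieval sketch below substitutes a plain Euclidean distance over the retrieval items (DD, IV, TS, Dis); the entry names and values are illustrative.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_control(query, epd_database, distance=euclidean):
    # Return the control items of the EPD entry whose retrieval items
    # (driving distance, initial speed, target speed, position) are
    # closest to the requirements of the state instruction.
    best = min(epd_database, key=lambda e: distance(query, e['key']))
    return best['control']

epd = [
    {'key': (100.0, 0.0, 60.0, 10.0), 'control': {'Tv': 0.4, 'BS': 0.0}},
    {'key': (100.0, 0.0, 60.0, 50.0), 'control': {'Tv': 0.2, 'BS': 0.0}},
]
ctrl = retrieve_control((100.0, 0.0, 60.0, 12.0), epd)
```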
The "intelligence acquisition" of autonomous driving is intended to solve the problem of the machine learning driving skills and to simplify the complicated control of autonomous driving; it is therefore necessary to train the autonomous driving system extensively in advance, so that "intelligence acquisition" has sufficient knowledge to cope with the various driving conditions.
The method for acquiring the "intelligence acquisition" data in automatic driving is constituted as follows: a state instruction given by the relationship between the vehicle and the co-travelling vehicles, or by the road conditions, is obtained through the consciousness decision unit. After receiving the state instruction, the intelligence acquisition unit logs at least one of the following kinds of information generated on the training autonomous vehicle: steering wheel information, throttle information, brake information, gear information, and turn indicator information, to form the intelligence acquisition database.
The "consciousness decision" for obtaining the relationship between the vehicle and the co-travelling vehicles refers to at least one of: a fuzzy event probability measure relation, a fuzzy relation, a probability relation, and an inter-vehicle distance relation.
The intelligence acquisition data are the multiple data obtained from multiple training runs under the same consciousness decision instruction, from which the probability scale self-organizing machine learning obtains: the maximum probability value of the training data; the maximum probability space of the training data; and the maximum probability distribution of the training data.
Here, the maximum probability value of the training data is used as the control value of the "intelligence acquisition" unit; the maximum probability space of the training data is used as the basis for judging the training quality, accepting or rejecting training results, and establishing new "intelligence acquisition" data; and the maximum probability distribution of the training data serves as the redundancy of the automatic driving control and as the necessary condition for establishing sample data and registered retrieval data with probability distribution properties.
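The probability scale self-organizing learner itself is defined around fig. 5 and not reproduced here; the sketch below approximates it by iteratively shrinking a window around the densest region of the training data, yielding a maximum probability value, space, and distribution (the shrink factor and stopping rule are assumptions).

```python
import statistics

def probability_scale_self_organize(data, shrink=0.7, iterations=20):
    # Iteratively narrow a window around the densest region of the data:
    # the surviving mean approximates the maximum probability value, the
    # final window the maximum probability space, and the surviving
    # samples the maximum probability distribution.
    window = sorted(data)
    for _ in range(iterations):
        if len(window) <= 3:
            break
        m = statistics.mean(window)
        half = (window[-1] - window[0]) * shrink / 2
        window = [x for x in window if m - half <= x <= m + half]
    return statistics.mean(window), (window[0], window[-1]), window

# Training samples with two outliers (incorrect training data).
data = [9.8, 10.0, 10.1, 10.2, 9.9, 30.0, 10.05, 0.5]
value, space, dist = probability_scale_self_organize(data)
```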
The control method of introducing the machine intelligence acquisition model into automatic driving is as follows: a state instruction concerning the relationship between the vehicle and the co-travelling vehicles, or the road conditions, is obtained through the consciousness decision; after the state instruction is obtained, the "intelligence acquisition" data corresponding to the state instruction are called, and the running of the autonomous vehicle is controlled according to these data.
The "consciousness decision" for obtaining the relationship between the vehicle and the co-travelling vehicles refers to at least one of: a fuzzy event probability measure relation, a fuzzy relation, a probability relation, and an inter-vehicle distance relation.
The above-mentioned calling of the "machine intelligence acquisition" data corresponding to the state instruction means: the conditions of the instruction given by the consciousness decision and the data in the intelligence acquisition database are compared through the distance formula unifying the Euclidean space and the probability space, or through the fuzzy event probability measure formula, and the intelligence acquisition data with the minimum distance or measure are used as the control data.
Using the principle of "comfortable riding" proposed by ergonomics, the acceleration of the autonomous vehicle is controlled so that the acceleration or deceleration does not exceed ±x [m/s2] and the rate of change of acceleration does not exceed y [m/s3], avoiding accelerations that give an uncomfortable riding feeling and achieving the effect of "comfortable riding".
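The limits x [m/s2] and y [m/s3] are left symbolic in the text; the sketch below clamps a requested speed change to assumed comfort bounds on acceleration and jerk.

```python
def comfortable_speed_step(v, a_prev, v_target, dt=0.1,
                           a_max=2.0, jerk_max=1.0):
    # Clamp the acceleration to +/- a_max [m/s^2] and its rate of change
    # (jerk) to +/- jerk_max [m/s^3]; a_max and jerk_max are illustrative
    # stand-ins for the symbolic limits x and y in the text.
    a_wanted = (v_target - v) / dt
    a = max(min(a_wanted, a_prev + jerk_max * dt), a_prev - jerk_max * dt)
    a = max(min(a, a_max), -a_max)
    return v + a * dt, a

v, a = 10.0, 0.0
v, a = comfortable_speed_step(v, a, v_target=20.0)
```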
The realization of automatic driving control is not merely calling the "intelligence acquisition" data. According to the autonomous decentralized control theory, the control of automatic driving has independent control capability: various accidental events can occur randomly under any road condition, so the "intelligence acquisition" functional unit must master the various information of the sensing layer of the autonomous vehicle, and the autonomous vehicle must also be able to run autonomously within a certain range, under certain conditions, even when separated from the "consciousness decision" functional unit.
The "intelligence acquisition" data are controlled continuously according to the distance measured by the gyroscope while the autonomous vehicle travels: during training, "intelligence acquisition" acquires data according to distance, and while the autonomous vehicle travels, the "intelligence acquisition" data are read according to distance to control the corresponding parameters.
The control of the autonomous vehicle is not simply reading the "intelligence acquisition" data and controlling according to them. As an autonomous decentralized control system, the autonomous vehicle executes the read "intelligence acquisition" data and performs the necessary control according to them, while also receiving information from the sensing layer, so that it can autonomously judge the occurrence, or the possibility of occurrence, of various sudden events and perform the corresponding processing.
FIG. 16 is a schematic illustration of a situation that may be encountered during autonomous driving.
FIG. 16 (a) shows the autonomous vehicle C1 about to pass a bus C2 that has just stopped in the right lane. In front of the bus there is a blind area invisible to the autonomous vehicle; as a control unit of the autonomous vehicle, it should be considered that a passenger may come out from in front of the bus, so that an emergency stop can be made.
FIG. 16 (b) shows the autonomous vehicle C1 about to pass through a crossroad without signal lamps, while a co-travelling vehicle C2 on another road to the left is also coming toward the crossroad. The position of C2 is a blind area for the autonomous vehicle. As the autonomous vehicle C1 considers that the co-travelling vehicle C2 may probe into the crossroad and a traffic accident may occur there, the autonomous vehicle C1 must ensure that a traffic accident at the crossroad can be avoided even if the co-travelling vehicle C2 does appear.
However, during the driving of an autonomous vehicle there are many places like these where traffic accidents are liable to occur, and handling them badly makes the ride extremely uncomfortable. The autonomous vehicle needs to satisfy both the "consciousness decision" and "comfortable riding": because the probability of a passenger running out from in front of a bus is very small, the vehicle can still brake in an emergency even if a passenger does come out, judging from the distance between the autonomous vehicle and the bus, the current speed, and the time at which the passenger approaches the front end of the bus, so that no accident occurs while the best possible riding comfort is ensured. For this, the humans behind "intelligence acquisition" can be called upon to teach the machine how to drive.
The method for solving this problem is: first, machine learning of various driving conditions, such as the initial speed, target speed, driving distance, and terminal speed, is carried out, so as to learn how to satisfy the "safety rules" while riding comfortably, and how to ride comfortably while achieving "quick arrival". All of this is taught by humans to the machine, forming a large amount of "machine intelligence acquisition" data, and the driving curve of the "machine intelligence acquisition" data is merged with the driving curve predicted by the "consciousness decision".
Here, a method is proposed that can teach the driving skills of an excellent driving coach to the autonomous vehicle through machine-learning-based machine intelligence acquisition, smoothly realize the "comfortable riding" of the autonomous vehicle through the state instructions issued by the "consciousness decision", and solve the complex NP control problem faced by the autonomous vehicle.
To cope with the complexity of the various driving processes, the proposal is that the machine learns the various driving skills through machine learning; how to smoothly turn the driving state into a "driving flow" in response to each state command of the "consciousness determination" also depends on machine learning. Under this guiding idea, the multi-objective control of "safe driving", "comfortable riding", "fast arrival", "energy saving", and so on is realized.
FIG. 17 is a schematic diagram of the fusion method of "machine intelligence acquisition" and "consciousness determination".
The first way this application fuses the four objective functions of "machine intelligence acquisition", "consciousness determination", "comfortable riding", and "fast arrival" relies on "machine intelligence acquisition" itself: during human-machine learning, the learned data is given the characteristics of "comfortable riding" and "fast arrival" as far as possible. Human-machine learning is carried out under various driving conditions, so that the superior skills of a good driver, which let passengers enjoy the pleasure of "comfortable riding", are taught to the autonomous vehicle. As shown in FIG. 17: under normal conditions the vehicle runs on the "machine intelligence acquisition" data MLD1. When the autonomous vehicle C1 is in the region of the co-travelling vehicle C2 and the "consciousness determination" suggests overtaking C2, then, according to the given time for passing the C2 region and the required speed — that is, at time t of the autonomous vehicle C1 — the higher-speed "machine intelligence acquisition" data MLD2 is invoked by the "consciousness determination". When "comfortable riding" must be emphasized, the "consciousness determination" invokes the data MLD2 earlier, so that the vehicle's speed changes more slowly.
For the objective function of "comfortable riding", according to ergonomics studies, when the acceleration or deceleration of an autonomous vehicle exceeds about ±7 [m/s²], or the jerk (the rate of change of acceleration) exceeds about 10 [m/s³], the ride becomes uncomfortable. By keeping acceleration and deceleration within these limits during speed changes, the effect of "comfortable riding" can be achieved.
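As an illustrative sketch (not part of the patent), the comfort limits above can be checked on a sampled speed profile. The profile values and sampling interval below are hypothetical; the thresholds are the ±7 [m/s²] and 10 [m/s³] figures quoted in the text.

```python
# Sketch: flag samples of a speed profile that violate the comfort limits.
# A_MAX and J_MAX are the limits quoted in the description; the profiles
# "gentle" and "harsh" are invented for illustration.

A_MAX = 7.0   # comfort limit on |acceleration|, m/s^2
J_MAX = 10.0  # comfort limit on |jerk|, m/s^3

def comfort_violations(speeds, dt):
    """Return sorted indices (over the difference sequences) where a speed
    profile (m/s, sampled every dt seconds) exceeds either comfort limit."""
    accel = [(speeds[i + 1] - speeds[i]) / dt for i in range(len(speeds) - 1)]
    jerk = [(accel[i + 1] - accel[i]) / dt for i in range(len(accel) - 1)]
    bad = [i for i, a in enumerate(accel) if abs(a) > A_MAX]
    bad += [i for i, j in enumerate(jerk) if abs(j) > J_MAX]
    return sorted(set(bad))

# A gentle ramp stays within the limits; an instantaneous speed jump does not.
gentle = [10.0 + 0.5 * i for i in range(10)]   # +0.5 m/s per 0.1 s step
harsh = [10.0] * 5 + [20.0] * 5                # step change in speed
```

A profile that passes this check would, under the stated thresholds, qualify as "comfortable riding" in the sense the description uses.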
The second way this application fuses the four objective functions of "machine intelligence acquisition", "consciousness determination", "comfortable riding", and "fast arrival" is as follows: the "machine intelligence acquisition" data and the "consciousness determination" data are corrected using the comfortable-riding principle proposed by ergonomics studies, so that both satisfy the requirement of "comfortable riding".
The third way this application fuses the four objective functions of "machine intelligence acquisition", "consciousness determination", "comfortable riding", and "fast arrival" is as follows: the driving curve obtained by "machine intelligence acquisition", the driving curve obtained by "consciousness determination", and the driving curve required by "comfortable riding" are jointly approximated by the least squares method, and the autonomous vehicle drives on the curve produced by this function approximation. Alternatively, the maximum probability value of each discrete point of the three curves is computed through the maximum-probability self-organizing unsupervised machine learning shown in FIG. 5, and the autonomous vehicle drives on the curve formed by connecting the maximum probability values. A smoothing treatment may also be performed with spline functions. Adversarial learning following FIG. 14, over the four objective functions of "machine intelligence acquisition", "consciousness determination", "comfortable riding", and "fast arrival", can also be applied.
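The least-squares fusion of the three curves can be sketched as follows. This is an illustrative example, not the patent's implementation: the three sample curves, the polynomial degree, and the use of NumPy's polyfit are all choices made here for demonstration.

```python
# Sketch: merge three sampled driving curves into one by fitting a single
# least-squares polynomial to the pooled points, as the third fusion
# method describes. All curve shapes below are hypothetical.
import numpy as np

t = np.linspace(0.0, 10.0, 50)                  # time samples, s
curve_ml = 10.0 + 0.8 * t                       # "machine intelligence acquisition"
curve_cd = 10.0 + 0.8 * t + 0.5 * np.sin(t)     # "consciousness determination"
curve_cr = 10.0 + 0.75 * t                      # "comfortable riding"

# Pool the discrete points of the three curves and fit one polynomial that
# is closest to all of them in the least-squares sense.
t_all = np.concatenate([t, t, t])
v_all = np.concatenate([curve_ml, curve_cd, curve_cr])
coeffs = np.polyfit(t_all, v_all, deg=3)
merged = np.polyval(coeffs, t)                  # the curve the vehicle follows
```

A spline fit (e.g. over the same pooled points) could replace the polynomial for the smoothing treatment the text mentions.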
The idea of this adversarial learning is as follows: when the data obtained on site, such as speed and acceleration, is close to the probability distribution of "fast arrival" but far from the probability distributions of energy consumption and safe driving, the control quantity is adjusted in the negative direction; conversely, when it is far from the probability distribution of "fast arrival" but close to the probability distributions of energy consumption and safe driving, it is adjusted in the positive direction. In this way the multi-objective control of adversarial learning is realized and the autonomous vehicle is kept in the optimal control state.
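The adjustment rule just described can be sketched by modelling each objective as a one-dimensional distribution over acceleration. This is a hypothetical illustration: the Gaussian memberships, their means, and the step size are chosen here and are not taken from the patent.

```python
# Sketch: adversarial multi-objective adjustment of acceleration.
# "Closeness" to each objective's distribution is scored with an
# unnormalised Gaussian; the parameters below are invented.
import math

def closeness(x, mean, std):
    """Unnormalised Gaussian membership of x in a distribution."""
    return math.exp(-((x - mean) ** 2) / (2.0 * std ** 2))

def adjust_acceleration(a, step=0.1):
    fast = closeness(a, mean=3.0, std=1.0)    # "fast arrival" favours high a
    safe = closeness(a, mean=1.0, std=1.0)    # "safe driving" favours moderate a
    energy = closeness(a, mean=0.5, std=1.0)  # "energy saving" favours low a
    # Close to "fast arrival" but far from safety/energy: adjust negatively;
    # the opposite case: adjust positively; otherwise leave unchanged.
    if fast > max(safe, energy):
        return a - step
    if fast < min(safe, energy):
        return a + step
    return a
```

Iterating this rule pulls the control quantity toward a balance point between the competing distributions, which is the "optimal control state" the text refers to.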
The above summarizes "consciousness determination" and "machine intelligence acquisition"; a specific training method for the autonomous vehicle is given below.
A driving state can simultaneously control the accelerator for acceleration and deceleration, control the steering wheel to turn the autonomous vehicle, control the brake for deceleration or stopping, control gear shifting for forward or reverse, control the turn-signal indication, and so on. Until a new state command from the decision layer is received, or a sudden situation is encountered, the current driving state is kept unchanged; once one state is completed, the next driving state is entered according to the state command of the "consciousness determination" unit.
Here it is proposed that, while a good driver drives the car, the "consciousness determination" functional unit mounted in the autonomous vehicle forms "consciousness determination" road-condition information from the received state information of the surrounding co-travelling vehicles, such as the distance between the autonomous vehicle and each co-travelling vehicle and the speed of each co-travelling vehicle. Under each different road condition, the data of the good driver's control actions — accelerator acceleration and deceleration, steering-wheel turning, brake deceleration or stopping, gear shifting for forward or reverse, turn-signal indication, and so on — are stored in a database. The continuous training provided by the good driver is automatically divided into individual road conditions by the "consciousness determination" road-condition information, so that for each road condition the corresponding control data of the good driver is obtained. These data form the "machine intelligence acquisition" data used to control the autonomous vehicle. In this way the autonomous vehicle can be trained automatically and acquires the knowledge taught by people, thus producing machine intelligence. The autonomous vehicle can then drive at a level comparable to a human driver, the complexity of autonomous-vehicle control is reduced, and the NP problem caused by overly complicated control is avoided.
This control method for the autonomous vehicle obtains the control data under various road conditions through the training given by excellent drivers, realizing "machine intelligence acquisition", so that the corresponding "machine intelligence acquisition" data can be invoked according to the state commands issued by the "consciousness determination" unit for each road condition, thereby controlling the running of the autonomous vehicle.
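A minimal sketch of such a road-condition-keyed control store might look as follows. The condition labels, control fields, and averaging rule are all hypothetical and are not specified by the patent.

```python
# Sketch: a "machine intelligence acquisition" store keyed by the
# road-condition label produced by "consciousness determination".
# Demonstrations are recorded per condition and replayed as their average.
from collections import defaultdict

class SkillDatabase:
    """Records a good driver's control actions per road condition and
    returns averaged control data for the autonomous vehicle."""

    def __init__(self):
        self._records = defaultdict(list)

    def record(self, condition, throttle, steering, brake):
        # Each demonstration under a given road condition is appended.
        self._records[condition].append((throttle, steering, brake))

    def control_for(self, condition):
        # Invoked when the "consciousness determination" unit issues a
        # state command naming this road condition.
        demos = self._records[condition]
        n = len(demos)
        return tuple(sum(d[i] for d in demos) / n for i in range(3))

db = SkillDatabase()
db.record("follow_lead_vehicle", throttle=0.3, steering=0.0, brake=0.0)
db.record("follow_lead_vehicle", throttle=0.5, steering=0.0, brake=0.0)
```

In a real system the lookup would return a full control profile rather than a single averaged tuple, but the keying by "consciousness determination" road-condition labels is the point being illustrated.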
The membership-function definition method provided by the invention can be defined in various ways following the idea of the invention; applying the membership-function definition method to the field of autonomous vehicles, and controlling autonomous vehicles using fuzzy mathematical theory, both fall within the scope of the invention.
In the present invention, the control data for the autonomous vehicle is obtained by recording data related to driving training of a good driver in the traveling process of the autonomous vehicle by machine learning, and the obtained data is used as control data for various road conditions of the autonomous vehicle.
Various forms of "consciousness determination" are possible under the safety rules, and any of them using fuzzy mathematical theory falls within the scope of the present invention.

Claims (3)

1. A method of defining a probability measure of fuzzy events across different spaces, characterized in that:
both the spatial distance between the data and the probability value of the probability distribution of the data in the probability space are considered; that is,
letting R belong to the set V of probability distributions, the formula for the probability measure of a fuzzy event can be given as follows:
Figure FSA0000172245170000011
Figure FSA0000172245170000012
Figure FSA0000172245170000013
here,
Figure FSA0000172245170000014
Figure FSA0000172245170000015
in addition,
β_j^(v_j) = (1 + pf_j^(v_j))
α_j = (1 + ph_j^(v_j) + ph_j^(w_j))
here, Δ_j^(w_j) is the distance error, presented in the probability space w_j, of the characteristic value w_j ∈ W (j = 1, 2, …, n) having a probability distribution; m_j^(w_j) is the number of discrete probability distributions in the probability space w_j; D_ij^(w_j) is the length of the discrete probability distribution in the probability space w_j; and P_ij^(w_j) is the probability value of the discrete probability distribution in the probability space;
likewise, Δ_j^(v_j) is the distance error, presented in the probability space v_j, of the characteristic value v_j ∈ V (j = 1, 2, …, n) having a probability distribution; m_j^(v_j) is the number of discrete probability distributions in the probability space v_j; D_ij^(v_j) is the length of the discrete probability distribution in the probability space v_j; and P_ij^(v_j) is the probability value of the discrete probability distribution in the probability space v_j;
furthermore, pf_j^(v_j) is the probability distribution value of the set element r_j ∈ R (j = 1, 2, …, n), or of the probability distribution set element w_j ∈ W (j = 1, 2, …, n), at the position in probability space of the probability distribution set element v_j ∈ V (j = 1, 2, …, n);
in the same way, ph_j^(w_j) is the probability distribution value of the set element r_j ∈ R (j = 1, 2, …, n), or of the probability distribution set element v_j ∈ V (j = 1, 2, …, n), at the position in probability space of the probability distribution set element w_j ∈ W (j = 1, 2, …, n).
2. The method of defining probability measures of fuzzy events across different spaces according to claim 1, characterized in that: the formula of the membership function of the fuzzy event probability measure includes any formula whose result is a fuzzy value between 0 and 1 and which can be formed from the objective function according to a rule given by people; or a formula considering the joint expression of fuzzy information and probability information; or a formula considering the joint expression of spatial information and probability information.
3. The method of defining probability measures of fuzzy events across different spaces according to claim 1, characterized in that: the spatial information of the fuzzy event probability measure is based on a distance unifying the Euclidean space and the probability space, satisfying the following distance conditions:
(1) non-negativity: ∀w, ∀v, d(w, v) ≥ 0;
(2) non-degeneracy: if d(w, v) = 0, then w = v;
(3) symmetry: ∀w, ∀v, d(w, v) = d(v, w);
(4) triangle inequality: ∀w, ∀v, ∀r, d(w, v) ≤ d(w, r) + d(r, v).
CN201811213228.3A 2018-10-11 2018-10-11 Method for defining fuzzy event probability measure spanning different spaces Pending CN111046897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811213228.3A CN111046897A (en) 2018-10-11 2018-10-11 Method for defining fuzzy event probability measure spanning different spaces

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811213228.3A CN111046897A (en) 2018-10-11 2018-10-11 Method for defining fuzzy event probability measure spanning different spaces

Publications (1)

Publication Number Publication Date
CN111046897A true CN111046897A (en) 2020-04-21

Family

ID=70230586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811213228.3A Pending CN111046897A (en) 2018-10-11 2018-10-11 Method for defining fuzzy event probability measure spanning different spaces

Country Status (1)

Country Link
CN (1) CN111046897A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814917A (en) * 2020-08-28 2020-10-23 成都千嘉科技有限公司 Character wheel image digital identification method with fuzzy state
WO2022099526A1 (en) * 2020-11-12 2022-05-19 深圳元戎启行科技有限公司 Method for training lane change prediction regression model, and lane change prediction method and apparatus


Similar Documents

Publication Publication Date Title
CN111045422A (en) Control method for automatically driving and importing 'machine intelligence acquisition' model
Bachute et al. Autonomous driving architectures: insights of machine learning and deep learning algorithms
CN110796856B (en) Vehicle lane change intention prediction method and training method of lane change intention prediction network
WO2022206942A1 (en) Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN111046710A (en) Image extraction method for importing SDL (software development language) model
WO2020264010A1 (en) Low variance region detection for improved detection
CN107272687A (en) A kind of driving behavior decision system of automatic Pilot public transit vehicle
Huang et al. A probabilistic risk assessment framework considering lane-changing behavior interaction
Zhang et al. Collision avoidance predictive motion planning based on integrated perception and V2V communication
CN115662166B (en) Automatic driving data processing method and automatic driving traffic system
CN111038521A (en) Method for forming automatic driving consciousness decision model
CN110182217A (en) A kind of traveling task complexity quantitative estimation method towards complicated scene of overtaking other vehicles
CN114030485B (en) Automatic driving automobile person lane change decision planning method considering attachment coefficient
Chen Multimedia for autonomous driving
CN114932918A (en) Behavior decision method and system for intelligent internet vehicle to drive under various road conditions
CN111046897A (en) Method for defining fuzzy event probability measure spanning different spaces
Ming Exploration of the intelligent control system of autonomous vehicles based on edge computing
Zhao et al. Improving autonomous vehicle visual perception by fusing human gaze and machine vision
CN111126612A (en) Automatic machine learning composition method
CN117115690A (en) Unmanned aerial vehicle traffic target detection method and system based on deep learning and shallow feature enhancement
CN116729433A (en) End-to-end automatic driving decision planning method and equipment combining element learning multitask optimization
CN109543497A (en) A kind of construction method of more purposes control machine learning model suitable for automatic Pilot
CN114120246B (en) Front vehicle detection algorithm based on complex environment
CN111047004A (en) Method for defining distance spanning different spaces
YU et al. Vehicle Intelligent Driving Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination