CN114626472A - Auxiliary driving method and device based on machine learning and computer readable medium - Google Patents

Auxiliary driving method and device based on machine learning and computer readable medium

Info

Publication number
CN114626472A
CN114626472A (Application No. CN202210273214.0A)
Authority
CN
China
Prior art keywords
image
vehicle
visibility
machine learning
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210273214.0A
Other languages
Chinese (zh)
Inventor
霍宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202210273214.0A priority Critical patent/CN114626472A/en
Publication of CN114626472A publication Critical patent/CN114626472A/en
Priority to PCT/CN2022/116658 priority patent/WO2023173699A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a driving assistance method and device based on machine learning, and a computer readable medium. The method comprises the following steps: acquiring training data, wherein the training data comprises a plurality of images of a target road section under different weather conditions, and each image has an associated visibility; constructing a machine learning model and inputting the training data into the machine learning model, wherein the output result of the machine learning model comprises the visibility of each image; acquiring a first image of the target road section and inputting the first image into the trained machine learning model, wherein a first output result of the trained machine learning model comprises the current visibility of the first image; performing defogging processing on the first image according to the current visibility to obtain a second image; and providing visual information on a vehicle-mounted display device according to the second image.

Description

Auxiliary driving method and device based on machine learning and computer readable medium
Technical Field
The invention mainly relates to the field of intelligent automobiles, in particular to a driving assisting method and device based on machine learning and a computer readable medium.
Background
With the rapid development of new energy vehicles and intelligent vehicles, vehicles increasingly offer automatic driving, computer-assisted driving and similar functions in addition to the traditional driving function. These provide drivers with various auxiliary capabilities and, more importantly, improve driving safety. In severe weather such as fog, rain and snow, a driver's observation and judgment are impaired by reduced visibility, slippery roads, complex surroundings and other factors, and traffic accidents occur easily. At present, there is no effective scheme for helping a driver eliminate the influence of the environment in severe weather.
Disclosure of Invention
The invention aims to provide a machine learning-based auxiliary driving method, a machine learning-based auxiliary driving device and a computer readable medium for improving the driving safety in severe weather.
The invention adopts a technical scheme for solving the technical problems, and the driving assisting method based on machine learning comprises the following steps: acquiring training data, wherein the training data comprises a plurality of images of a target road section under different weather conditions, and each image has visibility; constructing a machine learning model, inputting the training data into the machine learning model, wherein the output result of the machine learning model comprises the visibility of each image; acquiring a first image of the target road section, inputting the first image into the trained machine learning model, wherein a first output result of the trained machine learning model comprises the current visibility of the first image; carrying out defogging processing on the first image according to the current visibility to obtain a second image; and providing visual information on the vehicle-mounted display device according to the second image.
In an embodiment of the present application, the method further includes: acquiring multiple groups of radar data of the vehicle while acquiring the multiple images, wherein the radar data is used for providing obstacle distribution around the vehicle, and the multiple groups of radar data correspond to the multiple images one by one; determining the visual range of a driver according to the obstacle distribution and the visibility; the training data comprises the plurality of sets of radar data and the visible range; the output result of the machine learning model also comprises a visual range corresponding to each image; acquiring a first image of the target road section and first radar data of the vehicle at the same time, inputting the first image and the first radar data into the trained machine learning model, wherein a second output result of the trained machine learning model comprises a current visual range; generating a third image according to the second image and the current visual range; and providing visual information on the vehicle-mounted display device according to the third image.
In an embodiment of the present application, the step of performing defogging processing on the first image according to the current visibility includes: acquiring a clear image of the target road section on a sunny day; grouping each image in the training data according to the visibility, wherein the visibility of the images in the same visibility group is within a corresponding preset range, and each visibility group comprises at least 2 images; respectively calculating the similarity between the plurality of images in each visibility group and the clear image, and acquiring the target image with the maximum similarity in each group; extracting similar characteristic parameters according to the target image and the clear image, wherein each visibility group has one similar characteristic parameter; and processing all images in the visibility group according to the similar characteristic parameters to obtain the second image.
In an embodiment of the present application, the sources of the plurality of images include: one or more of a vehicle-mounted camera, a road-mounted camera and a mobile terminal.
In an embodiment of the application, the vehicle-mounted display device includes a vehicle-mounted large screen, and the step of providing visual information on the vehicle-mounted display device according to the second image includes: and displaying the second image on the vehicle-mounted large screen.
In an embodiment of the application, the vehicle-mounted display device includes a vehicle-mounted large screen, and the step of providing visual information on the vehicle-mounted display device according to the third image includes: and displaying the third image on the vehicle-mounted large screen.
In an embodiment of the application, the third image comprises a simulated image of the vehicle and visualized first radar data around the vehicle.
In an embodiment of the application, the different weather conditions comprise at least one of sunny, rainy, snowy, and foggy days.
The present application further provides a driving assistance device based on machine learning for solving the above technical problem, which is characterized by comprising: an image acquisition unit, a machine learning model, a defogging processing unit and a vehicle-mounted display device,
the image acquisition unit is used for acquiring a first image of a target road section; the machine learning model is used for outputting the current visibility of the first image according to the input first image; the defogging processing unit is used for performing defogging processing on the first image according to the current visibility to obtain a second image; and the vehicle-mounted display device is used for providing visual information according to the second image.
In an embodiment of the present application, the method further includes: a radar data acquisition unit configured to acquire first radar data of a vehicle while the image acquisition unit acquires the first image; the machine learning model is further configured to output a current visibility range of a driver based on the first image and the first radar data.
The present application further provides a driving assistance device based on machine learning for solving the above technical problem, including: a memory for storing instructions executable by the processor; a processor for executing the instructions to implement the method as described above.
The present application also proposes a computer readable medium storing computer program code, which when executed by a processor implements the method as described above.
According to the auxiliary driving method based on machine learning, the visibility of the first image acquired by the vehicle in real time can be acquired through the machine learning model, the first image is subjected to defogging processing according to the visibility, the relatively clear second image is acquired, the second image is visually displayed to the driver through the vehicle-mounted display device, and clear and safe driving information can be provided for the driver.
Drawings
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below, wherein:
FIG. 1 is an exemplary flow diagram of a method for machine learning based assisted driving according to an embodiment of the present application;
fig. 2 is an exemplary flowchart of a defogging process performed on a first image according to a current visibility in the driving assistance method according to the embodiment of the present application;
FIG. 3 is a block diagram of a machine learning based driving assistance apparatus according to an embodiment of the present application;
fig. 4 is a system block diagram of a driving assistance device based on machine learning according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the present application are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be practiced in other ways than those described herein and thus is not limited to the specific embodiments disclosed below.
As used in this application and the appended claims, the terms "a," "an," and/or "the" do not refer specifically to the singular and may also include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
The relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Rather, various steps may be processed in reverse order or simultaneously, and other operations may be added to or removed from these processes.
The machine learning-based driving assistance method, device and computer readable medium of the present application are suitable for safe driving in severe weather with reduced visibility, such as fog, rain, snow, etc.
Fig. 1 is an exemplary flowchart of a machine learning-based driving assistance method according to an embodiment of the present application. Referring to fig. 1, the driving assist method of the embodiment includes the steps of:
step S110: acquiring training data, wherein the training data comprises a plurality of images of a target road section under different weather conditions, and each image has visibility;
step S120: constructing a machine learning model, inputting training data into the machine learning model, wherein the output result of the machine learning model comprises the visibility of each image;
step S130: acquiring a first image of a target road section, inputting the first image into a trained machine learning model, wherein a first output result of the trained machine learning model comprises the current visibility of the first image;
step S140: carrying out defogging processing on the first image according to the current visibility to obtain a second image; and
step S150: and providing visual information on the vehicle-mounted display device according to the second image.
The above steps S110 to S150 will be described with reference to the drawings.
In step S110, the training data refers to the data used to train the machine learning model. The target road section specifies where the images are to be captured. The application does not limit the position, characteristics and the like of the target road section. It may be a road section of interest or an accident-prone area, such as a crossroad or a curve.
The present application does not limit how the plurality of images are acquired as the training data.
In some embodiments, the sources of the plurality of images include: one or more of a vehicle-mounted camera, a road-mounted camera and a mobile terminal.
In some embodiments, multiple images of the target road segment under different weather conditions are obtained through the vehicle-mounted camera device. In these embodiments, multiple images of the same target road segment may be obtained by onboard cameras on multiple different vehicles. It can be understood that due to differences in vehicle speed, environment, and the like, there are also differences in the multiple images of the same target road segment obtained by different vehicles. The environment here refers to the environment around the vehicle, including other vehicles, pedestrians, non-motor vehicles, and the like. However, since the images are all obtained at the target link, the images have a certain correlation.
In some embodiments, multiple images of the target road segment under different weather conditions are obtained through a road-mounted camera device. In these embodiments, the road-mounted camera may be, for example, a camera fixed at the roadside, such as a camera mounted on a traffic light, or a dedicated traffic camera. For these embodiments, since the position of the road-mounted camera is typically fixed, the obtained images have a higher correlation with one another. The multiple images of the target road segment obtained by the road-mounted camera device are still affected by the environment, so different images show certain differences.
In some embodiments, the plurality of images originate from a mobile terminal. The mobile terminal includes, but is not limited to, a mobile terminal of a driver or a member in a vehicle, or a vehicle-mounted mobile terminal.
In some embodiments, the different weather conditions in step S110 include at least one of sunny, rainy, snowy, and foggy days.
In step S110, each image has an associated visibility. Visibility is the maximum distance at which a person with normal vision can distinguish a target object from its background: in the daytime, with the sky near the horizon as the background, it is the greatest distance at which the outline of a dark ground target subtending a visual angle larger than 20 degrees can still be clearly seen and identified; at night, it is the greatest distance at which the light of a target lamp can be clearly seen. Visibility is expressed in meters (m). It is determined mainly by two factors: (1) the luminance difference between the target object and the background against which it is set off, where a larger (smaller) difference gives a larger (smaller) visible distance, although in practice this difference does not vary much; and (2) atmospheric transparency, since the air layer between the observer and the target reduces the aforementioned luminance difference, so the poorer (better) the atmospheric transparency, the smaller (greater) the visible distance.
In severe weather, particularly in fog, atmospheric transparency is greatly reduced compared with a sunny day, so visibility is low, the driver has difficulty judging the surrounding environment, driving becomes harder, and traffic accidents occur easily.
Visibility is generally measured by visual observation, and may also be measured by instruments such as an atmospheric transmissometer or an automatic laser visibility meter.
The application does not limit how the visibility is obtained.
In some embodiments, when the training data is acquired, a visibility measuring instrument is simultaneously arranged on the target road section, and the acquisition time of each image is matched with the measurement time of the visibility measuring instrument, so that each image has a corresponding visibility.
In some embodiments, the visibility data is obtained centrally by the meteorological center, and the visibility corresponding to the obtained image can be obtained from a database of the meteorological center according to the time and the position of the image.
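As an illustration of how such time alignment might be implemented, the sketch below pairs each captured image with the visibility reading closest in time, whether that reading comes from a roadside instrument or from a meteorological database query. The function names, data layout and time tolerance are assumptions of this sketch, not part of the patent.

    from bisect import bisect_left

    def label_images_with_visibility(images, measurements, max_gap_s=300):
        # images: list of dicts with a 'timestamp' key (seconds since epoch).
        # measurements: list of (timestamp, visibility_m) tuples sorted by time.
        # Returns copies of the images labelled with the nearest-in-time visibility;
        # images with no reading within max_gap_s seconds are dropped.
        times = [t for t, _ in measurements]
        labelled = []
        for img in images:
            i = bisect_left(times, img["timestamp"])
            candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
            if not candidates:
                continue
            j = min(candidates, key=lambda k: abs(times[k] - img["timestamp"]))
            if abs(times[j] - img["timestamp"]) <= max_gap_s:
                labelled.append({**img, "visibility_m": measurements[j][1]})
        return labelled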
In step S120, the present application does not limit the specific implementation of the machine learning model. The machine learning model may be a conventional neural network model, or may be constructed using principles such as support vector machines, logistic regression, decision trees, or Bayesian methods. The machine learning model of the present application may also be a deep learning model, or be implemented jointly using a variety of different algorithms and methods.
In step S120, the training data acquired in step S110 is input into the machine learning model, and the output result of the machine learning model includes visibility of each image in the training data.
In some embodiments, step S120 further comprises testing the machine learning model. During testing, a part of the data may be selected as test data; like the training data in step S110, the test data are a plurality of images of the target road section under different weather conditions, but the test data are not used for training. Based on the test results, the machine learning model can be optimized, so that it can classify images according to their visibility and output the visibility of each image.
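The patent leaves the concrete model and features open. As one hedged example of how a visibility estimator could be trained and tested, the sketch below fits a scikit-learn random-forest regressor to simple global image statistics (mean brightness, contrast, and a dark-channel mean that tends to grow with haze density) and reports the error on a held-out split. The feature set, library and hyperparameters are assumptions of this sketch, not the patent's prescription.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    def haze_features(img_rgb):
        # Global statistics that loosely track visibility (illustrative choice):
        # mean brightness, contrast, and the mean of the per-pixel dark channel.
        img = img_rgb.astype(np.float32) / 255.0
        gray = img.mean(axis=2)
        dark_channel = img.min(axis=2)
        return np.array([gray.mean(), gray.std(), dark_channel.mean()])

    def train_visibility_model(images, visibilities_m):
        # images: list of HxWx3 uint8 arrays; visibilities_m: matching list of floats.
        X = np.stack([haze_features(img) for img in images])
        y = np.asarray(visibilities_m, dtype=np.float32)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr)
        print("held-out MAE (m):", mean_absolute_error(y_te, model.predict(X_te)))
        return model

    # Step S130 inference on a first image captured by the vehicle:
    # current_visibility = model.predict(haze_features(first_image)[None, :])[0]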
In step S130, a first image of the target road section is acquired.
In some embodiments, the first image is obtained by the vehicle-mounted camera device, rather than by a road-mounted camera device.
The specific arrangement mode of the vehicle-mounted camera device on the vehicle is not limited. For example, the on-vehicle imaging device may be provided inside or outside the vehicle body, and the on-vehicle imaging device may obtain images of different directions and different angles in front of the vehicle, behind the vehicle, on the side of the vehicle, and the like, depending on the angle of the arrangement.
The first image is input into the machine learning model trained by step S120, which outputs a first output result including the current visibility of the first image.
According to step S130, any first image of the target road segment input to the machine learning model may be processed, and the current visibility corresponding to the first image may be obtained. It will be appreciated that if the first image is taken on a sunny day, then the current visibility is higher; if the first image is taken in fog, then the current visibility is low.
In step S140, the first image is defogged according to the obtained current visibility, and a second image is obtained.
The present application does not limit how the defogging processing is applied to the first image. The first image can be processed by various image processing means from the field of image processing, such as image enhancement, segmentation and sharpening, so that the clarity of a low-visibility first image is improved and the feature content in the first image becomes easier to recognize. In an embodiment of the present application, the feature content refers to image content that affects the driving process, including traffic lights, pedestrians, various vehicles, roadblocks, and the like.
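The patent's own defogging procedure is detailed below with reference to fig. 2. As a hedged example of the generic "image enhancement" means mentioned here, the following sketch applies OpenCV's CLAHE contrast equalization with a strength that increases as the estimated visibility drops; the visibility-to-strength mapping is purely illustrative.

    import cv2
    import numpy as np

    def enhance_by_visibility(first_image_bgr, visibility_m):
        # Stronger contrast equalization when visibility is lower; the mapping
        # from visibility (m) to CLAHE clip limit is an illustrative assumption.
        clip = float(np.interp(visibility_m, [0, 50, 200, 1000], [6.0, 4.0, 2.0, 1.0]))
        lab = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
        enhanced = cv2.merge((clahe.apply(l), a, b))
        return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)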
Fig. 2 is an exemplary flowchart of a defogging process performed on a first image according to current visibility in the driving assistance method according to an embodiment of the present application. Referring to fig. 2, in this embodiment, the step of performing defogging processing on the first image according to the current visibility in step S140 includes:
step S141: acquiring a clear image of a target road section in sunny days;
step S142: grouping each image in the training data according to visibility, wherein the visibility of the images in the same visibility group is within a corresponding preset range, and each visibility group comprises at least 2 images;
step S143: respectively calculating the similarity of the multiple images and the clear images in each visibility group, and acquiring a target image with the maximum similarity in each group;
step S144: extracting similar characteristic parameters according to the target image and the clear image, wherein each visibility group has one similar characteristic parameter;
step S145: and processing all images in the visibility group according to the similar characteristic parameters to obtain a second image.
The following describes steps S141 to S145.
In step S141, the clear image may be taken from the training data. Whether an image is a clear image can be judged from the weather condition at the time the image was acquired. For example, if the weather was sunny when the image was acquired, the image is a clear image; if it was rainy or snowy, the image is not a clear image.
In some embodiments, whether an image is a clear image may be determined according to the visibility of the image. For example, a visibility threshold S is set; if the visibility of an image is greater than S, the image is a clear image, and if the visibility is less than S, it is not.
In step S142, each image in the training data is grouped; for example, the images whose visibility falls in the first preset range F1 form the F1 group, the images whose visibility falls in the second preset range F2 form the F2 group, and so on.
The application does not limit the specific numerical value of the preset range of the visibility. For example, the first predetermined range F1 is 0-10 meters, the second predetermined range is 10-30 meters, and so on.
According to step S142, images with similar visibility can be placed in the same visibility group.
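A minimal sketch of this grouping step is given below; the bin edges are example values only, in the spirit of the preset ranges F1, F2 described above.

    from collections import defaultdict

    # Preset visibility ranges in meters; the bin edges are example values only.
    VISIBILITY_BINS = [(0, 10), (10, 30), (30, 50), (50, 100), (100, 200), (200, 500)]

    def group_by_visibility(samples):
        # samples: iterable of dicts with a 'visibility_m' key.
        # Returns {bin_index: [sample, ...]}, so images with similar visibility share a group.
        groups = defaultdict(list)
        for s in samples:
            for i, (lo, hi) in enumerate(VISIBILITY_BINS):
                if lo <= s["visibility_m"] < hi:
                    groups[i].append(s)
                    break
        return groups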
In step S143, the similarity between the plurality of images in each visibility group and the clear image is calculated. The application does not limit how the similarity is calculated. For example, if a visibility group contains n images, the similarity between each of the n images and a clear image is calculated in step S143, giving n similarity values. When there are m clear images, the similarity between each of the n images and each of the m clear images is calculated, giving n × m similarity values.
In some embodiments, the similarity is calculated by computing the mutual information or fuzzy mutual information between the plurality of images in each visibility group and the clear image. The fuzzy mutual information is calculated as follows.
Let images EA and EB have gray value ranges {a1, a2, …, aK} and {b1, b2, …, bJ}, with joint probability distribution {P(ak, bj)}, k = 1, 2, …, K; j = 1, 2, …, J, and let μ(ak, bj) denote the gray-scale registration correlation of images EA and EB, representing the degree of the registration relationship between ak and bj. The fuzzy mutual information of EA and EB is defined as:
fI(A; B) = Σk Σj μ(ak, bj) · P(ak, bj) · log[ P(ak, bj) / ( P(ak) · P(bj) ) ]   (1)
The gray-scale registration correlation μ(ak, bj) satisfies the following two conditions:
(1) 0 ≤ μ(ak, bj) ≤ 1;
(2) when μ(ak, bj) = 1 for all k and j,
fI(A; B) = Σk Σj P(ak, bj) · log[ P(ak, bj) / ( P(ak) · P(bj) ) ] = I(A; B)   (2)
Formula (2) shows that the mutual information I(A; B) is the special case in which the gray-scale registration correlation of the fuzzy mutual information fI(A; B) takes the value 1 everywhere; that is, the fuzzy mutual information fI(A; B) is the more general quantity.
In the field of image registration, when the spatial position of an image to be registered is completely consistent with a target image, the information expressed between the images is the maximum, meanwhile, the gray registration correlation degree of a gray sequence pair is the maximum, and then the fuzzy mutual information reaches the maximum. Therefore, the fuzzy mutual information can be used as a registration measure for image registration, and the position where the fuzzy mutual information is the largest is the registration position of the image.
Based on the above characteristics of the fuzzy mutual information, the fuzzy mutual information between the plurality of images and the clear image is calculated in step S143, and the result is used as the similarity measure between the plurality of images and the clear image. The target image having the greatest similarity in each group may thus be the target image having the greatest fuzzy mutual information in each group.
Continuing with the previous example, in a visibility group containing n images, the target image is the image having the largest fuzzy mutual information among the n values. When there are m clear images, a fuzzy mutual information matrix of n rows by m columns can be obtained, where the i-th row corresponds to the i-th image, i = 1, …, n. The average or the maximum of the m fuzzy mutual information values in the i-th row is taken as the similarity value of the i-th row. By comparing the n similarity values, the target image with the largest similarity value can be determined.
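The sketch below implements formula (1) from a joint gray-level histogram and then selects the target image by the row-wise average described above. The patent does not give a concrete form for the registration correlation μ, so the sketch defaults it to 1, in which case the value reduces to ordinary mutual information per formula (2); the histogram bin count is likewise an assumption.

    import numpy as np

    def fuzzy_mutual_information(img_a, img_b, bins=32, mu=None):
        # Formula (1) for two equally sized gray images. mu is the gray-scale
        # registration correlation over the bins x bins grid; the patent does not
        # give its concrete form, so mu=None defaults to 1 everywhere, in which
        # case the result equals the ordinary mutual information of formula (2).
        a = np.ravel(img_a).astype(np.float64)
        b = np.ravel(img_b).astype(np.float64)
        p_ab, _, _ = np.histogram2d(a, b, bins=bins)
        p_ab /= p_ab.sum()
        p_a = p_ab.sum(axis=1, keepdims=True)
        p_b = p_ab.sum(axis=0, keepdims=True)
        if mu is None:
            mu = np.ones_like(p_ab)
        nz = p_ab > 0
        return float(np.sum(mu[nz] * p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

    def pick_target_image(group_images, clear_images):
        # n x m similarity matrix, row-wise mean, then the image with the largest score.
        scores = [np.mean([fuzzy_mutual_information(g, c) for c in clear_images])
                  for g in group_images]
        return group_images[int(np.argmax(scores))]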
In step S144, the gray values of the target image and the clear image may be compared, and similar feature parameters may be extracted therefrom. For example, the relationship of the gray values between the target image and the sharp image is established in the following manner:
f(x)=a0+a1*g(x) (3)
where f(x) represents the gray value of the clear image at position x, g(x) represents the gray value of the target image at position x, and a0 and a1 are the similar characteristic parameters. The position x may be an index assigned to each pixel, numbered from the top-left corner of the image and proceeding column by column, identifying the position of each pixel.
Substituting specific values of f(x) and g(x) into the above formula (3), the similar characteristic parameters a0 and a1 can be obtained.
It should be noted that equation (3) is merely an example. In other embodiments, other linear or non-linear forms of equations may be used to obtain corresponding similar characteristic parameters.
In step S145, all images in each visibility group are processed according to the similar characteristic parameters.
For example, continuing the example of formula (3), based on the obtained similar characteristic parameters a0 and a1, the gray values of all the images in a group can be substituted into formula (3) to obtain a corresponding second image for each image; the second image is clearer than the original image, with the factors that obscured the original image removed.
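As an illustration, the sketch below estimates a0 and a1 by a least-squares fit of formula (3) over the pixels of the target image and the clear image, then applies the fitted mapping to every image in the same visibility group to produce the second images. The use of least squares is an assumption of this sketch, since the patent only says that specific gray values are substituted into formula (3).

    import numpy as np

    def fit_similar_parameters(clear_img, target_img):
        # Least-squares fit of formula (3), f(x) = a0 + a1 * g(x), over all pixels
        # of the clear image f and the target image g (same shape assumed).
        f = clear_img.astype(np.float64).ravel()
        g = target_img.astype(np.float64).ravel()
        a1, a0 = np.polyfit(g, f, deg=1)   # np.polyfit returns [slope, intercept]
        return a0, a1

    def apply_similar_parameters(img, a0, a1):
        # Apply formula (3) to any image in the same visibility group to obtain
        # the corresponding, clearer second image.
        out = a0 + a1 * img.astype(np.float64)
        return np.clip(out, 0, 255).astype(np.uint8)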
According to the above-described steps S142-S145, a correspondingly sharp second image may be obtained for each first image.
Thus, in step S150, the visualized information is provided on the in-vehicle display device according to the second image.
In some embodiments, providing visualized information according to the second image in step S150 means displaying the second image directly on the in-vehicle display device.
In some embodiments, providing visualized information according to the second image in step S150 means further processing the second image and then displaying the result on the in-vehicle display device.
In some embodiments, the in-vehicle display device includes an in-vehicle large screen, and the step of providing visual information on the in-vehicle display device according to the second image includes: and displaying the second image on the vehicle-mounted large screen.
In some embodiments, the driving assistance method of the present application further includes controlling the vehicle to automatically turn on the fog light according to visibility.
In some embodiments, the driving assistance method of the present application further includes automatically reducing the vehicle speed according to visibility.
In some embodiments, the driving assistance method of the present application further comprises providing warnings and reminders based on visibility.
According to the auxiliary driving method based on the machine learning, the visibility of the first image acquired by the vehicle in real time can be acquired through the machine learning model, the defogging processing is carried out on the first image according to the visibility, the relatively clear second image is acquired, the second image is visually displayed to the driver through the vehicle-mounted display device, and clear and safe driving information can be provided for the driver.
In some embodiments, the machine learning-based driving assistance method of the present application further comprises: acquiring multiple groups of radar data of the vehicle while acquiring the multiple images, wherein the radar data are used to provide the obstacle distribution around the vehicle and the multiple groups of radar data correspond one-to-one to the multiple images; determining the visual range of the driver according to the obstacle distribution and the visibility; the training data comprise the multiple groups of radar data and the visual ranges; the output result of the machine learning model also comprises the visual range corresponding to each image; and acquiring a first image of the target road section and first radar data of the vehicle at the same time, and inputting the first image and the first radar data into the trained machine learning model, wherein a second output result of the trained machine learning model comprises the current visual range.
The method further comprises the following steps:
step S160: generating a third image according to the second image and the current visual range; and
step S170: and providing visual information on the vehicle-mounted display device according to the third image.
In particular, in such embodiments, the radar data may come from an on-board radar device and provide obstacle distribution information around the vehicle, including but not limited to the front, rear, left and right of the vehicle. The radar data and the images in step S110 correspond one-to-one.
In some cases, the image or the first image is acquired by an in-vehicle camera device while radar data is acquired by an in-vehicle radar device of the same vehicle.
The visual range of the driver is determined according to the obstacle distribution and the visibility. For the image alone, visibility only accounts for the influence of bad weather on air transparency, not for the influence of the vehicle's surroundings on the driver's visual range. In embodiments that include radar data, both visibility and radar data are considered, which helps provide the driver with information about a complex environment and assists the driver in making safe decisions.
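The patent does not state how visibility and the radar-derived obstacle distribution are combined into a visual range. One simple assumption, sketched below, is to split the surroundings into angular sectors and take, in each sector, the smaller of the meteorological visibility and the distance to the nearest obstacle.

    import math

    def visual_range_by_sector(visibility_m, obstacles, n_sectors=8):
        # obstacles: iterable of (x, y) positions in meters in the vehicle frame.
        # Per sector, the driver's visual range is the smaller of the meteorological
        # visibility and the distance to the nearest obstacle (illustrative rule).
        ranges = [visibility_m] * n_sectors
        for x, y in obstacles:
            angle = math.atan2(y, x) % (2 * math.pi)
            sector = int(angle / (2 * math.pi / n_sectors)) % n_sectors
            ranges[sector] = min(ranges[sector], math.hypot(x, y))
        return ranges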
According to the above-described embodiment, the training data of the machine learning model includes both a plurality of sets of radar data and a visible range, and the output result of the machine learning model includes the visible range of each image in addition to the visibility of each image.
The first image and the first radar data of the target road section acquired by the vehicle in real time are input into the trained machine learning model, and the visual range corresponding to the first image and the first radar data can be obtained.
In the above-described step S160, a third image is generated from the defogged second image and the visual range. It can be understood that the third image includes the obstacle distribution information and therefore contains richer environmental information than the second image.
In step S170 described above, the visualized information is provided on the in-vehicle display device according to the third image obtained in step S160.
In some embodiments, the in-vehicle display device includes an in-vehicle large screen, and the step of providing visual information on the in-vehicle display device according to the third image includes: and displaying the third image on the vehicle-mounted large screen.
It should be noted that, the specific setting position, number and display mode of the vehicle-mounted display device are not limited in the present application. In some embodiments, the in-vehicle display device is a liquid crystal screen.
In some embodiments, the third image includes a simulated image of the vehicle and the visualized first radar data around the vehicle. According to the embodiments, the relative positional relationship between the vehicle and the surrounding environment can be displayed to the driver, helping the driver to perform safe driving.
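A hedged sketch of composing such a third image is given below: it draws a simulated ego-vehicle marker and the visualized radar obstacles on top of the defogged second image. The flat top-down placement and the pixel scale are illustrative assumptions; a real system would use a calibrated camera/radar projection.

    import cv2

    def make_third_image(second_image_bgr, obstacles, meters_per_pixel=0.1):
        # Overlay a simulated ego-vehicle marker and the visualized radar obstacles
        # on the defogged second image. obstacles: (x right, y forward) in meters.
        img = second_image_bgr.copy()
        h, w = img.shape[:2]
        ego = (w // 2, h - 20)  # ego vehicle drawn near the bottom centre
        cv2.rectangle(img, (ego[0] - 15, ego[1] - 30), (ego[0] + 15, ego[1]), (0, 255, 0), 2)
        for x, y in obstacles:
            px = int(ego[0] + x / meters_per_pixel)
            py = int(ego[1] - y / meters_per_pixel)
            if 0 <= px < w and 0 <= py < h:
                cv2.circle(img, (px, py), 6, (0, 0, 255), -1)  # obstacle marker
        return img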
According to the driving assisting method of the embodiment, the visibility of the first image and the visual range of the driver can be obtained according to the first image and the first radar information of the vehicle in real time through the machine learning model, and the visibility and the visual range are visually displayed to the driver through the vehicle-mounted display device, so that clear and safe driving information can be provided for the driver.
Fig. 3 is a block diagram of a driving assistance device based on machine learning according to an embodiment of the present application. Referring to fig. 3, the driving assistance apparatus 300 includes an image acquisition unit 310, a machine learning model 320, a defogging processing unit 330, and an in-vehicle display device 340. The image acquiring unit 310 is configured to acquire a first image of a target road segment; the machine learning model 320 is used for outputting the current visibility of the first image according to the input first image; the defogging processing unit 330 is configured to perform defogging processing on the first image according to the current visibility to obtain a second image; the in-vehicle display device 340 is configured to provide visual information on the in-vehicle display device according to the second image.
The driving assistance apparatus 300 shown in fig. 3 can be used to perform the driving assistance method described above, and therefore, fig. 1, fig. 2, and the description above can be used to describe the driving assistance apparatus 300 of the present application.
In some embodiments, as shown in fig. 3, the driving assistance apparatus 300 further includes a radar data acquisition unit 350 for acquiring radar data of the vehicle while the image acquisition unit 310 acquires the first image; in this embodiment, the machine learning model 320 is also used to output the current visual range of the driver based on the first image and the first radar data.
The driving assistance apparatus 300 including the radar data acquisition unit 350 may be used to perform the driving assistance method including acquiring multiple sets of radar data, and the related descriptions may be used to describe the driving assistance apparatus 300 and will not be further described.
The invention also includes a machine learning-based driving assistance apparatus comprising a memory and a processor. Wherein the memory is to store instructions executable by the processor; the processor is configured to execute the instructions to implement the machine learning based assisted driving method described above.
Fig. 4 is a system block diagram of a driving assistance device based on machine learning according to an embodiment of the present invention. Referring to fig. 4, the driving assistance apparatus 400 may include an internal communication bus 401, a processor 402, a Read Only Memory (ROM)403, a Random Access Memory (RAM)404, and a communication port 405. When used on a personal computer, the device 400 may also include a hard disk 406. The internal communication bus 401 may enable data communication among the components of the device 400. The processor 402 may make the determination and issue the prompt. In some embodiments, processor 402 may be comprised of one or more processors. The communication port 405 may enable data communication of the driver assistance apparatus 400 with the outside. In some embodiments, the driver assistance device 400 may send and receive information and data from a network via the communication port 405. The device 400 may also include various forms of program storage units and data storage units, such as a hard disk 406, Read Only Memory (ROM)403 and Random Access Memory (RAM)404, capable of storing various data files for computer processing and/or communication use, as well as possible program instructions for execution by the processor 402. The processor executes these instructions to implement the main parts of the method. The results processed by the processor are communicated to the user device through the communication port and displayed on the user interface.
The driving assistance method based on machine learning may be implemented as a computer program, stored in the hard disk 406, and loaded into the processor 402 for execution, so as to implement the driving assistance method based on machine learning of the present application.
The invention also comprises a computer readable medium having stored computer program code which, when executed by a processor, implements the machine learning based assisted driving method as described above.
When implemented as a computer program, the machine learning-based driving assistance method may also be stored in a computer-readable storage medium as an article of manufacture. For example, computer-readable storage media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD)), smart cards, and flash memory devices (e.g., Erasable Programmable Read-Only Memory (EPROM), card, stick, key drive). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media (and/or storage media) capable of storing, containing, and/or carrying code and/or instructions and/or data.
It should be understood that the above-described embodiments are illustrative only. The embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processor may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, and/or other electronic units designed to perform the functions described herein, or a combination thereof.
Aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." The processor may be one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or a combination thereof. Furthermore, aspects of the present application may be embodied as a computer product, comprising computer readable program code, on one or more computer readable media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD)), smart cards, and flash memory devices (e.g., card, stick, key drive).
The computer readable medium may comprise a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. The computer readable medium can be any computer readable medium that can communicate, propagate, or transport the program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency signals, or the like, or any combination of the preceding.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only, and is not intended to limit the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such alterations, modifications, and improvements are intended to be suggested herein and are intended to be within the spirit and scope of the exemplary embodiments of this application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.

Claims (12)

1. A machine learning-based driving assistance method, comprising:
acquiring training data, wherein the training data comprises a plurality of images of a target road section under different weather conditions, and each image has visibility;
constructing a machine learning model, inputting the training data into the machine learning model, wherein the output result of the machine learning model comprises the visibility of each image;
acquiring a first image of the target road section, inputting the first image into the trained machine learning model, wherein a first output result of the trained machine learning model comprises the current visibility of the first image;
performing defogging processing on the first image according to the current visibility to obtain a second image; and
providing visual information on the vehicle-mounted display device according to the second image.
2. The driving assist method according to claim 1, characterized by further comprising: acquiring multiple groups of radar data of the vehicle while acquiring the multiple images, wherein the radar data is used for providing obstacle distribution around the vehicle, and the multiple groups of radar data correspond to the multiple images one by one; determining the visual range of a driver according to the obstacle distribution and the visibility; the training data comprises the plurality of sets of radar data and the visible range; the output result of the machine learning model also comprises a visual range corresponding to each image;
acquiring a first image of the target road section and first radar data of the vehicle at the same time, inputting the first image and the first radar data into the trained machine learning model, wherein a second output result of the trained machine learning model comprises a current visual range;
generating a third image according to the second image and the current visual range; and
providing visual information on the vehicle-mounted display device according to the third image.
3. The driving assist method according to claim 1, wherein the step of performing defogging processing on the first image according to the current visibility includes:
acquiring a clear image of the target road section in sunny days;
grouping each image in the training data according to the visibility, wherein the visibility of the images in the same visibility group is within a corresponding preset range, and each visibility group comprises at least 2 images;
respectively calculating the similarity of the plurality of images in each visibility group and the clear image, and acquiring a target image with the maximum similarity in each group;
extracting similar characteristic parameters according to the target image and the clear image, wherein each visibility group has one similar characteristic parameter;
and processing all images in the visibility group according to the similar characteristic parameters to obtain the second image.
4. The driving assist method according to claim 1, wherein the sources of the plurality of images include: one or more of a vehicle-mounted camera, a road-mounted camera and a mobile terminal.
5. The driving assist method according to claim 1, wherein the in-vehicle display device includes an in-vehicle large screen, and the step of providing the visual information on the in-vehicle display device based on the second image includes: and displaying the second image on the vehicle-mounted large screen.
6. The driving assist method according to claim 2, wherein the in-vehicle display device includes an in-vehicle large screen, and the step of providing the visual information on the in-vehicle display device based on the third image includes: and displaying the third image on the vehicle-mounted large screen.
7. The driving assist method according to claim 6, wherein the third image includes a simulated image of the vehicle and visualized first radar data around the vehicle.
8. The method of assisting driving of claim 1, wherein the different weather conditions include at least one of sunny days, rainy days, snowy days, and foggy days.
9. A machine learning-based driving assistance apparatus, comprising: an image acquisition unit, a machine learning model, a defogging processing unit and a vehicle-mounted display device,
the image acquisition unit is used for acquiring a first image of a target road section;
the machine learning model is used for outputting the current visibility of the first image according to the input first image;
the defogging processing unit is used for defogging the first image according to the current visibility to obtain a second image;
the vehicle-mounted display device is used for providing visual information according to the second image.
10. The driving assist apparatus according to claim 9, further comprising: a radar data acquisition unit configured to acquire first radar data of a vehicle while the image acquisition unit acquires the first image; the machine learning model is further configured to output a current visibility range of the driver based on the first image and the first radar data.
11. A machine learning-based driving assistance apparatus comprising:
a memory for storing instructions executable by the processor;
a processor for executing the instructions to implement the method of any of claims 1-8.
12. A computer-readable medium having stored thereon computer program code which, when executed by a processor, implements the method of any of claims 1-8.
CN202210273214.0A 2022-03-18 2022-03-18 Auxiliary driving method and device based on machine learning and computer readable medium Pending CN114626472A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210273214.0A CN114626472A (en) 2022-03-18 2022-03-18 Auxiliary driving method and device based on machine learning and computer readable medium
PCT/CN2022/116658 WO2023173699A1 (en) 2022-03-18 2022-09-02 Machine learning-based assisted driving method and apparatus, and computer-readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210273214.0A CN114626472A (en) 2022-03-18 2022-03-18 Auxiliary driving method and device based on machine learning and computer readable medium

Publications (1)

Publication Number Publication Date
CN114626472A true CN114626472A (en) 2022-06-14

Family

ID=81902654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210273214.0A Pending CN114626472A (en) 2022-03-18 2022-03-18 Auxiliary driving method and device based on machine learning and computer readable medium

Country Status (2)

Country Link
CN (1) CN114626472A (en)
WO (1) WO2023173699A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023173699A1 (en) * 2022-03-18 2023-09-21 合众新能源汽车股份有限公司 Machine learning-based assisted driving method and apparatus, and computer-readable medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7970178B2 (en) * 2007-12-21 2011-06-28 Caterpillar Inc. Visibility range estimation method and system
CN109117691A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Drivable region detection method, device, equipment and storage medium
CN109829386B (en) * 2019-01-04 2020-12-11 清华大学 Intelligent vehicle passable area detection method based on multi-source information fusion
CN110097519B (en) * 2019-04-28 2021-03-19 暨南大学 Dual-monitoring image defogging method, system, medium and device based on deep learning
CN111259957A (en) * 2020-01-15 2020-06-09 上海眼控科技股份有限公司 Visibility monitoring and model training method, device, terminal and medium based on deep learning
EP4149809B1 (en) * 2020-05-12 2023-10-11 C.R.F. Società Consortile per Azioni Motor-vehicle driving assistance in low meteorological visibility conditions, in particular with fog
CN114037630A (en) * 2021-11-05 2022-02-11 北京百度网讯科技有限公司 Model training and image defogging method, device, equipment and storage medium
CN114626472A (en) * 2022-03-18 2022-06-14 合众新能源汽车有限公司 Auxiliary driving method and device based on machine learning and computer readable medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023173699A1 (en) * 2022-03-18 2023-09-21 合众新能源汽车股份有限公司 Machine learning-based assisted driving method and apparatus, and computer-readable medium

Also Published As

Publication number Publication date
WO2023173699A1 (en) 2023-09-21

Similar Documents

Publication Publication Date Title
CN108571974B (en) Vehicle positioning using a camera
JP7332726B2 (en) Detecting Driver Attention Using Heatmaps
CN106980813B (en) Gaze generation for machine learning
CN109308816B (en) Method and device for determining road traffic risk and vehicle-mounted system
CN104011737B (en) Method for detecting mist
Negru et al. Exponential contrast restoration in fog conditions for driving assistance
US20170300763A1 (en) Road feature detection using a vehicle camera system
US9576489B2 (en) Apparatus and method for providing safe driving information
CN103847640B (en) The method and apparatus that augmented reality is provided
CN112750323A (en) Management method, apparatus and computer storage medium for vehicle safety
US20190141310A1 (en) Real-time, three-dimensional vehicle display
CN104881661A (en) Vehicle detection method based on structure similarity
CN113673403B (en) Driving environment detection method, system, device, computer equipment, computer readable storage medium and automobile
CN114626472A (en) Auxiliary driving method and device based on machine learning and computer readable medium
CN115273003A (en) Traffic sign recognition and navigation decision method and system combining character positioning
CN116142186A (en) Early warning method, device, medium and equipment for safe running of vehicle in bad environment
CN113635833A (en) Vehicle-mounted display device, method and system based on automobile A column and storage medium
CN116012822B (en) Fatigue driving identification method and device and electronic equipment
CN116503825A (en) Semantic scene completion method based on fusion of image and point cloud in automatic driving scene
CN113752940B (en) Control method, equipment, storage medium and device for tunnel entrance and exit lamp
CN111433779A (en) System and method for identifying road characteristics
CN113727071A (en) Road condition display method and system
CN108428356B (en) Road condition map display and driving assistance application method based on fluid density field
Rani et al. Traffic sign detection and recognition using deep learning-based approach with haze removal for autonomous vehicle navigation
CN114359233A (en) Image segmentation model training method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant after: United New Energy Automobile Co.,Ltd.

Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang

Applicant before: Hezhong New Energy Vehicle Co.,Ltd.