CN111738032A - Vehicle driving information determination method and device and vehicle-mounted terminal

Info

Publication number: CN111738032A (application CN201910224888.XA; granted as CN111738032B)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: vehicle, road image, image frame, key component, detected
Inventors: 李亚, 费晓天
Original and current assignee: Momenta Suzhou Technology Co Ltd
Legal status: Granted; Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses a vehicle driving information determination method and device and a vehicle-mounted terminal. The method comprises the following steps: acquiring a collected road image frame around the current vehicle; inputting the road image frame into a vehicle key component detection model, which determines the key component areas of the vehicles to be detected in the road image frame; acquiring the determined key component area of the previous road image frame; associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame; and determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the associated key component areas and the time difference between the road image frame and the previous road image frame. The road image frame includes the vehicles to be detected around the current vehicle. By applying the scheme provided by the embodiment of the invention, the vehicle driving information of a vehicle to be detected can be determined even when a complete image of the vehicle cannot be captured.

Description

Vehicle driving information determination method and device and vehicle-mounted terminal
Technical Field
The invention relates to the technical field of automatic driving, in particular to a vehicle driving information determining method and device and a vehicle-mounted terminal.
Background
With the development of science and technology, new concepts such as automatic driving and unmanned vehicles have emerged. High-precision vehicle detection, recognition, tracking, and distance and speed estimation are important elements of road scene analysis and environment perception, and an indispensable part of the automatic driving field. The analysis and processing of vehicle driving information on the road generally includes: tracking and analyzing a vehicle to be detected in the road scene, and analyzing its distance from the current vehicle, its speed, and the like. The current vehicle refers to the vehicle on which the various sensors are located, and the vehicle to be detected refers to a vehicle that may affect the driving of the current vehicle.
In the related art, images of the vehicles to be detected in the surroundings of the current vehicle can be acquired by an image acquisition device, and the images can be processed with vehicle detection techniques to obtain the speed, position, posture and the like of the vehicles to be detected in the images. The images acquired by the image acquisition device here are multiple image frames separated by short time intervals. With moving-object tracking techniques, the vehicle driving information of a vehicle to be detected in the images can then be determined.
In the above method for determining vehicle driving information, when the current vehicle is far from the vehicle to be detected in front of or behind it, the image frames contain the complete vehicle to be detected, and target tracking can determine the vehicle driving information of the vehicle to be detected from its detected positions in different image frames. However, when the vehicle to be detected is close to the current vehicle, or the image frames otherwise cannot capture a complete image of the vehicle to be detected, its vehicle driving information cannot be determined with this method.
Disclosure of Invention
The invention provides a vehicle running information determining method and device and a vehicle-mounted terminal, which are used for determining vehicle running information of a vehicle to be detected when a complete image of the vehicle cannot be shot. The specific technical scheme is as follows.
In a first aspect, an embodiment of the present invention provides a vehicle driving information determining method, including:
acquiring a collected road image frame around a current vehicle; the road image frames comprise vehicles to be detected around the current vehicle, and image acquisition equipment for acquiring the road image frames is located on the current vehicle;
inputting the road image frames into a vehicle key component detection model, and determining key component areas of the vehicles to be detected in the road image frames by a convolution layer and a full-connection layer in the vehicle key component detection model according to pre-trained model parameters;
acquiring a determined key component area of the previous road image frame;
associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame;
and determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the related key component areas and the time difference between the road image frame and the previous road image frame.
Optionally, the convolutional layer includes a first parameter, and the fully-connected layer includes a second parameter; the vehicle key component detection model is trained and completed in the following way:
acquiring a sample road image and a standard key component area of a sample vehicle marked in the sample road image, and inputting the sample road image into the convolutional layer;
performing feature extraction on pixel points of the sample road image according to the first parameter through the convolution layer to obtain a sample feature vector of the sample road image; mapping the sample characteristic vector according to the second parameter through the full-connection layer to obtain a reference key component area of the sample vehicle in the sample road image;
comparing the reference key component area with the corresponding standard key component area to obtain a difference quantity;
when the difference is larger than a preset difference threshold value, correcting the first parameter and the second parameter according to the difference, and returning to execute the step of inputting the sample road image into the convolutional layer; and when the difference is smaller than the preset difference threshold value, determining that the training of the vehicle key component detection model is finished.
Optionally, the step of associating the same key component region of the same vehicle to be detected between the road image frame and the previous road image frame includes:
extracting feature information of key component areas in the road image frame and the previous road image frame; wherein the characteristic information comprises position information and/or pixel information;
respectively matching the feature information of the same type of key component areas of the road image frame and the previous road image frame;
and taking the successfully matched key component area as the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame for association.
Optionally, the step of determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to a position difference between the associated key component regions and a time difference between the road image frame and the previous road image frame includes:
selecting corresponding feature points from the associated key component areas, and determining a transformation matrix H corresponding to the road image frame according to the following first preset formula:
X1 = H X0, where H = [S, 0, offset_x; 0, S, offset_y; 0, 0, 1] in homogeneous image coordinates;
wherein X1 and X0 are the selected corresponding feature points, S in H is the scaling between the associated key component areas, offset_y and offset_x are the translation amounts between the associated key component areas in the y-axis and x-axis directions, respectively, and the x-axis and the y-axis are coordinate axes in the image coordinate system;
and determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the scaling and the translation amounts in the transformation matrix corresponding to the road image frame and the time difference between the road image frame and the previous road image frame.
Optionally, before the road image frame is input into the vehicle key component detection model, the method further includes:
when the time interval between the moment of collecting the road image frame and the first moment is larger than a preset time threshold, inputting the road image frame into the vehicle key component detection model; wherein the first moment is the moment when a collected road image frame was last input into the vehicle key component detection model.
Optionally, when a time interval between the time of acquiring the road image frame and the first time is not greater than a preset time threshold, the method further includes:
acquiring the determined key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame; wherein the transformation matrix corresponding to the previous road image frame is determined according to the first preset formula;
and determining the key component area of the road image frame according to the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame.
Optionally, after determining the key component region in the road image frame, the method further comprises:
determining vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area of the road image frame; wherein the target pixel point position is the position of a grounding pixel point of the tire area of the vehicle to be detected in the road image frame.
Optionally, the step of determining vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component region of the road image frame includes:
determining vehicle distance information D of the vehicle to be detected relative to the current vehicle according to a second preset formula as follows:
D = (h_size × f) / (y - foe_y)
wherein h_size is the height of the image acquisition device above the ground, f is the focal length of the photosensitive element in the image acquisition device, y is the longitudinal coordinate of the target pixel point, and foe_y is the predetermined longitudinal coordinate of the image vanishing point of the image acquisition device.
Optionally, after determining the key component region of the road image frame, the method further includes:
inputting the key component area of the road image frame into a vehicle structure detection model, and determining the overall structure data of the vehicle to be detected corresponding to the key component area of the road image frame by the vehicle structure detection model according to pre-trained model parameters;
and determining the relative position of the vehicle to be detected relative to the current vehicle according to the vehicle distance information of the vehicle to be detected relative to the current vehicle and the overall structure data.
In a second aspect, an embodiment of the present invention provides a vehicle travel information determination device, including:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is configured to acquire acquired road image frames around a current vehicle; the road image frames comprise vehicles to be detected around the current vehicle, and image acquisition equipment for acquiring the road image frames is located on the current vehicle;
the first determination module is configured to input the road image frames into a vehicle key component detection model, and determine key component areas of the vehicles to be detected in the road image frames according to pre-trained model parameters by a convolution layer and a full connection layer in the vehicle key component detection model;
a second acquisition module configured to acquire the determined key component region of the previous road image frame;
the association module is configured to associate the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame;
a second determination module configured to determine vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to a position difference between the associated key component regions and a time difference between the road image frame and the previous road image frame.
Optionally, the convolutional layer includes a first parameter, and the fully-connected layer includes a second parameter; the device further comprises: a model training module configured to train the vehicle critical component detection model using:
acquiring a sample road image and a standard key component area of a sample vehicle marked in the sample road image, and inputting the sample road image into the convolutional layer;
performing feature extraction on pixel points of the sample road image according to the first parameter through the convolution layer to obtain a sample feature vector of the sample road image; mapping the sample characteristic vector according to the second parameter through the full-connection layer to obtain a reference key component area of the sample vehicle in the sample road image;
comparing the reference key component area with the corresponding standard key component area to obtain a difference quantity;
when the difference is larger than a preset difference threshold value, correcting the first parameter and the second parameter according to the difference, and inputting the sample road image into the convolutional layer; and when the difference is smaller than the preset difference threshold value, determining that the training of the vehicle key component detection model is finished.
Optionally, the association module is specifically configured to:
extracting feature information of key component areas in the road image frame and the previous road image frame; wherein the characteristic information comprises position information and/or pixel information;
respectively matching the feature information of the same type of key component areas of the road image frame and the previous road image frame;
and taking the successfully matched key component area as the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame for association.
Optionally, the second determining module is specifically configured to:
selecting corresponding feature points from the associated key component areas, and determining a transformation matrix H corresponding to the road image frame according to the following first preset formula:
X1 = H X0, where H = [S, 0, offset_x; 0, S, offset_y; 0, 0, 1] in homogeneous image coordinates;
wherein X1 and X0 are the selected corresponding feature points, S in H is the scaling between the associated key component areas, offset_y and offset_x are the translation amounts between the associated key component areas in the y-axis and x-axis directions, respectively, and the x-axis and the y-axis are coordinate axes in the image coordinate system;
and determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the scaling and the translation amounts in the transformation matrix corresponding to the road image frame and the time difference between the road image frame and the previous road image frame.
Optionally, the first determining module is further configured to:
before the road image frames are input into the vehicle key component detection model, when the time interval between the moment of collecting the road image frames and the first moment is greater than a preset time threshold value, the road image frames are input into the vehicle key component detection model; wherein the first moment is the moment when a collected road image frame was last input into the vehicle key component detection model.
Optionally, the apparatus further comprises:
a third determination module configured to: when the time interval between the moment of collecting the road image frame and the first moment is not greater than a preset time threshold, acquire the determined key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame; wherein the transformation matrix corresponding to the previous road image frame is determined according to the first preset formula;
and determine the key component area of the road image frame according to the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame.
Optionally, the apparatus further comprises:
a fourth determination module configured to determine vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component region in the road image frame after determining the key component region in the road image frame; wherein the target pixel point position is the position of a grounding pixel point of the tire area of the vehicle to be detected in the road image frame.
Optionally, when the fourth determining module determines the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area in the road image frame, the fourth determining module includes:
determining vehicle distance information D of the vehicle to be detected relative to the current vehicle according to a second preset formula as follows:
D = (h_size × f) / (y - foe_y)
wherein h_size is the height of the image acquisition device above the ground, f is the focal length of the photosensitive element in the image acquisition device, y is the longitudinal coordinate of the target pixel point, and foe_y is the predetermined longitudinal coordinate of the image vanishing point of the image acquisition device.
Optionally, the apparatus further comprises:
the fifth determining module is configured to input the key component area of the road image frame into a vehicle structure detection model after determining the key component area of the road image frame, and determine the overall structure data of the vehicle to be detected corresponding to the key component area of the road image frame according to a pre-trained model parameter by the vehicle structure detection model; and determining the relative position of the vehicle to be detected relative to the current vehicle according to the vehicle distance information of the vehicle to be detected relative to the current vehicle and the overall structure data.
In a third aspect, an embodiment of the present invention provides a vehicle-mounted terminal, including: a processor and an image acquisition device; the image acquisition equipment is positioned on the current vehicle;
the image acquisition equipment acquires road image frames around the current vehicle; wherein the road image frame includes a vehicle to be detected around the current vehicle;
the processor is used for acquiring the road image frames acquired by the image acquisition equipment, inputting the road image frames into a vehicle key component detection model, and determining a key component area of a vehicle to be detected in the road image frames by a convolution layer and a full-connection layer in the vehicle key component detection model according to pre-trained model parameters; acquiring a determined key component area of the previous road image frame; associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame; and determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the associated key component areas and the time difference between the road image frame and the previous road image frame.
Optionally, the convolutional layer includes a first parameter, and the fully-connected layer includes a second parameter; the vehicle key component detection model is trained and completed by adopting the following operations:
acquiring a sample road image and a standard key component area of a sample vehicle marked in the sample road image, and inputting the sample road image into the convolutional layer;
performing feature extraction on pixel points of the sample road image according to the first parameter through the convolution layer to obtain a sample feature vector of the sample road image; mapping the sample characteristic vector according to the second parameter through the full-connection layer to obtain a reference key component area of the sample vehicle in the sample road image;
comparing the reference key component area with the corresponding standard key component area to obtain a difference quantity;
when the difference is larger than a preset difference threshold value, correcting the first parameter and the second parameter according to the difference, and inputting the sample road image into the convolutional layer; and when the difference is smaller than the preset difference threshold value, determining that the training of the vehicle key component detection model is finished.
Optionally, when the processor associates the same key component region of the same vehicle to be detected between the road image frame and the previous road image frame, the processor includes:
extracting feature information of key component areas in the road image frame and the previous road image frame; wherein the characteristic information comprises position information and/or pixel information;
respectively matching the feature information of the same type of key component areas of the road image frame and the previous road image frame;
and taking the successfully matched key component area as the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame for association.
Optionally, when the processor determines the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the associated key component areas and the time difference between the road image frame and the previous road image frame, the processor includes:
selecting corresponding feature points from the associated key component areas, and determining a transformation matrix H corresponding to the road image frame according to the following first preset formula:
X1 = H X0, where H = [S, 0, offset_x; 0, S, offset_y; 0, 0, 1] in homogeneous image coordinates;
wherein X1 and X0 are the selected corresponding feature points, S in H is the scaling between the associated key component areas, offset_y and offset_x are the translation amounts between the associated key component areas in the y-axis and x-axis directions, respectively, and the x-axis and the y-axis are coordinate axes in the image coordinate system;
and determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the scaling and the translation amounts in the transformation matrix corresponding to the road image frame and the time difference between the road image frame and the previous road image frame.
Optionally, before the road image frame is input into the vehicle key component detection model, when a time interval between a time of acquiring the road image frame and a first time is greater than a preset time threshold, the processor inputs the road image frame into the vehicle key component detection model; wherein the first time is the time when a collected road image frame was last input into the vehicle key component detection model.
Optionally, the processor further performs the following:
when the time interval between the moment of collecting the road image frame and the first moment is not greater than a preset time threshold, acquiring the determined key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame; wherein the transformation matrix corresponding to the previous road image frame is determined according to the first preset formula;
and determining the key component area of the road image frame according to the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame.
Optionally, after determining the key component region in the road image frame, the processor determines vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component region in the road image frame; wherein the target pixel point position is the position of a grounding pixel point of the tire area of the vehicle to be detected in the road image frame.
Optionally, when determining the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component region of the road image frame, the processor includes:
determining vehicle distance information D of the vehicle to be detected relative to the current vehicle according to a second preset formula as follows:
D = (h_size × f) / (y - foe_y)
wherein h_size is the height of the image acquisition device above the ground, f is the focal length of the photosensitive element in the image acquisition device, y is the longitudinal coordinate of the target pixel point, and foe_y is the predetermined longitudinal coordinate of the image vanishing point of the image acquisition device.
Optionally, after determining the key component area of the road image frame, the processor inputs the key component area of the road image frame into a vehicle structure detection model, and the vehicle structure detection model determines, according to a pre-trained model parameter, overall structure data of a vehicle to be detected corresponding to the key component area of the road image frame; and determining the relative position of the vehicle to be detected relative to the current vehicle according to the vehicle distance information of the vehicle to be detected relative to the current vehicle and the overall structure data.
As can be seen from the above, the vehicle driving information determination method and apparatus and the vehicle-mounted terminal provided in the embodiments of the present invention can determine the key component areas of the vehicles to be detected in a road image frame with the vehicle key component detection model, associate the same key component of the same vehicle to be detected between the road image frame and the previous road image frame, and determine the vehicle speed information of the vehicle to be detected relative to the current vehicle according to the position difference between the associated key component areas and the time difference between the frames. Because the embodiments of the present invention can detect the key component areas of a vehicle to be detected and track its movement across image frames by those areas, the vehicle driving information of the vehicle to be detected can be determined from the tracked key component areas even when a complete image of the vehicle cannot be captured.
The innovation points of the embodiment of the invention comprise:
1. when the complete image of the vehicle cannot be shot in the image frame, the key component area of the vehicle can be detected, the key component area is tracked, the vehicle speed is determined according to the position difference between the key component areas between the image frames, and the vehicle speed can be detected when the complete image of the vehicle cannot be shot.
2. The translation amount of the vehicle in the transverse direction and the scaling amount of the vehicle in the longitudinal direction can be calculated through a first preset formula obtained in advance and the difference between key component areas of the vehicle in different road image frames, and the speed of the vehicle is calculated according to the translation amount and the scaling amount, so that the speed of the vehicle can be calculated through the image frames when the vehicle is close to a surrounding vehicle or the surrounding vehicle is blocked.
3. According to the pixel point position of the key component region, the distance information between the surrounding vehicles and the current vehicle can be determined; further, when the model is used to detect the key component region of the image frame at intervals, the key component region of each image frame can be specified from the obtained transformation matrix.
4. The road image frames and the corresponding key component areas are input into a vehicle structure detection model, the overall structure data of the vehicle can be detected, and the relative position of the vehicle from the current vehicle is determined according to the detected overall structure data and the vehicle distance information. Therefore, when the complete image of the vehicle cannot be shot, the relative position of the vehicle and the current vehicle can be more accurately determined.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flow chart of a method for determining vehicle driving information according to an embodiment of the present invention;
FIG. 2(1) is a reference diagram of a road image frame according to an embodiment of the present invention;
FIG. 2(2) is a reference diagram of corresponding feature points in different road image frames according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a vehicle travel information determination apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a vehicle running information determining method and device and a vehicle-mounted terminal, which are used for determining vehicle running information of a vehicle to be detected when a complete image of the vehicle cannot be shot. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flow chart of a method for determining vehicle driving information according to an embodiment of the present invention. The method is applied to a processor or an electronic device comprising the processor. The processor may comprise a CPU or microprocessor. The method specifically includes the following steps S110 to S150.
S110: and acquiring the acquired road image frames around the current vehicle.
The road image frames comprise vehicles to be detected around the current vehicle, and the image acquisition equipment for acquiring the road image frames is located on the current vehicle. The image acquisition device may be a vehicle event recorder or a surveillance camera or the like. The image acquisition equipment can be installed inside the current vehicle or outside the current vehicle. The road image frame may also be a road image captured by a passenger in the vehicle through a device such as a mobile phone when a camera mounted in the vehicle fails. The capture range of the image capture device may be the road scene ahead of, behind, or to the side of the current vehicle. The vehicle to be detected may be a vehicle in front of, behind or to the side of the current vehicle. The vehicle to be detected may be understood as a vehicle of vehicle travel information around the current vehicle to be detected. The vehicle travel information is generally travel information of the vehicle to be detected with respect to the current vehicle.
The road image frame may be one image frame of a sequence of road image frames captured by an image capturing device.
The vehicles to be detected in the road image frames may or may not be complete. The road image frames containing incomplete vehicles to be detected are understood to mean that the vehicles to be detected are not completely photographed, but at least one critical component area of the vehicles to be detected can be completely displayed in the road image frames. For example, fig. 2(1) shows a reference image of a road image frame. The leftmost vehicle is a complete vehicle, the rightmost vehicle is an incomplete vehicle, and only a part of the vehicle exists, but key parts of the vehicle, such as wheels and tail lamps (see a circle area in (1) of fig. 2) can be completely displayed in the road image.
The above are only some specific examples of obtaining the road image frame, and the obtaining mode of the road image frame is not limited in the present application, and different implementation modes may be adopted according to requirements.
S120: and inputting the road image frames into a vehicle key component detection model, and determining key component areas of the vehicle to be detected in the road image frames by a convolution layer and a full connection layer in the vehicle key component detection model according to pre-trained model parameters.
The key component area can be the area of a rearview mirror, a wheel, a tail lamp, a license plate or the like on the vehicle. Determining the key component area of the vehicle to be detected in the road image frame may be understood as determining the coordinate region of the key component area in the road image frame, which may be represented, for example, by the coordinates of the diagonal corner points of a rectangular bounding box.
One or more key component areas may be determined for a vehicle to be detected. For example, for the rightmost vehicle in fig. 2(1), two circles enclose the tail lamp area and the wheel area, respectively. The vehicle key component detection model can detect the coordinates of the key component areas of a vehicle to be detected and output the category of each key component area, where the categories include rearview mirror, wheel, tail lamp, license plate and the like.
The vehicle key component detection model can be obtained by training in advance according to the sample road image and the standard key component area of the sample vehicle in the labeled sample road image. The trained vehicle key component detection model can enable the road image frames to be associated with key component areas in the road image frames. The vehicle key component detection model can be a model obtained by training through a deep learning algorithm, and can be obtained by training through a convolutional neural network.
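As a rough illustration of how such a detection model could be structured, the sketch below shows a small convolutional backbone followed by a fully-connected head that regresses a bounding box and class scores for a key component. This is a hedged sketch under assumed layer sizes and a simplified one-box output, not the patent's actual network.

```python
# Hypothetical sketch of a vehicle key-component detector: convolutional
# layers extract a feature vector from the image pixels, and a
# fully-connected layer maps it to a key component region (box coordinates)
# plus class scores (rearview mirror, wheel, tail lamp, license plate).
# Layer sizes and the single-box output are simplifying assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # rearview mirror, wheel, tail lamp, license plate (assumed)

class KeyComponentDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers (holding the "first parameter").
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Fully-connected layer (holding the "second parameter"): maps the
        # feature vector to box coordinates (x1, y1, x2, y2) and class scores.
        self.fc = nn.Linear(64 * 4 * 4, 4 + NUM_CLASSES)

    def forward(self, x):
        features = self.conv(x).flatten(1)   # sample feature vector
        out = self.fc(features)
        return out[:, :4], out[:, 4:]        # (box, class scores)

model = KeyComponentDetector()
frame = torch.randn(1, 3, 224, 224)          # one road image frame (dummy)
box, scores = model(frame)
```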
S130: and acquiring the determined key component area of the previous road image frame.
And the previous road image frame is a road image frame before the road image frame. Other road image frames may not exist between the previous road image frame and the road image frame, and other road image frames may also exist. That is, the previous road image frame and the road image frame may be a continuous road image frame or a discontinuous road image frame, for example, the two may be separated by a preset number of frames.
The key component area of the previous road image frame can be obtained by inputting the previous road image frame into a vehicle key component detection model, and can also be obtained by adopting other tracking algorithms.
S140: and associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame.
When both the road image frame and the previous road image frame only contain one vehicle, the corresponding key component areas can be directly associated. When two or more vehicles are included in the road image frame and the previous road image frame, the key component regions may be associated by matching between the image regions.
In one embodiment, a tracking list may be established for a vehicle to be detected when associating the same key component region of the same vehicle to be detected between a road image frame and a previous road image frame. The tracking list may include: and the corresponding relation of the same key component of the same vehicle to be detected between the road image frame and the previous road image frame.
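One way such a tracking list could be represented is sketched below; the structure and field names are illustrative assumptions rather than the patent's data layout.

```python
# Hypothetical tracking list: for each vehicle to be detected, record the
# correspondence of the same key component area between the road image frame
# and the previous road image frame. Names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TrackedComponent:
    component_class: str   # e.g. "wheel", "tail_lamp", "mirror", "plate"
    region_prev: tuple     # (x1, y1, x2, y2) in the previous road image frame
    region_curr: tuple     # (x1, y1, x2, y2) in the current road image frame

@dataclass
class TrackEntry:
    vehicle_id: int
    components: list = field(default_factory=list)  # list[TrackedComponent]

tracking_list: dict[int, TrackEntry] = {}  # vehicle_id -> tracked components
```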
S150: and determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the related key component areas and the time difference between the road image frame and the previous road image frame.
Wherein, the corresponding vehicle to be detected is: a vehicle to be detected corresponding to the associated critical component area. The vehicle speed information may include lateral speed and longitudinal speed.
The time difference may be determined according to the number of frames spaced between the road image frame and the previous road image frame and the frame rate. Wherein, the frame rate refers to the number of frames collected per second.
After determining the vehicle speed information of the vehicle to be detected relative to the current vehicle, the absolute speed information of the vehicle to be detected can be determined according to the vehicle speed information of the current vehicle.
As can be seen from the above, in the embodiment, the vehicle key component detection model may determine the key component area of the vehicle to be detected in the road image frame, associate the same key component of the same vehicle to be detected between the road image frame and the previous road image frame, and determine the vehicle speed information of the vehicle to be detected relative to the current vehicle according to the position difference between the associated key component areas and the time difference between the frames. Since the present embodiment can detect the key component region of the vehicle to be detected and track the movement of the vehicle to be detected in the image frame according to the key component region, when the complete image of the vehicle cannot be captured in the image frame, the present embodiment can also determine the vehicle driving information of the vehicle to be detected according to the tracking of the key component region on the vehicle.
In another embodiment of the present invention, in the embodiment shown in fig. 1, the convolutional layer includes a first parameter, and the fully-connected layer includes a second parameter. The model parameters in the vehicle critical component detection model may include a first parameter and a second parameter. The vehicle key component detection model can be trained in the following steps 1a to 4 a.
Step 1 a: acquiring a sample road image and the standard key component area of the sample vehicle marked in the sample road image, and inputting the sample road image into the convolutional layer.
In practical applications, a large number of sample road images may be acquired to make model training more accurate. One or more sample vehicles may be included in a sample road image, and a sample vehicle in a sample road image may be complete or incomplete.
The sample road image may be previously acquired using a camera on the vehicle. Each sample road image is marked with a standard key component area of the sample vehicle, and the marked standard key component area comprises coordinates and types of key components.
The sample road image may also be scaled to a preset size before being input into the convolutional layer. Therefore, the vehicle key component detection model can learn the sample road images with the same size, so that the images containing the vehicle key components can be processed more quickly and accurately, and the training efficiency of the model is improved.
In the vehicle key component detection model, a spatial pyramid pooling layer may also be included. The spatial pyramid pooling layer can be used for zooming the sample road image, so that the vehicle key component detection model can adapt to images of any size, and image information loss is avoided.
Step 2 a: performing feature extraction on pixel points of the sample road image according to the first parameter through the convolution layer to obtain a sample feature vector of the sample road image; and mapping the sample characteristic vector according to the second parameter through the full connection layer to obtain a reference key component area of the sample vehicle in the sample road image.
The initial values of the first parameter and the second parameter may be set in advance empirically, for example, may be set to small values. During each training, the first parameter and the second parameter are continuously corrected to gradually approach the true values.
In the training process, the obtained reference key component region may not be accurate enough, and the reference key component region may be used as a reference basis for correcting the first parameter and the second parameter.
Step 3 a: and comparing the reference key component area with the corresponding standard key component area to obtain the difference.
Wherein, the difference can be obtained by using a loss function.
Step 4 a: when the difference is larger than the preset difference threshold, correcting the first parameter and the second parameter according to the difference, and returning to execute the step of inputting the sample road image into the convolutional layer in the step 1 a; and when the difference is smaller than a preset difference threshold value, determining that the training of the vehicle key component detection model is finished.
When the difference is equal to the preset difference threshold, the first parameter and the second parameter can be corrected according to the difference, and the completion of the training of the vehicle key component detection model can also be determined.
When the difference is larger than the preset difference threshold, the difference between the prediction result and the true value of the convolutional neural network is considered to be large, and the network needs to be trained continuously. When the first parameter and the second parameter are corrected according to the difference, the first parameter and the second parameter may be adjusted in opposite directions according to the specific value with reference to the specific value and the direction of the difference.
In order to obtain the vehicle key component detection model more quickly, a transfer learning method can be adopted: an existing convolutional neural network that has achieved good results in the object detection field, such as Fast R-CNN or SSD, is taken, the number of output categories and any other structures that need to be modified are adjusted accordingly, and the fully trained parameters of the original network model are directly adopted as the model parameters. Specifically, the convolutional layer fully learns key component regions such as rearview mirrors, wheels, tail lamps and license plates in the sample road images, and the full-connection layer maps the learned sample feature vectors of the sample road images to obtain recognition results for the rearview mirror, wheel, tail lamp, license plate and other regions. By comparing the recognition results with the regions of the rearview mirror, wheel, tail lamp, license plate and the like labeled in advance in the sample road images, the model parameters can be optimized, and after iterative training of the model on more training samples, the vehicle key component detection model is obtained.
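The loop in steps 1a to 4a might look like the following sketch, using a detector shaped like the one sketched earlier. The loss function, optimizer and threshold value are assumptions; the patent only specifies comparing the reference area with the standard area and correcting the parameters until the difference falls below a preset threshold.

```python
# Hedged sketch of steps 1a-4a: forward pass through the convolutional and
# fully-connected layers, compare the reference key component area with the
# labeled standard area, and correct the parameters while the difference
# exceeds a preset threshold. Loss and optimizer choices are assumptions.
import torch
import torch.nn as nn

def train(model, sample_images, standard_boxes, diff_threshold=0.01, lr=1e-3):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.SmoothL1Loss()  # measures the difference amount
    while True:
        total_diff = 0.0
        for image, standard_box in zip(sample_images, standard_boxes):
            reference_box, _ = model(image.unsqueeze(0))
            diff = loss_fn(reference_box.squeeze(0), standard_box)
            total_diff += diff.item()
            if diff.item() > diff_threshold:
                optimizer.zero_grad()
                diff.backward()      # correct the first and second parameters
                optimizer.step()
        if total_diff / len(sample_images) < diff_threshold:
            return model             # training is considered finished
```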
In summary, the present embodiment provides an implementation of cyclic training for a convolutional neural network using a large number of sample road images and standard key component areas.
In another embodiment of the present invention, in the embodiment shown in fig. 1, step S140, the step of associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame includes steps 1b to 3 b.
Step 1 b: and extracting the characteristic information of the key component area in the road image frame and the previous road image frame. Wherein the characteristic information comprises position information and/or pixel information.
For example, the position information may be coordinates of the key component region, and the pixel information may be a feature value obtained from pixel values of the key component region.
And step 2 b: and respectively matching the road image frame with the feature information of the same type of key component area of the previous road image frame.
For example, the coordinates of the headlight region of each vehicle in the road image frame may be respectively matched with the coordinates of the headlight region of each vehicle in the previous road image frame. When the difference is smaller than a preset difference threshold value, the matching is considered to be successful.
And step 3 b: and taking the successfully matched key component area as the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame for association.
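A minimal sketch of steps 1b to 3b follows, matching same-type key component areas between the two frames by their position information; the center-distance criterion and the threshold value are illustrative assumptions.

```python
# Hedged sketch of steps 1b-3b: match key component areas of the same type
# between the road image frame and the previous road image frame by comparing
# feature information (here, region center positions). The distance metric
# and threshold are illustrative assumptions.
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def associate(curr_regions, prev_regions, max_dist=30.0):
    """Each list holds (component_class, box) tuples; boxes are (x1, y1, x2, y2)."""
    pairs, used = [], set()
    for cls_c, box_c in curr_regions:
        cx, cy = center(box_c)
        best_i, best_d = None, max_dist
        for i, (cls_p, box_p) in enumerate(prev_regions):
            if cls_p != cls_c or i in used:
                continue                      # only match same-type areas
            px, py = center(box_p)
            d = math.hypot(cx - px, cy - py)  # positional feature difference
            if d < best_d:
                best_i, best_d = i, d
        if best_i is not None:                # successfully matched: associate
            used.add(best_i)
            pairs.append((prev_regions[best_i][1], box_c))
    return pairs
```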
In summary, the present embodiment provides a specific implementation manner for implementing step S140.
In another embodiment of the present invention, in the embodiment shown in fig. 1, step S150, the step of determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the associated key component regions and the time difference between the road image frame and the previous road image frame, includes steps 1c and 2 c.
Step 1 c: selecting corresponding feature points from the related key component areas, and determining a transformation matrix H corresponding to the road image frame according to the following first preset formula:
X1 = H X0, where H = [S, 0, offset_x; 0, S, offset_y; 0, 0, 1] in homogeneous image coordinates.
Wherein X1 and X0 are the selected corresponding feature points, X1 being a feature point in the road image frame and X0 the corresponding feature point in the previous road image frame; S in H is the scaling between the associated key component areas; offset_y and offset_x are the translation amounts between the associated key component areas in the y-axis and x-axis directions, respectively; and the x-axis and the y-axis are coordinate axes in the image coordinate system. According to the perspective principle that near objects appear large and far objects appear small, the longitudinal speed of the vehicle can be estimated from the scaling S, while offset_y and offset_x can be used to estimate the lateral speed of the vehicle. The longitudinal direction is the current driving direction of the vehicle, and the lateral direction is perpendicular to the longitudinal direction. The two associated key component areas may be approximately regarded as parallel to the imaging plane of the image acquisition device, so the affine transformation matrix H can be solved using the corresponding feature points.
See the image frame of the road shown in fig. 2(1), wherein the directions of the x-axis and the y-axis in the image coordinate system are labeled.
When selecting the corresponding feature point from the associated key component region, the method may specifically include: and taking the pixel points at the positions with the same proportion in the related key component regions as corresponding characteristic points. In specific application, for a preset number of feature points at a preset position of one key component region, corresponding feature points may be determined at the same proportional position of another key component region.
In another embodiment, the KLT optical flow tracking algorithm may be used to extract corresponding feature points within the critical component areas.
For example, see fig. 2(2), two road image frames shown by dashed boxes, wherein the solid quadrilateral areas are associated key component areas, and the quadrilateral areas are labeled with corresponding feature points a and a'. A and A' are determined according to the following relationship: the width and length of the quadrangular region in the left image frame are a and b, respectively, and the width and length of the quadrangular region in the right image frame are a ' and b ', respectively, and the position of (3a/4, 5b/6) in the left quadrangular region is point a, and the position of (3a '/4, 5b '/6) in the right quadrangular region is point a '.
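Given the assumed form of H above (a uniform scaling S plus translations offset_x and offset_y), the three unknowns can be estimated by least squares from a few pairs of corresponding feature points. A sketch under that assumption:

```python
# Hedged sketch: estimate H (uniform scale S plus translation) from
# corresponding feature points of the associated key component areas by
# solving X1 = H X0 in the least-squares sense. Assumes the matrix form
# given above; not the patent's exact solving procedure.
import numpy as np

def estimate_H(points_prev, points_curr):
    """points_prev, points_curr: (N, 2) arrays of corresponding (x, y) points."""
    X0 = np.asarray(points_prev, dtype=float)
    X1 = np.asarray(points_curr, dtype=float)
    n = len(X0)
    # Each correspondence yields two linear equations:
    #   x1 = S * x0 + offset_x
    #   y1 = S * y0 + offset_y
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    A[0::2, 0] = X0[:, 0]; A[0::2, 1] = 1.0   # rows for x: [x0, 1, 0]
    A[1::2, 0] = X0[:, 1]; A[1::2, 2] = 1.0   # rows for y: [y0, 0, 1]
    b[0::2], b[1::2] = X1[:, 0], X1[:, 1]
    (S, offset_x, offset_y), *_ = np.linalg.lstsq(A, b, rcond=None)
    H = np.array([[S, 0.0, offset_x],
                  [0.0, S, offset_y],
                  [0.0, 0.0, 1.0]])
    return H, S, offset_x, offset_y
```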
And step 2 c: determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the scaling and the translation amounts in the transformation matrix corresponding to the road image frame and the time difference between the road image frame and the previous road image frame.
For example, the longitudinal speed of the vehicle to be detected relative to the current vehicle can be determined from the scaling S and the time difference Δt, and the lateral speed of the vehicle to be detected relative to the current vehicle can be determined from the translation amounts offset_y and offset_x and the time difference Δt, where S is the scaling in the transformation matrix, Δt is the time difference between the road image frame and the previous road image frame, and offset_y and offset_x are the translation amounts.
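As an illustration of how speeds might then be derived, the sketch below converts S and offset_x into longitudinal and lateral speeds using the vehicle distance D and the focal length f; these pixel-to-metric conversions are assumptions rather than the patent's preset formulas.

```python
# Hedged sketch: derive relative speed from the transformation parameters.
# Using (S - 1) for longitudinal motion and scaling the pixel offset by
# D / f for lateral motion are assumptions, not the patent's exact formulas.
def relative_speed(S, offset_x, D, f, dt):
    # Longitudinal: if the region appears S times larger after dt, the
    # distance changed roughly from D * S to D (assumed approximation).
    v_longitudinal = D * (S - 1.0) / dt
    # Lateral: convert the pixel translation to meters at distance D.
    v_lateral = offset_x * D / (f * dt)
    return v_longitudinal, v_lateral
```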
In summary, in the embodiment, the transformation matrix is determined according to the first preset formula and the corresponding feature points, and the vehicle speed information is solved according to the parameters in the transformation matrix, so that the speed of the vehicle to be detected can be more accurately determined by the implementation method.
In another embodiment of the present invention, in the embodiment shown in fig. 1, in order to increase the speed of detection, before inputting the road image frame into the vehicle key component detection model, the method may further include:
and when the time interval between the moment of collecting the road image frame and the first moment is greater than a preset time threshold value, inputting the road image frame into a vehicle key component detection model.
The first moment is the moment when a collected road image frame was last input into the vehicle key component detection model. The preset time threshold is a value representing a length of time. When the frame rate is constant, the above determination may also be made using a frame-number interval: for example, when the number of frames between the road image frame and the road image frame last input into the vehicle key component detection model is greater than a preset number of frames, the road image frame is input into the vehicle key component detection model.
Specifically, before the road image frame is input into the vehicle key component detection model, it may be determined whether the time interval between the moment at which the road image frame was collected and the first moment is greater than the preset time threshold; if so, the road image frame is input into the vehicle key component detection model, and if not, no processing or other processing may be performed. Because the interval between adjacent road image frames is short, road image frames can be input into the vehicle key component detection model at intervals to determine the vehicle speed information of the vehicle to be detected relative to the current vehicle, without determining the vehicle speed information anew for every road image frame; in this way, detection efficiency can be improved while accuracy is maintained.
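A minimal sketch of this gating logic (the threshold value and the timekeeping are illustrative assumptions):

```python
# Hedged sketch: run the detection model only when enough time has passed
# since the last model inference; otherwise the key component areas can be
# propagated with the transformation matrix (see the later sketch).
PRESET_TIME_THRESHOLD = 0.2   # seconds (assumed value)
last_model_time = None        # the "first moment"

def should_run_detector(frame_time):
    global last_model_time
    if last_model_time is None or frame_time - last_model_time > PRESET_TIME_THRESHOLD:
        last_model_time = frame_time
        return True    # input this road image frame into the detection model
    return False       # propagate regions via the transformation matrix
```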
In another embodiment, when the time interval between the moment of collecting the road image frame and the first moment is not greater than the preset time threshold, the determined key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame may also be obtained, and the key component area of the road image frame may be determined according to the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame.
The transformation matrix corresponding to the previous road image frame is determined according to the first preset formula.
For example, when road image frames are input into the vehicle key component detection model intermittently, for a road image frame that was not input into the model, a feature point in its key component area can be determined from the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame based on the above formula X1 = H·X0, and the key component area of the road image frame can then be determined according to that feature point. Here X1 is the feature point in the road image frame, X0 is the feature point in the previous road image frame, and H is the transformation matrix corresponding to the previous road image frame; when X0 and H are known quantities, X1 can be obtained. In practical applications, several groups of feature points can be used for solving.
For example, consider a key component region on the left or right plane of the vehicle (such as a wheel key component region) in the road image frame and the previous road image frame: the wheel key component region in the previous road image frame is known, and the corresponding key component region in the road image frame is required. The left planes (or right planes) of the vehicle in two adjacent road image frames can be approximately regarded as lying on the same three-dimensional plane, so the feature points in the key component region of the road image frame can be solved according to the formula X1 = H·X0.
The key component area of the previous road image frame may have been determined by the vehicle key component detection model, or from corresponding feature points and a transformation matrix. The transformation matrix corresponding to the previous road image frame may be calculated from two earlier road image frames that are not adjacent to the road image frame, or from the two road image frames temporally closest to it.
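A minimal sketch of this propagation through X1 = H·X0, assuming the key component area is represented by its corner points in pixel coordinates:

```python
import numpy as np

def propagate_region(corners_prev, H):
    """Map key component area corners from the previous road image frame
    into the current frame via X1 = H @ X0 in homogeneous coordinates.

    corners_prev : (N, 2) array of pixel corners of the area
    H            : 3x3 transformation matrix of the previous frame
    """
    pts = np.hstack([corners_prev, np.ones((len(corners_prev), 1))])
    mapped = (H @ pts.T).T                 # apply the transform to every corner
    return mapped[:, :2] / mapped[:, 2:3]  # normalize back to pixel coordinates
```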
In another embodiment of the present invention, in the embodiment shown in fig. 1, in order to determine more driving information of the vehicle to be detected, after determining the key component area in the road image frame, the method further comprises:
and determining the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area of the road image frame.
Wherein, the target pixel point positions are: and the positions of the grounding pixel points of the tire areas of the vehicles to be detected in the road image frames.
In one embodiment, when determining the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area of the road image frame, the determining may include:
determining vehicle distance information D of the vehicle to be detected relative to the current vehicle according to a second preset formula:
D = (h_size · f) / (y − foe_y)
wherein h_size is the height of the image acquisition device above the ground, f is the focal length of the light-sensing element in the image acquisition device, y is the longitudinal coordinate of the target pixel point, and foe_y is a predetermined longitudinal coordinate of the image vanishing point of the image acquisition device. foe_y can be determined in advance by shooting a scene containing parallel lines with the image acquisition device and taking the longitudinal (y-direction) coordinate of the vanishing point of those parallel lines in the shot image. h_size, f and foe_y can all be obtained in advance; once y is determined, D can be found.
In one embodiment, a plurality of target pixel point positions may be determined, a plurality of pieces of vehicle distance information may be obtained, and an average value of the plurality of pieces of vehicle distance information may be calculated as the vehicle distance information of the vehicle to be detected with respect to the current vehicle.
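A minimal sketch of the second preset formula together with this averaging; the camera height, focal length, and pixel rows below are assumed values for illustration:

```python
def ground_point_distance(y, h_size, f, foe_y):
    """Second preset formula: D = h_size * f / (y - foe_y)."""
    return h_size * f / (y - foe_y)

# Average the distances obtained from several tire ground pixels.
ys = [480.0, 478.5, 481.2]  # assumed ground-pixel rows of the tire areas
D = sum(ground_point_distance(y, h_size=1.3, f=1000.0, foe_y=360.0)
        for y in ys) / len(ys)
print(f"estimated vehicle distance: {D:.1f} m")
```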
In summary, the present embodiment can determine the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position in the key component area, and further can obtain more driving information about the vehicle to be detected.
In another embodiment of the present invention, in the embodiment shown in fig. 1, after determining the key component area of the road image frame, the method may further include steps 1d and 2d in order to obtain more information about the vehicle to be detected.
Step 1 d: and inputting the key component area of the road image frame into a vehicle structure detection model, and determining the overall structure data of the vehicle to be detected corresponding to the key component area of the road image frame by the vehicle structure detection model according to the pre-trained model parameters.
The overall structure data includes length, width, height, azimuth angle and the like of the vehicle. The overall structure data can embody the three-dimensional structure information of the vehicle to be detected.
When the vehicle to be detected in the road image frame is incomplete, the overall structure data of the vehicle to be detected can be determined according to the key component area of the vehicle to be detected in the road image frame through the step 1d, and a basis can be provided for further determining more information of the vehicle to be detected.
Step 2 d: and determining the relative position of the vehicle to be detected relative to the current vehicle according to the vehicle distance information of the vehicle to be detected relative to the current vehicle and the overall structure data.
Compared with vehicle distance information, the relative position of the vehicle to be detected relative to the current vehicle can reflect the position relation between the vehicle to be detected and the current vehicle more accurately, and more vehicle information can be provided.
When the complete overall structure data of the vehicle to be detected is obtained, the vehicle position of the vehicle to be detected can be updated in real time by combining the overall structure data of the vehicle to be detected and the vehicle speed information between the vehicle to be detected and the current vehicle, and more accurate vehicle position information can be obtained through prediction, so that the driving of the vehicle is guided.
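A minimal dead-reckoning sketch of such a position update, assuming the relative position and relative speed are expressed in the road plane in meters and meters per second:

```python
def predict_position(rel_pos, rel_speed, dt):
    """Advance the detected vehicle's relative position by dt using the
    relative speed estimated from the transformation matrix parameters."""
    x, y = rel_pos
    vx, vy = rel_speed
    return (x + vx * dt, y + vy * dt)

# e.g. a vehicle 10.8 m ahead, closing at 1.5 m/s, over a 0.1 s frame gap
print(predict_position((0.0, 10.8), (0.0, -1.5), 0.1))  # -> (0.0, 10.65)
```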
In another embodiment of the present invention, the vehicle structure detection model may be obtained by training a convolutional neural network in advance on sample vehicle images. Specifically, the vehicle structure detection model can be obtained by training through the following steps 1e to 4e.
Step 1 e: and acquiring a sample vehicle image, and determining a sample key component area of the sample vehicle in the sample vehicle image and standard overall structure data of the labeled sample vehicle.
Wherein the sample vehicle image includes an incomplete sample vehicle. The sample vehicle images may be photographs of various major vehicle types taken from multiple angles but showing incomplete vehicles. The same sample vehicle image can be processed by rotation, cropping, mirror symmetry and the like to expand the sample training set. In this embodiment, a sample library may be established in advance and the sample vehicle images acquired from it; the sample library may use images from public data sets, or images collected by a camera of the current vehicle obtained from its storage device. The training of the vehicle structure detection model in this embodiment is supervised training: what is input to the model is the standard vehicle key component region of the sample vehicle.
The sample vehicle image may be scaled to a preset size, such as 300 × 300 px or 512 × 512 px, before the sample key component area of the sample vehicle is determined. In this way the vehicle structure detection model learns sample key component areas from vehicle images of the same size, so the images can be processed more quickly and accurately, improving the training efficiency of the model.
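A minimal sketch of this sample preparation, assuming OpenCV; the rotation angle and crop margins are illustrative values:

```python
import cv2

def prepare_samples(image):
    """Expand one sample vehicle image by mirror symmetry, cropping and a
    small rotation, then scale every variant to the preset size."""
    h, w = image.shape[:2]
    samples = [image, cv2.flip(image, 1)]            # mirror symmetry
    samples.append(image[h // 10:, w // 10:])        # a simple corner crop
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 5.0, 1.0)
    samples.append(cv2.warpAffine(image, M, (w, h))) # rotate by 5 degrees
    return [cv2.resize(s, (300, 300)) for s in samples]  # preset 300 x 300 px
```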
Step 2 e: and determining sample characteristic information of the sample key component area according to the model parameters of the vehicle structure detection model, and performing regression on the sample characteristic information to obtain reference overall structure data of the sample vehicle corresponding to the sample key component area.
The initial values of the model parameters of the vehicle structure detection model may be set in advance empirically, for example, to small values. During each training iteration, the model parameters are continuously corrected so as to gradually approach the true values.
Step 3 e: and comparing the reference overall structure data with the corresponding standard overall structure data to obtain a first difference.
Step 4e: when the first difference is larger than a first preset difference threshold, modifying the model parameters of the vehicle structure detection model according to the first difference and returning to step 2e; when the first difference is smaller than the first preset difference threshold, determining that the training of the vehicle structure detection model is finished.
In this embodiment, a transfer-learning approach may be adopted: take an existing convolutional neural network that performs well in the target detection field, modify the number of output classes and any structure that needs changing, directly use the sufficiently trained parameters of the original network as initial model parameters, and train the convolutional neural network on the sample images by fine-tuning.
The detection principle of the vehicle structure detection model is explained below, taking a sample vehicle image in which the right wheel region and the license plate region are marked as an example. When the distance between the vehicle to be detected and the current vehicle is s, the relative horizontal distance on the image between the right side line of the license plate and the left contour line of the wheel is x and the relative vertical height is y; the focal length f of the image acquisition device is known. The azimuth angle of the vehicle to be detected relative to the current vehicle is θ, its height is H, its width is W, and the coordinate of its center point is P(P_x, P_y, P_z). These variables determine the three-dimensional structure and pose of the vehicle to be detected and are recorded as the vehicle pose parameters Ω(θ, H, W, P). Obviously, the width, height, azimuth angle and center point of the vehicle, the distance from the current vehicle, and the focal length of the image acquisition device all affect the size of x and y on the image.
Therefore, during training on the sample vehicle images, the boundary lines are detected using corner detection, the measured values of x and y are taken as inputs of the network model together with the known quantities f and s, and the vehicle pose parameters Ω(θ, H, W, P) are output; that is, the convolutional neural network can fit the following function:

Ω = k(x, y, f, s)
The essence of this operation is to use the convolutional neural network to perform regression prediction of the vehicle center point, azimuth angle, vehicle width and vehicle height. During training, the L2 loss function may be used as the loss function when determining the first difference; the L2 distance refers to the Euclidean distance of a vector. In one embodiment, the expression of the L2 loss function is as follows:
Loss = ||θ_predict − θ_true||² + ||H_predict − H_true||² + ||W_predict − W_true||² + ||P_predict − P_true||²
the parameter with the prediction typeface under the subscript refers to a predicted value obtained through model regression, and the parameter with the true typeface under the subscript is a standard value, namely a true value. The Loss function actually represents the squares of the euclidean distances between the predicted values and the true values.
When the model parameters of the vehicle structure detection model are corrected according to the first difference, the parameters can be adjusted by gradient descent, iteratively reducing the loss function until a detection model with a small loss and good generalization is obtained. A better training effect can also be pursued by targeted improvements to the loss function, for example normalizing the output values.
For the learning-rate setting of the training, the learning rate is initially set to 0.01-0.001 and can be reduced gradually after a certain number of iteration rounds; by the end of training the learning rate has typically decayed by a factor of 100 or more. The specific learning rate can be adjusted according to actual conditions such as the size of the sample set and the capability of the hardware.
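A minimal step-decay sketch consistent with these figures; the decay interval and factor are assumed values:

```python
def learning_rate(epoch, lr0=0.01, decay_every=30, factor=0.1):
    """Start at 0.01 and divide by 10 every `decay_every` epochs, so the
    rate has dropped by a factor of 100 or more near the end of training."""
    return lr0 * factor ** (epoch // decay_every)

print(learning_rate(0), learning_rate(30), learning_rate(60))
# -> 0.01, then ~0.001, then ~0.0001
```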
In this embodiment, regression prediction of the three-dimensional structure information of the vehicle need not be limited to a single photograph. For example, regression prediction can be performed on the three-dimensional structure of the same vehicle using multiple images taken from different angles or with different marked key component areas, and the overall structure data of the vehicle with the highest confidence can then be selected. In addition, because regressing three-dimensional physical structure information from a two-dimensional image is a relatively difficult problem in deep learning, a pre-classification step can be considered in order to obtain more accurate overall structure data: if a sample vehicle image carries a vehicle brand mark or model information, it can be identified in advance and used as prior knowledge, improving training speed and precision.
In the above embodiments, convolutional neural network models are mainly used as the neural network models for detection, tracking and modeling. With the continuous development of machine learning, convolutional neural network models also keep evolving. In particular, different types of convolutional neural networks may be employed as model prototypes, depending on the function of the model to be trained and the data it is to process. Common convolutional neural networks for object detection include R-CNN, Fast R-CNN, Faster R-CNN, R-FCN, YOLO9000, SSD, NASNet, Mask R-CNN, and the like.
Fig. 3 is a schematic structural diagram of a vehicle travel information determination apparatus according to an embodiment of the present invention. This embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1. The embodiment is applied to a processor or an electronic device including a processor. The device specifically includes:
a first obtaining module 310 configured to obtain a collected road image frame around a current vehicle; the road image frames comprise vehicles to be detected around the current vehicle, and image acquisition equipment for acquiring the road image frames is located on the current vehicle;
a first determining module 320, configured to input the road image frame into a vehicle key component detection model, and determine a key component region of a vehicle to be detected in the road image frame according to a pre-trained model parameter by using a convolution layer and a full connection layer in the vehicle key component detection model;
a second obtaining module 330 configured to obtain the determined key component region of the previous road image frame;
an association module 340 configured to associate the same key component region of the same vehicle to be detected between the road image frame and the previous road image frame;
a second determining module 350 configured to determine vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to a position difference between the associated key component regions and a time difference between the road image frame and the previous road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 3, the convolutional layer comprises a first parameter, and the fully-connected layer comprises a second parameter; the device further comprises: a model training module (not shown) configured to train the vehicle critical component detection model using:
acquiring a sample road image and a standard key component area of a sample vehicle marked in the sample road image, and inputting the sample road image into the convolutional layer;
performing feature extraction on pixel points of the sample road image according to the first parameter through the convolution layer to obtain a sample feature vector of the sample road image; mapping the sample characteristic vector according to the second parameter through the full-connection layer to obtain a reference key component area of the sample vehicle in the sample road image;
comparing the reference key component area with the corresponding standard key component area to obtain a difference quantity;
when the difference is larger than a preset difference threshold value, correcting the first parameter and the second parameter according to the difference, and returning to the step of inputting the sample road image into the convolutional layer; and when the difference is smaller than the preset difference threshold value, determining that the training of the vehicle key component detection model is finished.
In another embodiment of the present invention, in the embodiment shown in fig. 3, the association module 340 is specifically configured to:
extracting feature information of key component areas in the road image frame and the previous road image frame; wherein the characteristic information comprises position information and/or pixel information;
respectively matching the feature information of the same type of key component areas of the road image frame and the previous road image frame;
and taking the successfully matched key component area as the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame for association.
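A minimal sketch of this matching step, assuming each key component region is a dict with a 'type' label and a 'box' in (x1, y1, x2, y2) pixel coordinates (hypothetical field names):

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def associate(regions_now, regions_prev, thresh=0.3):
    """Greedily pair same-type key component regions across two frames by
    positional overlap; matched pairs count as the same region."""
    pairs = []
    for r in regions_now:
        candidates = [p for p in regions_prev if p["type"] == r["type"]]
        if not candidates:
            continue
        best = max(candidates, key=lambda p: iou(r["box"], p["box"]))
        if iou(r["box"], best["box"]) > thresh:
            pairs.append((r, best))
    return pairs
```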
In another embodiment of the present invention, in the embodiment shown in fig. 3, the second determining module 350 is specifically configured to:
selecting corresponding feature points from the associated key component areas, and determining a transformation matrix H corresponding to the road image frame according to the following first preset formula:
X1 = H·X0, where

H = [[S, 0, offset_x],
     [0, S, offset_y],
     [0, 0, 1]];

wherein X1 and X0 are the selected corresponding feature points in homogeneous image coordinates, S in H is a degree of scaling between the associated key component regions, offset_y and offset_x are amounts of translation between the associated key component regions in the y-axis and x-axis directions respectively, and the x-axis and the y-axis are coordinate axes in an image coordinate system;
and determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the zoom level and the translation amount in the transformation matrix corresponding to the road image frame and the time difference between the road image frame and the previous road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 3, the first determining module 320 is further configured to:
before the road image frames are input into a vehicle key component detection model, when the time interval between the moment of collecting the road image frames and the first moment is greater than a preset time threshold value, the road image frames are input into the vehicle key component detection model; and the first moment is the moment when the collected road image frame is input into the vehicle component detection model last time.
In another embodiment of the present invention, in the embodiment shown in fig. 3, the apparatus further comprises:
a third determining module (not shown in the figures) configured to: when the time interval between the moment of collecting the road image frame and the first moment is not greater than a preset time threshold, acquire a determined key component area of the previous road image frame and a transformation matrix corresponding to the previous road image frame, the transformation matrix corresponding to the previous road image frame being determined according to the first preset formula;
and determine a key component area of the road image frame according to the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 3, the apparatus further comprises:
a fourth determining module (not shown in the figures) configured to determine vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component region in the road image frame after determining the key component region in the road image frame; wherein the target pixel point positions are: and the position of a grounding pixel point of a tire area of a vehicle to be detected in the road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 3, the fourth determining module, when determining the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area in the road image frame, includes:
determining vehicle distance information D of the vehicle to be detected relative to the current vehicle according to a second preset formula as follows:
D = (h_size · f) / (y − foe_y)

wherein h_size is the height of the image acquisition device above the ground, f is the focal length of the light-sensing element in the image acquisition device, y is the longitudinal coordinate of the target pixel point, and foe_y is a predetermined longitudinal coordinate of an image vanishing point of the image acquisition device.
In another embodiment of the present invention, in the embodiment shown in fig. 3, the apparatus further comprises:
a fifth determining module (not shown in the figures) configured to, after determining the key component area of the road image frame, input the key component area of the road image frame into a vehicle structure detection model, and determine, by the vehicle structure detection model, overall structure data of the vehicle to be detected corresponding to the key component area of the road image frame according to pre-trained model parameters; and determining the relative position of the vehicle to be detected relative to the current vehicle according to the vehicle distance information of the vehicle to be detected relative to the current vehicle and the overall structure data.
The above device embodiment corresponds to the method embodiment and has the same technical effects as the method embodiment; for a specific description, refer to the method embodiment section, which is not repeated here.
Fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. The vehicle-mounted terminal includes: a processor 410 and an image acquisition device 420; the image capture device 420 is located in the current vehicle;
an image collecting device 420 collecting road image frames around the current vehicle; wherein the road image frame includes a vehicle to be detected around the current vehicle;
the processor 410 acquires the road image frames acquired by the image acquisition device 420, inputs the road image frames into a vehicle key component detection model, and determines a key component area of a vehicle to be detected in the road image frames according to pre-trained model parameters by using a convolution layer and a full-link layer in the vehicle key component detection model; acquiring a determined key component area of the previous road image frame; associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame; and determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the related key component areas and the time difference between the road image frame and the previous road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 4, the convolutional layer comprises a first parameter, and the fully-connected layer comprises a second parameter; the vehicle key component detection model is trained and completed by adopting the following operations:
acquiring a sample road image and a standard key component area of a sample vehicle marked in the sample road image, and inputting the sample road image into the convolutional layer;
performing feature extraction on pixel points of the sample road image according to the first parameter through the convolution layer to obtain a sample feature vector of the sample road image; mapping the sample characteristic vector according to the second parameter through the full-connection layer to obtain a reference key component area of the sample vehicle in the sample road image;
comparing the reference key component area with the corresponding standard key component area to obtain a difference quantity;
when the difference is larger than a preset difference threshold value, correcting the first parameter and the second parameter according to the difference, and returning to the step of inputting the sample road image into the convolutional layer; and when the difference is smaller than the preset difference threshold value, determining that the training of the vehicle key component detection model is finished.
In another embodiment of the present invention, in the embodiment shown in fig. 4, the processor 410, when associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame, includes:
extracting feature information of key component areas in the road image frame and the previous road image frame; wherein the characteristic information comprises position information and/or pixel information;
respectively matching the feature information of the same type of key component areas of the road image frame and the previous road image frame;
and taking the successfully matched key component area as the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame for association.
In another embodiment of the present invention, in the embodiment shown in fig. 4, the processor 410, when determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the associated key component areas and the time difference between the road image frame and the previous road image frame, includes:
selecting corresponding feature points from the associated key component areas, and determining a transformation matrix H corresponding to the road image frame according to the following first preset formula:
X1 = H·X0, where

H = [[S, 0, offset_x],
     [0, S, offset_y],
     [0, 0, 1]];

wherein X1 and X0 are the selected corresponding feature points in homogeneous image coordinates, S in H is a degree of scaling between the associated key component regions, offset_y and offset_x are amounts of translation between the associated key component regions in the y-axis and x-axis directions respectively, and the x-axis and the y-axis are coordinate axes in an image coordinate system;
and determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the zoom level and the translation amount in the transformation matrix corresponding to the road image frame and the time difference between the road image frame and the previous road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 4, the processor 410 further inputs the road image frames into a vehicle key component detection model when a time interval between a time when the road image frames are collected and a first time is greater than a preset time threshold before inputting the road image frames into the vehicle key component detection model; and the first moment is the moment when the collected road image frame is input into the vehicle component detection model last time.
In another embodiment of the present invention, in the embodiment shown in fig. 4, the processor 410 further:
when the time interval between the moment of collecting the road image frame and the first moment is not greater than a preset time threshold, acquiring a determined key component area of the previous road image frame and a transformation matrix corresponding to the previous road image frame, the transformation matrix corresponding to the previous road image frame being determined according to the first preset formula;
and determining a key component area of the road image frame according to the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 4, the processor 410, after determining the key component region in the road image frame, determines the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component region in the road image frame; wherein the target pixel point positions are: and the position of a grounding pixel point of a tire area of a vehicle to be detected in the road image frame.
In another embodiment of the present invention, in the embodiment shown in fig. 4, when the processor 410 determines the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area in the road image frame, the method includes:
determining vehicle distance information D of the vehicle to be detected relative to the current vehicle according to a second preset formula as follows:
D = (h_size · f) / (y − foe_y)

wherein h_size is the height of the image acquisition device above the ground, f is the focal length of the light-sensing element in the image acquisition device, y is the longitudinal coordinate of the target pixel point, and foe_y is a predetermined longitudinal coordinate of an image vanishing point of the image acquisition device.
In another embodiment of the present invention, in the embodiment shown in fig. 4, after determining the key component area of the road image frame, the processor 410 further inputs the key component area of the road image frame into a vehicle structure detection model, and determines the overall structure data of the vehicle to be detected corresponding to the key component area of the road image frame according to a pre-trained model parameter by using the vehicle structure detection model; and determining the relative position of the vehicle to be detected relative to the current vehicle according to the vehicle distance information of the vehicle to be detected relative to the current vehicle and the overall structure data.
The vehicle-mounted terminal may be a smart phone, a non-smart phone, a tablet computer, a laptop personal computer, a desktop personal computer, a minicomputer, a midrange computer, a mainframe computer or the like, whether existing or developed in the future. This embodiment does not limit the implementation of the vehicle-mounted terminal.
The terminal embodiment and the method embodiment shown in fig. 1 are embodiments based on the same inventive concept, and the relevant points can be referred to each other. The terminal embodiment corresponds to the method embodiment, and has the same technical effect as the method embodiment, and for the specific description, reference is made to the method embodiment.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A vehicle travel information determination method characterized by comprising:
acquiring a collected road image frame around a current vehicle; the road image frames comprise vehicles to be detected around the current vehicle, and image acquisition equipment for acquiring the road image frames is located on the current vehicle;
inputting the road image frames into a vehicle key component detection model, and determining key component areas of the vehicles to be detected in the road image frames by a convolution layer and a full-connection layer in the vehicle key component detection model according to pre-trained model parameters;
acquiring a determined key component area of the previous road image frame;
associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame;
and determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the related key component areas and the time difference between the road image frame and the previous road image frame.
2. The method of claim 1, wherein the convolutional layer comprises a first parameter, the fully-connected layer comprises a second parameter; the vehicle key component detection model is trained and completed in the following way:
acquiring a sample road image and a standard key component area of a sample vehicle marked in the sample road image, and inputting the sample road image into the convolutional layer;
performing feature extraction on pixel points of the sample road image according to the first parameter through the convolution layer to obtain a sample feature vector of the sample road image; mapping the sample characteristic vector according to the second parameter through the full-connection layer to obtain a reference key component area of the sample vehicle in the sample road image;
comparing the reference key component area with the corresponding standard key component area to obtain a difference quantity;
when the difference is larger than a preset difference threshold value, correcting the first parameter and the second parameter according to the difference, and returning to execute the step of inputting the sample road image into the convolutional layer; and when the difference is smaller than the preset difference threshold value, determining that the training of the vehicle key component detection model is finished.
3. The method of claim 1, wherein the step of associating the same key component area of the same vehicle under inspection between the road image frame and the previous road image frame comprises:
extracting feature information of key component areas in the road image frame and the previous road image frame; wherein the characteristic information comprises position information and/or pixel information;
respectively matching the feature information of the same type of key component areas of the road image frame and the previous road image frame;
and taking the successfully matched key component area as the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame for association.
4. The method of claim 1, wherein the step of determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle based on a difference in position between the associated key component regions and a difference in time between the road image frame and the previous road image frame comprises:
selecting corresponding feature points from the associated key component areas, and determining a transformation matrix H corresponding to the road image frame according to the following first preset formula:
X1 = H·X0, where

H = [[S, 0, offset_x],
     [0, S, offset_y],
     [0, 0, 1]];

wherein X1 and X0 are the selected corresponding feature points in homogeneous image coordinates, S in H is a degree of scaling between the associated key component regions, offset_y and offset_x are amounts of translation between the associated key component regions in the y-axis and x-axis directions respectively, and the x-axis and the y-axis are coordinate axes in an image coordinate system;
and determining the vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the zoom level and the translation amount in the transformation matrix corresponding to the road image frame and the time difference between the road image frame and the previous road image frame.
5. The method of claim 4, wherein prior to inputting the road image frames into a vehicle critical component detection model, the method further comprises:
when the time interval between the moment of collecting the road image frame and the first moment is larger than a preset time threshold, inputting the road image frame into a vehicle key component detection model; and the first moment is the moment when the collected road image frame is input into the vehicle component detection model last time.
6. The method of claim 5, wherein when a time interval between the time of acquiring the road image frame and the first time is not greater than a preset time threshold, the method further comprises:
acquiring a determined key component area of a previous road image frame and a transformation matrix corresponding to the previous road image frame, the transformation matrix corresponding to the previous road image frame being determined according to the first preset formula;
and determining a key component area of the road image frame according to the key component area of the previous road image frame and the transformation matrix corresponding to the previous road image frame.
7. The method of claim 1 or 6, wherein after determining a critical component area in the road image frame, the method further comprises:
determining vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area of the road image frame; wherein the target pixel point positions are: the position of a grounding pixel point of a tire area of a vehicle to be detected in the road image frame;
the step of determining the vehicle distance information of the vehicle to be detected relative to the current vehicle according to the target pixel point position of the key component area of the road image frame comprises the following steps:
determining vehicle distance information D of the vehicle to be detected relative to the current vehicle according to a second preset formula as follows:
D = (h_size · f) / (y − foe_y)

wherein h_size is the height of the image acquisition device above the ground, f is the focal length of the light-sensing element in the image acquisition device, y is the longitudinal coordinate of the target pixel point, and foe_y is a predetermined longitudinal coordinate of an image vanishing point of the image acquisition device.
8. The method of claim 7, wherein after determining key component regions of the road image frame, the method further comprises:
inputting the key component area of the road image frame into a vehicle structure detection model, and determining the overall structure data of the vehicle to be detected corresponding to the key component area of the road image frame by the vehicle structure detection model according to pre-trained model parameters;
and determining the relative position of the vehicle to be detected relative to the current vehicle according to the vehicle distance information of the vehicle to be detected relative to the current vehicle and the overall structure data.
9. A vehicle travel information determination device characterized by comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is configured to acquire acquired road image frames around a current vehicle; the road image frames comprise vehicles to be detected around the current vehicle, and image acquisition equipment for acquiring the road image frames is located on the current vehicle;
the first determination module is configured to input the road image frames into a vehicle key component detection model, and determine key component areas of the vehicles to be detected in the road image frames according to pre-trained model parameters by a convolution layer and a full connection layer in the vehicle key component detection model;
a second acquisition module configured to acquire the determined key component region of the previous road image frame;
the association module is configured to associate the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame;
a second determination module configured to determine vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to a position difference between the associated key component regions and a time difference between the road image frame and the previous road image frame.
10. A vehicle-mounted terminal characterized by comprising: a processor and an image acquisition device; the image acquisition equipment is positioned on the current vehicle;
the image acquisition equipment acquires road image frames around the current vehicle; wherein the road image frame includes a vehicle to be detected around the current vehicle;
the processor is used for acquiring the road image frames acquired by the image acquisition equipment, inputting the road image frames into a vehicle key component detection model, and determining a key component area of a vehicle to be detected in the road image frames by a convolution layer and a full-link layer in the vehicle key component detection model according to pre-trained model parameters; acquiring a determined key component area of the previous road image frame; associating the same key component area of the same vehicle to be detected between the road image frame and the previous road image frame; and determining vehicle speed information of the corresponding vehicle to be detected relative to the current vehicle according to the position difference between the related key component areas and the time difference between the road image frame and the previous road image frame.
CN201910224888.XA 2019-03-24 2019-03-24 Vehicle driving information determination method and device and vehicle-mounted terminal Active CN111738032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910224888.XA CN111738032B (en) 2019-03-24 2019-03-24 Vehicle driving information determination method and device and vehicle-mounted terminal

Publications (2)

Publication Number Publication Date
CN111738032A true CN111738032A (en) 2020-10-02
CN111738032B CN111738032B (en) 2022-06-24

Family

ID=72645774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910224888.XA Active CN111738032B (en) 2019-03-24 2019-03-24 Vehicle driving information determination method and device and vehicle-mounted terminal

Country Status (1)

Country Link
CN (1) CN111738032B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809443A (en) * 2015-05-05 2015-07-29 上海交通大学 Convolutional neural network-based license plate detection method and system
CN105551264A (en) * 2015-12-25 2016-05-04 中国科学院上海高等研究院 Speed detection method based on license plate characteristic matching

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232326A (en) * 2020-12-15 2021-01-15 北京每日优鲜电子商务有限公司 Driving information generation method and device, electronic equipment and computer readable medium
CN112883871A (en) * 2021-02-19 2021-06-01 北京三快在线科技有限公司 Model training and unmanned vehicle motion strategy determining method and device
CN113112866A (en) * 2021-04-14 2021-07-13 深圳市旗扬特种装备技术工程有限公司 Intelligent traffic early warning method and intelligent traffic early warning system
CN113191353A (en) * 2021-04-15 2021-07-30 华北电力大学扬中智能电气研究中心 Vehicle speed determination method, device, equipment and medium
WO2022217630A1 (en) * 2021-04-15 2022-10-20 华北电力大学扬中智能电气研究中心 Vehicle speed determination method and apparatus, device, and medium
WO2023020004A1 (en) * 2021-08-16 2023-02-23 长安大学 Vehicle distance detection method and system, and device and medium

Also Published As

Publication number Publication date
CN111738032B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN111738032B (en) Vehicle driving information determination method and device and vehicle-mounted terminal
Dhiman et al. Pothole detection using computer vision and learning
EP3735675B1 (en) Image annotation
Zhao et al. Detection, tracking, and geolocation of moving vehicle from uav using monocular camera
US11035958B2 (en) Systems and methods for correcting a high-definition map based on detection of obstructing objects
WO2020000137A1 (en) Integrated sensor calibration in natural scenes
CN111738033B (en) Vehicle driving information determination method and device based on plane segmentation and vehicle-mounted terminal
CN110047108B (en) Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
CN109472828B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
Parra et al. Robust visual odometry for vehicle localization in urban environments
KR20190030474A (en) Method and apparatus of calculating depth map based on reliability
CN111169468A (en) Automatic parking system and method
US20200279395A1 (en) Method and system for enhanced sensing capabilities for vehicles
CN111098850A (en) Automatic parking auxiliary system and automatic parking method
CN116469079A (en) Automatic driving BEV task learning method and related device
CN113012215A (en) Method, system and equipment for space positioning
CN112598743B (en) Pose estimation method and related device for monocular vision image
CN110909620A (en) Vehicle detection method and device, electronic equipment and storage medium
CN112837404B (en) Method and device for constructing three-dimensional information of planar object
García-García et al. 3D visual odometry for road vehicles
CN116151320A (en) Visual odometer method and device for resisting dynamic target interference
Burlacu et al. Stereo vision based environment analysis and perception for autonomous driving applications
JP2018116147A (en) Map creation device, map creation method and map creation computer program
García-García et al. 3D visual odometry for GPS navigation assistance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211123

Address after: 215100 floor 23, Tiancheng Times Business Plaza, No. 58, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou, Jiangsu Province

Applicant after: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

Address before: Room 601-a32, Tiancheng information building, No. 88, South Tiancheng Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

GR01 Patent grant