CN113674358A - Method and device for calibrating radar vision equipment, computing equipment and storage medium - Google Patents

Method and device for calibrating radar vision equipment, computing equipment and storage medium Download PDF

Info

Publication number
CN113674358A
Authority
CN
China
Prior art keywords: vertex, position information, vertexes, lane, target image
Legal status: Granted
Application number: CN202110906939.4A
Other languages: Chinese (zh)
Other versions: CN113674358B (en)
Inventor
陈向阳
李冬冬
李乾坤
殷俊
王凯
Current Assignee: Zhejiang Dahua Technology Co Ltd
Original Assignee: Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202110906939.4A
Publication of CN113674358A
Application granted
Publication of CN113674358B
Status: Active

Classifications

    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T 7/11 Region-based segmentation
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30244 Camera pose
    • G06T 2207/30256 Lane; Road marking

Abstract

The application provides a calibration method and apparatus for a radar vision device, a computing device, and a storage medium. The method comprises: performing lane line detection on a target image of a lane collected by the camera, so as to detect the dotted lane line(s) and the solid lane lines on the two sides in the target image; detecting the vertexes of the rectangular blocks on the dotted lane line(s), and dividing the detected vertexes into a plurality of vertex combinations according to their position information, where the vertexes in each combination can form a straight line perpendicular to the two solid lane lines; and determining at least two straight lines based on at least two vertex combinations, and acquiring the position information of the intersections of the at least two straight lines with the two solid lane lines. The polygon formed by the intersections thus corresponds to a rectangle in the real world, so the conversion relation between the target image and the top view corresponding to the target image can be determined accurately from the position information of the intersections, and the camera can be calibrated accurately.

Description

Method and device for calibrating radar vision equipment, computing equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for calibrating a radar vision device, a computing device, and a storage medium.
Background
At present, monitoring technology based on radar-video all-in-one machines receives more and more attention in the security field. Radar obtains the measurement information of a moving target (spatial position and motion speed) with a high detection probability, but cannot achieve a high target identification rate; video or images provide target identification information with high accuracy, but the motion information and spatial position information of the target are not easy to obtain from them. If radar and video data are fused effectively, high target identification accuracy, motion information, and spatial position information can all be obtained, and for this reason the radar and video integrated machine (radar vision device for short) is widely applied.
In practical applications, the radar vision device is usually installed in places where monitoring is needed, such as parks, construction sites, intersections, roads, and garden gates. After the radar vision device is installed for the first time, the camera and the radar of the device need to be calibrated to obtain the conversion relation between the camera coordinate system and the radar coordinate system. In this calibration process, the camera is calibrated first to obtain the conversion relation between the image shot by the camera and the top view corresponding to that image.
At present, when the camera is calibrated, a calibration engineer usually selects 4 points from an image shot by the camera to construct a polygon, and the polygon is required to correspond as closely as possible to a rectangle in the real world. However, because of perspective transformation, the parallel and perpendicular relations of the real world are not preserved in the image coordinate system. This makes it difficult to select 4 points that meet the requirement, the selected points easily fail to meet it, and the calibration then becomes inaccurate.
Disclosure of Invention
The application provides a calibration method and device of a radar vision device, a computing device and a storage medium, which are used for solving the problem that the calibration of a camera of the radar vision device in the related art is inaccurate.
The embodiment of the application provides the following specific technical scheme:
in a first aspect, an embodiment of the present application provides a calibration method for a radar vision device, where the radar vision device includes a camera and is installed in a place including a lane, where the lane is provided with at least one dotted lane line and two solid lane lines located on two sides of the at least one dotted lane line, and the method includes:
carrying out lane line detection on a target image which is acquired by the camera and contains the lane to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image;
respectively detecting vertexes of all rectangular blocks on the at least one dotted lane line to obtain position information of a plurality of vertexes corresponding to the at least one dotted lane line;
grouping the multiple vertexes according to the position information of the multiple vertexes to obtain multiple vertex combinations; each vertex in each vertex combination is positioned on the same straight line in a world coordinate system, and the same straight line is perpendicular to the two solid lane lines;
determining at least two straight lines based on at least two vertex combinations in the vertex combinations, and acquiring position information of intersection points of the at least two straight lines and the two solid lane lines respectively;
and determining the conversion relation between the target image and the top view corresponding to the target image according to the obtained position information of each intersection point.
In some exemplary embodiments, the grouping the multiple vertices according to the location information of the multiple vertices to obtain multiple vertex combinations includes:
sorting the plurality of vertexes in descending order of their ordinates to obtain a plurality of sorted vertexes; wherein the origin of coordinates of the target image is located at the upper left corner;
and repeatedly executing the following operations on the sorted vertexes until the number of obtained vertex combinations reaches a preset number:
sequentially comparing the ordinate of the first vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the first vertex and the ordinate of the Nth vertex is not greater than a preset threshold while the difference between the ordinate of the first vertex and the ordinate of the (N + 1)th vertex is greater than the preset threshold, and taking the first N vertexes as one vertex combination; wherein N is an integer greater than 0;
sequentially comparing the ordinate of the (N + 1)th vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the (N + 1)th vertex and the ordinate of the Mth vertex is not greater than the preset threshold while the difference between the ordinate of the (N + 1)th vertex and the ordinate of the (M + 1)th vertex is greater than the preset threshold, and taking the (N + 1)th to the Mth vertexes as one vertex combination; wherein M is an integer greater than 0, and M > N + 1.
In some exemplary embodiments, said determining at least two straight lines based on at least two vertex combinations of said plurality of vertex combinations comprises:
selecting at least two vertex combinations from the plurality of vertex combinations;
and fitting the at least two vertex combinations into corresponding straight lines respectively according to the position information of each target vertex corresponding to the at least two vertex combinations respectively.
In some exemplary embodiments, said selecting at least two vertex combinations from said plurality of vertex combinations comprises:
selecting a first determined vertex combination and a last determined vertex combination from the plurality of vertex combinations.
In some exemplary embodiments, the determining, according to the obtained position information of each intersection, a conversion relationship between the target image and a top view corresponding to the target image includes:
determining a polygonal area formed by each intersection point based on the obtained position information of each intersection point;
constructing a top view corresponding to the polygonal area, and acquiring position information of each reference point in the top view, which corresponds to each intersection point one by one;
and determining a corresponding homography matrix according to the position information of each intersection point and the position information of each reference point, and taking the homography matrix as a conversion relation between the target image and a top view corresponding to the target image.
In some exemplary embodiments, the performing lane line detection on the target image including the lane collected by the camera to obtain the position information of the at least one dashed lane line and the position information of the two solid lane lines in the target image includes:
detecting the lane lines in the target image by using a trained instance segmentation model to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image;
the detecting the vertexes of the rectangular blocks on the at least one dotted lane line respectively to obtain the position information of the vertexes corresponding to the at least one dotted lane line includes:
and respectively detecting the vertexes of the rectangular blocks on the at least one dotted lane line by adopting a trained key point detection model to obtain the position information of a plurality of vertexes corresponding to the at least one dotted lane line.
In a second aspect, an embodiment of the present application provides a calibration apparatus for a radar vision device, where the radar vision device includes a camera and is installed in a place including a lane, where at least one dotted lane line and two solid lane lines located on two sides of the at least one dotted lane line are disposed on the lane, the apparatus includes:
the lane line detection module is used for detecting lane lines of a target image which is acquired by the camera and contains the lane to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image;
the key point detection module is used for respectively detecting the vertexes of the rectangular blocks on the at least one dotted lane line to obtain the position information of a plurality of vertexes corresponding to the at least one dotted lane line;
the grouping module is used for grouping the vertexes according to the position information of the vertexes to obtain a plurality of vertex combinations; each vertex in each vertex combination is positioned on the same straight line in a world coordinate system, and the same straight line is perpendicular to the two solid lane lines;
the straight line determining module is used for determining at least two straight lines based on at least two vertex combinations in the vertex combinations and acquiring the position information of intersection points of the at least two straight lines and the two solid line lane lines;
and the conversion relation determining module is used for determining the conversion relation between the target image and the top view corresponding to the target image according to the obtained position information of each intersection point.
In some exemplary embodiments, the grouping module is further configured to:
sorting the plurality of vertexes in descending order of their ordinates to obtain a plurality of sorted vertexes; wherein the origin of coordinates of the target image is located at the upper left corner;
and repeatedly executing the following operations on the sorted vertexes until the number of obtained vertex combinations reaches a preset number:
sequentially comparing the ordinate of the first vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the first vertex and the ordinate of the Nth vertex is not greater than a preset threshold while the difference between the ordinate of the first vertex and the ordinate of the (N + 1)th vertex is greater than the preset threshold, and taking the first N vertexes as one vertex combination; wherein N is an integer greater than 0;
sequentially comparing the ordinate of the (N + 1)th vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the (N + 1)th vertex and the ordinate of the Mth vertex is not greater than the preset threshold while the difference between the ordinate of the (N + 1)th vertex and the ordinate of the (M + 1)th vertex is greater than the preset threshold, and taking the (N + 1)th to the Mth vertexes as one vertex combination; wherein M is an integer greater than 0, and M > N + 1.
In some exemplary embodiments, the straight line determination module is further configured to:
selecting at least two vertex combinations from the plurality of vertex combinations;
and fitting the at least two vertex combinations into corresponding straight lines respectively according to the position information of each target vertex corresponding to the at least two vertex combinations respectively.
In some exemplary embodiments, the straight line determination module is further configured to:
selecting a first determined vertex combination and a last determined vertex combination from the plurality of vertex combinations.
In some exemplary embodiments, the conversion relation determination module is further configured to:
determining a polygonal area formed by each intersection point based on the obtained position information of each intersection point;
constructing a top view corresponding to the polygonal area, and acquiring position information of each reference point in the top view, which corresponds to each intersection point one by one;
and determining a corresponding homography matrix according to the position information of each intersection point and the position information of each reference point, and taking the homography matrix as a conversion relation between the target image and a top view corresponding to the target image.
In some exemplary embodiments, the lane line detection module is further configured to:
detecting the lane lines in the target image by using a trained instance segmentation model to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image;
the key point detection module is further configured to:
and respectively detecting the vertexes of the rectangular blocks on the at least one dotted lane line by adopting a trained key point detection model to obtain the position information of a plurality of vertexes corresponding to the at least one dotted lane line.
In a third aspect, an embodiment of the present application provides a computing device, which includes a processor and a memory, where the memory stores program code, and when the program code is executed by the processor, the processor is caused to execute the steps of any one of the methods of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method of any one of the first aspect.
The embodiment of the application has at least the following beneficial effects:
According to the calibration method of the radar vision equipment, after a target image containing a lane is collected by the camera, lane line detection is performed on the target image to detect the dotted lane line(s) and the solid lane lines on the two sides in the target image; the vertexes of the rectangular blocks on the dotted lane line(s) are then detected to obtain the position information of a plurality of vertexes; the vertexes are grouped according to their position information to obtain a plurality of vertex combinations, where the vertexes in each combination can form a straight line perpendicular to the two solid lane lines; at least two straight lines can thus be determined from at least two of the vertex combinations, and the position information of the intersections of these straight lines with the two solid lane lines is obtained. The polygon formed by the intersections therefore corresponds to a rectangle in the real world, so the conversion relation between the target image and the top view corresponding to the target image can be determined accurately from the position information of the intersections, and the camera can be calibrated accurately.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a calibration method for a radar vision device provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a target image containing a lane provided in an embodiment of the present application;
Fig. 3 is a top view corresponding to a target image provided in an embodiment of the present application;
Fig. 4 is a flowchart of another calibration method for a radar vision device provided in an embodiment of the present application;
Fig. 5 is a structural block diagram of a calibration apparatus for a radar vision device provided in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a computing device provided in an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
At present, when the camera of a radar vision device is calibrated, a calibration engineer usually selects 4 points from an image shot by the camera to construct a polygon, and the polygon is required to correspond as closely as possible to a rectangle in the real world. However, because of perspective transformation, the parallel and perpendicular relations of the real world are not preserved in the image coordinate system; this makes it difficult to select 4 points that meet the requirement, the selected points easily fail to meet it, and the calibration then becomes inaccurate.
In view of this, embodiments of the present application provide a calibration method and apparatus for a radar vision device, a computing device, and a storage medium. Lane line detection is performed on a target image of a lane collected by the camera to determine the dotted lane line(s) and the solid lane lines on the two sides in the target image, and intersection points are then determined based on them, so that the polygon formed by the intersection points corresponds to a rectangle in the real world. The conversion relation between the target image and the top view corresponding to the target image can then be determined accurately from the position information of the intersection points, and the camera can thus be calibrated accurately.
The following describes in detail the calibration method of the radar vision device provided by the present application with reference to the accompanying drawings and specific embodiments.
The radar vision device comprises a camera and a radar, and is installed in a place containing a lane to monitor pedestrians, vehicles, and the like on the lane. The lane is provided with at least one dotted lane line and two solid lane lines located on the two sides of the at least one dotted lane line, where a dotted lane line is a dashed line in the lane boundary and a solid lane line is a solid line in the lane boundary. Generally, one dotted lane line may be provided when the lane is a two-lane road, and two dotted lane lines when it is a three-lane road; the number of dotted lane lines is determined by the specific situation of the lane and is not limited here.
Fig. 1 illustrates a calibration method for a radar vision device according to an embodiment of the present application. The method may be executed by a processor of the radar vision device, or by a control device connected to the radar vision device. As shown in Fig. 1, the calibration method provided in the embodiment of the present application includes the following steps:
step S101, carrying out lane line detection on a target image which is acquired by a camera and contains a lane to obtain the position information of at least one dotted lane line and the position information of two solid lane lines in the target image.
In the embodiment of the present application, after the radar vision device is installed, a target image of the lane shot by the camera is obtained during the calibration of the camera. The target image contains the dotted lane line(s) and the solid lane lines on the two sides of the dotted lane line(s); the number of dotted lane lines is determined by the specific situation of the lane, and the solid lane lines on the two sides may be the boundary lane lines.
In some embodiments, a trained instance segmentation model may be used to detect the lane lines in the target image; the instance segmentation model can predict various lane markings in the lane, including lane lines, guide arrows, and the like. Accordingly, step S101 may include the following step:
detecting the lane lines in the target image by using the trained instance segmentation model to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image.
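By way of illustration only (this sketch and its function names are not part of the patent text), the position of a solid lane line could be recovered from such a segmentation output by fitting a straight line through the pixels of its binary class mask; since lane lines are close to vertical in a road image, x is regressed on y to keep the fit well conditioned:

```python
import numpy as np

def fit_boundary_line(mask: np.ndarray) -> tuple[float, float]:
    """Fit x = a*y + b through the foreground pixels of a binary lane-line mask."""
    ys, xs = np.nonzero(mask)      # pixel coordinates of the lane-line class
    a, b = np.polyfit(ys, xs, 1)   # least-squares line through the mask pixels
    return a, b
```

The pair (a, b) then serves as one possible form of the position information of that lane line in the target image.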
And step S102, respectively detecting the vertexes of the rectangular blocks on the at least one dotted lane line to obtain the position information of a plurality of vertexes corresponding to the at least one dotted lane line.
In this step, a key point detection model based on deep learning, for example HigherHRNet, may be used to detect the vertexes of the rectangular blocks on each dotted lane line. Accordingly, step S102 may include the following step:
and respectively detecting the vertexes of the rectangular blocks on the at least one dotted lane line by adopting the trained key point detection model to obtain the position information of a plurality of vertexes corresponding to the at least one dotted lane line.
Specifically, each dotted lane line includes a plurality of rectangular blocks, and each rectangular block has 4 vertexes. For example, when the number of dotted lane lines is 2, detecting the vertexes of the rectangular blocks on the two dotted lane lines respectively yields the position information of a plurality of vertexes.
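As a hedged illustration of the read-out step (the patent does not prescribe it), a keypoint model such as HigherHRNet outputs heatmaps, and vertex coordinates are usually taken as local maxima of a heatmap. Assuming a single-channel heatmap with values in [0, 1], one common extraction looks like:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def heatmap_vertices(heatmap: np.ndarray, thresh: float = 0.3) -> np.ndarray:
    """Return the (x, y) pixel coordinates of heatmap peaks above `thresh`."""
    # a pixel is a peak if it equals the maximum of its 3x3 neighbourhood
    peaks = (heatmap == maximum_filter(heatmap, size=3)) & (heatmap > thresh)
    ys, xs = np.nonzero(peaks)
    return np.stack([xs, ys], axis=1)  # one row per detected vertex
```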
Step S103, grouping the vertexes according to the position information of the vertexes to obtain a plurality of vertex combinations; and all the vertexes in each vertex combination are positioned on the same straight line in the world coordinate system, and the same straight line is perpendicular to the two solid line lane lines.
The position information of each vertex comprises the pixel coordinates of the vertex in the target image, a plurality of vertexes on the same straight line can be determined according to the vertical coordinates of the vertexes, and the determined vertexes are divided into a group to obtain a plurality of vertex combinations.
In practical applications, a dotted lane line is composed of a plurality of rectangular blocks. When the number n of dotted lane lines is 1, the lane is a two-lane road and the dotted lane line is located at the center of the lane; the 2 vertexes of the upper boundary or of the lower boundary of one rectangular block form a group, and fitting these 2 vertexes yields a straight line perpendicular to the solid lane lines on the two sides. When the number n of dotted lane lines is 2, the lane is a three-lane road and the rectangular blocks on the two dotted lane lines appear in pairs; the 4 vertexes of the upper boundary or of the lower boundary of a pair of rectangular blocks form a group, and fitting these 4 vertexes yields a straight line perpendicular to the solid lane lines on the two sides. In general, when the number of dotted lane lines is n, every 2n such vertexes form a group, and fitting the 2n vertexes in a group yields a straight line perpendicular to the solid lane lines on the two sides.
It should be noted that, when detecting the vertices of the rectangular blocks on the dashed lane lines, if vertex missing occurs, the number of vertices in a certain vertex combination may be less than 2 n. The following embodiments further describe the grouping of multiple vertices.
And step S104, determining at least two straight lines based on at least two vertex combinations in the vertex combinations, and acquiring the position information of the intersection points of the at least two straight lines and the two solid lane lines.
In this step, at least two vertex combinations may be selected from the plurality of vertex combinations, each vertex combination includes a plurality of vertices, for example, 2 or 4 vertices, and the vertex combinations are fitted to corresponding straight lines according to the position information of each vertex in each vertex combination, so that at least two straight lines corresponding to at least two vertex combinations may be obtained. In a specific implementation, based on the position information of each vertex in each vertex combination, a least square method may be used to fit a plurality of vertices in the vertex combination to a straight line.
Optionally, two vertex combinations are selected from the multiple vertex combinations, two straight lines corresponding to the two vertex combinations are determined, and position information of intersections of the two straight lines and the two solid lane lines is obtained to obtain position information of 4 intersections. As shown in fig. 2, taking 2 dashed lane lines as an example, after two straight lines corresponding to two vertex combinations are determined, intersection points A, B, C, D between the two straight lines and the two solid lane lines on both sides can be obtained.
Each solid lane line includes two boundary lines, and the straight line corresponding to each boundary line may be determined from the coordinates of the corresponding pixel points on that boundary line. The intersection of a fitted straight line with a solid lane line may be taken as its intersection with one of these two boundary lines; in Fig. 2, for example, the intersections of the two straight lines with the boundary line of each solid lane line that is nearer to the dotted lane lines are obtained.
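The fitting and intersection just described can be sketched as follows. This is a minimal illustration under the assumption that the line through a vertex combination is close to horizontal in the image (so y is regressed on x), while a solid-lane boundary line is close to vertical and parameterized as x = a*y + b, as in the earlier mask-fitting sketch:

```python
import numpy as np

def fit_group_line(group: np.ndarray) -> tuple[float, float]:
    """Least-squares line y = c*x + d through the (x, y) vertexes of one combination."""
    c, d = np.polyfit(group[:, 0], group[:, 1], 1)
    return c, d

def intersect(c: float, d: float, a: float, b: float) -> tuple[float, float]:
    """Intersection of y = c*x + d with a boundary line x = a*y + b."""
    y = (c * b + d) / (1.0 - c * a)
    x = a * y + b
    return x, y
```

Applying `intersect` to the two fitted lines and the two inner boundary lines yields four intersection points such as A, B, C, D in Fig. 2.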
And step S105, determining the conversion relation between the target image and the top view corresponding to the target image according to the obtained position information of each intersection.
In some possible embodiments, step S105 may include the steps of:
(1) based on the obtained position information of each intersection, a polygon region formed by each intersection is determined.
For example, in the target image shown in fig. 2, 4 intersections A, B, C, D are taken as an example, and a polygonal area ABCD formed by the 4 intersections is a trapezoid.
(2) And constructing a top view corresponding to the polygonal area, and acquiring the position information of each reference point in the top view, which corresponds to each intersection point one by one.
For example, constructing the top view of the polygonal area ABCD as described above may result in a rectangle a 'B' C 'D' as shown in fig. 3, where a ', B', C ', and D' are respectively in one-to-one correspondence with A, B, C, D, i.e., are the reference points corresponding to the intersection points. In the top view, the position information of a ', B', C ', D' can be determined. Optionally, the size of the top view may be the same as or different from the size of the target image, and may be specifically set according to needs, which is not limited herein.
(3) And determining a corresponding homography matrix according to the position information of each intersection point and the position information of each reference point, and taking the homography matrix as a conversion relation between the target image and the top view corresponding to the target image.
The homography matrix describes the transformation relationship between two images for points lying on a common plane, and it can be solved from the position information of the intersection points and of the reference points. For example, let the pixel coordinate of an intersection point in the target image be $(u_i, v_i)$ and the pixel coordinate of the corresponding reference point in the top view be $(x_i, y_i)$, where $i = 1, 2, 3, \ldots$; the relationship between an intersection point and its reference point can then be described by the following equation (1).
$$\begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix} \qquad (1)$$
Further transformation of equation (1) can result in:
$x_i = a_{11} u_i + a_{12} v_i + a_{13}$ (2)
$y_i = a_{21} u_i + a_{22} v_i + a_{23}$ (3)
$1 = a_{31} u_i + a_{32} v_i + a_{33}$ (4)
Substituting the obtained position information of the intersection points and of the reference points into equations (2), (3), and (4) allows the homography matrix H to be calculated:
$$H = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \qquad (5)$$
It should be noted that, according to the position information of at least 4 intersection points and the position information of corresponding 4 reference points, the homography matrix H can be solved, and the more point pairs of the intersection points and the reference points, the more robust the calculation result.
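For exactly 4 point pairs, the homography of equation (5) can be computed in closed form, for instance with OpenCV. The coordinate values below are placeholders; with more than 4 pairs, cv2.findHomography would solve the over-determined system (optionally with RANSAC), which is what makes additional point pairs improve robustness:

```python
import cv2
import numpy as np

# intersections A, B, C, D in the target image (placeholder pixel coordinates)
src = np.float32([[420, 310], [860, 305], [1100, 640], [180, 650]])
# reference points A', B', C', D' of the corresponding rectangle in the top view
dst = np.float32([[300, 100], [700, 100], [700, 600], [300, 600]])

H = cv2.getPerspectiveTransform(src, dst)        # exact solution for 4 pairs
# with more than 4 pairs: H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
```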
In the embodiment of the application, the polygon formed by the obtained intersection points corresponds to the rectangle in the real world, and further, the conversion relation between the target image and the top view corresponding to the target image can be accurately determined according to the position information of the intersection points, so that the camera can be accurately calibrated.
In some possible embodiments, the grouping the multiple vertices according to the position information of the multiple vertices in step S103 to obtain a combination of the multiple vertices may include the following steps:
a. Sorting the plurality of vertexes in descending order of their ordinates to obtain a plurality of sorted vertexes, where the origin of coordinates of the target image is located at the upper left corner.
When the number of the dotted-line lane lines is 1, considering that the vertical coordinates of the two vertices of the upper boundary or the lower boundary of each rectangular block of the dotted-line lane lines have a certain difference in the target image, and this difference is small, the two vertices of the upper boundary or the two vertices of the lower boundary of the same rectangular block may be determined according to the difference between the vertical coordinates of the multiple vertices.
Likewise, when the number of the dotted-line lane lines is 2, considering that the vertical coordinates of the 4 vertices of the upper boundary or the lower boundary of each rectangular block pair of the two dotted-line lane lines have a certain difference in the target image, and this difference is small, the 4 vertices of the upper boundary or the 4 vertices of the lower boundary of the same rectangular block pair may be determined according to the difference between the vertical coordinates of the plurality of vertices.
Considering that near objects appear larger and far objects smaller when the camera shoots, the rectangular blocks closer to the radar vision device occupy more pixels in the image and are imaged more sharply, so the vertexes on the nearer rectangular blocks are preferably selected. Generally, the origin of coordinates of an image shot by the camera is located at the upper left corner, so the ordinate of a vertex on a rectangular block closer to the radar vision device is larger; the vertexes are therefore sorted in descending order of ordinate so that the vertexes on the nearer rectangular blocks are grouped first.
b. Repeatedly executing the following operations on the sorted vertexes until the number of obtained vertex combinations reaches a preset number:
sequentially comparing the ordinate of the first vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the first vertex and the ordinate of the Nth vertex is not greater than a preset threshold while the difference between the ordinate of the first vertex and the ordinate of the (N + 1)th vertex is greater than the preset threshold, and taking the first N vertexes as one vertex combination; wherein N is an integer greater than 0;
sequentially comparing the ordinate of the (N + 1)th vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the (N + 1)th vertex and the ordinate of the Mth vertex is not greater than the preset threshold while the difference between the ordinate of the (N + 1)th vertex and the ordinate of the (M + 1)th vertex is greater than the preset threshold, and taking the (N + 1)th to the Mth vertexes as one vertex combination; wherein M is an integer greater than 0, and M > N + 1.
Suppose the number of dotted lane lines is n; then, starting from the beginning of the sorted sequence, every 2n vertexes form a group. Since the distant rectangular blocks in the target image occupy fewer pixels and are imaged less sharply, large errors may occur when their vertexes are detected; the number of vertex combinations to extract can therefore be chosen as needed so that the detection error of the vertexes in the last combination remains small, and when the number of vertex combinations is m, m × 2n vertexes are required. After the detected vertexes $(x_i, y_i)$ are sorted by pixel ordinate in descending order, the sequence
$(x_1, y_1), (x_2, y_2), \ldots, (x_i, y_i)$
is obtained. Vertexes whose pixel ordinates differ by at most a preset threshold θ are placed in one group, where θ can be set according to the specific situation, for example as the maximum ordinate difference between the vertexes of the upper boundary or the lower boundary of a rectangular block (or rectangular block pair) in the target image. At the same time, the grouping of the vertexes is constrained by the number n of detected dotted lane lines:
when the vertex detection is normal, the following equation (6) is satisfied:
$y_i - y_{i+2n-1} \le \theta < y_i - y_{i+2n} \qquad (6)$
resulting in a set of 2n points.
When the missed detection of the vertex occurs, the vertex combination where the missed detection vertex is located satisfies the following formula (7):
$y_i - y_{i+k-1} \le \theta < y_i - y_{i+k}, \quad k < 2n \qquad (7)$
and k points are obtained as a group.
Further, after the plurality of vertex combinations is obtained, the first determined vertex combination and the last determined vertex combination may be selected from them. This maximizes the area of the polygon determined subsequently, so that no large-scale deviation occurs when the polygonal area is converted into the top view.
The following describes a specific flow of the calibration method of the radar vision device according to the embodiment of the present application in detail with reference to fig. 4.
For example, the calibration method may be executed by a control device of the radar vision device, with as few intermediate nodes as possible between the radar vision device and the control device, so as to improve data transmission efficiency and reduce time delay.
As shown in Fig. 4, the calibration method of the radar vision device may include the following steps:
s401, collecting video data to obtain a target image in the video data.
The target image comprises a lane, and the lane comprises at least one dotted lane line and two solid lane lines.
S402: predicting various lane markings in the target image by using the trained instance segmentation model to obtain the positions and the number of the dotted lane lines and the positions of the solid lane lines on the two sides of the lane.
S403: and detecting the vertexes of all the rectangular blocks on the dotted lane line by adopting the trained key point detection model to obtain the positions of a plurality of vertexes corresponding to the dotted lane line.
S404: and sequencing the detected multiple vertexes from large to small according to the vertical coordinates of the pixels to obtain the sequenced multiple vertexes.
S405: and grouping the sorted vertexes to obtain a plurality of vertex combinations.
Specifically, from the first vertex, a plurality of vertices whose maximum difference in pixel ordinate does not exceed the threshold value θ are grouped together, and the number of vertices within a group does not exceed twice the number of dashed lane lines.
S406: and performing straight line fitting on at least two vertex combinations in the vertex combinations by adopting a least square method.
For example, fitting only the first vertex combination and the last vertex combination maximizes the area of the rectangle, so that the subsequent conversion to the bird's-eye view does not result in a large-scale deviation.
S407: and calculating to obtain the positions of at least 4 intersection points according to the fitted at least two straight lines and the solid lane lines on the two sides of the lane.
S408: and solving the homography matrix according to the positions of at least 4 intersection points to obtain a conversion relation between the target image and the corresponding top view, and converting the target image into the corresponding top view according to the conversion relation.
In the step, a polygon area formed by each intersection point is determined based on the obtained position of each intersection point; constructing a top view corresponding to the polygonal area, and acquiring the positions of reference points in the top view, which are in one-to-one correspondence with the intersection points; and determining a corresponding homography matrix according to the positions of the intersection points and the positions of the reference points, and taking the homography matrix as a conversion relation between the target image and the top view corresponding to the target image.
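Putting S408 together in code form, and assuming the 4 intersection points and the top-view size shown (all values illustrative, including the file paths), the conversion of the target image into its top view is a single perspective warp:

```python
import cv2
import numpy as np

image = cv2.imread("target_image.png")  # frame captured by the camera (placeholder path)
src = np.float32([[420, 310], [860, 305], [1100, 640], [180, 650]])  # intersections
dst = np.float32([[300, 100], [700, 100], [700, 600], [300, 600]])   # reference points
H = cv2.getPerspectiveTransform(src, dst)              # conversion relation of S408
top_view = cv2.warpPerspective(image, H, (1000, 800))  # illustrative top-view size
cv2.imwrite("top_view.png", top_view)
```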
Based on the same inventive concept, an embodiment of the present application further provides a calibration apparatus for a radar vision device. Since the principle by which the apparatus solves the problem is similar to that of the method embodiments, the implementation of the apparatus may refer to the method embodiments, and repeated parts are not described again.
As shown in fig. 5, an embodiment of the present application provides a calibration device for a radar vision device, where the radar vision device includes a camera and is installed in a place including a lane, the lane is provided with at least one dotted lane line and two solid lane lines located on two sides of the at least one dotted lane line, and the calibration device for the radar vision device includes:
the lane line detection module 51 is configured to perform lane line detection on a target image including a lane acquired by the camera to obtain position information of at least one broken line lane line and position information of two solid line lane lines in the target image;
the key point detection module 52 is configured to detect vertices of each rectangular block on at least one dashed lane line, respectively, to obtain position information of multiple vertices corresponding to the at least one dashed lane line;
a grouping module 53, configured to group the multiple vertices according to the position information of the multiple vertices to obtain a multiple vertex combination; each vertex in each vertex combination is positioned on the same straight line in a world coordinate system, and the same straight line is vertical to two solid line lane lines;
a straight line determining module 54, configured to determine at least two straight lines based on at least two vertex combinations of the multiple vertex combinations, and obtain position information of intersection points of the at least two straight lines and the two solid lane lines, respectively;
and a conversion relation determining module 55, configured to determine a conversion relation between the target image and the top view corresponding to the target image according to the obtained position information of each intersection.
In some exemplary embodiments, the grouping module 53 is further configured to:
sorting the plurality of vertexes in descending order of their ordinates to obtain a plurality of sorted vertexes; wherein the origin of coordinates of the target image is located at the upper left corner;
and repeatedly executing the following operations on the sorted vertexes until the number of obtained vertex combinations reaches a preset number:
sequentially comparing the ordinate of the first vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the first vertex and the ordinate of the Nth vertex is not greater than a preset threshold while the difference between the ordinate of the first vertex and the ordinate of the (N + 1)th vertex is greater than the preset threshold, and taking the first N vertexes as one vertex combination; wherein N is an integer greater than 0;
sequentially comparing the ordinate of the (N + 1)th vertex with the ordinates of the vertexes arranged behind it, until the difference between the ordinate of the (N + 1)th vertex and the ordinate of the Mth vertex is not greater than the preset threshold while the difference between the ordinate of the (N + 1)th vertex and the ordinate of the (M + 1)th vertex is greater than the preset threshold, and taking the (N + 1)th to the Mth vertexes as one vertex combination; wherein M is an integer greater than 0, and M > N + 1.
In some exemplary embodiments, the straight line determination module 54 is further configured to:
selecting at least two vertex combinations from the plurality of vertex combinations;
and fitting the at least two vertex combinations into corresponding straight lines according to the position information of each target vertex corresponding to the at least two vertex combinations.
In some exemplary embodiments, the straight line determination module is further configured to:
the first determined vertex combination and the last determined vertex combination are selected from the plurality of vertex combinations.
In some exemplary embodiments, the conversion relation determination module 55 is further configured to:
determining a polygonal area formed by each intersection point based on the obtained position information of each intersection point;
constructing a top view corresponding to the polygonal area, and acquiring position information of each reference point in the top view, which corresponds to each intersection point one by one;
and determining a corresponding homography matrix according to the position information of each intersection point and the position information of each reference point, and taking the homography matrix as a conversion relation between the target image and the top view corresponding to the target image.
In some exemplary embodiments, the lane line detection module 51 is further configured to:
detecting the lane lines in the target image by using a trained instance segmentation model to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image;
the keypoint detection module 52 is further configured to:
and respectively detecting the vertexes of the rectangular blocks on the at least one dotted lane line by adopting the trained key point detection model to obtain the position information of a plurality of vertexes corresponding to the at least one dotted lane line.
Based on the same inventive concept, an embodiment of the present application further provides a computing device. The computing device may be the radar vision device in the method embodiments of the present application, or a control device of the radar vision device. Since the principle by which the computing device solves the problem is similar to that of the method embodiments, the implementation of the computing device may refer to the method embodiments, and repeated parts are not described again.
As shown in fig. 6, the computing device includes a processor 600, a memory 601 and a communication interface 602, wherein the processor 600 and the communication interface 602 and the memory 601 communicate with each other through a communication bus 603; the memory 601 is used for storing programs executable by the processor 600, and the processor 600 is used for reading the programs in the memory 601 and executing the steps of the calibration method of any radar vision device in the above embodiments.
The communication bus 603 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The communication interface 602 is used for communication between the above-described computing device and other devices. The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit, a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
Based on the same inventive concept, embodiments of the present application further provide a computer storage medium, where a computer program executable by a processor is stored in the computer storage medium, and when the program runs on the processor, the processor is caused to execute the steps of the calibration method of any of the above-mentioned embodiments.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memory such as floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc., optical memory such as CDs, DVDs, BDs, HVDs, etc., and semiconductor memory such as ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs), etc.
In another embodiment provided by the present application, a computer program product containing instructions is further provided, and when the computer program product is called by an electronic device to execute, the electronic device may execute the steps of the calibration method of any one of the above-mentioned embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (13)

1. A calibration method for a radar-vision device, wherein the radar-vision device comprises a camera and is installed in a place containing a lane, the lane being provided with at least one dotted lane line and two solid lane lines located on two sides of the dotted lane line, and the method comprises:
performing lane line detection on a target image that is acquired by the camera and contains the lane, to obtain position information of the at least one dotted lane line and position information of the two solid lane lines in the target image;
detecting vertices of all rectangular blocks on the at least one dotted lane line, respectively, to obtain position information of a plurality of vertices corresponding to the at least one dotted lane line;
grouping the plurality of vertices according to the position information of the plurality of vertices to obtain a plurality of vertex combinations, wherein the vertices in each vertex combination lie on a same straight line in a world coordinate system, and the straight line is perpendicular to the two solid lane lines;
determining at least two straight lines based on at least two vertex combinations among the plurality of vertex combinations, and acquiring position information of intersection points of the at least two straight lines with the two solid lane lines; and
determining a conversion relationship between the target image and a top view corresponding to the target image according to the obtained position information of each intersection point.
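Illustrative note (not part of the claims): the intersection step above reduces to intersecting two lines in the image plane. The following is a minimal Python sketch, not the patent's implementation; it assumes each fitted dash-row line and each solid lane line is available as a pair of pixel points, and the helper name and inputs are illustrative.

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersect the line through p1, p2 with the line through q1, q2.

    Points are (x, y) pixel coordinates. In homogeneous coordinates the
    cross product of two points gives the line through them, and the
    cross product of two lines gives their intersection point.
    """
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])  # e.g. line through dash-row vertices
    l2 = np.cross([*q1, 1.0], [*q2, 1.0])  # e.g. line along a solid lane line
    x, y, w = np.cross(l1, l2)             # homogeneous intersection point
    if abs(w) < 1e-9:
        raise ValueError("lines are parallel in the image")
    return (x / w, y / w)
```

The homogeneous-coordinate form avoids special-casing vertical lines, which matters because solid lane lines are often close to vertical in the image.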
2. The method of claim 1, wherein grouping the plurality of vertices according to the position information of the plurality of vertices to obtain the plurality of vertex combinations comprises:
sorting the plurality of vertices in descending order of their ordinates to obtain a plurality of sorted vertices, wherein the coordinate origin of the target image is located at the upper left corner; and
repeatedly performing the following operations on the sorted vertices until the number of obtained vertex combinations reaches a preset number:
comparing the ordinate of the first vertex with the ordinates of the vertices arranged after it in sequence, until the difference between the ordinate of the first vertex and the ordinate of the Nth vertex is not greater than a preset threshold while the difference between the ordinate of the first vertex and the ordinate of the (N+1)th vertex is greater than the preset threshold, and taking the first N vertices as one vertex combination, wherein N is an integer greater than 0; and
comparing the ordinate of the (N+1)th vertex with the ordinates of the vertices arranged after it in sequence, until the difference between the ordinate of the (N+1)th vertex and the ordinate of the Mth vertex is not greater than the preset threshold while the difference between the ordinate of the (N+1)th vertex and the ordinate of the (M+1)th vertex is greater than the preset threshold, and taking the (N+1)th to Mth vertices as one vertex combination, wherein M is an integer greater than 0 and M > N+1.
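Illustrative note (not part of the claims): the row grouping of claim 2 amounts to a single pass over the vertices sorted by descending ordinate, cutting a new group whenever the ordinate gap to the current group's first vertex exceeds a threshold. The sketch below is one possible reading of the claim, with `threshold` and `preset_count` standing in for the claim's preset threshold and preset number.

```python
def group_vertices(vertices, threshold, preset_count):
    """Group dash-block vertices whose ordinates are close into rows.

    `vertices` is a list of (x, y) image points; the image origin is at
    the top-left, so descending y starts from the bottom of the image.
    """
    if not vertices:
        return []
    ordered = sorted(vertices, key=lambda p: p[1], reverse=True)
    combinations, current = [], [ordered[0]]
    for p in ordered[1:]:
        if current[0][1] - p[1] <= threshold:
            current.append(p)             # still within the same row
        else:
            combinations.append(current)  # row closed: start a new one
            current = [p]
        if len(combinations) == preset_count:
            break
    else:
        combinations.append(current)      # flush the last open row
    return combinations[:preset_count]
```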
3. The method of claim 2, wherein determining at least two straight lines based on at least two vertex combinations among the plurality of vertex combinations comprises:
selecting at least two vertex combinations from the plurality of vertex combinations; and
fitting each of the at least two vertex combinations to a corresponding straight line according to the position information of the target vertices corresponding to that vertex combination.
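Illustrative note (not part of the claims): fitting a vertex combination to a straight line is an ordinary least-squares problem. A minimal sketch, assuming each combination is a list of (x, y) pixel points:

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = k * x + b through one vertex combination."""
    xs, ys = zip(*points)
    k, b = np.polyfit(xs, ys, 1)  # degree-1 polynomial: slope and intercept
    return k, b
```

Two such fitted lines, intersected with the two solid lane lines, yield the four (or more) image points used in the following claims.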
4. The method of claim 3, wherein selecting at least two vertex combinations from the plurality of vertex combinations comprises:
selecting a first determined vertex combination and a last determined vertex combination from the plurality of vertex combinations.
5. The method according to any one of claims 1 to 4, wherein determining the conversion relationship between the target image and the top view corresponding to the target image according to the obtained position information of each intersection point comprises:
determining a polygonal area formed by the intersection points based on the obtained position information of each intersection point;
constructing a top view corresponding to the polygonal area, and acquiring position information of reference points in the top view that correspond one-to-one to the intersection points; and
determining a corresponding homography matrix according to the position information of each intersection point and the position information of each reference point, and taking the homography matrix as the conversion relationship between the target image and the top view corresponding to the target image.
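Illustrative note (not part of the claims): with exactly four intersection points and four top-view reference points, the homography can be obtained directly. The sketch below uses OpenCV; all coordinate values are made-up placeholders.

```python
import numpy as np
import cv2

# Four intersection points in the target image and the four corresponding
# reference points of the constructed top view (placeholder values only).
image_pts = np.float32([[412, 680], [905, 668], [300, 915], [1010, 902]])
topview_pts = np.float32([[0, 0], [400, 0], [0, 600], [400, 600]])

# getPerspectiveTransform returns the 3x3 homography H that maps the
# image points onto the top-view points.
H = cv2.getPerspectiveTransform(image_pts, topview_pts)

# Any image point can then be transferred into the top view.
pt = np.float32([[[650, 790]]])
print(cv2.perspectiveTransform(pt, H))
```

With more than four correspondences, cv2.findHomography would give a least-squares estimate instead.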
6. The method according to any one of claims 1 to 4, wherein performing lane line detection on the target image that is acquired by the camera and contains the lane, to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image, comprises:
detecting the lane lines in the target image by using a trained instance segmentation model, to obtain the position information of the at least one dotted lane line and the position information of the two solid lane lines in the target image;
and wherein detecting the vertices of the rectangular blocks on the at least one dotted lane line, respectively, to obtain the position information of the plurality of vertices corresponding to the at least one dotted lane line comprises:
detecting the vertices of the rectangular blocks on the at least one dotted lane line, respectively, by using a trained key point detection model, to obtain the position information of the plurality of vertices corresponding to the at least one dotted lane line.
7. A calibration apparatus for a radar-vision device, wherein the radar-vision device comprises a camera and is installed in a place containing a lane, the lane being provided with at least one dotted lane line and two solid lane lines located on two sides of the at least one dotted lane line, and the apparatus comprises:
a lane line detection module, configured to perform lane line detection on a target image that is acquired by the camera and contains the lane, to obtain position information of the at least one dotted lane line and position information of the two solid lane lines in the target image;
a key point detection module, configured to detect vertices of the rectangular blocks on the at least one dotted lane line, respectively, to obtain position information of a plurality of vertices corresponding to the at least one dotted lane line;
a grouping module, configured to group the plurality of vertices according to the position information of the plurality of vertices to obtain a plurality of vertex combinations, wherein the vertices in each vertex combination lie on a same straight line in a world coordinate system, and the straight line is perpendicular to the two solid lane lines;
a straight line determining module, configured to determine at least two straight lines based on at least two vertex combinations among the plurality of vertex combinations, and acquire position information of intersection points of the at least two straight lines with the two solid lane lines; and
a conversion relationship determining module, configured to determine a conversion relationship between the target image and a top view corresponding to the target image according to the obtained position information of each intersection point.
8. The apparatus of claim 7, wherein the grouping module is further configured to:
sorting the plurality of vertices in descending order of their ordinates to obtain a plurality of sorted vertices, wherein the coordinate origin of the target image is located at the upper left corner; and
repeatedly performing the following operations on the sorted vertices until the number of obtained vertex combinations reaches a preset number:
comparing the ordinate of the first vertex with the ordinates of the vertices arranged after it in sequence, until the difference between the ordinate of the first vertex and the ordinate of the Nth vertex is not greater than a preset threshold while the difference between the ordinate of the first vertex and the ordinate of the (N+1)th vertex is greater than the preset threshold, and taking the first N vertices as one vertex combination, wherein N is an integer greater than 0; and
comparing the ordinate of the (N+1)th vertex with the ordinates of the vertices arranged after it in sequence, until the difference between the ordinate of the (N+1)th vertex and the ordinate of the Mth vertex is not greater than the preset threshold while the difference between the ordinate of the (N+1)th vertex and the ordinate of the (M+1)th vertex is greater than the preset threshold, and taking the (N+1)th to Mth vertices as one vertex combination, wherein M is an integer greater than 0 and M > N+1.
9. The apparatus of claim 8, wherein the straight line determining module is further configured to:
selecting at least two vertex combinations from the plurality of vertex combinations; and
fitting each of the at least two vertex combinations to a corresponding straight line according to the position information of the target vertices corresponding to that vertex combination.
10. The apparatus of claim 9, wherein the straight line determining module is further configured to:
selecting a first determined vertex combination and a last determined vertex combination from the plurality of vertex combinations.
11. The apparatus of any one of claims 7 to 10, wherein the conversion relationship determining module is further configured to:
determine a polygonal area formed by the intersection points based on the obtained position information of each intersection point;
construct a top view corresponding to the polygonal area, and acquire position information of reference points in the top view that correspond one-to-one to the intersection points; and
determine a corresponding homography matrix according to the position information of each intersection point and the position information of each reference point, and take the homography matrix as the conversion relationship between the target image and the top view corresponding to the target image.
12. A computing device comprising a processor and a memory, wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 6.
13. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202110906939.4A 2021-08-09 2021-08-09 Calibration method and device of radar equipment, computing equipment and storage medium Active CN113674358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110906939.4A CN113674358B (en) 2021-08-09 2021-08-09 Calibration method and device of radar equipment, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113674358A 2021-11-19
CN113674358B 2024-06-04

Family

ID=78541856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110906939.4A Active CN113674358B (en) 2021-08-09 2021-08-09 Calibration method and device of radar equipment, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113674358B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100158312A1 (en) * 2008-12-23 2010-06-24 National Chiao Tung University Method for tracking and processing image
CN108564629A (en) * 2018-03-23 2018-09-21 广州小鹏汽车科技有限公司 A kind of scaling method and system of vehicle-mounted camera external parameter
EP3432219A2 (en) * 2017-07-21 2019-01-23 Dental Monitoring Method for analysing an image of a dental arch
WO2019233286A1 (en) * 2018-06-05 2019-12-12 北京市商汤科技开发有限公司 Visual positioning method and apparatus, electronic device and system
CN112257539A (en) * 2020-10-16 2021-01-22 广州大学 Method, system and storage medium for detecting position relation between vehicle and lane line
CN112819895A (en) * 2019-11-15 2021-05-18 西安华为技术有限公司 Camera calibration method and device
WO2021104180A1 (en) * 2019-11-29 2021-06-03 上海商汤临港智能科技有限公司 Map generation method, positioning method, apparatus, device, storage medium, and computer program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115249270A (en) * 2022-09-22 2022-10-28 广州市德赛西威智慧交通技术有限公司 Automatic re-labeling method and system for radar-vision all-in-one machine
CN115249270B (en) * 2022-09-22 2022-12-30 广州市德赛西威智慧交通技术有限公司 Automatic re-labeling method and system for radar-vision all-in-one machine

Similar Documents

Publication Publication Date Title
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN109141364B (en) Obstacle detection method and system and robot
CN109658454B (en) Pose information determination method, related device and storage medium
CN113421305B (en) Target detection method, device, system, electronic equipment and storage medium
CN109918977B (en) Method, device and equipment for determining idle parking space
CN113447923A (en) Target detection method, device, system, electronic equipment and storage medium
WO2022217630A1 (en) Vehicle speed determination method and apparatus, device, and medium
CN104167109A (en) Detection method and detection apparatus for vehicle position
CN113947766B (en) Real-time license plate detection method based on convolutional neural network
CN112085056B (en) Target detection model generation method, device, equipment and storage medium
CN110632582A (en) Sound source positioning method, device and storage medium
CN112198878B (en) Instant map construction method and device, robot and storage medium
CN111553956A (en) Calibration method and device of shooting device, electronic equipment and storage medium
CN111626189B (en) Road surface abnormity detection method and device, electronic equipment and storage medium
CN111914845A (en) Character layering method and device in license plate and electronic equipment
CN113674358B (en) Calibration method and device of radar equipment, computing equipment and storage medium
CN113505643B (en) Method and related device for detecting violation target
CN113256683A (en) Target tracking method and related equipment
CN115267722A (en) Angular point extraction method and device and storage medium
CN112329886A (en) Double-license plate recognition method, model training method, device, equipment and storage medium
CN109117866B (en) Lane recognition algorithm evaluation method, computer device, and storage medium
CN114066930A (en) Planar target tracking method and device, terminal equipment and storage medium
CN116740680A (en) Vehicle positioning method and device and electronic equipment
CN116721396A (en) Lane line detection method, device and storage medium
CN112308061B (en) License plate character recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant