CN113658252A - Method, medium, apparatus for estimating elevation angle of camera, and camera


Info

Publication number
CN113658252A
CN113658252A (application CN202110533904.0A)
Authority
CN
China
Prior art keywords
elevation angle
lines
camera
line
image
Prior art date
Legal status
Pending
Application number
CN202110533904.0A
Other languages
Chinese (zh)
Inventor
宫原俊二
白金成
Current Assignee
Haomo Zhixing Technology Co Ltd
Original Assignee
Haomo Zhixing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Haomo Zhixing Technology Co Ltd
Priority to CN202110533904.0A
Publication of CN113658252A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of intelligent transportation and image processing, and provides a method, a medium, and a device for estimating the elevation angle of a camera, as well as the camera itself. The method comprises the following steps: acquiring all lines in the image, where a line refers to a line seen from the vehicle's perspective; selecting a plurality of line pairs from all combinations of the obtained lines; determining the vertical positions of the intersections of the selected line pairs and calculating the average of all the determined vertical positions as an optimal vertical position; and calculating the elevation angle of the camera from the optimal vertical position. The scheme of the invention enables accurate elevation-angle estimation, filling the gap left by the prior art, which lacks an accurate method for estimating the elevation angle; moreover, since any parallel lines can be used for the estimation, the scheme is applicable under a wide variety of conditions.

Description

Method, medium, apparatus for estimating elevation angle of camera, and camera
Technical Field
The invention relates to the technical field of intelligent transportation and image processing, in particular to a method, a medium and a device for estimating an elevation angle of a camera and the camera.
Background
At present, vehicles with AD (Autonomous Driving) or ADAS (Advanced Driver Assistance System) functions have begun to reach the market, greatly promoting the development of intelligent transportation. In the prior art, the sensors supporting AD/ADAS mainly include radar, vision camera systems, lidar, ultrasonic sensors, etc. Among these, the vision camera system is the most widely applied because, like human vision, it can obtain two-dimensional image information; its typical applications include lane detection, object detection, vehicle detection, pedestrian detection, rider detection, etc.
In various applications, visual camera systems extract features (objects or lanes) from captured images using image processing, and the corresponding image processing typically includes two steps:
1) the basic process is as follows: differential processing is first applied to the image data, and then thresholding is applied to the corresponding image to produce a binary or ternary image.
2) Object/lane detection process: based on the binary/ternary image and the Hough transform (or pattern matching), an object (or its range) or a lane is estimated.
In this detection process for objects/lanes, the elevation angle of the monocular camera of the vision camera system is very important, and its estimation affects the effectiveness and accuracy of the detection. The prior art usually estimates the elevation angle of the camera from edge lines detected by the Hough transform, but this scheme is not accurate enough. Yet with the development of AD/ADAS, precise elevation angles are increasingly needed, both to estimate the range of monocular cameras and to calibrate the entire vision camera system in real time.
The invention accordingly provides a solution for estimating the elevation angle of a camera with higher accuracy.
Disclosure of Invention
In view of the above, the present invention is directed to a method, medium, and apparatus for estimating an elevation angle of a camera, and the camera, so as to at least partially solve the above technical problems.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a method for estimating an elevation angle of a camera used to capture images in front of a vehicle at a preset elevation angle, comprising: acquiring all lines in the image, wherein the lines refer to lines seen from the vehicle perspective direction; selecting a plurality of line pairs from all combinations of the obtained lines; determining vertical positions of intersections of the selected plurality of line pairs, and calculating an average of all the determined vertical positions as an optimal vertical position; and calculating the elevation angle of the camera according to the optimal vertical position.
Further, selecting line pairs from all combinations of the obtained lines includes: selecting a line pair meeting the following first preset condition: at two different test distances, the width difference of the line pair is smaller than a preset value, where the width difference d is calculated by the following formula:
d = |width1 - width2| / {(width1 + width2)/2} < preset value
where width1 and width2 are the widths of the line pair at the two test distances.
Further, in a case where there are a plurality of line pairs meeting the first preset condition, selecting a line pair from all combinations of the obtained lines further includes: selecting an optimal line pair from the plurality of pairs meeting the first preset condition by the following second preset condition: among the plurality of line pairs, the optimal line pair has the largest Hough-histogram peak, or the smallest width difference.
Further, the acquiring of all lines in the image comprises: applying a differential operation and threshold processing to the image to obtain ternary edge points; applying the Hough transform to the ternary edge points to obtain positive edge lines and negative edge lines; and performing line estimation on the positive and negative edge lines according to the Hough histogram to obtain all lines corresponding to the positive and negative edge lines.
Further, the method further comprises: and comparing the preset elevation angle with the calculated elevation angle, and if the difference value between the preset elevation angle and the calculated elevation angle is within a preset range, determining the calculated elevation angle as a reliable elevation angle, wherein the camera is used for capturing images in front of the vehicle at the reliable elevation angle instead of the preset elevation angle.
Compared with the prior art, the method for estimating the elevation angle of the camera has the following advantages: the scheme of the invention enables accurate elevation-angle estimation, filling the gap left by the prior art, which lacks an accurate elevation-angle estimation method; moreover, the proposed parallel-line-pair-based estimation is not limited to lane lines but can use any parallel lines, making it applicable under various conditions.
Another object of the present invention is to provide a machine-readable storage medium, a camera and a device for estimating the elevation angle of the camera, so as to at least partially solve the above technical problems.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a machine-readable storage medium having instructions stored thereon for causing a machine to perform the above-described method for estimating a camera elevation angle.
A camera, comprising: one or more processors; a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for estimating camera elevation angle as described above.
A device for estimating an elevation angle of a camera used to capture images in front of a vehicle at a preset elevation angle, comprising: a detection module configured to acquire all lines in the image, where a line refers to a line seen from the vehicle's perspective; and a calculation module. The calculation module is configured to: select a plurality of line pairs from all combinations of the obtained lines; determine the vertical positions of the intersections of the selected line pairs and calculate the average of all the determined vertical positions as an optimal vertical position; and calculate the elevation angle of the camera from the optimal vertical position.
The advantages of the machine-readable storage medium, the camera and the apparatus, which are the same as the advantages of the method for estimating the elevation angle of the camera described above with respect to the prior art, are not described herein again.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1(a) is a schematic diagram of the relationship between a camera and an object;
FIG. 1(b) is a schematic diagram of an object in an image moving vertically due to camera elevation inaccuracy;
FIG. 1(c) is a schematic diagram of calculating the range of the camera capturing an object according to the relative position between the camera and the object;
FIGS. 1(d)-1(i) are results or schematic diagrams of a process for estimating the elevation angle of a camera in an example;
FIG. 2 is a schematic diagram of the principle of the relationship between horizontal lines and parallel lines;
fig. 3(a)-3(c) are schematic diagrams of three relative positions of the intersection of parallel lines and the horizontal line;
fig. 4(a)-4(c) are schematic diagrams illustrating the principles of three examples of estimating a horizontal line using a parallel line pair;
FIG. 5 is a flow chart illustrating a method for estimating the elevation angle of a camera according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of the acquisition of line pairs in the preferred embodiment;
fig. 7(a) -7 (b) are schematic diagrams showing a process from an original image to a monochrome image, in which fig. 7(a) is the original image and fig. 7(b) is the monochrome image;
FIG. 7(c) is a schematic diagram of the ternary image obtained in FIG. 7 (b);
fig. 7(d) is a schematic view of a hough line connected to the positive edge of fig. 7 (c);
FIG. 7(e) is a schematic diagram showing all the lines that would result from considering FIG. 7 (d);
FIG. 7(f) is an exemplary diagram showing the calculation of the width between line pairs based on the two ranges of 40[m] and 80[m];
FIG. 7(g) is a schematic diagram of the final line fit and Hough line obtained in FIGS. 7(a) -7 (f);
FIG. 8 is a schematic flow chart of a method for estimating the elevation angle of a camera in other embodiments of the present invention;
fig. 9(a) -9 (b) are schematic diagrams of accuracy verification for elevation angle estimation in the first example;
fig. 10(a) -10 (f) are schematic diagrams of an application process for elevation estimation in the second example; and
fig. 11 is a schematic structural diagram of an apparatus for estimating an elevation angle of a camera according to another embodiment of the present invention.
Description of reference numerals:
100. a camera; 200. an object; 1110. a detection module; 1120. a calculation module.
Detailed Description
It should be noted that the embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
The present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
The inventive idea of an embodiment of the invention is described herein with reference to the examples and the drawings.
Fig. 1(a) is a schematic diagram of the relationship between the camera 100 and the object 200 it captures. Referring to fig. 1(a), the camera 100 is mounted on a vehicle at a height Hc; the elevation angle of the camera 100 is θele, and the FOV (field of view) is denoted θFOV. Fig. 1(b) is a schematic diagram of the vertical movement of the object 200 in the image caused by an inaccurate camera elevation angle: the left side shows the object 200 detected with the correct elevation angle, and the right side shows the object 200 detected with an erroneous elevation angle. Observing the position of the object 200 relative to the ground, the object clearly moves vertically in the image. In fig. 1(b), the vertical position of the object in the image is indicated along the axis m, and a vertical elevation error moves the position m of the bottom of the object. In the following, m will also be used to denote the elevation angle of the camera.
With respect to figs. 1(a)-1(b), it is easy to see that a precise camera elevation angle is crucial for object recognition and range detection. Once a precise elevation angle is obtained, it becomes possible, for example, to: 1) determine the precise range at which the camera captures the object, as illustrated by distance HH in fig. 1(a); 2) estimate an accurate object height position, as illustrated by distance H1 in fig. 1(a); 3) calibrate angles in real time, e.g. θele and θFOV in fig. 1(a). If the elevation angle is wrong, the range may be wrong. For example, as shown in fig. 1(c), the range is determined by the height of the camera, the bottom position of the object, and the elevation angle (θele + dθ(m)). If θele is in error, the range xgi will be erroneous. Specifically, in fig. 1(c), the following equation holds:
xgi = -Hc·tan(θele-i) = -Hc·tan(θele + dθ(m))
where dθ is a function of m.
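The range relation above can be illustrated numerically. The following is a minimal sketch, assuming angles measured as in equation (1) later in the text (π/2 means a horizontal optical axis); the function name and the sample values are illustrative, not taken from the patent:

```python
import math

def ground_range(camera_height, theta_ele, d_theta=0.0):
    """Ground range xgi = -Hc * tan(theta_ele + d_theta).

    Angles are in radians; an elevation of pi/2 means the ray is
    horizontal, and slightly larger values look down toward the road.
    d_theta models an elevation-angle error."""
    return -camera_height * math.tan(theta_ele + d_theta)

# With Hc = 1.5 m and a ray 0.03 rad below horizontal, the ground point
# lies roughly 1.5 / 0.03 ~ 50 m ahead; a small elevation error of
# 0.005 rad pulls the estimated range noticeably closer.
exact = ground_range(1.5, math.pi / 2 + 0.03)
erred = ground_range(1.5, math.pi / 2 + 0.03, d_theta=0.005)
```

This shows why the text insists on a precise elevation angle: a few milliradians of error in θele shift the estimated range by several meters at highway distances.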
Further, fig. 1(d) -fig. 1(i) show a typical example of estimating the elevation angle of a camera, which specifically includes the following steps:
1) color and analysis area processing: from the original image (fig. 1(d)) to the monochrome image (fig. 1 (e)), where the white-framed area of fig. 1(e) is the analysis area. The analysis area is an area for limiting image processing, and is a concept well known to those skilled in the art.
2) Subsampling: subsampling is performed on the analysis region, e.g. taking one sample every 3 or 4 points.
3) Obtaining a ternary image: the ternary image is obtained by Sobel and thresholding, and the corresponding positive and negative edges can be seen as shown in fig. 1 (f).
4) Hough transform: a Hough histogram for the detected edges is obtained, as shown in fig. 1(g).
5) Line estimation: performed according to the Hough histogram, as shown in fig. 1(g).
6) Determining vanishing points in subsampling: the estimated intersection of the lines is determined as shown in fig. 1 (h).
7) Determining the vanishing point and elevation angle: after returning to full-pixel coordinates, the vertical position of the vanishing point (on the estimated horizontal line) and the elevation angle are calculated, as shown in fig. 1(i).
Based on the methods of figs. 1(d)-1(i), fig. 2 is a schematic diagram of the principle of the relationship between horizontal lines and parallel lines: the intersection of parallel lines lies on the horizon. Referring to fig. 2, this scheme may include the following steps:
Step 1): estimate the lane lines via the Hough transform. The results of the estimation are shown in fig. 2.
Step 2): determine the intersection point of the estimated lanes, also called the vanishing point. As shown in fig. 2, the coordinates of the vanishing point can be represented as (m, n), where m is the vertical position of the vanishing point and n is its horizontal position.
Step 3): convert the vanishing point position m into an elevation angle.
Specifically, the height and FOV of the camera are constant, while the coordinates (m, n) of the vanishing point are uniquely determined by the elevation angle, FOV and height of the camera, so that the elevation angle and vanishing point are in one-to-one correspondence, and the m position of the vanishing point can be corresponded to the elevation angle of the camera.
More specifically, the relationship between the m position of the vanishing point and the elevation angle of the camera can be described by the following equation (1):
v=(-m/MM+0.5)*2*tan(FOV/2);
angle[radian]=atan(v)+pi/2; (1)
where m, MM and FOV are the vertical position of the vanishing point, the number of vertical pixels in the image, and the field of view, respectively; angle[radian] denotes the elevation angle in radians, and pi is the mathematical constant π.
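Equation (1) can be written directly as code. The sketch below transcribes it term by term; the function name and the sample image size are illustrative assumptions:

```python
import math

def elevation_from_vanishing_point(m, MM, fov):
    """Equation (1): map the vanishing point's vertical pixel position m
    to the camera elevation angle in radians.

    m  : vertical position of the vanishing point (pixels, 0 at top)
    MM : number of vertical pixels in the image
    fov: vertical field of view in radians
    """
    v = (-m / MM + 0.5) * 2.0 * math.tan(fov / 2.0)
    return math.atan(v) + math.pi / 2.0
```

A vanishing point exactly at mid-image (m = MM/2) gives v = 0 and therefore an elevation of exactly π/2, i.e. a horizontal optical axis; a vanishing point above mid-image gives an angle larger than π/2.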
However, such a scheme may not provide an accurate elevation angle, because the Hough transform has limited resolution in both angle estimation and localization. Moreover, the scheme has another problem: it is limited to using lane markings only.
On this basis, in order to solve the problems of the currently common scheme for determining the camera elevation angle, the inventors discovered an important rule while developing the present application: observing the camera image as in fig. 2, the horizontal line separates the sky from the road surface, and any pair of parallel lines on the road surface should intersect on the horizontal line. The relationship between the intersection of parallel lines and the horizontal line falls into the three cases shown in figs. 3(a)-3(c). As shown in fig. 3(a), if the parallel lines intersect below the horizontal line, they intersect at a point on the road surface, which contradicts the fact that they are parallel on the ground; this case is therefore impossible. As shown in fig. 3(b), if the intersection lies in the air, this contradicts the fact that the intersection should be on the road surface; this case is likewise impossible. That is, as shown in fig. 3(c), no intersection point can exist on the ground or in the air other than on the horizontal line. Therefore, estimating the horizon is crucial.
Accordingly, figs. 4(a)-4(c) show three examples of estimating the horizontal line using a parallel line pair. As shown in fig. 4(a), lane lines can of course be used to estimate the horizon. As shown in fig. 4(b), segmented parallel line pairs can also estimate the horizontal line. As shown in fig. 4(c), the combination of a curb and a line can likewise estimate the horizontal line.
On the basis, the embodiment of the invention also provides a method for estimating the elevation angle of the camera. Fig. 5 is a flowchart illustrating a method for estimating an elevation angle of a camera according to an embodiment of the present invention. As shown in fig. 5, for an image in front of the vehicle captured by the camera at a preset elevation angle, the method may include the steps of:
step S510, all lines in the image are acquired.
A line according to the embodiment of the present invention refers to a line viewed from the vehicle's perspective, and in particular to a parallel line at that perspective. Note, however, that "parallel" here is not absolute parallelism: a certain error is allowed, and how to select the best pair with the best parallelism is described below. The obtained lines are, for example, the lines shown in figs. 4(a)-4(c), such as the segmented parallel line pair in fig. 4(b).
FIG. 6 is a schematic flow chart of the preferred embodiment for acquiring lines in an image. As shown in fig. 6, the following steps may be included:
step S511, applies differential operation and threshold processing to the image and obtains ternary edge points.
The process of obtaining the ternary edge point may refer to the examples in fig. 1(d) -fig. 1(i), and for example, includes: carrying out single-color processing on the image and determining an analysis area; sub-sampling the image within the analysis area, e.g. using one sample every 3 or 4 points; and performing Sobel edge detection and threshold processing on the processed image to obtain a positive edge point and a negative edge point of the corresponding ternary image.
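As a rough illustration of step S511, the sketch below applies a horizontal Sobel difference and a threshold to a tiny grayscale grid, producing a +1/0/-1 ternary edge map. The kernel choice and the threshold value are illustrative assumptions, not values from the patent:

```python
def ternary_edges(img, thresh):
    """Horizontal Sobel on a 2-D list of gray values; returns a map of
    +1 (positive edge), -1 (negative edge), 0 (no edge)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel x-kernel [[-1,0,1],[-2,0,2],[-1,0,1]]
            gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
                  - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
            if gx > thresh:
                out[y][x] = 1
            elif gx < -thresh:
                out[y][x] = -1
    return out
```

A dark-to-light transition (left to right) yields positive edge points, and a light-to-dark transition yields negative edge points, matching the positive/negative edges in fig. 1(f) and fig. 7(c).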
Step S512, Hough transform is applied to the ternary edge points to obtain a positive edge line and a negative edge line.
Step S513, performing line estimation on the positive edge line and the negative edge line according to the hough histogram to obtain all lines corresponding to the positive edge line and the negative edge line.
Note that, for the sake of description, the "edge line" is directly referred to as an "edge" hereinafter.
With respect to steps S511-S513, fig. 7(a) -7 (b) show the process from the original image to the monochrome image, by way of example, where fig. 7(a) is the original image, fig. 7(b) is the monochrome image, and the lower half of fig. 7(b) distinguished by the middle white line is the determined analysis region; FIG. 7(c) is the resulting ternary image, taken from FIG. 7(b), showing the left lane positive edge, the left lane negative edge, and the curb positive edge; fig. 7(d) shows a hough line connected to the positive edge of fig. 7 (c); fig. 7(e) shows all the lines that will be obtained in consideration of fig. 7(d), including lines P1, P2, and N1, where P denotes positive and N denotes negative, which correspond to the left lane positive edge, curb positive edge, and left lane negative edge in fig. 7(c), respectively.
Further, after all the lines are obtained, any pair of the lines is selected from a combination of all the lines.
In a preferred embodiment, after step S510, the method preferably comprises the following steps S520-S540:
in step S520, a plurality of line pairs are selected from all combinations of the obtained lines.
In a preferred embodiment, the step S520 may include: and selecting the line pair meeting the first preset condition. Wherein the first preset condition is described as: at two different test distances, the width difference of the line pair is smaller than a preset value.
For example, using the preset elevation angle θpre and the geometrical relationship shown in fig. 1(a), the widths of all possible line pairs at a plurality of camera test distances can be calculated (refer to fig. 2). As shown in fig. 7(f), the width between the line pairs may be calculated at 40[m] and 80[m], respectively. If the difference between the widths of a line pair at the two test distances is small enough (less than a preset value), the line pair is considered parallel (better parallelism). Specifically, the width difference value may be calculated by the following equation (2):
d = |width40 - width80| / {(width40 + width80)/2}    (2)
where width40 denotes the width at the 40 m range, width80 denotes the width at the 80 m range, and nn denotes the number of pairs. width40 and width80 are actual widths [m]; the pixel width is converted into the actual width [m] according to the geometry of fig. 1(a). For example, as shown in fig. 2, since the lane width is normally converted from actual width to pixel width, the actual width here is estimated from the pixel width by the inverse calculation.
Based on equation (2), the above first preset condition can be written as:
d = |width1 - width2| / {(width1 + width2)/2} < preset value
where, analogously to equation (2), width1 and width2 are the widths of the line pair at the two test distances and d is the width difference.
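The first preset condition reduces to a short computation. A minimal sketch follows; the function names and the 0.1 threshold are illustrative assumptions, not values fixed by the patent:

```python
def width_difference(width1, width2):
    """Relative width difference d of a line pair measured at two test
    distances (e.g. the 40 m and 80 m ranges used in the example),
    per equation (2): |w1 - w2| / ((w1 + w2) / 2)."""
    return abs(width1 - width2) / ((width1 + width2) / 2.0)

def is_parallel_pair(width1, width2, preset=0.1):
    """First preset condition: treat the pair as parallel when the
    relative width difference is below the preset value."""
    return width_difference(width1, width2) < preset
```

For instance, widths of 3.5 m and 3.6 m at the two distances give d of about 0.028 and pass the check, while 3.5 m versus 5.0 m gives about 0.35 and fails.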
Accordingly, the pairs satisfying the first preset condition are selected. However, there may be several pairs satisfying the first preset condition; in a more preferred embodiment, step S520 may therefore further include: for the plurality of line pairs meeting the first preset condition, selecting the optimal line pair by the following second preset condition: among the plurality of line pairs, the optimal line pair has the largest Hough-histogram peak, or the smallest width difference (equation (2)). The optimal line pair is understood to be the pair with the best parallelism. The Hough-histogram peak indicates the reliability of an estimated line: the higher the histogram (representing frequency over range, angle, etc.), the higher the reliability score of the estimated line. The width difference indicates the reliability of the selected line pair: the smaller the width difference, the higher the reliability.
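The second preset condition can be sketched as a simple selection rule. The dictionary keys and the tie-breaking order (largest Hough peak first, smallest width difference second) are assumptions for illustration; the patent names the two criteria as alternatives:

```python
def select_best_pair(candidates):
    """Among candidate line pairs, each described by its Hough-histogram
    peak and its width difference, prefer the largest Hough peak and
    break ties with the smallest width difference."""
    return max(candidates, key=lambda c: (c["hough_peak"], -c["width_diff"]))
```

For example, two candidates with equal Hough peaks are distinguished by whichever has the smaller width difference.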
By way of further example, fig. 7(g) is a schematic diagram of the final line fit and hough line resulting from fig. 7(a) -7 (f), which explicitly shows the available pairs of parallel lines.
In step S530, the vertical positions of the intersections of the selected line pairs are determined, and the average of all the determined vertical positions is calculated as the optimal vertical position.
For example, let m0 denote the vertical position of the vanishing point of the optimal parallel line pairs. The coordinates (mi, ni) of each intersection point, i = 1, 2, ..., N, are taken and the respective mi are calculated; all mi are then averaged to determine the vertical position m0 of the vanishing point:
m0 = (m1 + m2 + ... + mN)/N
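Step S530 amounts to intersecting each selected line pair and averaging the vertical coordinates. A minimal sketch, representing each image line in slope-intercept form y = a·x + b (a simplifying assumption; vertical lines would need a different parameterization):

```python
def line_intersection(a1, b1, a2, b2):
    """Intersection of the image lines y = a1*x + b1 and y = a2*x + b2.
    Returns (m, n) with m the vertical and n the horizontal position,
    matching the (mi, ni) notation in the text. Assumes a1 != a2."""
    n = (b2 - b1) / (a1 - a2)
    m = a1 * n + b1
    return m, n

def optimal_vertical_position(pairs):
    """Average the vertical positions mi of the intersections of the
    selected line pairs; each pair is given as (a1, b1, a2, b2)."""
    ms = [line_intersection(*p)[0] for p in pairs]
    return sum(ms) / len(ms)
```

The averaged m0 is then fed to equation (1) to obtain the elevation angle, as in step S540.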
And step S540, calculating the elevation angle of the camera according to the optimal vertical position.
For example, when the optimal vertical position is known, the elevation angle of the camera can be calculated by equation (1).
Finally, steps S510-S540 shown in fig. 5 determine the optimal vertical position based on a selected plurality of line pairs in order to calculate the camera elevation angle. However, in other embodiments, as shown in fig. 8, a unique optimal line pair may be selected, and the vertical position of its intersection may be used directly to calculate the camera elevation angle. For example, after all lines of the image are acquired, the intersection (vanishing point) of each line pair is determined as (mi, ni), i = 1, 2, ..., nn, where mi represents the vertical position of the intersection formed by the i-th pair and ni represents its horizontal position; the best pair is then found among them. That is, the scheme of fig. 5 averages the vertical positions of a plurality of line pairs, while the scheme of fig. 8 selects a unique line pair and uses its vertical position.
In a more preferred embodiment, after calculating the elevation angle of the camera, the method may further include:
step S550 (not shown), comparing the preset elevation angle with the calculated elevation angle, and if the difference between the preset elevation angle and the calculated elevation angle is within a preset range, determining the calculated elevation angle as a reliable elevation angle.
Wherein the camera is configured to capture images forward of the vehicle at the reliable elevation angle instead of the preset elevation angle.
That is, the preset-range requirement ensures that the preset elevation angle is sufficiently close to the calculated elevation angle; the reliable elevation angle is the elevation angle that can subsequently be used in image processing for object detection and ranging.
From the above embodiments, it can be seen that the line pairs used for elevation-angle estimation may be a unique pair (corresponding to fig. 8) or multiple pairs (corresponding to fig. 5); using multiple pairs is more broadly applicable and enables a more reliable estimation.
The embodiment of the invention also provides two examples for verifying the accuracy of the method for estimating the elevation angle of the camera.
Figs. 9(a)-9(b) are schematic diagrams of verifying the accuracy of the elevation angle estimation in the first example. In fig. 9(a), a traffic cone is placed at 150[m] and photographed through a virtual window at 150[m]; in fig. 9(b), the bottom of the traffic cone (i.e. the actual 150 m position) is compared with the bottom of the virtual window (i.e. the 150 m position based on the calculated elevation angle). If the distance between the two is small (ideally coincident), the estimate of the elevation angle is reasonable.
Figs. 10(a)-10(f) are schematic diagrams illustrating the application of the elevation angle estimation in the second example, in which rain and snow leave a wet edge line on the road surface, herein referred to as a wet edge; the wet edge is not a lane line. Figs. 10(a)-10(b) show the processing of the road surface with wet edges from the original image to a monochrome image, where fig. 10(a) is the original image, fig. 10(b) is the monochrome image, and the lower half of fig. 10(b), divided by the middle horizontal line, is the determined analysis area. Fig. 10(c) is the ternary image obtained from fig. 10(b), including positive and negative edges, where the circled portion is the line corresponding to the wet edge. Figs. 10(d) and 10(e) show the lines obtained by connecting the positive edges and the negative edges of fig. 10(c), respectively: fig. 10(d) shows four positive-edge lines and fig. 10(e) shows four negative-edge lines, so eight lines in total are identified in this example. From the eight identified lines, the best line pair is selected according to the scheme above; fig. 10(f) is a schematic of the resulting final line fit and Hough lines, which clearly shows that the line pair formed by the wet edge and the right edge is the selected best pair, and that the vanishing point of this pair lies exactly on the horizontal line.
In summary, the method for estimating the elevation angle of a camera according to the embodiment of the present invention enables accurate elevation angle estimation, filling the gap left by the prior art, in which no accurate method for estimating the elevation angle exists.
Fig. 11 is a schematic structural diagram of an apparatus for estimating the elevation angle of a camera according to another embodiment of the present invention, based on the same inventive concept as the above method. As shown in Fig. 11, the apparatus includes a detection module 1110 and a calculation module 1120.
The detection module 1110 is configured to acquire all lines in the image, where the lines refer to lines seen from the perspective direction of the vehicle.
In a preferred embodiment, the calculation module 1120 is configured to: select a plurality of line pairs from all combinations of the obtained lines; determine the vertical positions of the intersections of the selected line pairs and calculate the average of all the determined vertical positions as the optimal vertical position; and calculate the elevation angle of the camera according to the optimal vertical position.
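The steps performed by the calculation module can be sketched as follows, again under an assumed pinhole model. Lines are represented as (slope, intercept) in image coordinates, and the focal length and principal row are illustrative assumptions:

```python
import math

def intersection_row(line1, line2):
    # Vertical image position (row) of the intersection of two lines,
    # each given as (slope, intercept) with row = slope * col + intercept.
    m1, b1 = line1
    m2, b2 = line2
    col = (b2 - b1) / (m1 - m2)  # assumes the lines are not parallel
    return m1 * col + b1

def optimal_vertical_position(line_pairs):
    # Average of the intersection rows of all selected line pairs.
    rows = [intersection_row(l1, l2) for l1, l2 in line_pairs]
    return sum(rows) / len(rows)

def elevation_from_row(optimal_row, principal_row, focal_px):
    # Pinhole model: a vanishing point above the principal row means
    # the camera is pitched upward by atan of the row offset over f.
    return math.atan((principal_row - optimal_row) / focal_px)
```

In this sketch, a vanishing point 100 rows above the principal row with a 1000-pixel focal length corresponds to an elevation of atan(0.1), about 5.7 degrees.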
The detection module 1110 is, for example, a sensing component and an image processing component in the camera, and the calculation module 1120 is, for example, a processor; for available processors, reference may be made to the description below, which is not repeated here.
In addition, for more implementation details and effects of the apparatus for estimating an elevation angle of a camera according to the embodiment of the present invention, reference may be made to the above embodiments related to the corresponding method, which are not described herein again.
Another embodiment of the present invention is also directed to a machine-readable storage medium having instructions stored thereon for causing a machine to perform the above-described method for estimating the elevation angle of a camera. The machine-readable storage medium includes, but is not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and other media that can store program code.
Another embodiment of the present invention further provides a camera, including: one or more processors; a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for estimating camera elevation angle described above.
The camera is, for example, a monocular camera. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory is an example of a computer-readable medium. The processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, application-specific integrated circuits (ASICs), field-programmable gate array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, or the like.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents, improvements, and the like made within the spirit and principle of the present invention are intended to be included within its scope.

Claims (8)

1. A method for estimating an elevation angle of a camera used to capture an image in front of a vehicle at a preset elevation angle, the method comprising:
acquiring all lines in the image, wherein the lines refer to lines seen from the vehicle perspective direction;
selecting a plurality of line pairs from all combinations of the obtained lines;
determining vertical positions of intersections of the selected plurality of line pairs, and calculating an average of all the determined vertical positions as an optimal vertical position; and
calculating the elevation angle of the camera according to the optimal vertical position.
2. The method of claim 1, wherein selecting pairs of lines from all combinations of obtained lines comprises:
selecting a line pair meeting the following first preset condition: at two different test distances, the width difference of the line pair is smaller than a preset value, wherein the width difference d is calculated by the following formula:
d = |width1 − width2| / {(width1 + width2)/2} < preset value
where width1 and width2 are the widths of the line pair at the two test distances, respectively.
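The first preset condition of claim 2 can be sketched as follows; the preset value of 0.1 is an illustrative assumption, not a value from this disclosure:

```python
def width_difference(width1, width2):
    # Normalized width difference d between the line pair's widths
    # measured at the two test distances.
    return abs(width1 - width2) / ((width1 + width2) / 2.0)

def meets_first_condition(width1, width2, preset=0.1):
    # The line pair qualifies when d stays below the preset value.
    return width_difference(width1, width2) < preset
```

Normalizing by the mean width makes the condition independent of the absolute lane width, so the same preset value works for lanes of different widths.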
3. The method according to claim 2, wherein, in the case where a plurality of line pairs meet the first preset condition, the selecting of line pairs from all combinations of the obtained lines further comprises:
selecting an optimal line pair from the plurality of line pairs meeting the first preset condition according to the following second preset condition: among the plurality of line pairs, the optimal line pair has the largest peak value of its corresponding Hough histogram, or the smallest corresponding width difference value.
4. The method of claim 1, wherein the acquiring all lines in the image comprises:
applying a differential operation and threshold processing to the image to obtain ternary edge points;
applying a Hough transform to the ternary edge points to obtain positive edge lines and negative edge lines; and performing line estimation on the positive edge lines and the negative edge lines according to the Hough histogram to obtain all lines corresponding to the positive edge lines and the negative edge lines.
5. The method of claim 1, further comprising:
comparing the preset elevation angle with the calculated elevation angle, and determining the calculated elevation angle as a reliable elevation angle if the difference between the preset elevation angle and the calculated elevation angle is within a preset range, wherein the camera is then used to capture the image in front of the vehicle at the reliable elevation angle instead of the preset elevation angle.
6. A machine-readable storage medium having stored thereon instructions for causing a machine to perform the method for estimating the elevation angle of a camera of any one of claims 1 to 5.
7. A camera, characterized in that the camera comprises:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for estimating the elevation angle of a camera according to any one of claims 1 to 5.
8. An apparatus for estimating an elevation angle of a camera for capturing an image in front of a vehicle at a preset elevation angle, comprising:
a detection module configured to acquire all lines in the image, wherein the lines refer to lines seen from a vehicle perspective direction; and
a computing module configured to:
selecting a plurality of line pairs from all combinations of all the obtained lines;
determining vertical positions of intersections of the selected plurality of line pairs, and calculating an average of all the determined vertical positions as an optimal vertical position; and
calculate the elevation angle of the camera according to the optimal vertical position.
CN202110533904.0A 2021-05-17 2021-05-17 Method, medium, apparatus for estimating elevation angle of camera, and camera Pending CN113658252A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110533904.0A CN113658252A (en) 2021-05-17 2021-05-17 Method, medium, apparatus for estimating elevation angle of camera, and camera


Publications (1)

Publication Number Publication Date
CN113658252A true CN113658252A (en) 2021-11-16

Family

ID=78488900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110533904.0A Pending CN113658252A (en) 2021-05-17 2021-05-17 Method, medium, apparatus for estimating elevation angle of camera, and camera

Country Status (1)

Country Link
CN (1) CN113658252A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102905A (en) * 2014-07-16 2014-10-15 中电海康集团有限公司 Lane line adaptive detection method
CN109993761A (en) * 2018-06-29 2019-07-09 长城汽车股份有限公司 Three value image acquiring methods of one kind, device and vehicle
CN111220143A (en) * 2018-11-26 2020-06-02 北京图森智途科技有限公司 Method and device for determining position and posture of imaging equipment
CN111539907A (en) * 2019-07-25 2020-08-14 长城汽车股份有限公司 Image processing method and device for target detection
CN111696160A (en) * 2020-06-22 2020-09-22 深圳市中天安驰有限责任公司 Automatic calibration method and device for vehicle-mounted camera and readable storage medium
CN112017249A (en) * 2020-08-18 2020-12-01 东莞正扬电子机械有限公司 Vehicle-mounted camera roll angle obtaining and mounting angle correcting method and device
CN112069924A (en) * 2020-08-18 2020-12-11 东莞正扬电子机械有限公司 Lane line detection method, lane line detection device and computer-readable storage medium


Similar Documents

Publication Publication Date Title
CN107703528B (en) Visual positioning method and system combined with low-precision GPS in automatic driving
CN107272021B (en) Object detection using radar and visually defined image detection areas
CN106096525B (en) A kind of compound lane recognition system and method
CN106289159B (en) Vehicle distance measurement method and device based on distance measurement compensation
EP3057063B1 (en) Object detection device and vehicle using same
US10909395B2 (en) Object detection apparatus
EP3007099B1 (en) Image recognition system for a vehicle and corresponding method
JP2020064046A (en) Vehicle position determining method and vehicle position determining device
KR101609303B1 (en) Method to calibrate camera and apparatus therefor
EP2476996B1 (en) Parallax calculation method and parallax calculation device
CN107305632B (en) Monocular computer vision technology-based target object distance measuring method and system
US20150278610A1 (en) Method and device for detecting a position of a vehicle on a lane
US20050270286A1 (en) Method and apparatus for classifying an object
JP2014146326A (en) Detecting method and detecting system for multiple lane
US11410334B2 (en) Vehicular vision system with camera calibration using calibration target
US11151729B2 (en) Mobile entity position estimation device and position estimation method
US11468691B2 (en) Traveling lane recognition apparatus and traveling lane recognition method
JP4052291B2 (en) Image processing apparatus for vehicle
CN110986887B (en) Monocular camera-based distance measurement method, storage medium and monocular camera
US10916034B2 (en) Host vehicle position estimation device
JP6834401B2 (en) Self-position estimation method and self-position estimation device
KR20180022277A (en) System for measuring vehicle interval based blackbox
KR20160125803A (en) Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest
KR102003387B1 (en) Method for detecting and locating traffic participants using bird's-eye view image, computer-readable recording medium storing traffic participants detecting and locating program
CN115597550B (en) Ramp monocular ranging method and device based on vanishing point and target grounding point

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination