CN110782497A - Method and device for calibrating external parameters of camera

Info

Publication number: CN110782497A
Application number: CN201910844477.0A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN110782497B
Prior art keywords: image, point cloud, edge, features, markers
Inventor: 任忠辉 (Ren Zhonghui)
Applicant and assignee: Tencent Technology Shenzhen Co Ltd
Legal status: Granted; Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The application provides a method and device for calibrating the external parameters of a camera. The method comprises the following steps: acquiring an image of a target road scene and a point cloud data set of the target road scene; acquiring image data and point cloud data of edge markers located at the edge of the image; and determining a first external parameter of the camera device that captured the image from the image data and point cloud data of the edge markers. Because the edge markers in the image of the target road scene are used as reference points for external parameter calibration, manual intervention is avoided and the accuracy of camera external parameter calibration is improved.

Description

Method and device for calibrating external parameters of camera
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for calibrating external parameters of a camera.
Background
Camera calibration comprises internal parameter calibration and external parameter calibration. Internal parameter calibration techniques and tools are mature and their precision can be guaranteed, but there is no unified method for external parameter calibration because usage scenarios differ. Existing external parameter calibration methods require reference points to be selected manually, which is very inefficient, and the calibration precision is strongly affected by human factors.
Disclosure of Invention
The application aims to provide a method and a device for calibrating external parameters of a camera, which can improve the accuracy of calibration of the external parameters of the camera.
According to an aspect of an embodiment of the present application, there is provided a method for calibrating a camera external parameter, including: acquiring an image of a target road scene and a point cloud data set of the target road scene; acquiring image data and point cloud data of edge markers located at the edge of the image; and determining a first external parameter of the camera device that captured the image from the image data and point cloud data of the edge markers.
According to an aspect of an embodiment of the present application, there is provided a calibration apparatus for camera external parameters, including a collection module, an acquisition module, and a processing module. The collection module is configured to collect an image of a target road scene and a point cloud data set of the target road scene; the acquisition module is configured to acquire image data and point cloud data of edge markers located at the edge of the image; and the processing module is configured to determine a first external parameter of the camera device that captured the image from the image data and point cloud data of the edge markers.
In some embodiments of the present application, based on the foregoing solution, the processing module is configured to: acquire first conversion data formed by mapping the point cloud data of the edge markers into the image; and determine the first external parameter from the image data of the edge markers and the first conversion data.
In some embodiments of the present application, based on the foregoing solution, the processing module is further configured to: acquire image data of markers other than the edge markers in the image; extract a first type of features of the other markers from the image data of the other markers; acquire point cloud data of the first type of features; obtain a second external parameter from the first type of features and the point cloud data of the first type of features; and acquire the first conversion data according to the second external parameter.
In some embodiments of the present application, based on the foregoing solution, the processing module is further configured to: map the point cloud data of the first type of features into the image according to the second external parameter to obtain second conversion data corresponding to the first type of features; calculate the distance between the first type of features and the second conversion data; and, if the distance between the first type of features and the second conversion data exceeds a first threshold, acquire an image of another road scene and a point cloud data set of the other road scene, and determine the first external parameter from the image of the other road scene and the point cloud data set of the other road scene.
In some embodiments of the present application, based on the foregoing solution, the processing module is further configured to: if the distance between the first type of features and the second conversion data does not exceed the first threshold, extract a second type of features of the other markers from the image data of the other markers; acquire point cloud data of the second type of features; map the point cloud data of the second type of features into the image according to the second external parameter to obtain third conversion data corresponding to the second type of features; obtain a third external parameter from the second type of features and the third conversion data; and map the point cloud data of the edge markers into the image according to the third external parameter to obtain the first conversion data of the edge markers.
In some embodiments of the present application, based on the foregoing solution, the processing module is configured to: if a plurality of edge markers have the same feature, calculate the distance between the first conversion data of a specified edge marker among the plurality of edge markers and the image data of the specified edge marker as a first distance; calculate the distance between the first conversion data of another edge marker adjacent to the specified edge marker and the image data of that edge marker as a second distance; and, if the difference between the first distance and the second distance reaches a second threshold, determine the first external parameter from the image data of the specified edge marker and the point cloud data of the specified edge marker.
In some embodiments of the present application, based on the foregoing solution, the processing module is further configured to: if the difference between the first distance and the second distance does not reach the second threshold, reacquire the point cloud data of the specified edge marker, and determine the first external parameter from the image data of the specified edge marker and the reacquired point cloud data of the specified edge marker.
In some embodiments of the present application, based on the foregoing solution, the calibration apparatus for camera external parameters further includes a verification module configured to: extract features of the edge markers and features of markers other than the edge markers from the image; count the number of features of the other markers and the number of features of the edge markers; and, if the sum of the number of features of the other markers and the number of features of the edge markers reaches a third threshold, acquire an image of another road scene and a point cloud data set of the other road scene, and determine the first external parameter from them.
In some embodiments of the present application, based on the foregoing solution, the verification module is further configured to: extract features of markers other than the edge markers from the image; and, if the features of the other markers lack the first type of features or the second type of features, acquire an image of another road scene and a point cloud data set of the other road scene, and determine the first external parameter from the image of the other road scene and the point cloud data set of the other road scene.
According to an aspect of embodiments of the present application, there is provided a computer-readable program medium storing computer program instructions which, when executed by a computer, cause the computer to perform the method of any one of the above.
According to an aspect of an embodiment of the present application, there is provided an electronic apparatus including: a processor; a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of the above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the technical solution provided by some embodiments of the application, an image of the target road scene and a point cloud data set of the target road scene are collected, the image data and point cloud data of the edge markers located at the edge of the image are acquired, and the first external parameter of the camera device that captured the image is determined from the image data and point cloud data of the edge markers. The external parameters of the camera device are thus determined using the edge markers in the image of the target road scene as reference points, so reference points need not be selected manually, and the accuracy of camera external parameter calibration is improved. Moreover, when external parameter calibration is performed on the target road scene, the selected edge markers at the edge of the captured image are closer to the camera than the other markers, occupy a larger proportion of the image than the markers in the middle of the image, show more detail, and are more sensitive to external parameter accuracy; selecting reference points on the edge markers therefore makes the calibration more accurate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
FIG. 2 schematically shows a flowchart of a calibration method for camera external parameters according to an embodiment of the present application;
FIG. 3 schematically shows a flowchart of screening a target road scene according to an embodiment of the present application;
FIG. 4 schematically shows a flowchart of screening a target road scene according to another embodiment of the present application;
FIG. 5 schematically shows a flowchart of determining a first external parameter of the camera device that captured an image from the image data and point cloud data of the edge markers, according to an embodiment of the application;
FIG. 6 schematically shows a flowchart of acquiring the first conversion data formed by mapping the point cloud data of the edge markers into the image, according to an embodiment of the present application;
FIG. 7 schematically shows a flowchart of acquiring the first conversion data according to the second external parameter, according to an embodiment of the present application;
FIG. 8 schematically shows a flowchart of screening edge features according to an embodiment of the present application;
FIG. 9 schematically shows a flowchart of verifying the first external parameter according to an embodiment of the present application;
FIG. 10 schematically shows a flowchart of a calibration method for camera external parameters according to an embodiment of the present application;
FIG. 11 schematically shows a block diagram of a calibration apparatus for camera external parameters according to an embodiment of the present application;
FIG. 12 is a hardware diagram of a calibration apparatus for camera external parameters according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the technical solutions of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture 100 may include a camera 101 (the camera 101 may be a vehicle-mounted video camera), a three-dimensional scanner 102, a network 103, a server 104, and a terminal device 105 (the terminal device 105 may be one or more of a smartphone, a tablet computer, a laptop computer, a desktop computer, a registration kiosk, and the like). The network 103 is the medium that provides communication links between the camera 101, the three-dimensional scanner 102, the server 104, and the terminal device 105. The network 103 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the numbers of cameras 101, three-dimensional scanners 102, networks 103, servers 104, and terminal devices 105 in fig. 1 are merely illustrative. There may be any number of each, as required by the implementation. For example, the server 104 may be a server cluster composed of multiple servers.
In one embodiment of the present application, the server 104 may acquire an image of the target road scene captured by the camera 101 and a point cloud data set of the target road scene collected by the three-dimensional scanner 102. The server 104 acquires the image data of the edge markers located at the edge of the image and the point cloud data of the edge markers, and determines the first external parameter of the camera device that captured the image from the image data and point cloud data of the edge markers. The edge markers in the image of the target road scene thus serve as reference points instead of the manually selected reference points of the prior art, which avoids manual involvement and improves the accuracy of camera external parameter calibration. Moreover, when external parameter calibration is performed on the target road scene, the selected edge markers at the edge of the captured image are closer to the camera than the other markers, occupy a larger proportion of the image than the markers in the middle of the image, show more detail, and are more sensitive to external parameter accuracy, so selecting reference points on the edge markers makes the calibration more accurate.
It should be noted that the calibration method for camera external parameters provided in the embodiments of the present application is generally executed by the server 104, and accordingly, the calibration apparatus for camera external parameters is generally disposed in the server 104. However, in other embodiments of the present application, the terminal device 105 may also have functions similar to those of the server 104, so as to execute the calibration method for camera external parameters provided in the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
Fig. 2 schematically shows a flowchart of a calibration method for camera external parameters according to an embodiment of the present application; the method may be executed by a server, such as the server 104 shown in fig. 1.
Referring to fig. 2, the calibration method for camera external parameters includes at least steps S210 to S230, described in detail as follows:
in step S210, an image of a target road scene and a point cloud dataset of the target road scene are acquired.
In an embodiment of the present application, before step S210 shown in fig. 2, the target road scene may be filtered through the steps shown in fig. 3, which specifically include the following steps S310 to S350:
in step S310, features of edge markers located at edges of the image, and features of other markers than the edge markers are extracted from the image of the target road scene.
In one embodiment of the present application, the markers may include road factors such as road surfaces, lane lines, guardrails, light poles, and road signs, and may also include environmental factors such as trees and buildings in the target road scene. The edge markers may be the above road and environmental factors located at the edges of the image of the target road scene. The markers other than the edge markers may be the above road and environmental factors in the image of the target road scene other than the edge markers.
In one embodiment of the present application, the selected target road scene may be a straight road scene. For the same marker density, markers on a straight road are spread more evenly across the image than markers at a road turn, so selecting a straight road scene makes camera external parameter calibration simpler.
In one embodiment of the present application, the criterion for dividing edge markers from other markers may be determined from the positions of the markers in the image: markers closer to the edge of the captured image occupy a larger proportion of the image and show more detail, so selecting reference points on them facilitates camera external parameter calibration.
In one embodiment of the present application, the dividing criteria of the edge markers and other markers may be determined according to the distance between the markers and the camera, because the markers closer to the camera occupy a larger proportion in the image, and the camera external reference calibration is more accurate by selecting reference points thereon. When the camera is used for calibrating the target road scene, the selected reference object is a straight marker of the target road scene, no marker is shielded right in front of the camera, and in the image of the target road scene acquired by the camera, the marker closer to the camera is closer to the edge of the image.
In one embodiment of the present application, road factors such as a road surface, a lane line, a guardrail, a light pole, and a road sign in the edge marker may be selected as reference points, and environmental factors such as trees and buildings in the edge marker may also be selected as reference points.
In one embodiment of the present application, the features of a marker may be determined according to the category, position, shape, structure, and proportion of the marker in the image, and may be classified into surface features, line features, and point features. The surface features may include: road surfaces, signboards, sides of buildings, and the like; the line features may include: lane lines, guardrails, light poles, trees, and the like; the point features may include: lane line end points, lamp post vertices, corner points of guideboard edge contours, and the like.
In one embodiment of the present application, switching between the face feature, line feature, and point feature is possible. For example, when the marker is a lane line, when the lane line is far away from a camera for acquiring images and the four sides of the lane line cannot be distinguished, the whole lane line can be used as a line feature; when the lane line is close to the camera for collecting the image and four sides of the lane line can be distinguished, the whole lane line can be used as a surface feature, each side of the lane line can be used as a line feature, a central axis of the lane line can be used as a line feature, and an intersection point of every two adjacent sides of the lane line can be used as a point feature.
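For illustration, the following is a minimal sketch of how a nearby lane line yields all three kinds of features from its four corner points (the corner ordering is an assumption introduced here):

    import numpy as np

    def lane_line_features(corners):
        # corners: (4, 2) pixel coordinates of the lane line's four
        # vertices, ordered around the quadrilateral.
        face = corners                                # whole lane line: surface feature
        sides = [(corners[i], corners[(i + 1) % 4])   # each side: line feature
                 for i in range(4)]
        axis = ((corners[0] + corners[3]) / 2.0,      # central axis endpoints:
                (corners[1] + corners[2]) / 2.0)      # another line feature
        points = list(corners)                        # adjacent-side intersections:
        return face, sides, axis, points              # point features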
In one embodiment of the present application, the surface features in the target road scene image may be numbered consecutively according to their positions in the target road scene image; the line features in the target road scene image can also be numbered continuously according to the positions of the line features in the target road scene image; the point features in the target road scene image may also be numbered consecutively according to their position in the target road scene image. This sequential numbering avoids missing features.
In one embodiment of the present application, the features of the markers may be extracted from the image of the target road scene using a deep learning model. Before feature extraction, the deep learning model can be trained in advance through the following process: obtain a set of image samples of road scenes in which the features of the markers contained in each image sample are known; input each image sample into the deep learning model and obtain the features of the markers in each image sample output by the model; compare the output features with the known features of the markers in each image sample; and, if they are inconsistent, adjust the deep learning model until the output features of the markers are consistent with the known features.
In an embodiment of the application, after the deep learning model training is completed, the image of the target road scene may be input into the deep learning model, and then the features of the markers in the image of the target road scene output by the deep learning model may be obtained. And dividing the characteristics of the markers in the image of the target road scene into the characteristics of the edge markers and the characteristics of other markers except the edge markers according to the positions of the characteristics in the image.
In one embodiment of the present application, before feature extraction, the deep learning model may be trained in advance through the following process: obtain a set of image samples of road scenes in which, for each image sample, both the features of the edge markers and the features of the markers other than the edge markers are known; input each image sample into the deep learning model and obtain the features of the edge markers and the features of the other markers output by the model; compare the output features of the edge markers with the known features of the edge markers, and compare the output features of the other markers with the known features of the other markers; and, if either comparison is inconsistent, adjust the deep learning model until the output features of the edge markers and of the other markers are consistent with the known features.
In step S320, the number of features of other markers and the number of features of edge markers are counted.
In step S330, it is determined whether the sum of the number of features of the other markers and the number of features of the edge marker reaches a third threshold value.
In one embodiment of the present application, the third threshold value may be set as needed.
In step S340, if the sum of the number of features of the other markers and the number of features of the edge markers reaches the third threshold, an image of another road scene and a point cloud data set of the other road scene are acquired, and the first external parameter is determined from the image of the other road scene and the point cloud data set of the other road scene.
In step S350, if the sum of the number of features of the other markers and the number of features of the edge marker does not reach the third threshold, the execution continues with step S220.
In this embodiment, when the total number of markers in the image of the target road scene is too large, the markers are too concentrated, which makes camera external parameter calibration difficult. Therefore, when the sum of the number of features of the other markers and the number of features of the edge markers reaches the third threshold, the target road scene is discarded, another road scene is selected, and the first external parameter is determined from the image of the other road scene and the point cloud data set of the other road scene. Selecting another road scene with fewer markers makes the calibration easier.
In one embodiment of the present application, a fourth threshold may be set as a lower limit for the sum of the number of features of the other markers and the number of features of the edge markers. If the sum exceeds the fourth threshold and does not reach the third threshold, step S220 is performed. If the sum does not reach the fourth threshold, an image of another road scene and a point cloud data set of the other road scene are acquired, and the first external parameter is determined from them.
In the embodiment, another road scene with a moderate number of markers in the road scene is selected for camera external reference calibration, so that calibration can be more accurate.
In one embodiment of the present application, ranges may be set separately for the number of features of the other markers and the number of features of the edge markers in the image of the target road scene. If the number of features of the other markers is within the first range set for them and the number of features of the edge markers is within the second range set for them, step S220 is continued. If the number of features of the other markers is not within the first range, or the number of features of the edge markers is not within the second range, an image of another road scene and a point cloud data set of the other road scene are acquired, and the first external parameter is determined from them.
In the embodiment, the camera external parameter calibration is performed by selecting the target road scene with a reasonable number of other markers and edge markers, so that the calibration of the camera external parameter is more accurate.
In an embodiment of the present application, an image of a target road scene may be divided into a plurality of regions, one of the regions may be arbitrarily selected, features of markers may be extracted from the region, and a target road scene in which the number of the features of the markers in each region meets requirements may be selected. A third range can be set for the number of the features of each area, and the images of the target road scene with the number of the features of each area within the third range are selected for camera parameter calibration, so that the camera parameter calibration is simpler.
In an embodiment of the application, the concentration of the image of the target road scene can be calculated from the positions of the markers in the image, and an image whose concentration meets the requirement can be selected so that camera external parameter calibration is more accurate. The concentration may be calculated as follows: divide the image of the target road scene evenly into several regions, count the number of marker features in each region, compute the ratio of each region's count to the total number of marker features in the image, and take the maximum of these ratios as the concentration of the image.
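A minimal sketch of this concentration computation follows; the 4x4 grid and the pixel-coordinate feature representation are illustrative assumptions, since the text only specifies regions, per-region counts, and taking the maximum ratio:

    import numpy as np

    def concentration_ratio(feature_xy, img_w, img_h, grid=4):
        # feature_xy: (N, 2) array of marker-feature positions in pixels.
        col = np.clip((feature_xy[:, 0] / img_w * grid).astype(int), 0, grid - 1)
        row = np.clip((feature_xy[:, 1] / img_h * grid).astype(int), 0, grid - 1)
        counts = np.zeros((grid, grid), dtype=int)
        np.add.at(counts, (row, col), 1)  # per-region feature counts
        # Maximum ratio of a region's count to the total feature count.
        return counts.max() / max(len(feature_xy), 1)

A scene would then be kept for calibration only when this ratio is below a chosen bound.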
In an embodiment of the present application, before step S210 shown in fig. 2, a target road scene may be further screened through the steps shown in fig. 4, which specifically include the following steps S410 to S440:
extracting features of markers other than the edge markers from the image in step S410;
in step S420, determining whether the features of the other markers lack the first type of features or the second type of features;
in step S430, if the features of the other markers lack the first type of features or the second type of features, an image of another road scene and a point cloud data set of the other road scene are acquired, and the first external parameter is determined from the image of the other road scene and the point cloud data set of the other road scene;
in step S440, if the features of the other markers do not lack the first type of features and do not lack the second type of features, the process continues to step S220.
In one embodiment of the present application, the first type of features may be features of the other markers for which corresponding point cloud data is easy to find, and the second type of features may be features of the other markers for which corresponding point cloud data is not easy to find. A set percentage of the size of the image can be used as the criterion distinguishing the first type from the second type: when the size of a feature reaches the set percentage of the size of the image, the feature is classified as a first type of feature; otherwise, it is classified as a second type of feature.
In one embodiment of the present application, the percentage may be set separately for surface features, line features, and point features on the basis of the size of the image. The features are first classified into surface features, line features, and point features. If the size of a surface feature reaches the percentage of the image size set for surface features, it is classified as a first type of feature, and otherwise as a second type of feature; the same applies to line features and point features with their respective percentages.
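As a minimal sketch of this size-based split (the per-category percentages below are illustrative assumptions; the text only states that a percentage of the image size is set for each category):

    FIRST_TYPE_MIN_FRACTION = {"surface": 0.01, "line": 0.005, "point": 0.001}

    def classify_feature(category, feature_size, image_size):
        # Returns 1 for a first type of feature (corresponding point cloud
        # data easy to find) and 2 for a second type of feature.
        if feature_size >= FIRST_TYPE_MIN_FRACTION[category] * image_size:
            return 1
        return 2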
In the embodiment of fig. 4, a target road scene whose other markers lack neither the first type of features nor the second type of features is selected for calibration; the external parameters of the camera can then be obtained step by step from the first type and second type of features during calibration, which makes the result of camera external parameter calibration more accurate.
With continued reference to fig. 2, in step S220, image data and point cloud data of edge markers located at the edges of the image are acquired.
In one embodiment of the present application, the image data of the marker may be position information of the marker in the image of the target road scene. For example, a camera coordinate system may be established with the camera position as the origin, and coordinates of the markers in the camera coordinate system are image data of the markers. The point cloud data of the marker may be coordinates of the marker in a point cloud coordinate system.
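The relation between the two coordinate systems is exactly the external parameter to be calibrated. As an illustration under the standard pinhole model (an assumption; the later mapping steps imply this model but the formula is not written out), a point p_cloud in the point cloud coordinate system maps into the image as

    p_cam = R * p_cloud + t
    s * [u, v, 1]^T = K * p_cam

where (R, t) is the camera external parameter (the rotation and translation from the point cloud coordinate system to the camera coordinate system), K is the internal parameter matrix from the mature internal calibration mentioned in the background, s is a scale factor, and (u, v) are the pixel coordinates of the marker in the image.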
In step S230, a first external parameter of the camera device that captured the image is determined from the image data and point cloud data of the edge markers.
In one embodiment of the present application, the first external parameter may also be determined jointly from the image data and point cloud data of the edge markers and the image data and point cloud data of the other markers. Solving for the external parameters with more markers makes the obtained camera external parameters more stable.
In one embodiment of the present application, as shown in fig. 5, determining the first external parameter of the camera device that captured the image from the image data and point cloud data of the edge markers in step S230 may include steps S510 to S520:
In step S510, the first conversion data formed by mapping the point cloud data of the edge markers into the image is acquired.
In one embodiment of the present application, as shown in fig. 6, acquiring the first conversion data formed by mapping the point cloud data of the edge markers into the image in step S510 may include steps S610 to S630:
in step S610, image data of other markers in the image except for the edge marker is acquired, first type features of the other markers are extracted from the image data of the other markers, and point cloud data of the first type features is acquired.
In step S620, a second external parameter is obtained according to the first type of feature and the point cloud data of the first type of feature.
In an embodiment of the application, after point cloud data of a first type of features is obtained, the point cloud data of the first type of features is mapped to an image according to set initial external parameters to obtain initial conversion data of the first type of features, a cost function is established according to the first type of features and the initial conversion data, and then nonlinear optimization solution is performed on the cost function to obtain second external parameters.
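A minimal sketch of this optimization step, assuming point features, a standard pinhole projection, an axis-angle parameterization of the external parameter, and SciPy's least-squares solver (all implementation choices not fixed by the text):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(params, pts_cloud, K):
        # Map point cloud data into the image under the external
        # parameter params = (rx, ry, rz, tx, ty, tz).
        rvec, t = params[:3], params[3:]
        pts_cam = Rotation.from_rotvec(rvec).apply(pts_cloud) + t
        uvw = (K @ pts_cam.T).T
        return uvw[:, :2] / uvw[:, 2:3]  # pixel coordinates

    def solve_external_parameter(feat_uv, pts_cloud, K, initial_params):
        # feat_uv: (N, 2) first type of features in the image;
        # pts_cloud: (N, 3) their point cloud data; initial_params: the
        # set initial external parameter, e.g. one from another scene.
        cost = lambda p: (project(p, pts_cloud, K) - feat_uv).ravel()
        # Nonlinear optimization of the reprojection cost function.
        return least_squares(cost, initial_params).x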
In one embodiment of the present application, the initial external parameters may be set according to external parameters obtained by the camera in other scenes.
In one embodiment of the present application, the first type of feature is in a camera coordinate system with the camera as an origin, and the point cloud data of the first type of feature is in a point cloud coordinate system. And mapping the point cloud data of the first type of features into the image, namely converting the point cloud data of the first type of features into a camera coordinate system.
In step S630, the first conversion data is acquired according to the second external parameter.
In an embodiment of the present application, as shown in fig. 7, acquiring the first conversion data according to the second external parameter in step S630 may include steps S710 to S780:
in step S710, the point cloud data of the first type of feature is mapped to the image according to the second external parameter, so as to obtain second conversion data corresponding to the first type of feature.
In step S720, the distance between the first type of features and the second conversion data is calculated.
In one embodiment of the present application, the first type of features is expressed in the camera coordinate system with the camera as the origin, and the second conversion data is obtained by converting the point cloud data of the first type of features from the point cloud coordinate system into the camera coordinate system. Calculating the distance between the first type of features and the second conversion data therefore means calculating the distance between the coordinates of the features and the coordinates of the second conversion data in the camera coordinate system.
In one embodiment of the present application, the first type of features may be surface features, line features, or point features. Calculating the distance between the first type of features and the second conversion data accordingly means calculating the distance between a surface feature and its second conversion data, between a line feature and its second conversion data, or between a point feature and its second conversion data.
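For illustration, the sketches below give standard geometric distances that could implement the point and line cases (the text does not fix the exact metrics; a surface feature could, for example, average the distances of its corner points):

    import numpy as np

    def point_feature_distance(p, q):
        # Point feature p vs. its second conversion data q,
        # both (2,) pixel coordinates.
        return float(np.linalg.norm(p - q))

    def line_feature_distance(p, a, b):
        # Perpendicular distance from a converted point p to the
        # line feature through endpoints a and b.
        d = b - a
        cross = d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])
        return float(abs(cross) / np.linalg.norm(d))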
In step S730, whether the distance between the first type of features and the second conversion data exceeds the first threshold is determined.
In step S740, if it is determined that the distance between the first type of features and the second conversion data exceeds the first threshold, an image of another road scene and a point cloud data set of the other road scene are acquired, and the first external parameter is determined from the image of the other road scene and the point cloud data set of the other road scene.
In step S750, if the distance between the first type of features and the second conversion data does not exceed the first threshold, acquisition of the first conversion data proceeds: the second type of features of the other markers is extracted from the image data of the other markers.
In step S760, point cloud data of the second type of feature is obtained, and the point cloud data of the second type of feature is mapped to the image according to the second external parameter, so as to obtain third conversion data corresponding to the second type of feature.
In step S770, a third external parameter is obtained according to the second class feature and the third conversion data.
In step S780, the point cloud data of the edge markers is mapped into the image according to the third external parameter, so as to obtain the first conversion data of the edge markers.
In this embodiment, the second external parameter is obtained from the first type of features, and the target road scene is screened by the distance between the first type of features and the second conversion data, so that in the retained scene this distance stays within the first threshold; the second type of features is then mapped into the image according to the second external parameter to obtain the third conversion data. The first external parameter obtained from the image data and point cloud data of the screened target road scene is therefore more accurate.
With continued reference to fig. 5, in step S520, the first external parameter is determined from the image data of the edge markers and the first conversion data.
In an embodiment of the application, a cost function may be established according to the image data of the edge marker and the first conversion data, and then the cost function is subjected to nonlinear optimization solution to obtain the first external parameter.
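Written out, such a cost function is a generic reprojection error of the form (the exact residual and any robust weighting are assumptions, since the text does not specify them):

    E(R, t) = sum_i || x_i - pi( K * (R * p_i + t) ) ||^2,    pi([X, Y, Z]^T) = [X/Z, Y/Z]^T

where x_i is the image data of the i-th edge marker feature, p_i is its point cloud data, and the (R, t) minimizing E is taken as the first external parameter.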
In this embodiment, the image of the target road scene and the point cloud data set of the target road scene are collected, the image data and point cloud data of the edge markers located at the edge of the image are acquired, the point cloud data of the edge markers is mapped into the image to obtain the first conversion data, and the first external parameter of the camera device that captured the image is determined from the image data of the edge markers and the first conversion data. The edge markers in the image of the target road scene thus serve as reference points instead of the manually selected reference points of the prior art, which avoids manual involvement and improves the accuracy of camera external parameter calibration. Moreover, when external parameter calibration is performed on the target road scene, the selected edge markers at the edge of the captured image are closer to the camera than the other markers in the middle of the image, occupy a larger proportion of the image, show more detail, and are more sensitive to external parameter accuracy, so the reference points selected on the edge markers make the calibration more accurate.
In one embodiment of the present application, the first conversion data may be screened, and the first external parameter may be determined from the screened first conversion data and the edge markers corresponding to it.
In one embodiment of the present application, if an item of first conversion data has no matching edge marker in the image, that item is discarded, and the first external parameter is determined from the remaining first conversion data and the edge markers corresponding to the remaining first conversion data. Discarding unmatched features makes the obtained camera external parameters more accurate.
In one embodiment of the present application, referring to fig. 8, when the first external parameter is determined from the image data of the edge markers and the first conversion data in step S520, the edge markers may be screened according to steps S810 to S860, so that the first external parameter is determined from the image data and first conversion data of the screened edge markers:
In step S810, the point cloud data of the edge markers is mapped into the image according to the third external parameter to obtain the first conversion data of the edge markers;
in step S820, if the plurality of edge markers have the same feature, calculating a distance between first conversion data of a specified edge marker among the plurality of edge markers and image data of the specified edge marker as a first distance;
in step S830, a distance between the first conversion data of another edge marker adjacent to the specified edge marker among the plurality of edge markers and the image data of the another edge marker is calculated as a second distance;
in step S840, it is determined whether a difference between the first distance and the second distance reaches a second threshold;
in step S850, if the difference between the first distance and the second distance reaches the second threshold, the first external parameter is determined from the image data of the specified edge marker and the point cloud data of the specified edge marker.
In step S860, if the difference between the first distance and the second distance does not reach the second threshold, the specified edge marker is discarded, and the first external parameter is determined from the remaining edge markers other than the specified edge marker.
In this embodiment, for adjacent edge markers that have the same feature, the distance between each marker's first conversion data and its image data is compared, and a specified edge marker is kept only when this distance differs markedly from that of its neighbor. The point cloud data of the selected edge markers can then be associated accurately, and wrong associations between edge markers with identical features are avoided, so the first external parameter obtained from them is more accurate.
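A sketch of this neighbor-comparison screen; the marker representation (dicts holding image data and first conversion data for a run of adjacent edge markers with the same feature) is an assumption:

    import numpy as np

    def screen_edge_markers(markers, second_threshold):
        # markers: adjacent edge markers with the same feature, each a dict
        # with "image_xy" (image data) and "converted_xy" (first conversion
        # data), both (2,) pixel coordinates.
        kept = []
        for i, m in enumerate(markers):
            d1 = np.linalg.norm(m["image_xy"] - m["converted_xy"])  # first distance
            n = markers[i + 1] if i + 1 < len(markers) else markers[i - 1]
            d2 = np.linalg.norm(n["image_xy"] - n["converted_xy"])  # second distance
            # Keep the specified marker only when the difference reaches
            # the second threshold (steps S850/S860).
            if abs(d1 - d2) >= second_threshold:
                kept.append(m)
        return kept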
In one embodiment of the present application, the first type and second type of features of the other markers may also be screened by the steps of the embodiment shown in fig. 8. The camera external parameters obtained from the screened first type and second type of features are more accurate.
In one embodiment of the present application, referring to fig. 9, the first external parameter may be checked according to steps S910 to S960:
In step S910, the point cloud data of the edge markers is mapped into the image according to the first external parameter to obtain fourth conversion data of the edge markers;
in step S920, if the plurality of edge markers have the same feature, calculating a distance between fourth conversion data of a designated edge marker among the plurality of edge markers and image data of the designated edge marker as a first distance;
in step S930, calculating an average value of the first distances of the plurality of edge markers;
in step S940, it is determined whether the average value of the first distances reaches a fifth threshold;
in step S950, if the average value of the first distances reaches the fifth threshold value, it is confirmed that the first external parameter passes the test.
In step S960, if the average of the first distances does not reach the fifth threshold, an image of another road scene and a point cloud data set of the other road scene are acquired, and the first external parameter is determined from the image of the other road scene and the point cloud data set of the other road scene.
In this embodiment, the selected specified edge markers are mapped into the image according to the first external parameter, and the first external parameter is checked by testing whether the first distance between each specified edge marker and its fourth conversion data reaches the fifth threshold, so that the obtained camera external parameters are more accurate.
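A sketch of this check, reusing the assumed project helper from the earlier optimization sketch. It follows the text as written, under which the check passes when the average distance reaches the fifth threshold; a conventional reprojection check would invert the comparison:

    import numpy as np

    def check_first_external_parameter(markers, params, K, fifth_threshold):
        # markers: (image_xy, cloud_xyz) pairs for the specified edge
        # markers; params: the solved first external parameter.
        first_distances = []
        for image_xy, cloud_xyz in markers:
            fourth_conversion = project(params, cloud_xyz[None, :], K)[0]
            first_distances.append(np.linalg.norm(image_xy - fourth_conversion))
        # Passes when the average first distance reaches the threshold,
        # per steps S940 to S950 as stated.
        return np.mean(first_distances) >= fifth_threshold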
It should be noted that fig. 9 only schematically shows a flow of the calibration method of the camera external reference according to an embodiment of the present application, where an execution sequence between the steps may be adjusted, for example, step S960 may also be executed before step S950.
Fig. 10 schematically shows a flowchart of a calibration method for camera external parameters according to an embodiment of the present application; the method may be executed by a server, such as the server 104 shown in fig. 1.
Referring to fig. 10, the calibration method for camera external parameters includes at least steps S1010 to S1090, described in detail as follows:
in step S1010, a target road scene is selected, and an image of the target road scene and a second point cloud data set of the target road scene are collected.
In an embodiment of the application, a deep learning model can be adopted to select a target road scene, and the target road scene can be a road scene with proper quantity of markers and uniform distribution of the markers in a straight road, so that camera external parameter calibration is facilitated.
In step S1020, a marker in the image of the target road scene is acquired, and features are extracted from the image data of the marker.
In one embodiment of the present application, the markers are markers carried by the target road in the target road scene, including lane lines, guardrails, light poles, road signs, and the like. The features are likewise divided into surface features, point features, and line features according to the shape and position of the markers.
In step S1030, the features are associated with the point cloud data for which the features correspond in the second point cloud data set.
In step S1040, a cost function is established according to the features and the point cloud data corresponding to the features in the second point cloud data set, and the cost function is subjected to nonlinear optimization solution to obtain a second external parameter.
In step S1050, the features are screened according to the second external parameters, the features and the point cloud data corresponding to the features in the second point cloud data set, and the first point cloud data set is formed by the point cloud data corresponding to the screened features in the second point cloud data set.
In one embodiment of the application, the point cloud data in the second point cloud data set is mapped into the image according to the second external parameter. For each of the one or more features, the distance between the mapped point cloud data corresponding to that feature and the feature in the image is calculated, and it is judged whether this distance exceeds the first threshold. If the distance exceeds the first threshold, the feature is discarded; if it does not, the point cloud data corresponding to the feature in the second point cloud data set is added to the first point cloud data set.
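A sketch of this screening step S1050, again reusing the assumed project helper from the earlier optimization sketch:

    import numpy as np

    def build_first_point_cloud_set(associations, params, K, first_threshold):
        # associations: (image_xy, cloud_xyz) pairs linking features to
        # their data in the second point cloud data set; params: the
        # second external parameter.
        first_set = []
        for image_xy, cloud_xyz in associations:
            mapped = project(params, cloud_xyz[None, :], K)[0]
            if np.linalg.norm(image_xy - mapped) <= first_threshold:
                first_set.append((image_xy, cloud_xyz))  # keep; else discard
        return first_set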
In one embodiment of the present application, point cloud data in the first point cloud data set that does not have corresponding features is discarded.
In step S1060, a first external parameter is solved according to the point cloud data in the first point cloud data set and the feature corresponding to the point cloud data in the first point cloud data set.
In step S1070, the first external reference is checked to determine whether the first external reference passes the check.
In one embodiment of the application, the point cloud data in the first point cloud data set may be mapped into the image according to the first external parameter, and the average of the distances between the mapped data and the corresponding features is calculated. If the average exceeds the first threshold, the first external parameter passes the check; if the average does not exceed the first threshold, the first external parameter fails the check.
In step S1080, if the first external parameter passes the verification, the calibration is finished.
If the first external parameter fails the check, step S1010 is executed again.
In this embodiment, the point cloud data in the collected second point cloud data set is screened to obtain the first point cloud data set, so the first external parameter obtained from the first point cloud data set is more accurate; at the same time, repeated reselection of the target road scene is avoided, which improves the efficiency of camera external parameter calibration.
The following describes apparatus embodiments of the present application, which may be used to perform the calibration method of the camera external parameter in the above embodiments. For details not disclosed in the apparatus embodiments, please refer to the embodiments of the calibration method of the camera external parameter described above.
FIG. 11 schematically shows a block diagram of a calibration apparatus for camera external parameters according to an embodiment of the present application.
Referring to fig. 11, a calibration apparatus 1100 for camera external parameters according to an embodiment of the present application includes a collection module 1101, an acquisition module 1102, and a processing module 1103.
In some embodiments of the present application, based on the foregoing solution, the collection module 1101 is configured to collect an image of a target road scene and a point cloud data set of the target road scene; the acquisition module 1102 is configured to acquire image data and point cloud data of an edge marker located at the edge of the image; and the processing module 1103 is configured to determine a first external parameter of the camera device that acquired the image according to the image data and the point cloud data of the edge marker.
In some embodiments of the present application, based on the foregoing solution, the processing module 1103 is configured to: acquire point cloud data of the edge marker and map the point cloud data into the image to form first conversion data; and determine the first external parameter from the image data of the edge marker and the first conversion data.
In some embodiments of the present application, based on the foregoing solution, the processing module 1103 is further configured to: acquire image data of markers in the image other than the edge marker; extract the first type of features of the other markers from the image data of the other markers; acquire point cloud data of the first type of features; obtain a second external parameter according to the first type of features and the point cloud data of the first type of features; and obtain the first conversion data according to the second external parameter.
In some embodiments of the present application, based on the foregoing solution, the processing module 1103 is further configured to: map the point cloud data of the first type of features to the image according to the second external parameter to obtain second conversion data corresponding to the first type of features; calculate the distance between the first type of features and the second conversion data; and, if the distance between the first type of features and the second conversion data exceeds a first threshold, acquire an image of another road scene and a point cloud data set of the other road scene, and determine the first external parameter according to that image and point cloud data set.
In some embodiments of the present application, based on the foregoing solution, the processing module 1103 is further configured to: if the distance between the first type of features and the second conversion data does not exceed the first threshold, extract the second type of features of the other markers from the image data of the other markers; acquire point cloud data of the second type of features; map the point cloud data of the second type of features to the image according to the second external parameter to obtain third conversion data corresponding to the second type of features; obtain a third external parameter according to the second type of features and the third conversion data; and map the point cloud data of the edge marker into the image according to the third external parameter to obtain the first conversion data of the edge marker.
In some embodiments of the present application, based on the foregoing solution, the processing module 1103 is configured to: if a plurality of edge markers have the same features, calculate the distance between the first conversion data of a specified edge marker among the plurality of edge markers and the image data of the specified edge marker as a first distance; calculate the distance between the first conversion data of another edge marker adjacent to the specified edge marker and the image data of that marker as a second distance; and, if the difference between the first distance and the second distance reaches a second threshold, determine the first external parameter according to the image data and the point cloud data of the specified edge marker.
In some embodiments of the present application, based on the foregoing solution, the processing module 1103 is further configured to: if the difference between the first distance and the second distance does not reach the second threshold, re-acquire the point cloud data of the specified edge marker and determine the first external parameter according to the image data of the specified edge marker and the re-acquired point cloud data.
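As a sketch of this pair of rules only: the gap between a marker's first conversion data and its image data is aggregated here into a mean pixel distance, and the data layout and threshold value are assumptions introduced for illustration, not part of this application.

```python
import numpy as np

def marker_distance(conversion_uv, image_uv):
    """Mean pixel gap between a marker's first conversion data (its projected
    point cloud data) and its image data; the mean is an assumed aggregation."""
    return float(np.linalg.norm(conversion_uv - image_uv, axis=1).mean())

def edge_marker_decision(specified, adjacent, second_threshold=2.0):
    """For two adjacent edge markers with the same features: compare the first
    distance (specified marker) with the second distance (adjacent marker).
    `specified` and `adjacent` are hypothetical dicts holding 'conv' (projected
    points, Nx2) and 'img' (image data, Nx2) arrays."""
    d1 = marker_distance(specified["conv"], specified["img"])
    d2 = marker_distance(adjacent["conv"], adjacent["img"])
    if abs(d1 - d2) >= second_threshold:
        # difference reaches the second threshold: solve the first external
        # parameter from the specified marker's image and point cloud data
        return "solve_from_specified_marker"
    # otherwise: re-acquire the specified marker's point cloud data and retry
    return "reacquire_point_cloud"
```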
In some embodiments of the present application, based on the foregoing solution, the calibration apparatus of the camera external parameter further includes an inspection module configured to: extract the features of the edge markers and the features of markers other than the edge markers from the image; count the number of features of the other markers and the number of features of the edge markers; and, if the sum of the two counts reaches a third threshold, acquire an image of another road scene and a point cloud data set of the other road scene, and determine the first external parameter according to that image and point cloud data set.
In some embodiments of the present application, based on the foregoing solution, the inspection module is further configured to: extract the features of markers other than the edge markers from the image; and, if the features of the other markers lack the first type of features or the second type of features, acquire an image of another road scene and a point cloud data set of the other road scene, and determine the first external parameter according to that image and point cloud data set.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 1200 according to this embodiment of the present application is described below with reference to fig. 12. The electronic device 1200 shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 12, the electronic device 1200 is embodied in the form of a general purpose computing device. The components of the electronic device 1200 may include, but are not limited to: at least one processing unit 1210, at least one storage unit 1220, a bus 1230 connecting the various system components (including the storage unit 1220 and the processing unit 1210), and a display unit 1240.
Wherein the storage unit stores program code executable by the processing unit 1210, causing the processing unit 1210 to perform the steps according to various exemplary embodiments of the present application described in the "Exemplary Methods" section above in this specification.
The storage unit 1220 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 1221 and/or a cache memory unit 1222, and may further include a read only memory unit (ROM) 1223.
Storage unit 1220 may also include a program/utility 1224 having a set (at least one) of program modules 1225, such program modules 1225 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1230 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1200 may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1200, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 1250. Also, the electronic device 1200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 1260. As shown, the network adapter 1260 communicates with the other modules of the electronic device 1200 via the bus 1230. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present application.
There is also provided, in accordance with an embodiment of the present application, a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the present application may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present application described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
There is also provided, in accordance with an embodiment of the present application, a program product for implementing the above-described method, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present application, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A calibration method of external parameters of a camera is characterized by comprising the following steps:
acquiring an image of a target road scene and a point cloud data set of the target road scene;
acquiring image data and point cloud data of edge markers positioned at the edge of the image;
determining a first external parameter of a camera device that acquired the image according to the image data and the point cloud data of the edge marker.
2. The method for calibrating the camera external parameter according to claim 1, wherein the determining the first external parameter of the camera device that acquired the image according to the image data and the point cloud data of the edge marker comprises:
acquiring point cloud data of the edge marker and mapping the point cloud data to first conversion data formed in the image;
determining the first external parameter from the image data and the first conversion data of the edge marker.
3. The method for calibrating the camera external parameter according to claim 2, wherein the acquiring point cloud data of the edge marker and mapping the point cloud data to first conversion data formed in the image comprises:
acquiring image data of markers other than the edge marker in the image;
extracting a first type of feature of the other marker from the image data of the other marker;
acquiring point cloud data of the first type of features;
obtaining a second external parameter according to the first type of features and the point cloud data of the first type of features;
and acquiring the first conversion data according to the second external parameter.
4. The method for calibrating the camera external parameter according to claim 3, wherein the obtaining the first conversion data according to the second external parameter comprises:
mapping the point cloud data of the first type of features to the image according to the second external parameter to obtain second conversion data corresponding to the first type of features;
calculating a distance between the first type of features and the second conversion data;
if the distance between the first type of features and the second conversion data is determined to exceed a first threshold, acquiring an image of another road scene and a point cloud data set of the another road scene, and determining the first external parameter according to the image of the another road scene and the point cloud data set of the another road scene.
5. The method for calibrating the camera external parameter according to claim 4, wherein if the distance between the first type of features and the second conversion data does not exceed the first threshold, the obtaining the first conversion data according to the second external parameter comprises:
extracting a second type of feature of the other marker from the image data of the other marker;
acquiring point cloud data of the second type of features;
mapping the point cloud data of the second type of features to the image according to the second external parameter to obtain third conversion data corresponding to the second type of features;
obtaining a third external parameter according to the second type of features and the third conversion data;
and mapping the point cloud data of the edge marker to the image according to the third external parameter to obtain first conversion data of the edge marker.
6. The method for calibrating the camera external parameter according to claim 2, wherein the determining the first external parameter according to the image data of the edge marker and the first conversion data comprises:
if a plurality of edge markers have the same features, calculating a distance between first conversion data of a specified edge marker among the plurality of edge markers and image data of the specified edge marker as a first distance;
calculating a distance between the first conversion data of another edge marker adjacent to the specified edge marker among the plurality of edge markers and the image data of the another edge marker as a second distance;
and if the difference value of the first distance and the second distance reaches a second threshold value, determining the first external parameter according to the image data of the specified edge marker and the point cloud data of the specified edge marker.
7. The method for calibrating the camera external parameter according to claim 6, wherein if the difference between the first distance and the second distance does not reach the second threshold, the point cloud data of the specified edge marker is obtained again, and the first external parameter is determined according to the image data of the specified edge marker and the obtained point cloud data of the specified edge marker.
8. The method for calibrating the camera external parameter according to claim 1, further comprising:
extracting features of the edge markers and features of markers other than the edge markers from the image;
counting the number of features of the other markers and the number of features of the edge marker;
and if the sum of the number of the features of the other markers and the number of the features of the edge markers reaches a third threshold, acquiring an image of another road scene and a point cloud data set of the other road scene, and determining the first external parameter according to the image of the other road scene and the point cloud data set of the other road scene.
9. The method for calibrating the camera external parameter according to claim 1, further comprising:
extracting features of markers other than the edge marker from the image;
and if the features of the other markers lack the first type of features or the second type of features, acquiring an image of another road scene and a point cloud data set of the other road scene, and determining the first external parameter according to the image of the other road scene and the point cloud data set of the other road scene.
10. A calibration device for external parameters of a camera is characterized by comprising:
the device comprises a collection module, an acquisition module, and a processing module, wherein the collection module is used for collecting an image of a target road scene and a point cloud data set of the target road scene;
the acquisition module is used for acquiring image data and point cloud data of edge markers positioned at the edge of the image;
and the processing module is used for determining a first external parameter of a camera device that acquired the image according to the image data and the point cloud data of the edge marker.
CN201910844477.0A 2019-09-06 2019-09-06 Method and device for calibrating external parameters of camera Active CN110782497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910844477.0A CN110782497B (en) 2019-09-06 2019-09-06 Method and device for calibrating external parameters of camera

Publications (2)

Publication Number Publication Date
CN110782497A true CN110782497A (en) 2020-02-11
CN110782497B CN110782497B (en) 2022-04-29

Family

ID=69384126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910844477.0A Active CN110782497B (en) 2019-09-06 2019-09-06 Method and device for calibrating external parameters of camera

Country Status (1)

Country Link
CN (1) CN110782497B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170064287A1 (en) * 2015-08-24 2017-03-02 Itseez3D, Inc. Fast algorithm for online calibration of rgb-d camera
CN106017320A (en) * 2016-05-30 2016-10-12 燕山大学 Bulk cargo stack volume measuring method based on image processing and system for realizing same
CN109215083A (en) * 2017-07-06 2019-01-15 华为技术有限公司 The method and apparatus of the calibrating external parameters of onboard sensor
CN108519605A (en) * 2018-04-09 2018-09-11 重庆邮电大学 Curb detection method based on laser radar and video camera
CN109099901A (en) * 2018-06-26 2018-12-28 苏州路特工智能科技有限公司 Full-automatic road roller localization method based on multisource data fusion
CN110163064A (en) * 2018-11-30 2019-08-23 腾讯科技(深圳)有限公司 A kind of recognition methods of Sign for road, device and storage medium
CN109767473A (en) * 2018-12-30 2019-05-17 惠州华阳通用电子有限公司 A kind of panorama parking apparatus scaling method and device
CN109978842A (en) * 2019-03-14 2019-07-05 藏龙信息科技(苏州)有限公司 A kind of visibility analytic method based on camera image
CN109827595A (en) * 2019-03-22 2019-05-31 京东方科技集团股份有限公司 Indoor inertial navigator direction calibration method, indoor navigation device and electronic equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SURABHI VERMA et al.: "Automatic extrinsic calibration between a camera and a 3D Lidar using 3D point and plane correspondences", Computer Vision and Pattern Recognition *
***: "Research and Implementation of Camera Calibration Technology in Machine Vision", China Masters' Theses Full-text Database, Information Science and Technology *
WANG Yunlong: "Vehicle Detection and Distance Measurement Ahead on Structured Roads Based on Binocular Vision", China Masters' Theses Full-text Database, Information Science and Technology *
MA Mashuang et al.: "Accurate Calibration Method for Cameras with Non-overlapping Fields of View Based on Spatial Constraints", Acta Optica Sinica *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112509054A (en) * 2020-07-20 2021-03-16 北京智行者科技有限公司 Dynamic calibration method for external parameters of camera
CN112509054B (en) * 2020-07-20 2024-05-17 重庆兰德适普信息科技有限公司 Camera external parameter dynamic calibration method
CN112184828A (en) * 2020-08-21 2021-01-05 北京百度网讯科技有限公司 External parameter calibration method and device for laser radar and camera and automatic driving vehicle
CN112184828B (en) * 2020-08-21 2023-12-05 阿波罗智联(北京)科技有限公司 Laser radar and camera external parameter calibration method and device and automatic driving vehicle
CN116168090A (en) * 2023-04-24 2023-05-26 南京芯驰半导体科技有限公司 Equipment parameter calibration method and device
CN116168090B (en) * 2023-04-24 2023-08-22 南京芯驰半导体科技有限公司 Equipment parameter calibration method and device

Also Published As

Publication number Publication date
CN110782497B (en) 2022-04-29

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40020402; Country of ref document: HK)
SE01 Entry into force of request for substantive examination
GR01 Patent grant