CN117523005A - Camera calibration method and device - Google Patents

Camera calibration method and device

Info

Publication number
CN117523005A
Authority
CN
China
Prior art keywords
image
monitoring camera
vehicle
camera
feature
Prior art date
Legal status
Pending
Application number
CN202311617441.1A
Other languages
Chinese (zh)
Inventor
吕铄
蒋姚亮
Current Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd
Publication of CN117523005A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application provide a camera calibration method and device, relating to the technical field of data processing. The method includes: acquiring a first image and a second image synchronously acquired by a first monitoring camera and a second monitoring camera, where the first monitoring camera and the second monitoring camera have an overlapping field-of-view range; identifying a first vehicle in the overlapping regions of the first image and the second image respectively, and extracting vehicle features of a preset part of the first vehicle; determining, based on the extracted vehicle features, an image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera, as the result of camera calibration on the first monitoring camera and the second monitoring camera; checking, based on the feature mapping difference of vehicles in images synchronously acquired by the first monitoring camera and the second monitoring camera, whether the image mapping relationship needs to be updated; and if so, updating the image mapping relationship. By applying the solution provided by the embodiments of the present application, the calibration efficiency of the cameras can be improved.

Description

Camera calibration method and device
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for calibrating a camera.
Background
In order to facilitate vehicle management, a monitoring camera may be used to collect images of a road area, and a travel trajectory of a vehicle travelling in the road area may be obtained based on the collected images, so that whether the vehicle is parked correctly, whether it is travelling in the correct lane, and the like can be detected based on the travel trajectory.
In general, a single monitoring camera has a limited field of view. To obtain a relatively complete travel trajectory for subsequent use, multiple monitoring cameras need to be erected in the road area, so that a trajectory segment of the vehicle can be obtained for the portion of the road area covered by the field of view of each monitoring camera, and the obtained trajectory segments can be fused into a relatively complete travel trajectory of the vehicle in the road area.
Before trajectory fusion is performed, the monitoring cameras need to be calibrated, that is, an image mapping relationship between the images acquired by the monitoring cameras needs to be determined, so that the trajectory segments obtained from the multiple monitoring cameras can be fused based on the image mapping relationship.
In the related art, various manual calibration methods are usually adopted in advance to calibrate the monitoring cameras; the calibration procedure is complex and the calibration efficiency is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for calibrating a camera, so as to improve calibration efficiency of a monitoring camera.
The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a camera calibration method, where the method includes:
acquiring a first image and a second image synchronously acquired by a first monitoring camera and a second monitoring camera, wherein the first monitoring camera and the second monitoring camera have an overlapping field-of-view range;
identifying a first vehicle within overlapping regions of the first image and the second image respectively, wherein the overlapping regions are: the image areas, in the images acquired by the first monitoring camera and the second monitoring camera, that correspond to the overlapping field-of-view range;
extracting a first set of vehicle features of a preset part of the first vehicle from the first image, and extracting a second set of vehicle features of the preset part of the first vehicle from the second image;
determining an image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera based on the first set of vehicle features and the second set of vehicle features, as the result of calibrating the first monitoring camera and the second monitoring camera;
checking whether the image mapping relationship needs to be updated based on the feature mapping difference of vehicles in images synchronously acquired by the first monitoring camera and the second monitoring camera, wherein the feature mapping difference is the difference between an expected feature and a reference feature, the expected feature is obtained by mapping the feature of the vehicle in one synchronously acquired image into the other synchronously acquired image, and the reference feature is the actual feature of the vehicle in the other synchronously acquired image;
and if necessary, updating the image mapping relation.
In a second aspect, an embodiment of the present application provides a camera calibration apparatus, including:
a first image acquisition module, configured to acquire a first image and a second image synchronously acquired by a first monitoring camera and a second monitoring camera, wherein the first monitoring camera and the second monitoring camera have an overlapping field-of-view range;
a first vehicle identification module, configured to identify a first vehicle within overlapping regions of the first image and the second image, wherein the overlapping regions are: the image areas, in the images acquired by the first monitoring camera and the second monitoring camera, that correspond to the overlapping field-of-view range;
a vehicle feature extraction module, configured to extract a first set of vehicle features of a preset part of the first vehicle from the first image, and extract a second set of vehicle features of the preset part of the first vehicle from the second image;
a mapping relationship determining module, configured to determine an image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera based on the first set of vehicle features and the second set of vehicle features, as the result of calibrating the first monitoring camera and the second monitoring camera;
a mapping relationship verification module, configured to check whether the image mapping relationship needs to be updated based on the feature mapping difference of vehicles in images synchronously acquired by the first monitoring camera and the second monitoring camera, and to trigger a mapping relationship updating module if the image mapping relationship needs to be updated, wherein the feature mapping difference is the difference between an expected feature and a reference feature, the expected feature is obtained by mapping the feature of the vehicle in one synchronously acquired image into the other synchronously acquired image, and the reference feature is the actual feature of the vehicle in the other synchronously acquired image;
the mapping relation updating module is used for updating the image mapping relation.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program;
and a processor, configured to implement the method according to the first aspect when executing the program stored in the memory.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the method of the first aspect.
In a fifth aspect, embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
From the above, when the scheme provided by the embodiment of the application is applied to camera calibration, first, the first image and the second image which have the overlapping field of view and are synchronously acquired by the first monitoring camera and the second monitoring camera are obtained, the first vehicle is identified in the overlapping area of the first image and the second image, and then the first group of vehicle features and the second group of vehicle features of the preset part of the first vehicle are extracted from the first image and the second image respectively, so that the image mapping relationship between the images acquired by the first monitoring camera and the images acquired by the second monitoring camera can be determined based on the first group of vehicle features and the second group of vehicle features, and the result of camera calibration on the first monitoring camera and the second monitoring camera is obtained.
Therefore, after the same vehicle is identified from the first image acquired by the first monitoring camera and the second image acquired by the second monitoring camera, the vehicle features of the same vehicle under the two different fields of view can be extracted from the first image and the second image, so that camera calibration is realized according to the different vehicle features extracted from the first image and the second image. Compared with manual calibration, no calibration object needs to be arranged on the road surface in advance by a worker, which reduces the workload and deployment cost of the calibration scheme, allows camera calibration to be realized more conveniently and efficiently, and improves the calibration efficiency of the cameras.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any one of the products or methods of the present application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art may obtain other drawings based on these drawings without creative effort.
Fig. 1 is a flow chart of a first camera calibration method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an overlapping region according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a first camera mounting manner according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a second camera mounting manner according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a key point correspondence provided in an embodiment of the present application;
fig. 6 is a flowchart of a second camera calibration method according to an embodiment of the present application;
fig. 7 is a schematic diagram of an image mapping relationship update scenario provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a camera calibration device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein fall within the scope of protection of the present application.
First, the execution body of the solution provided by the embodiments of the present application is described.
The execution body of the solution provided by the embodiments of the present application is any electronic device with data processing, communication, storage and other functions.
The following describes in detail the camera calibration scheme provided in the embodiment of the present application.
Referring to fig. 1, a flowchart of a first camera calibration method according to an embodiment of the present application is provided, where the method includes the following steps S101 to S106.
Step S101: obtaining a first image and a second image synchronously acquired by the first monitoring camera and the second monitoring camera.
The embodiment of the application does not limit the types, the models and the like of the first monitoring camera and the second monitoring camera.
In one case, the first monitoring camera may be a close-range camera and the second monitoring camera may be a fisheye camera, as described in detail in the following embodiments.
In this step, the first image and the second image are acquired by the first monitoring camera and the second monitoring camera synchronously, and therefore, it can be understood that the image acquisition time of the first image and the second image is the same.
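In practice, "synchronous" acquisition is usually realized by pairing the frames from the two cameras whose timestamps are closest. The following is a minimal sketch under that assumption; the function name and the 40 ms tolerance are illustrative and not values given by this application.

```python
# Minimal sketch: pair frames from two cameras by nearest timestamp.
# Frame lists are assumed to be sorted by timestamp; the 40 ms tolerance
# (roughly one frame at 25 fps) is an illustrative value.
def pair_synchronous_frames(frames_cam1, frames_cam2, tolerance_s=0.04):
    pairs = []
    j = 0
    for t1, img1 in frames_cam1:
        # advance j to the camera-2 frame closest in time to t1
        while j + 1 < len(frames_cam2) and \
                abs(frames_cam2[j + 1][0] - t1) <= abs(frames_cam2[j][0] - t1):
            j += 1
        t2, img2 = frames_cam2[j]
        if abs(t2 - t1) <= tolerance_s:
            pairs.append((img1, img2))
    return pairs
```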
The first monitoring camera and the second monitoring camera have an overlapping field-of-view range.
It should be noted that, the first monitoring camera and the second monitoring camera may be any two cameras belonging to the same camera sequence, and only the first monitoring camera and the second monitoring camera need to have an overlapping field of view range, which is described in detail in the embodiment shown in fig. 3.
The overlapping field of view range refers to: an overlapping portion of the field of view range of the first monitoring camera and the field of view range of the second monitoring camera.
The manner in which the above-described overlapping region is determined is described below.
In one embodiment, the overlapping area may be determined according to a correspondence between a field of view of the monitoring camera and an image area of an image acquired by the monitoring camera.
For example, when determining the overlapping area from the first image, an image area corresponding to the overlapping field-of-view range in the first image may be determined as the overlapping area in the first image according to the correspondence between the field-of-view range of the first monitoring camera and the image area of the image acquired by the first monitoring camera. The same applies to determining the overlapping area from the second image, and this will not be described in detail here.
In another embodiment, the worker may delineate the overlapping area in the first image and the second image, such that the electronic device may directly determine the overlapping area that the worker previously delineated in the first image and the second image.
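For instance, with an overlap region pre-delineated as a polygon in a camera's image plane, a detection can be tested against it as in the sketch below (OpenCV assumed; the polygon coordinates and helper name are placeholders, not values from this application).

```python
import numpy as np
import cv2

# Overlap region pre-delineated in one camera's image plane (pixel coordinates);
# the polygon vertices below are placeholders.
overlap_polygon = np.array([[400, 0], [1920, 0], [1920, 1080], [700, 1080]],
                           dtype=np.int32).reshape(-1, 1, 2)

def in_overlap_region(box, polygon=overlap_polygon):
    """Return True if the center of a detection box (x1, y1, x2, y2) lies inside the overlap region."""
    x1, y1, x2, y2 = box
    center = (float(x1 + x2) / 2.0, float(y1 + y2) / 2.0)
    # pointPolygonTest: > 0 inside, == 0 on the boundary, < 0 outside
    return cv2.pointPolygonTest(polygon, center, False) >= 0
```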
The embodiment of the application does not limit the erection mode of the first monitoring camera and the second monitoring camera, and can adjust the positions and the visual angles of the first monitoring camera and the second monitoring camera in various modes, so long as the first monitoring camera and the second monitoring camera have an overlapping visual field range.
In an embodiment of the present application, the first monitoring camera and the second monitoring camera may be disposed on the same mounting bar.
For example, the first monitoring camera and the second monitoring camera may be disposed on two sides of the same mounting bar.
Therefore, the monitoring cameras are not required to be arranged on different erection rods, the number of erection rods can be saved, the cost of the vertical rods is reduced, and the implementation cost of the scheme is reduced.
Of course, other monitor camera mounting schemes are also provided in the embodiments of the present application, see later embodiments for details.
Step S102: the first vehicle is identified within overlapping regions of the first image and the second image, respectively.
The overlapping areas are: the image areas, in the images acquired by the first monitoring camera and the second monitoring camera, that correspond to the overlapping field-of-view range.
The overlapping area is described in detail with reference to fig. 2.
Referring to fig. 2, a schematic diagram of an overlapping area is provided in an embodiment of the present application.
It can be seen that the first monitoring camera and the second monitoring camera have overlapping view field ranges, the image areas corresponding to the overlapping view field ranges in the first image acquired by the first monitoring camera are overlapping areas, and the image areas corresponding to the overlapping view field ranges in the second image acquired by the second monitoring camera are also overlapping areas.
Specifically, various object recognition or object segmentation algorithms may be employed to identify vehicles within the overlapping regions of the first image and the second image.
For example, the algorithm may be PolarMask, the Segment Anything Model (SAM), or the like.
The first vehicle may be identified in the following two cases:
in the first case, if only one vehicle is recognized in the overlapping region of the first image and the second image, the recognized vehicle may be directly determined as the first vehicle.
In the second case, if a plurality of vehicles are identified in either of the first image and the second image, a vehicle that appears in both the first image and the second image may be identified from among the plurality of vehicles, and that vehicle may be determined as the first vehicle.
In this case, the first vehicle may be one of the identified vehicles, a plurality of the identified vehicles, or all of the identified vehicles.
The following describes a manner of identifying the same vehicle from among vehicles identified from the first image and the second image.
In one embodiment, features of the vehicle parts identified in the first image and the second image may be extracted, respectively, and then, it is determined whether the vehicles identified in the first image and the second image are the same vehicle according to a similarity between the features of the vehicle parts.
The vehicle parts may be front windows, side windows, rear view mirrors, license plates, car lamps, etc., which are not limited in the embodiment of the present application.
For example, if the similarity between the feature of the vehicle portion of the vehicle a identified in the first image and the feature of the vehicle portion of the vehicle B identified in the second image is greater than a preset similarity threshold, it may be considered that the vehicle a and the vehicle B are the same vehicle.
In another embodiment, the relative positions of the vehicles identified in the first and second images may be determined, respectively, and whether the vehicles identified in the first and second images are the same vehicle may be determined based on the relative positions.
The relative position may be a relative position of the vehicle with respect to a spatial positional relationship or the like in the overlapping region.
In this way, the vehicles identified in the first image and the second image that are close in relative position can be determined as the same vehicle.
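A minimal sketch combining the two ideas above, i.e. appearance similarity of a vehicle part plus a relative-position check; the feature source, thresholds and function names are illustrative assumptions rather than values specified by this application.

```python
import numpy as np

def cosine_similarity(f1, f2):
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

def is_same_vehicle(feat_a, feat_b, pos_a, pos_b, sim_thresh=0.8, pos_thresh=0.2):
    """feat_*: appearance feature vectors of a vehicle part (e.g. a license-plate crop embedding);
    pos_*: vehicle position normalized to the overlap region, e.g. (u, v) in [0, 1].
    Both thresholds are illustrative, not values from the application."""
    appearance_ok = cosine_similarity(feat_a, feat_b) > sim_thresh
    position_ok = float(np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b))) < pos_thresh
    return appearance_ok and position_ok
```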
Step S103: a first set of vehicle features of a preset part of the first vehicle is extracted from the first image, and a second set of vehicle features of the preset part of the first vehicle is extracted from the second image.
Wherein the first image is an image acquired by the first monitoring camera, the second image is an image acquired by the second monitoring camera, and since the field of view ranges of the first monitoring camera and the second monitoring camera are different, the first set of vehicle features and the second set of vehicle features extracted from the first image and the second image are vehicle features corresponding to the respective field of view ranges of the first monitoring camera and the second monitoring camera, respectively.
In addition, it should be noted that the first set of vehicle features may include one feature or multiple features; similarly, the second set of vehicle features may include one feature or multiple features.
In one case, the preset portion may include at least one of the following portions:
front window, side window, rear-view mirror, license plate, car light.
Therefore, the characteristics of different types of vehicle parts can be extracted, so that the extracted characteristics are richer and more comprehensive, and the accuracy of the follow-up camera calibration based on the extracted characteristics is improved.
Specifically, each preset part of the first vehicle may first be identified, and then the first set of vehicle features and the second set of vehicle features may be extracted by using various feature extraction algorithms.
For example, the features may be output together with the segmentation result by the image segmentation algorithm mentioned above, or extracted by a separate feature extraction algorithm.
In one case, after each preset part of the first vehicle is identified, the positions of the key points of each preset part in the first image and the second image may be determined, and the determined key point positions are used as the first set of vehicle features and the second set of vehicle features.
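One possible way to organize the extracted features is a mapping from each preset part to its key point positions, as in the sketch below; detect_part_keypoints is a hypothetical helper standing in for whatever keypoint or segmentation model is actually used.

```python
# Illustrative structure for the extracted features: key point positions (pixel
# coordinates) of each preset part, keyed by part name.
PRESET_PARTS = ("front_window", "side_window", "rearview_mirror", "license_plate", "lamp")

def extract_vehicle_features(image, vehicle_box, detect_part_keypoints):
    features = {}
    for part in PRESET_PARTS:
        pts = detect_part_keypoints(image, vehicle_box, part)  # list of (x, y) or []
        if pts:
            features[part] = pts
    return features  # e.g. {"rearview_mirror": [(812.0, 344.5), ...], ...}
```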
Step S104: determining an image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera based on the first set of vehicle features and the second set of vehicle features, as the result of camera calibration on the first monitoring camera and the second monitoring camera.
The above image mapping relationship can be understood as: perspective transformation relationship between an image plane of an image captured by the first monitoring camera and an image plane of an image captured by the second monitoring camera.
The first image is an image collected by the first monitoring camera and the second image is an image collected by the second monitoring camera. Since the vehicle features of the same preset part of the first vehicle are extracted from the first image and the second image respectively, the image mapping relationship between the image collected by the first monitoring camera and the image collected by the second monitoring camera can be determined according to the difference between the extracted first set of vehicle features and second set of vehicle features.
If the number of the first vehicles is plural, the image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera may be determined based on the first set of vehicle features and the second set of vehicle features corresponding to each of the first vehicles.
In this case, if the first set of vehicle features and the second set of vehicle features include the key point positions of the preset portion of the first vehicle, the image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera may be determined according to the key point positions. The detailed description will be given in the following examples, which will not be described in detail here.
Step S105: checking, based on the feature mapping difference of vehicles in the images synchronously acquired by the first monitoring camera and the second monitoring camera, whether the image mapping relationship needs to be updated, and if so, executing step S106.
The feature mapping difference is the difference between an expected feature and a reference feature, where the expected feature is obtained by mapping the feature of the vehicle in one synchronously acquired image into the other synchronously acquired image, and the reference feature is the actual feature of the vehicle in the other synchronously acquired image.
As can be seen from the foregoing step S104, the image mapping relationship is obtained from the vehicle features of the vehicle in the overlapping regions of the images synchronously acquired by the first monitoring camera and the second monitoring camera. When the two cameras collect images, the images may be blurred or distorted due to weather conditions, vehicle speed, sporadic events and the like; in that case, the vehicle features extracted from the images have low accuracy, and the image mapping relationship obtained based on those vehicle features is in turn inaccurate.
For example, a vehicle on the road may be parked, travelling at a normal speed, or travelling at a high speed. When a vehicle travels fast, smear, blurring and the like may occur to the vehicle in the images synchronously acquired by the first monitoring camera and the second monitoring camera, so the accuracy of the extracted vehicle features may be low.
For another example, on overcast or rainy days, because light is blocked, the images synchronously collected by the first monitoring camera and the second monitoring camera may be blurred, so the vehicles identified in those images are also blurred, and the accuracy of the extracted vehicle features is lower.
Then, after the first monitoring camera and the second monitoring camera acquire images synchronously, the expected feature that the feature of the vehicle in one image acquired synchronously is mapped to the feature of the other image acquired synchronously can be acquired based on the image mapping relation. Further, based on the expected feature and the reference feature of the vehicle in the other image acquired synchronously, it can be checked in reverse whether the image mapping relation needs to be updated.
Furthermore, the monitoring camera may shake under some circumstances, which causes a change in the field of view range of the monitoring camera and a change in the relative position between the monitoring cameras, so that the image mapping relationship obtained based on the original field of view range and the relative position may also fail, i.e. not be accurate.
From the above, it can be seen that inaccuracy may occur in the obtained image mapping relationship.
In the solution provided by the embodiments of the present application, after the image mapping relationship is obtained, the two travel trajectories of the same vehicle within the fields of view of the first monitoring camera and the second monitoring camera are fused based on the image mapping relationship to obtain a fused trajectory. If the obtained image mapping relationship is inaccurate, the trajectory fusion performed according to it is poor, and the fused trajectory is inaccurate.
In view of this, whether the image mapping relationship is accurate can be checked based on the feature mapping difference of the vehicle in the images synchronously acquired by the first monitoring camera and the second monitoring camera, and the image mapping relationship can be updated when it is determined to be inaccurate, thereby ensuring that the image mapping relationship is continuously kept up to date and improving the accuracy of subsequent trajectory fusion.
The specific update judgment condition may be that the feature mapping difference is greater than a preset threshold value, etc.
Taking the example that the characteristics of the vehicle include the position of the vehicle rearview mirror, assuming that 2 images acquired synchronously are an image p1 and an image p2, and the positions of the vehicle rearview mirror in the images p1 and p2 are s1 and s2 respectively, based on the image mapping relationship, the position s3 after mapping s1 to p2 can be determined, and further based on the difference between s3 and s2, whether the image mapping relationship needs to be updated can be checked.
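Continuing the p1/p2 example, the check could look like the following sketch, where the expected position s3 is obtained by projecting s1 with the existing mapping and compared against the reference position s2; OpenCV is assumed, and the 10-pixel threshold is an illustrative value, not one given by this application.

```python
import numpy as np
import cv2

def mapping_error(H, points_p1, points_p2):
    """Feature mapping difference: project key points of the vehicle in p1 (e.g. the
    rearview-mirror position s1) into p2 with the existing mapping H to obtain the
    expected positions s3, and compare them with the reference positions s2 in p2.
    Returns the mean pixel distance."""
    pts = np.asarray(points_p1, dtype=np.float32).reshape(-1, 1, 2)
    expected = cv2.perspectiveTransform(pts, H).reshape(-1, 2)   # s3
    reference = np.asarray(points_p2, dtype=np.float32)          # s2
    return float(np.mean(np.linalg.norm(expected - reference, axis=1)))

# Illustrative update check; the 10-pixel threshold is an assumed value.
# needs_update = mapping_error(H, mirror_pts_p1, mirror_pts_p2) > 10.0
```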
In this case, the same vehicle may be identified from the overlapping area of the images synchronously acquired by the first monitoring camera and the second monitoring camera, and based on the vehicle characteristics of the same vehicle, whether the image mapping relationship needs to be updated is checked, which is detailed in the embodiment shown in fig. 6 and will not be described herein.
Step S106: updating the image mapping relation.
Specifically, if it is determined that the image mapping relationship needs to be updated, a new image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera may be determined according to the difference between the expected feature and the reference feature, and the existing image mapping relationship is updated to be the new image mapping relationship.
From the above, when the scheme provided by the embodiment of the application is applied to camera calibration, first, the first image and the second image which have the overlapping field of view and are synchronously acquired by the first monitoring camera and the second monitoring camera are obtained, the first vehicle is identified in the overlapping area of the first image and the second image, and then the first group of vehicle features and the second group of vehicle features of the preset part of the first vehicle are extracted from the first image and the second image respectively, so that the image mapping relationship between the images acquired by the first monitoring camera and the images acquired by the second monitoring camera can be determined based on the first group of vehicle features and the second group of vehicle features, and the result of camera calibration on the first monitoring camera and the second monitoring camera is obtained.
Therefore, after the same vehicle is identified from the first image acquired by the first monitoring camera and the second image acquired by the second monitoring camera, different vehicle features of the same vehicle can be extracted from the first image and the second image, and camera calibration is achieved according to the different vehicle features extracted from the first image and the second image. Compared with manual calibration, no calibration object needs to be arranged on the road surface in advance by a worker, which reduces the workload and deployment cost of the calibration scheme, allows camera calibration to be realized more conveniently and efficiently, and improves the calibration efficiency of the cameras.
And after the image mapping relation is obtained according to the images acquired by the first monitoring camera and the second monitoring camera and the camera calibration is realized, the obtained image mapping relation can be updated according to the characteristic mapping difference of the vehicles in the images synchronously acquired by the first monitoring camera and the second monitoring camera. That is, the existing image mapping relationship can be verified and optimized at any time according to the later acquired image, which is beneficial to eliminating the contingency when determining the image mapping relationship, and improving the accuracy of the obtained image mapping relationship, namely, the accuracy when calibrating the camera.
In some cases, the monitoring camera may shake, which causes a change in the field of view of the monitoring camera and a change in the relative position between the monitoring cameras, so that the image mapping relationship may fail. In this case, in the prior art, a worker needs to set up the calibration object again in the field of view of the monitoring camera and perform recalibration. In this embodiment, the obtained image mapping relationship can be updated directly based on the feature mapping difference of the vehicle in the images synchronously collected by the first monitoring camera and the second monitoring camera, without arranging a calibration object again, which improves adaptability to scene and environment changes and greatly improves the calibration efficiency of the cameras.
Therefore, the solution provided by this embodiment realizes self-learning and self-optimization of the image mapping relationship, and compared with the prior art, improves calibration efficiency, calibration accuracy and calibration stability. Furthermore, when trajectory fusion is performed on the basis of the calibrated cameras, the image mapping parameters can be self-learned and self-optimized, so that the trajectory fusion effect can be continuously optimized and the accuracy of the finally obtained global travel trajectory can be improved.
In one embodiment of the application, the first monitoring camera and the second monitoring camera are arranged on the same erection rod. A plurality of close-range cameras whose central axes obliquely intersect the horizontal plane may be arranged on the erection rod, and the central axis of the second monitoring camera vertically intersects the horizontal plane.
In this case, the first monitoring camera may be a close-range camera arranged adjacent to the second monitoring camera.
The number of close-range cameras arranged adjacent to the second monitoring camera may be multiple, and at this time, the first monitoring camera may be any one of the cameras.
The mounting manner of the monitoring camera will be described in detail with reference to fig. 3.
Referring to fig. 3, a schematic diagram of a first camera mounting manner according to an embodiment of the present application is provided.
It can be seen that a plurality of close-range cameras whose central axes obliquely intersect the horizontal plane are arranged on the two sides of the erection rod (only close-range cameras 1-3 arranged on the left side are shown in fig. 3), and the fields of view of close-range cameras 1-3 are different; the second monitoring camera, whose central axis vertically intersects the horizontal plane, is also arranged on the erection rod, and the field-of-view range of close-range camera 3 overlaps with the field-of-view range of the second monitoring camera; the plurality of close-range cameras and the second monitoring camera may be referred to as a camera sequence.
In this mounting manner, a plurality of close-range cameras whose central axes obliquely intersect the horizontal plane are arranged on the two sides of the erection rod, which ensures that the combined field-of-view range of the close-range cameras is wide; meanwhile, a blind area still exists in the combined field-of-view range of the close-range cameras, and the second monitoring camera arranged on the erection rod with its central axis perpendicular to the horizontal plane is used to cover this blind area (the second monitoring camera may also be called a "blind-spot supplement" camera), so that full coverage of the field-of-view range is achieved.
In this case, in order to smoothly connect the vehicle travel trajectories captured by the close-range camera and the second monitoring camera, the close-range camera arranged adjacent to the second monitoring camera needs to be calibrated. As shown in fig. 3, close-range camera 3 is the close-range camera arranged adjacent to the second monitoring camera, that is, the first monitoring camera to be calibrated.
In an embodiment of the present application, the second monitoring camera may be a fisheye camera. A fisheye camera has a wider field-of-view range and a better blind-spot supplementing effect.
The field-of-view ranges of the close-range cameras may overlap one another; when unique features such as a standard license plate number are clearly visible, the overlap may also be absent, which is not limited in the embodiments of the present application.
In one case, the close-range cameras include cameras whose viewing angles face different driving directions of the road.
Referring to fig. 4, a schematic diagram of a second camera mounting manner according to an embodiment of the present application is provided.
The thick arrow in the figure indicates the road direction, and for simplicity of the drawing, only one near-view camera 1 having a view angle directed toward the east-side driving direction of the road and one near-view camera 2 having a view angle directed toward the south-side driving direction of the road are shown.
According to the camera mounting mode, the field of view ranges of the multiple close-range cameras can cover different driving directions of a road, so that the field of view ranges of the multiple close-range cameras can jointly cover the whole intersection area.
In this case, the combined field-of-view range of the close-range cameras may still have a blind area, so a second monitoring camera still needs to be arranged, and the blind area is covered by the field-of-view range of the second monitoring camera. It can be seen that the field-of-view range of close-range camera 1 overlaps with that of the second monitoring camera, and the field-of-view range of close-range camera 2 overlaps with that of the second monitoring camera.
Similarly, each of close-range camera 1 and close-range camera 2, which are arranged adjacent to the second monitoring camera, can serve as a first monitoring camera for calibration with the second monitoring camera.
In this way, with the above monitoring camera arrangement, a single erection rod can achieve full field-of-view coverage of the different driving directions of the road and of the blind area below the rod, without arranging monitoring cameras on different erection rods, which greatly reduces the cost of erecting rods.
An embodiment of determining the image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera in the step S104 is described below.
In one embodiment of the present application, the above-described image mapping relationship may be determined based on perspective transformation between images.
Specifically, take any one of the first image and the second image as a reference image and the other as a non-reference image. Denote the position of any planar point in the reference image as (u, v), and denote the position of this planar point after projection into space as (x', y', w'). Based on the principle of perspective transformation, (u, v) and (x', y', w') satisfy the following relationship:

    [x']   [a11 a12 a13] [u]
    [y'] = [a21 a22 a23] [v]
    [w']   [a31 a32 a33] [1]

where a11, a12, a13, a21, a22, a23, a31, a32 and a33 are unknown parameters to be determined.
If the planar point is projected into the non-reference image, since the plane height directions of the non-reference image and the reference image are normalized, a33 may be taken as 1. Denoting the position of the planar point in the non-reference image as (x, y), (x, y) satisfies x = x'/w' and y = y'/w'.
Substituting w' into the above relationship, the following equations can be derived:

    x = (a11·u + a12·v + a13) / (a31·u + a32·v + 1)
    y = (a21·u + a22·v + a23) / (a31·u + a32·v + 1)

There are a total of 8 unknown parameters in the above equations: a11, a12, a13, a21, a22, a23, a31 and a32. In this way, the key point positions of the preset part of the first vehicle identified in the reference image and the non-reference image can be obtained respectively; at least 4 key point positions in the reference image are substituted for (u, v) in the equations, the corresponding at least 4 key point positions in the non-reference image are substituted for (x, y), and the 8 unknown parameters can be solved by the least square method.
The matrix formed by the solved parameters is the perspective matrix, and any point in the reference image can be projected into the other image based on this perspective matrix, for example:

    [x]   [a11 a12 a13] [u]
    [y] ∝ [a21 a22 a23] [v]
    [1]   [a31 a32  1 ] [1]
the above perspective matrix may be referred to as an image mapping relationship between the reference image and the non-reference image, that is, an image mapping relationship between the image collected by the first monitoring camera and the image collected by the second monitoring camera.
The corresponding keypoints in the first image and the second image are described below with reference to fig. 5.
Referring to fig. 5, a schematic diagram of a key point correspondence relationship is provided in an embodiment of the present application.
In fig. 5, the left image is a first image, the right image is a second image, and the start point and the end point of the arrow from the first image to the second image represent the corresponding key points in the first image and the second image.
It can be seen that the key points of the rearview mirror, the front window and the license plate of the vehicle in the first image all have corresponding key points in the second image.
Specifically, the relative position of the key point with respect to the first vehicle, the relative position of the key point with respect to the fixed identifier in the road, and the like may be used to determine the corresponding key point from the first image and the second image, which will not be described herein.
On the basis of the embodiment shown in fig. 1, after the image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera is determined and the calibration result is obtained, whether the image mapping relationship needs to be updated may be further determined according to images subsequently acquired by the first monitoring camera and the second monitoring camera, and if so, the image mapping relationship is updated. In view of this, the embodiment of the application provides a second camera calibration method.
Referring to fig. 6, a flowchart of a second camera calibration method according to an embodiment of the present application is provided, where the method includes the following steps S601-S611.
Step S601: obtaining a first image and a second image synchronously acquired by the first monitoring camera and the second monitoring camera.
Step S602: the first vehicle is identified within overlapping regions of the first image and the second image, respectively.
Step S603: a first set of vehicle features of a preset part of the first vehicle is extracted from the first image, and a second set of vehicle features of the preset part of the first vehicle is extracted from the second image.
Step S604: an image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera is determined based on the first set of vehicle features and the second set of vehicle features, as the result of camera calibration on the first monitoring camera and the second monitoring camera.
The steps S601 to S604 are the same as the steps S101 to S104 in the embodiment shown in fig. 1, and are not repeated here.
Step S605: obtaining a third image and a fourth image synchronously acquired by the first monitoring camera and the second monitoring camera.
Wherein the third image is acquired after the first image and the fourth image is acquired after the second image.
Step S606: the same vehicle is identified in the overlapping areas of the third image and the fourth image, respectively.
In this step, the manner of identifying the same vehicle in the overlapping area of the third image and the fourth image is similar to the manner of identifying the first vehicle in the first image and the second image described above, and will not be described again.
Wherein, the same vehicle identified in the overlapping area of the third image and the fourth image in this step has the following two cases:
In the first case, the same vehicle identified from the third image and the fourth image is the aforementioned first vehicle.
That is, the same vehicle identified from the third image and the fourth image is the same as the vehicle for camera calibration described above.
At this time, although the same vehicle identified from the third image and the fourth image is the aforementioned first vehicle for camera calibration, since the third image is acquired after the first image and the fourth image is acquired after the second image, the first vehicle tends to move with time, and thus the position of the first vehicle identified from the third image and the fourth image may be different from the position of the first vehicle identified from the first image and the second image, and the features of the first vehicle identified from the third image and the fourth image may be different from the features of the first vehicle identified from the first image and the second image.
Thus, the calibration result of the camera can be verified on the basis of the first vehicle identified in the overlapping area of the third image and the fourth image.
In the second case, the same vehicle identified from the third image and the fourth image is a second vehicle different from the aforementioned first vehicle.
That is, the same vehicle identified from the third image and the fourth image is different from the vehicle used for camera calibration described above.
In this way, the calibration result of the camera can be verified on the basis of the second vehicle different from the first vehicle, which is identified in the overlapping area of the third image and the fourth image.
Therefore, regardless of whether the same vehicle identified from the third image and the fourth image is the first vehicle used for camera calibration, whether the mapping relationship is accurate and needs to be updated can be verified based on the identified vehicle, so that the mapping relationship can be updated conveniently and quickly.
Step S607: extracting a third set of vehicle features of a preset part of the identified vehicle from the third image, and extracting a fourth set of vehicle features of the preset part of the identified vehicle from the fourth image.
The manner of extracting the third set and the fourth set of vehicle features of the preset part of the identified vehicle from the third image and the fourth image is similar to the manner of extracting the first set and the second set of vehicle features of the preset part of the first vehicle from the first image and the second image, and is not repeated here.
Step S608: based on the image mapping relationship, the expected features of the non-reference features are obtained.
The non-reference features are one of the third set of vehicle features and the fourth set of vehicle features.
The image mapping relationship is determined based on the image features of the vehicle in the image acquired by the first monitoring camera and the image features of the vehicle in the image acquired by the second monitoring camera, so that on the basis of the known image mapping relationship, the expected features of the image features in the image acquired by one of the monitoring cameras can be obtained according to the image features of the vehicle in the image acquired by the other monitoring camera.
For example, the positions of the keypoints included in the non-reference feature may be obtained, and the expected positions of the keypoints included in the non-reference feature may be determined based on the image mapping relationship, with the expected positions being taken as expected features.
Step S609: determining the difference between the expected feature and the reference feature as the feature mapping difference of the vehicle in the images synchronously acquired by the first monitoring camera and the second monitoring camera.
The reference features are the other one of the third set of vehicle features and the fourth set of vehicle features, i.e. the set other than the non-reference features.
Step S610: whether the feature mapping difference is greater than a preset difference threshold is determined, if so, step S611 is performed.
Since the expected feature is a feature expected in the image acquired by the other monitoring camera, and the reference feature is a real feature in the image acquired by the other monitoring camera, the feature mapping difference between the expected feature and the reference feature reflects whether the existing image mapping relationship is applicable to the third image and the fourth image acquired at the time.
If the feature mapping difference is greater than the preset difference threshold, it indicates that the existing image mapping relationship is not suitable for the third image and the fourth image acquired this time, so it may be determined that the mapping relationship needs to be updated.
The difference threshold may be set by a worker according to experience and/or actual requirements, or the judgment may be made according to whether the difference has increased compared with the original difference value.
Therefore, the feature mapping difference between the expected feature and the reference feature can accurately reflect whether the existing image mapping relation is suitable for the third image and the fourth image acquired at this time, namely whether the existing image mapping relation is accurate, so that the image mapping relation can be rapidly and accurately verified based on the difference between the expected feature and the reference feature.
Step S611: the image mapping relationship is updated based on the third set of vehicle features and the fourth set of vehicle features.
Specifically, the image mapping relationship may be updated in the following manner.
In one embodiment, the new image mapping relationship may be determined based on the third set of vehicle features and the fourth set of vehicle features in a manner similar to the manner in which the image mapping relationship is determined based on the first set of vehicle features and the second set of vehicle features described in step S104, and the existing image mapping relationship may be updated to the new image mapping relationship.
In another embodiment, after obtaining the new image mapping relationship, the adjustment parameter for the existing image mapping relationship may be determined based on the difference between the new image mapping relationship and the existing image mapping relationship, and the existing image mapping relationship may be adjusted based on the determined adjustment parameter, so as to obtain the updated image mapping relationship.
The third group of vehicle features and the fourth group of vehicle features are vehicle features of the same vehicle in an overlapping area of images synchronously acquired by the first monitoring camera and the second monitoring camera, so that the image mapping relation is updated based on the third group of vehicle features and the fourth group of vehicle features, the image mapping relation can be updated according to the vehicle feature difference of the same vehicle in the overlapping area, and the accuracy of updating the image mapping relation is improved.
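A minimal sketch of the two update options above, assuming the image mapping relationship is represented as a 3x3 perspective matrix and OpenCV is available; the RANSAC threshold and the blending weight are illustrative assumptions, not values given by this application.

```python
import numpy as np
import cv2

def update_image_mapping(H_old, kpts_third, kpts_fourth, blend=0.5):
    """Option (1): re-estimate a new perspective matrix from the third/fourth sets of
    vehicle features. Option (2): adjust the existing matrix toward the new one with
    a blending weight (an assumed, illustrative adjustment rule)."""
    src = np.asarray(kpts_third, dtype=np.float32)
    dst = np.asarray(kpts_fourth, dtype=np.float32)
    H_new, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    if H_old is None:
        return H_new                                    # option (1): replace outright
    H_adj = (1.0 - blend) * np.asarray(H_old) + blend * H_new
    return H_adj / H_adj[2, 2]                          # keep the normalization a33 = 1
```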
From the above, it can be seen that the non-reference feature is a real feature of the vehicle in the image collected by one monitoring camera, the expected feature obtained based on the non-reference feature and the image mapping relationship is a feature expected by the vehicle in the image collected by the other monitoring camera, and the reference feature is a real feature of the vehicle in the image collected by the other monitoring camera, so that the feature mapping difference between the expected feature and the reference feature reflects the difference between the expected feature and the real feature in the existing image mapping relationship, that is, reflects whether the existing image mapping relationship is accurate. If the feature mapping difference is greater than the preset difference threshold, it indicates that the existing image mapping relationship has deviation, and the accuracy cannot meet the requirement, so that it can be determined that the mapping relationship needs to be updated. Therefore, the image mapping relation can be verified rapidly and accurately based on the difference between the expected characteristic and the reference characteristic.
A more visual description of the scene of updating the image mapping relationship is provided below in conjunction with fig. 7.
Referring to fig. 7, a schematic diagram of an image mapping relationship update scenario is provided in an embodiment of the present application.
The upper vehicle is the first vehicle identified from the third image; its overall position and the position of its rearview mirror are shown by white solid-line marking boxes. The lower vehicle is the first vehicle identified from the fourth image; its overall position and the position of its rearview mirror are shown by black solid-line marking boxes. In addition, the positions in the image plane of the second monitoring camera to which the overall position of the first vehicle and the position of the rearview mirror identified from the third image are mapped are shown by black dotted-line marking boxes.
Based on the image mapping relationship, the mapped positions, in the image plane of the second monitoring camera, of the overall vehicle position and the rearview-mirror key point positions identified from the third image can be determined; if the differences between these mapped positions and the actual positions of the whole vehicle and the key points in the fourth image are greater than a preset difference, it is determined that the image mapping relationship needs to be updated.
In one embodiment of the present application, after camera calibration is completed, a first driving position information gathering point and a second driving position information gathering point of the same vehicle, in the fields of view of the first monitoring camera and the second monitoring camera respectively, may be obtained, and the two gathering points may be fused based on the image mapping relationship.
The first driving position information gathering point and the second driving position information gathering point are determined based on images synchronously acquired by the first monitoring camera and the second monitoring camera.
Specifically, the position information gathering point fusion may be performed in the following manner.
In one embodiment, the first driving position information gathering point and the second driving position information gathering point may be projected to the same map coordinate system based on the image mapping relationship, so as to obtain the fused position information gathering point.
In another embodiment, the first driving position information gathering point may be mapped, based on the image mapping relationship, to a position information gathering point under the camera coordinate system corresponding to the second monitoring camera, and the mapped first driving position information gathering point and the second driving position information gathering point are then fused to obtain the fused position information gathering point.
Of course, the second driving position information gathering point may instead be mapped to a position information gathering point under the camera coordinate system corresponding to the first monitoring camera, and the mapped second driving position information gathering point and the first driving position information gathering point are then fused to obtain the fused position information gathering point.
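As a minimal sketch of the second manner of fusion described above (assuming the image mapping relationship is a homography, simplifying the camera coordinate system to image pixel coordinates, and representing each gathering point as a timestamped position; all names below are hypothetical), the gathering points observed by the first camera can be mapped into the coordinates of the second camera and merged by timestamp.

```python
import numpy as np
import cv2

def fuse_gathering_points(points_first, points_second, h_first_to_second):
    # points_first / points_second: lists of (timestamp, x, y) observed by the
    # first and the second monitoring camera for the same vehicle.
    xy = np.array([[x, y] for _, x, y in points_first], dtype=np.float32).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(xy, h_first_to_second).reshape(-1, 2)
    mapped_first = [(t, float(px), float(py))
                    for (t, _, _), (px, py) in zip(points_first, mapped)]
    # Simplest possible fusion: pool all points and sort them by timestamp;
    # coincident timestamps could instead be averaged.
    return sorted(mapped_first + list(points_second), key=lambda p: p[0])
```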
Therefore, after the monitoring cameras are calibrated with the scheme provided by the embodiment of the present application, the first driving position information gathering point and the second driving position information gathering point of the same vehicle, in the fields of view of the first monitoring camera and the second monitoring camera respectively, can be fused according to the image mapping relationship obtained by calibration. Since the scheme improves the calibration efficiency of the cameras, the efficiency of the subsequent position information gathering point fusion is improved as well.
In one embodiment of the present application, after the first driving position information gathering point and the second driving position information gathering point are fused, a license plate number of the same vehicle, identified from the image acquired by the first monitoring camera or the image acquired by the second monitoring camera, may be obtained and associated with the fused position information gathering point.
In this way, the license plate number of the vehicle corresponding to the fused position information gathering point can be determined, so that the fused gathering point carries more information and facilitates subsequent vehicle management measures.
Corresponding to the camera calibration method, the embodiment of the application also provides a camera calibration device.
Referring to fig. 8, a schematic structural diagram of a camera calibration device according to an embodiment of the present application is provided, where the device includes the following modules:
a first image obtaining module 801, configured to obtain a first image and a second image that are synchronously collected by a first monitoring camera and a second monitoring camera, where the first monitoring camera and the second monitoring camera have an overlapping field of view range;
a first vehicle identification module 802, configured to identify a first vehicle in an overlapping region of the first image and the second image, where the overlapping region is: the image region, in the images acquired by the first monitoring camera and the second monitoring camera, that corresponds to the overlapping field-of-view range;
a first set of vehicle feature extraction module 803, configured to extract a first set of vehicle features of a preset part of the first vehicle from the first image, and extract a second set of vehicle features of the preset part of the first vehicle from the second image;
the mapping relation determining module 804 is configured to determine, based on the first set of vehicle features and the second set of vehicle features, an image mapping relation between an image acquired by the first monitoring camera and an image acquired by the second monitoring camera, as a result of performing camera calibration on the first monitoring camera and the second monitoring camera;
the mapping relation verification module 805 is configured to verify whether the image mapping relationship needs to be updated based on the feature mapping difference of the vehicle in the synchronously acquired images of the first monitoring camera and the second monitoring camera, and if so, to trigger the mapping relation updating module, where the feature mapping difference is the difference between an expected feature and a reference feature, the expected feature is the feature obtained after the feature of the vehicle in one synchronously acquired image is mapped into the other synchronously acquired image, and the reference feature is the real feature of the vehicle in that other synchronously acquired image;
The mapping relation updating module 806 is configured to update the image mapping relation.
From the above, when the scheme provided by the embodiment of the present application is applied to camera calibration, the first image and the second image synchronously acquired by the first monitoring camera and the second monitoring camera, whose fields of view overlap, are first obtained, and the first vehicle is identified in the overlapping region of the two images. The first set and the second set of vehicle features of the preset part of the first vehicle are then extracted from the first image and the second image respectively, so that the image mapping relationship between images acquired by the first monitoring camera and images acquired by the second monitoring camera can be determined based on these two feature sets, as the result of camera calibration for the first monitoring camera and the second monitoring camera.
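The application leaves the concrete form of the image mapping relationship open. Purely as an illustrative sketch, it could be estimated as a planar homography from the matched vehicle-part keypoints using OpenCV; the function name and parameter choices below are assumptions, not taken from the application.

```python
import numpy as np
import cv2

def estimate_image_mapping(first_set_pts, second_set_pts):
    # first_set_pts / second_set_pts: Nx2 pixel coordinates (N >= 4) of the
    # same preset vehicle parts (e.g. mirror, license-plate corners, lights)
    # extracted from the first image and the second image, in matching order.
    src = np.asarray(first_set_pts, dtype=np.float32)
    dst = np.asarray(second_set_pts, dtype=np.float32)
    # RANSAC rejects mismatched feature pairs; 3.0 px reprojection tolerance.
    h_mapping, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return h_mapping, inlier_mask
```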
Therefore, after the same vehicle is identified from the first image acquired by the first monitoring camera and the second image acquired by the second monitoring camera, the vehicle features of that vehicle can be extracted from the two images, and camera calibration is achieved from the features extracted from the first image and the second image. Compared with manual calibration, no calibration object needs to be arranged on the road surface in advance by a worker, which reduces the workload and the deployment cost of the calibration scheme, allows camera calibration to be carried out more conveniently and efficiently, and improves the calibration efficiency of the cameras.
After the image mapping relationship is obtained from the images acquired by the first monitoring camera and the second monitoring camera and camera calibration is thus achieved, the obtained image mapping relationship can further be updated according to the feature mapping difference of vehicles in images synchronously acquired by the two cameras. That is, the existing image mapping relationship can be verified and optimized at any time against later acquired images, which helps eliminate the randomness involved in determining the image mapping relationship and improves the accuracy of the resulting image mapping relationship, i.e., the accuracy of the camera calibration.
In some cases, a monitoring camera may shake, which changes its field of view and the relative position between the monitoring cameras, so that the image mapping relationship may become invalid. In the prior art, a worker would then have to place a calibration object in the field of view of the monitoring cameras again and recalibrate. In this embodiment, the image mapping relationship can instead be updated directly according to the feature mapping difference of vehicles in images synchronously acquired by the first monitoring camera and the second monitoring camera, without re-arranging any calibration object, which improves adaptability to scene and environment changes and greatly improves the calibration efficiency of the cameras.
Therefore, the scheme provided by this embodiment realizes self-learning and self-optimization of the image mapping relationship, and improves calibration efficiency, accuracy and stability compared with the prior art. Furthermore, when position information gathering points are fused on the basis of the calibrated cameras, the image mapping parameters can be self-learned and self-optimized, so that the fusion effect is continuously optimized and the accuracy of the finally obtained global position information gathering points is improved.
In one embodiment of the present application, the mapping relationship verification module 805 is specifically configured to: obtain a third image and a fourth image synchronously acquired by the first monitoring camera and the second monitoring camera, where the third image is acquired after the first image and the fourth image is acquired after the second image; identify the same vehicle in the overlapping regions of the third image and the fourth image respectively; extract a third set of vehicle features of the preset part of the identified vehicle from the third image, and extract a fourth set of vehicle features of the preset part of the identified vehicle from the fourth image; obtain, based on the image mapping relationship, an expected feature of a non-reference feature, where the non-reference feature is one of the third set and the fourth set of vehicle features; determine the difference between the expected feature and a reference feature as the feature mapping difference of the vehicle in the synchronously acquired images of the first monitoring camera and the second monitoring camera, where the reference feature is the set, of the third set and the fourth set of vehicle features, other than the non-reference feature; and if the feature mapping difference is greater than the preset difference threshold, determine that the image mapping relationship needs to be updated and trigger the mapping relation updating module.
As can be seen from the above, the non-reference feature is the real feature of the vehicle in the image acquired by one monitoring camera, the expected feature obtained from the non-reference feature and the image mapping relationship is where that feature is expected to appear in the image acquired by the other monitoring camera, and the reference feature is the real feature of the vehicle in that other image. The feature mapping difference between the expected feature and the reference feature therefore reflects the deviation between expectation and reality under the existing image mapping relationship, that is, whether the existing image mapping relationship is still accurate. If the feature mapping difference is greater than the preset difference threshold, the existing image mapping relationship has drifted and its accuracy no longer meets the requirement, so it can be determined that the mapping relationship needs to be updated. In this way, the image mapping relationship can be verified quickly and accurately based on the difference between the expected feature and the reference feature.
In one embodiment of the present application, the mapping relation updating module 806 is specifically configured to update the image mapping relation based on the third set of vehicle features and the fourth set of vehicle features.
Since the third set of vehicle features and the fourth set of vehicle features are features of the same vehicle in the overlapping region of images synchronously acquired by the first monitoring camera and the second monitoring camera, updating the image mapping relationship based on these two feature sets updates it according to the feature difference of the same vehicle within the overlapping region, which improves the accuracy of the update.
In one embodiment of the present application, the same vehicle identified from the third image and the fourth image is the first vehicle; or the same vehicle identified from the third image and the fourth image is a second vehicle different from the first vehicle.
In this way, regardless of whether the same vehicle identified from the third image and the fourth image is the first vehicle that was used for camera calibration, whether the mapping relationship is still accurate can be verified, and whether it needs to be updated can be determined, according to the identified vehicle, so that the mapping relationship can be updated conveniently and quickly.
In one embodiment of the present application, the preset part includes at least one of the following parts:
front window, side window, rear-view mirror, license plate, car light.
In this way, features of different types of vehicle parts can be extracted, so that the extracted features are richer and more comprehensive, which improves the accuracy of the subsequent camera calibration based on these features.
In one embodiment of the present application, the first monitoring camera and the second monitoring camera are disposed on the same erection rod.
Therefore, the monitoring cameras do not need to be arranged on different erection rods, which reduces the number of erection rods, lowers the pole cost, and reduces the implementation cost of the scheme.
In one embodiment of the application, a plurality of close-range cameras are arranged on the erection rod, where a close-range camera is a camera whose central axis obliquely intersects the horizontal plane; the central axis of the second monitoring camera perpendicularly intersects the horizontal plane; and the first monitoring camera is a close-range camera arranged adjacent to the second monitoring camera.
In this erection mode, a plurality of close-range cameras whose central axes obliquely intersect the horizontal plane are arranged on both sides of the erection rod, so that the combined field of view of the close-range cameras is wide; meanwhile, the combined field of view of the close-range cameras still contains a blind area, which is covered by the second monitoring camera mounted on the erection rod with its central axis perpendicular to the horizontal plane. The second monitoring camera may therefore also be called a blind-area-filling camera, and full coverage of the field of view is achieved.
In one embodiment of the present application, the close-range camera includes: cameras with view angles towards different driving directions of the road.
In this way, with the above monitoring camera arrangement, a single erection rod can provide all-round field-of-view coverage of the different driving directions of the road as well as of the blind area below the rod, without arranging monitoring cameras on different erection rods, which greatly reduces the pole cost.
In one embodiment of the present application, the apparatus further comprises:
a driving position information gathering point obtaining module, configured to obtain a first driving position information gathering point and a second driving position information gathering point of the same vehicle in the fields of view of the first monitoring camera and the second monitoring camera respectively, where the first driving position information gathering point and the second driving position information gathering point are determined based on images synchronously acquired by the first monitoring camera and the second monitoring camera;
and a position information gathering point fusion module, configured to perform position information gathering point fusion on the first driving position information gathering point and the second driving position information gathering point based on the image mapping relationship.
Therefore, after the monitoring cameras are calibrated with the scheme provided by the embodiment of the present application, the first driving position information gathering point and the second driving position information gathering point of the same vehicle, in the fields of view of the first monitoring camera and the second monitoring camera respectively, can be fused according to the image mapping relationship obtained by calibration. Since the scheme improves the calibration efficiency of the cameras, the efficiency of the subsequent position information gathering point fusion is improved as well.
In one embodiment of the present application, the apparatus further comprises:
the license plate number obtaining module is used for obtaining the license plate number of the same vehicle identified based on the image acquired by the first monitoring camera or the image acquired by the second monitoring camera;
and the association module is used for associating the license plate number with the fused position information gathering point.
In this way, the license plate number of the vehicle corresponding to the fused position information gathering point can be determined, so that the fused gathering point carries more information and facilitates subsequent vehicle management measures.
In the technical scheme of the application, the related operations of acquiring, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the user are all performed under the condition that the authorization of the user is obtained.
The embodiment of the application also provides an electronic device, as shown in fig. 9, including:
a memory 901 for storing a computer program;
the processor 902 is configured to implement the camera calibration method when executing the program stored in the memory 901.
The electronic device may further include a communication bus and/or a communication interface, and the processor 902, the communication interface and the memory 901 communicate with each other via the communication bus.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another embodiment provided herein, a computer readable storage medium is also provided, in which a computer program is stored, which when executed by a processor, implements any of the camera calibration methods described above.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the camera calibration methods of the above embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a Solid State Disk (SSD), etc.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus, electronic device and storage medium embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only needed.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (12)

1. A camera calibration method, comprising:
acquiring a first image and a second image which are synchronously acquired by a first monitoring camera and a second monitoring camera, wherein the first monitoring camera and the second monitoring camera have an overlapped field-of-view range;
identifying a first vehicle within an overlapping region of the first image and the second image, respectively, wherein the overlapping region is: the image region, in the images acquired by the first monitoring camera and the second monitoring camera, that corresponds to the overlapping field-of-view range;
extracting a first set of vehicle features of a preset part of the first vehicle from the first image, and extracting a second set of vehicle features of the preset part of the first vehicle from the second image;
determining an image mapping relationship between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera based on the first group of vehicle features and the second group of vehicle features, and using the image mapping relationship as a result of calibrating the first monitoring camera and the second monitoring camera;
checking whether the image mapping relationship needs to be updated based on the feature mapping difference of the vehicle in the synchronously acquired images of the first monitoring camera and the second monitoring camera, wherein the feature mapping difference is the difference between an expected feature and a reference feature, the expected feature is the feature obtained after the feature of the vehicle in one synchronously acquired image is mapped into the other synchronously acquired image, and the reference feature is the feature of the vehicle in the other synchronously acquired image;
and if necessary, updating the image mapping relation.
2. The method of claim 1, wherein the verifying whether the image mapping relationship needs to be updated based on feature mapping differences of vehicles in the synchronously acquired images of the first monitoring camera and the second monitoring camera comprises:
obtaining a third image and a fourth image which are synchronously acquired by the first monitoring camera and the second monitoring camera, wherein the third image is acquired after the first image, and the fourth image is acquired after the second image;
identifying the same vehicle in overlapping areas of the third image and the fourth image, respectively;
extracting a third set of vehicle features of a preset part of the identified vehicle from the third image, and extracting a fourth set of vehicle features of the preset part of the identified vehicle from the fourth image;
obtaining, based on the image mapping relationship, an expected feature of a non-reference feature, wherein the non-reference feature is: one of the third set and the fourth set of vehicle features;
determining the difference between the expected feature and a reference feature as the feature mapping difference of the vehicle in the synchronously acquired images of the first monitoring camera and the second monitoring camera, wherein the reference feature is: the set, of the third set and the fourth set of vehicle features, other than the non-reference feature;
and if the feature mapping difference is larger than a preset difference threshold, judging that the image mapping relation needs to be updated.
3. The method of claim 2, wherein the updating the image mapping relationship comprises:
the image mapping relationship is updated based on the third set of vehicle features and the fourth set of vehicle features.
4. The method of claim 2, wherein,
the same vehicle identified from the third image and the fourth image is the first vehicle; or
The same vehicle identified from the third image and the fourth image is a second vehicle different from the first vehicle.
5. The method of claim 1, wherein,
the first monitoring camera and the second monitoring camera are arranged on the same erection rod.
6. The method of claim 5, wherein,
a plurality of close-range cameras are arranged on the erection rod, wherein the close-range cameras are as follows: a camera with a camera central axis obliquely intersected with the horizontal plane;
the camera central axis of the second monitoring camera is perpendicularly intersected with the horizontal plane;
the first monitoring camera is a close-range camera which is arranged adjacent to the second monitoring camera.
7. The method according to any one of claims 1-6, further comprising:
acquiring a first driving position information gathering point and a second driving position information gathering point of the same vehicle in the field of view of the first monitoring camera and the second monitoring camera respectively, wherein the first driving position information gathering point and the second driving position information gathering point are determined based on images synchronously acquired by the first monitoring camera and the second monitoring camera;
and based on the image mapping relation, carrying out position information gathering point fusion on the first and second driving position information gathering points.
8. The method of claim 7, wherein the method further comprises:
obtaining a license plate number of the same vehicle identified based on the image acquired by the first monitoring camera or the image acquired by the second monitoring camera;
and correlating the license plate number with the fused position information gathering point.
9. A camera calibration apparatus, comprising:
the first image acquisition module is used for acquiring a first image and a second image which are synchronously acquired by the first monitoring camera and the second monitoring camera, wherein the first monitoring camera and the second monitoring camera have an overlapped field-of-view range;
the first vehicle identification module is used for identifying the first vehicle in the overlapping region of the first image and the second image, wherein the overlapping region is: the image region, in the images acquired by the first monitoring camera and the second monitoring camera, that corresponds to the overlapping field-of-view range;
a first set of vehicle feature extraction module, configured to extract a first set of vehicle features of a preset part of the first vehicle from the first image and extract a second set of vehicle features of the preset part of the first vehicle from the second image;
The mapping relation determining module is used for determining an image mapping relation between the image acquired by the first monitoring camera and the image acquired by the second monitoring camera based on the first group of vehicle features and the second group of vehicle features, and the image mapping relation is used as a result of calibrating the first monitoring camera and the second monitoring camera;
the mapping relation verification module is used for verifying whether the image mapping relationship needs to be updated based on the feature mapping difference of the vehicle in the synchronously acquired images of the first monitoring camera and the second monitoring camera, and triggering the mapping relation updating module if the image mapping relationship needs to be updated, wherein the feature mapping difference is the difference between an expected feature and a reference feature, the expected feature is the feature obtained after the feature of the vehicle in one synchronously acquired image is mapped into the other synchronously acquired image, and the reference feature is the feature of the vehicle in the other synchronously acquired image;
the mapping relation updating module is used for updating the image mapping relation.
10. The apparatus of claim 9, wherein,
the mapping relation verification module is specifically configured to: obtain a third image and a fourth image synchronously acquired by the first monitoring camera and the second monitoring camera, where the third image is acquired after the first image and the fourth image is acquired after the second image; identify the same vehicle in the overlapping regions of the third image and the fourth image respectively; extract a third set of vehicle features of a preset part of the identified vehicle from the third image, and extract a fourth set of vehicle features of the preset part of the identified vehicle from the fourth image; obtain, based on the image mapping relationship, an expected feature of a non-reference feature, wherein the non-reference feature is one of the third set and the fourth set of vehicle features; determine the difference between the expected feature and a reference feature as the feature mapping difference of the vehicle in the synchronously acquired images of the first monitoring camera and the second monitoring camera, wherein the reference feature is the set, of the third set and the fourth set of vehicle features, other than the non-reference feature; and if the feature mapping difference is greater than a preset difference threshold, determine that the image mapping relationship needs to be updated and trigger the mapping relation updating module;
or
The mapping relation updating module is specifically configured to update the image mapping relation based on the third set of vehicle features and the fourth set of vehicle features;
or
The same vehicle identified from the third image and the fourth image is the first vehicle; or the same vehicle identified from the third image and the fourth image is a second vehicle different from the first vehicle;
or
The preset part includes at least one of: a front window, a side window, a rearview mirror, a license plate and a car light;
or
The first monitoring camera and the second monitoring camera are arranged on the same erection rod;
or
A plurality of close-range cameras are arranged on the erection rod, wherein the close-range cameras are as follows: a camera with a camera central axis obliquely intersected with the horizontal plane; the camera central axis of the second monitoring camera is perpendicularly intersected with the horizontal plane; the first monitoring camera is a close-range camera arranged adjacent to the second monitoring camera;
or
The close-range camera includes: cameras with view angles facing different driving directions of the road;
or
The apparatus further comprises: a driving position information gathering point obtaining module, configured to obtain a first driving position information gathering point and a second driving position information gathering point of the same vehicle in the fields of view of the first monitoring camera and the second monitoring camera respectively, wherein the first driving position information gathering point and the second driving position information gathering point are determined based on images synchronously acquired by the first monitoring camera and the second monitoring camera; and a position information gathering point fusion module, configured to perform position information gathering point fusion on the first driving position information gathering point and the second driving position information gathering point based on the image mapping relationship;
or
The apparatus further comprises: the license plate number obtaining module is used for obtaining the license plate number of the same vehicle identified based on the image acquired by the first monitoring camera or the image acquired by the second monitoring camera; and the association module is used for associating the license plate number with the fused position information gathering point.
11. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1-8 when executing a program stored on a memory.
12. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the method of any of claims 1-8.
CN202311617441.1A 2023-10-26 2023-11-29 Camera calibration method and device Pending CN117523005A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2023114049489 2023-10-26
CN202311404948 2023-10-26

Publications (1)

Publication Number Publication Date
CN117523005A true CN117523005A (en) 2024-02-06

Family

ID=89751178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311617441.1A Pending CN117523005A (en) 2023-10-26 2023-11-29 Camera calibration method and device

Country Status (1)

Country Link
CN (1) CN117523005A (en)

Similar Documents

Publication Publication Date Title
AU2018282302B2 (en) Integrated sensor calibration in natural scenes
CN110378965B (en) Method, device and equipment for determining coordinate system conversion parameters of road side imaging equipment
CN107133988B (en) Calibration method and calibration system for camera in vehicle-mounted panoramic looking-around system
US11035958B2 (en) Systems and methods for correcting a high-definition map based on detection of obstructing objects
CN110766760B (en) Method, device, equipment and storage medium for camera calibration
WO2021155685A1 (en) Map updating method, apparatus and device
CN110728720B (en) Method, apparatus, device and storage medium for camera calibration
CN110751693B (en) Method, apparatus, device and storage medium for camera calibration
CN110766761B (en) Method, apparatus, device and storage medium for camera calibration
CN110736472A (en) indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar
CN111553956A (en) Calibration method and device of shooting device, electronic equipment and storage medium
CN111323027A (en) Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
CN112424568A (en) System and method for constructing high-definition map
CN116664498A (en) Training method of parking space detection model, parking space detection method, device and equipment
CN114863096B (en) Semantic map construction and positioning method and device for indoor parking lot
CN117523005A (en) Camera calibration method and device
CN111462243A (en) Vehicle-mounted streaming media rearview mirror calibration method, system and device
CN115457488A (en) Roadside parking management method and system based on binocular stereo vision
CN113327192B (en) Method for measuring and calculating automobile running speed through three-dimensional measurement technology
CN113591640A (en) Road guardrail detection method and device and vehicle
CN113777618B (en) Obstacle ranging method, obstacle ranging device, electronic equipment and readable medium
CN118247996A (en) Parking space state detection method and device, electronic equipment and readable storage medium
CN112136021B (en) System and method for constructing landmark-based high definition map
CN118196210A (en) Panoramic camera external parameter verification method, system and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination