CN115359132A - Camera calibration method and device for vehicle, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115359132A
Authority
CN
China
Prior art keywords
key point
camera
processed
target
camera parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211293635.6A
Other languages
Chinese (zh)
Other versions
CN115359132B (en)
Inventor
杨龙召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202211293635.6A priority Critical patent/CN115359132B/en
Publication of CN115359132A publication Critical patent/CN115359132A/en
Application granted granted Critical
Publication of CN115359132B publication Critical patent/CN115359132B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides a camera calibration method and device for a vehicle, an electronic device, and a storage medium. The method includes: acquiring an image to be processed and a reference image, wherein the image to be processed is obtained by a camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by a reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters; identifying a reference key point feature from the reference image; identifying a target key point feature from the image to be processed according to the reference key point feature and the reference camera parameters; and determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point feature, and the target key point feature.

Description

Camera calibration method and device for vehicle, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a camera calibration method and apparatus for a vehicle, an electronic device, and a storage medium.
Background
The maturing of automatic driving technology relies on accurately collecting road condition information while the vehicle is moving: a corresponding camera is arranged on the vehicle, scene images of the scene where the vehicle is located are collected by the camera, and the road condition information for the vehicle is obtained from the collected scene images, thereby guaranteeing driving safety.
In the related art, when a plurality of cameras are calibrated, the camera parameters are generally calibrated before the vehicle leaves the factory.
In this way, because the vehicle is affected by faults or changes in the usage scenario after leaving the factory, the factory-calibrated camera parameters may no longer fit the actual driving scene, which in turn degrades image acquisition in the vehicle driving scene.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
To this end, a camera calibration method and device for a vehicle, an electronic device, and a storage medium are provided. By calibrating the parameters of the camera to be calibrated in combination with the reference key point features of a reference image acquired by a reference camera, the camera parameter calibration effect can be effectively guaranteed while the computational complexity is effectively reduced.
The camera calibration method for the vehicle provided by the embodiment of the first aspect of the disclosure includes:
acquiring an image to be processed and a reference image, wherein the image to be processed is obtained by imaging a target area in a driving scene by a camera to be calibrated, the reference image is obtained by imaging the target area in the driving scene by a reference camera, and the reference camera has corresponding reference camera parameters; identifying a reference key point feature from a reference image; identifying and obtaining target key point characteristics from the image to be processed according to the reference key point characteristics and the reference camera parameters; and determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics.
In some embodiments of the present disclosure, identifying target keypoint features from an image to be processed according to reference camera parameters and reference keypoint features includes:
performing image transformation processing on an image to be processed according to the reference camera parameters to obtain a target image;
inputting a target image into a deep learning model to obtain a plurality of initial key point features output by the deep learning model;
and determining the target key point characteristics according to the reference camera parameters, the reference key point characteristics and the plurality of initial key point characteristics.
In some embodiments of the present disclosure, determining target keypoint features from reference camera parameters, reference keypoint features, and a plurality of initial keypoint features comprises:
identifying and obtaining key point features to be processed corresponding to the reference key point features from the plurality of initial key point features;
and determining the target key point characteristics according to the key point characteristics to be processed, the reference key point characteristics and the reference camera parameters.
In some embodiments of the present disclosure, determining the target keypoint feature according to the to-be-processed keypoint feature, the reference keypoint feature, and the reference camera parameter includes:
determining a target loss value according to the reference camera parameters, the key point characteristics to be processed and the reference key point characteristics;
and if the target loss value is less than or equal to the loss threshold value, taking the key point feature to be processed corresponding to the target loss value as the target key point feature.
In some embodiments of the present disclosure, determining a target loss value according to the reference camera parameters, the to-be-processed keypoint features, and the reference keypoint features includes:
determining a first similarity value between the key point feature to be processed and the reference key point feature according to the reference camera parameter, the key point feature to be processed and the reference key point feature;
determining a second similarity value between other key point features except the key point features to be processed in the initial key point features and the reference key point features according to the reference camera parameters, the key point features to be processed and the reference key point features;
and taking the sum of the first similarity value and the second similarity value as a target loss value.
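The loss construction above can be sketched numerically. This is a minimal illustration, assuming the two "similarity values" are realized as Euclidean feature-space distances; the disclosure does not fix a particular metric, so `target_loss` and its arguments are hypothetical names:

```python
import numpy as np

def target_loss(ref_feat, cand_feat, other_feats):
    # First similarity value: the candidate ("to-be-processed") key point
    # feature against the reference key point feature.
    first = np.linalg.norm(cand_feat - ref_feat)
    # Second similarity value: the remaining initial key point features
    # against the reference key point feature.
    second = sum(np.linalg.norm(f - ref_feat) for f in other_feats)
    # Target loss value: the sum of the two, per the claimed scheme.
    return first + second

ref = np.array([1.0, 0.0])
cand = np.array([1.0, 0.1])
others = [np.array([5.0, 5.0])]
loss = target_loss(ref, cand, others)
```

A candidate whose loss falls at or below the loss threshold would then be retained as the target key point feature.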
In some embodiments of the present disclosure, determining target camera parameters corresponding to a camera to be calibrated according to the reference camera parameters, the reference key point features, and the target key point features includes:
determining camera parameters to be processed corresponding to a camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics;
checking the camera parameters to be processed to obtain a parameter checking result;
and processing the camera parameters to be processed by adopting the parameter verification result to obtain the target camera parameters.
In some embodiments of the present disclosure, processing the camera parameter to be processed by using the parameter verification result to obtain the target camera parameter includes:
if the verification result indicates that the camera parameters to be processed pass verification, taking the camera parameters to be processed as the target camera parameters;
and if the verification result indicates that the camera parameters to be processed fail verification, updating the current camera parameters of the camera to be calibrated by using the camera parameters to be processed.
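The verification step is not spelled out in the disclosure; one plausible sketch, assuming the camera parameters to be processed are summarized as a 3x3 homography and checked by mean reprojection error against matched key points (both assumptions), is:

```python
import numpy as np

def passes_check(H, src_pts, dst_pts, tol=2.0):
    # Project reference-image points through the to-be-processed parameters
    # (modelled here as a 3x3 homography H) and measure mean pixel error.
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    proj = (H @ src_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    err = np.linalg.norm(proj - dst_pts, axis=1).mean()
    return bool(err <= tol)

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
ok = passes_check(np.eye(3), pts, pts)  # identity maps points exactly
```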
According to the camera calibration method for a vehicle provided by the embodiment of the first aspect of the disclosure, an image to be processed and a reference image are acquired, wherein the image to be processed is obtained by the camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by the reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters; a reference key point feature is identified from the reference image; a target key point feature is identified from the image to be processed according to the reference key point feature and the reference camera parameters; and the target camera parameters corresponding to the camera to be calibrated are determined according to the reference camera parameters, the reference key point feature, and the target key point feature. Because the parameters of the camera to be calibrated are calibrated in combination with the reference key point features of the reference image collected by the reference camera, the calibration effect of the camera parameters can be effectively guaranteed while the computational complexity is effectively reduced.
The camera calibration device for the vehicle that this disclosed second aspect embodiment provided includes:
the device comprises an acquisition module, a calibration module and a control module, wherein the acquisition module is used for acquiring an image to be processed and a reference image, the image to be processed is obtained by imaging a target area in a driving scene by a camera to be calibrated, the reference image is obtained by imaging the target area in the driving scene by a reference camera, and the reference camera has corresponding reference camera parameters;
the first identification module is used for identifying the reference key point characteristics from the reference image;
the second identification module is used for identifying and obtaining target key point characteristics from the image to be processed according to the reference key point characteristics and the reference camera parameters;
and the determining module is used for determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics.
In some embodiments of the disclosure, the second identification module comprises:
the first processing submodule is used for carrying out image transformation processing on the image to be processed according to the reference camera parameters so as to obtain a target image;
the second processing submodule is used for inputting the target image into the deep learning model so as to obtain a plurality of initial key point features output by the deep learning model;
and the first determining submodule is used for determining the target key point characteristics according to the reference camera parameters, the reference key point characteristics and the plurality of initial key point characteristics.
In some embodiments of the present disclosure, the first determining sub-module is further configured to:
identifying and obtaining key point features to be processed corresponding to the reference key point features from the plurality of initial key point features;
and determining the target key point characteristics according to the key point characteristics to be processed, the reference key point characteristics and the reference camera parameters.
In some embodiments of the present disclosure, the first determining sub-module is further configured to:
determining a target loss value according to the reference camera parameters, the key point characteristics to be processed and the reference key point characteristics;
and if the target loss value is less than or equal to the loss threshold value, taking the key point feature to be processed corresponding to the target loss value as the target key point feature.
In some embodiments of the disclosure, the first determining submodule is further configured to:
determining a first similarity value between the key point feature to be processed and the reference key point feature according to the reference camera parameter, the key point feature to be processed and the reference key point feature;
determining a second similarity value between other key point features except the key point features to be processed in the initial key point features and the reference key point features according to the reference camera parameters, the key point features to be processed and the reference key point features;
and taking the sum of the first similarity value and the second similarity value as a target loss value.
In some embodiments of the disclosure, the determining module comprises:
the second determining submodule is used for determining camera parameters to be processed corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics;
the third processing submodule is used for carrying out verification processing on the camera parameters to be processed so as to obtain a parameter verification result;
and the fourth processing submodule is used for processing the camera parameters to be processed by adopting the parameter verification result so as to obtain the target camera parameters.
In some embodiments of the disclosure, the third processing submodule is further configured to:
if the verification result indicates that the camera parameters to be processed pass verification, take the camera parameters to be processed as the target camera parameters;
and if the verification result indicates that the camera parameters to be processed fail verification, update the current camera parameters of the camera to be calibrated by using the camera parameters to be processed.
The camera calibration device for a vehicle provided by the embodiment of the second aspect of the disclosure acquires an image to be processed and a reference image, wherein the image to be processed is obtained by the camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by the reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters; identifies a reference key point feature from the reference image; identifies a target key point feature from the image to be processed according to the reference key point feature and the reference camera parameters; and determines the target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point feature, and the target key point feature. Because the parameters of the camera to be calibrated are calibrated in combination with the reference key point features of the reference image collected by the reference camera, the calibration effect of the camera parameters can be effectively guaranteed while the computational complexity is effectively reduced.
An embodiment of a third aspect of the present disclosure provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the camera calibration method for a vehicle according to the embodiment of the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a camera calibration method for a vehicle as set forth in the first aspect of the present disclosure.
An embodiment of a fifth aspect of the present disclosure provides a computer program product, wherein when instructions in the computer program product are executed by a processor, the camera calibration method for a vehicle according to the embodiment of the first aspect of the present disclosure is performed.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of a camera calibration method for a vehicle in accordance with some embodiments of the present disclosure;
FIG. 2 is a schematic flow chart diagram illustrating a camera calibration method for a vehicle according to another embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram illustrating a camera calibration method for a vehicle according to another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a camera calibration device for a vehicle according to some embodiments of the present disclosure;
fig. 5 is a schematic structural diagram of a camera calibration device for a vehicle according to other embodiments of the present disclosure;
FIG. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative only, for the purpose of explaining the present disclosure, and are not to be construed as limiting the present disclosure. On the contrary, the embodiments of the disclosure include all changes, modifications, and equivalents coming within the spirit and scope of the appended claims.
Fig. 1 is a schematic flowchart of a camera calibration method for a vehicle according to an embodiment of the disclosure.
The embodiments of the present disclosure are described by taking as an example a camera calibration method for a vehicle that is configured in a camera calibration apparatus for a vehicle.
The camera calibration apparatus for a vehicle may be disposed in a server, or may be disposed in an electronic device, which is not limited by the embodiments of the present disclosure.
In the disclosed embodiment, the electronic device may be any electronic device type suitable for implementation, for example, a smart phone, a tablet Computer, a wearable device, a Personal Computer (PC) device, and the like, which is not limited by the disclosed embodiment.
It should be noted that the execution subject in the embodiments of the present disclosure may be, in terms of hardware, for example, a Central Processing Unit (CPU) in a server or an electronic device, and in terms of software, for example, a related background service in the server or the electronic device, which is not limited herein.
As shown in fig. 1, in an embodiment of the present disclosure, a camera calibration method for a vehicle of an example of the present disclosure includes:
s101: the method comprises the steps of obtaining an image to be processed and a reference image, wherein the image to be processed is obtained by imaging a target area in a driving scene through a camera to be calibrated, the reference image is obtained by imaging the target area in the driving scene through a reference camera, and the reference camera has corresponding reference camera parameters.
In the initial implementation stage of the camera calibration method for a vehicle, the acquired image that has not yet been processed is referred to as the image to be processed.
In the implementation process of the camera calibration method for a vehicle, the camera whose parameters are to be calibrated is referred to as the camera to be calibrated.
The camera that serves as the calibration reference for the camera to be calibrated is referred to as the reference camera.
The reference camera has corresponding camera parameters, which may be referred to as reference camera parameters, and the reference camera parameters may specifically be camera internal parameters, camera external parameters, and the like, for example, without limitation.
The imaging ranges corresponding to the camera to be calibrated and the reference camera may have a view overlapping region, and the view overlapping region may be referred to as a target region.
The image acquired by the reference camera for the target area may be referred to as a reference image, and correspondingly, the image acquired by the to-be-calibrated camera for the target area may be referred to as a to-be-processed image, which is not limited herein.
That is to say, in a specific application scenario of the embodiment of the present disclosure, the reference camera and the camera to be calibrated in the vehicle may each acquire a scene image of the scene where the vehicle is located at the same moment; the corresponding image overlapping regions are identified from the two scene images and used as the reference image and the image to be processed, respectively; and the image to be processed may then be processed in combination with the reference image and the reference camera parameters to determine the camera parameters corresponding to the camera to be calibrated, thereby implementing the calibration processing of the camera to be calibrated, which is not limited herein.
In other embodiments, the to-be-processed image and the reference image may be obtained by presetting a corresponding target area in a vehicle driving scene, and acquiring a corresponding image as the reference image for the target area by the reference camera at the same time, and acquiring a corresponding image as the to-be-processed image for the target area by the to-be-calibrated camera, without limitation.
It should be noted that, in the embodiments of the present disclosure, the image to be processed and the reference image are obtained in compliance with relevant laws and regulations and do not violate public order and good morals.
S102: reference keypoint features are identified from a reference image.
After the reference image is obtained, the corresponding key point features can be identified from the reference image, and the key point features can be called as reference key point features.
In some embodiments, the reference key point features may be identified from the reference image by using a corresponding key point feature extraction algorithm, for example a Scale-Invariant Feature Transform (SIFT) algorithm, and taking the identified key point features as the reference key point features, which is not limited herein.
In other embodiments, the reference image may be input into a corresponding feature extraction model (for example, a deep learning model or a neural network model, without limitation), a plurality of image features corresponding to the reference image are output by the feature extraction model, and then an image feature that can characterize the key points of the reference image is selected from the plurality of image features and taken as the reference key point feature, which is not limited herein.
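As an illustration of the extraction-algorithm route, a toy Harris-style corner response can stand in for a full SIFT pipeline (an assumption: any interest-point detector could supply the reference key points; the helper names here are hypothetical):

```python
import numpy as np

def box_sum(a, r=1):
    # Sum each pixel's (2r+1) x (2r+1) neighbourhood by shifting a padded copy.
    p = np.pad(a, r)
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def corner_response(img, k=0.05):
    # Harris-style response: a windowed structure tensor built from gradients.
    gy, gx = np.gradient(img.astype(float))
    sxx, syy, sxy = box_sum(gx * gx), box_sum(gy * gy), box_sum(gx * gy)
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

img = np.zeros((8, 8))
img[4:, 4:] = 1.0  # one bright square whose corner should respond strongly
resp = corner_response(img)
```

Pixels where `resp` peaks would serve as the key points; a real pipeline would also compute descriptors around them.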
S103: and identifying and obtaining target key point characteristics from the image to be processed according to the reference key point characteristics and the reference camera parameters.
After the corresponding reference key point features are identified from the reference image, the key point features used for calibrating the camera to be calibrated may be identified from the image to be processed according to the reference key point features and the reference camera parameters; these key point features are referred to as the target key point features.
In some embodiments, the target key point features may be identified from the image to be processed by adjusting the image to be processed according to the reference key point features and the reference camera parameters, and then performing key point feature extraction on the adjusted image by using a key point feature extraction algorithm to obtain the corresponding target key point features, which is not limited herein.
In other embodiments, the target keypoint features are identified from the image to be processed according to the reference keypoint features and the reference camera parameters, or corresponding multiple keypoint features are identified from the image to be processed, and the target keypoint features are determined from the multiple keypoint features according to the reference keypoint features and the reference camera parameters, which is not limited to this.
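The second route, selecting target key point features from a set of candidates by proximity to the reference features, can be sketched as a nearest-neighbour match (Euclidean distance is an assumed proximity measure; the disclosure leaves the metric open):

```python
import numpy as np

def match_to_reference(ref_feats, cand_feats):
    # Pairwise distances between every reference feature and every candidate
    # feature from the image to be processed; pick the nearest candidate.
    d = np.linalg.norm(ref_feats[:, None, :] - cand_feats[None, :, :], axis=2)
    return d.argmin(axis=1)  # index of the matching candidate per reference

refs = np.array([[0.0, 0.0], [10.0, 10.0]])
cands = np.array([[9.0, 9.0], [1.0, 1.0]])
matches = match_to_reference(refs, cands)
```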
S104: and determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics.
In the embodiment of the present disclosure, after the reference key point feature and the target key point feature are determined, the camera parameters of the camera to be calibrated may be calibrated according to the reference camera parameters, the reference key point feature and the target key point feature, so as to obtain the camera parameters of the camera to be calibrated after calibration, where the camera parameters may be referred to as target camera parameters.
In some embodiments, the target camera parameters corresponding to the camera to be calibrated are determined as follows: the reference camera parameters are adjusted according to the correspondence between the reference key point features and the target key point features, and the adjusted camera parameters are taken as the target camera parameters of the camera to be calibrated. Alternatively, any other feasible manner may be adopted to determine the target camera parameters according to the reference camera parameters, the reference key point features, and the target key point features, for example model prediction or a camera calibration algorithm, which is not limited herein.
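One concrete instance of recovering parameters from the correspondence between reference and target key points is a Direct Linear Transform homography fit; this is a standard technique offered as a sketch, not the disclosure's specific solver:

```python
import numpy as np

def estimate_homography(src, dst):
    # Direct Linear Transform: each point correspondence contributes two rows;
    # the homography is the null-space vector of the stacked system,
    # recovered here as the last right-singular vector of the SVD.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        rows.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(2.0, 3.0), (3.0, 3.0), (2.0, 4.0), (3.0, 4.0)]  # src shifted by (2, 3)
H = estimate_homography(src, dst)
```

At least four non-degenerate correspondences are needed; a production pipeline would add RANSAC-style outlier rejection.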
The camera calibration method for the vehicle described in the embodiment of the disclosure may be applied to an application scenario of multi-camera calibration, that is, a reference camera may be determined from a plurality of cameras, and then calibration processing is performed on parameters of one or more other cameras to be calibrated in combination with the reference camera, so that a multi-camera calibration requirement in a vehicle driving scenario is effectively met, which is not limited herein.
In the embodiment of the disclosure, an image to be processed and a reference image are acquired, wherein the image to be processed is obtained by the camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by the reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters; a reference key point feature is identified from the reference image; a target key point feature is identified from the image to be processed according to the reference key point feature and the reference camera parameters; and the target camera parameters corresponding to the camera to be calibrated are determined according to the reference camera parameters, the reference key point feature, and the target key point feature. Because the parameters of the camera to be calibrated are calibrated in combination with the reference key point features of the reference image acquired by the reference camera, the calibration effect of the camera parameters can be effectively guaranteed while the computational complexity is effectively reduced.
Fig. 2 is a schematic flowchart of a camera calibration method for a vehicle according to another embodiment of the disclosure.
As shown in fig. 2, in some embodiments, a camera calibration method for a vehicle according to the embodiments of the present disclosure includes:
S201: Acquire an image to be processed and a reference image, where the image to be processed is obtained by a camera to be calibrated imaging a target area in the driving scene, the reference image is obtained by a reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters.
S202: reference keypoint features are identified from a reference image.
For the description of S201-S202, reference may be made to the above embodiments, which are not described herein again.
S203: and performing image transformation processing on the image to be processed according to the reference camera parameters to obtain a target image.
In the embodiment of the present disclosure, an image to be processed may be subjected to image transformation processing according to a reference camera parameter, and an image obtained after the image transformation processing may be used as a target image.
That is, the image to be processed may be rotated, translated, or otherwise transformed according to the reference camera parameters to obtain the target image, and the subsequent steps of the camera calibration method for the vehicle may then be performed based on the target image, which is not limited here.
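As an illustrative sketch only (the patent does not specify an implementation), an in-plane rotation plus translation — one of the transformations the text mentions — can be applied to an image with an inverse-mapped nearest-neighbor warp; the fixed rotation/translation here is a stand-in for whatever mapping the reference camera parameters would actually define:

```python
import numpy as np

def transform_image(img, angle_rad, tx, ty):
    # Warp a 2-D image by a rotation about the origin plus a translation,
    # using inverse mapping with nearest-neighbor sampling.
    h, w = img.shape
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    H = np.array([[c, -s, tx],
                  [s,  c, ty],
                  [0.0, 0.0, 1.0]])          # forward transform
    Hinv = np.linalg.inv(H)                  # map destination -> source
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ dst
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out
```

With a zero rotation and translation the warp is the identity; with `tx=1` every pixel shifts one column to the right and the first column is zero-filled.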
S204: and inputting the target image into the deep learning model to obtain a plurality of initial key point features output by the deep learning model.
After the target image is obtained, it may be input into the deep learning model, which performs keypoint recognition on the target image to obtain a plurality of initial keypoint features corresponding to it; the subsequent steps of the camera calibration method for the vehicle may then be performed in combination with these initial keypoint features, which is not limited here.
S205: and determining the target key point characteristics according to the reference camera parameters, the reference key point characteristics and the plurality of initial key point characteristics.
In the embodiment of the present disclosure, the initial key point feature suitable for performing the subsequent camera calibration method for the vehicle may be determined from the multiple initial key point features as the target key point feature according to the reference camera parameter and the reference key point feature, which is not limited herein.
In some embodiments, determining the target keypoint features according to the reference camera parameters, the reference keypoint features, and the plurality of initial keypoint features may consist of determining, from the plurality of initial keypoint features, the initial keypoint features corresponding to the reference keypoint features and using them as the target keypoint features, which is not limited here.
For example, the initial keypoint features corresponding to the reference keypoint features may be selected as follows: determine a similarity value between the reference keypoint features and each initial keypoint feature, compare each similarity value with a predetermined similarity threshold, and, when a similarity value is greater than the threshold, use the corresponding initial keypoint feature as a target keypoint feature, which is not limited here.
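A minimal sketch of this thresholded matching, assuming cosine similarity as the similarity measure (the patent does not fix a particular measure):

```python
import numpy as np

def select_target_features(ref_feat, init_feats, sim_threshold=0.9):
    # Keep the candidate (initial) keypoint features whose cosine similarity
    # to the reference keypoint feature exceeds the threshold.
    ref = ref_feat / np.linalg.norm(ref_feat)
    selected = []
    for feat in init_feats:
        sim = float(np.dot(ref, feat / np.linalg.norm(feat)))
        if sim > sim_threshold:
            selected.append(feat)
    return selected
```

For instance, with `ref_feat = [1, 0]` and a 0.9 threshold, the candidates `[1, 0]` and `[0.99, 0.1]` are kept while the orthogonal `[0, 1]` is rejected.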
In the embodiments of the disclosure, performing image transformation processing on the image to be processed according to the reference camera parameters allows the transformed target image to be adapted to the subsequent camera parameter calibration task. The target image is input into the deep learning model to obtain a plurality of initial keypoint features, and the target keypoint features are then determined according to the reference camera parameters, the reference keypoint features, and the plurality of initial keypoint features, so that the determination of the target keypoint features can be effectively improved.
S206: and determining the camera parameters to be processed corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics.
After the reference camera parameters, the reference keypoint features, and the target keypoint features are determined, the camera parameters corresponding to the camera to be calibrated can be determined from them; these parameters may be referred to as the to-be-processed camera parameters. The performance of a downstream task can then be used as an evaluation index to assess whether the to-be-processed camera parameters are accurate, as described in the subsequent embodiments.
In the embodiments of the disclosure, the to-be-processed camera parameters corresponding to the camera to be calibrated may be determined according to the reference camera parameters, the reference keypoint features, and the target keypoint features by means of a camera calibration algorithm or a camera parameter calibration tool, which is not limited here.
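As an illustrative stand-in for the unspecified calibration algorithm, a planar homography relating the reference keypoints to the target keypoints can be estimated with the classical direct linear transform (DLT); a full calibration would instead recover intrinsic/extrinsic parameters, so this is only a sketch of the geometric fitting step:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    # Direct linear transform: least-squares homography from >= 4
    # point correspondences (src -> dst), solved via SVD.
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]
```

For correspondences related by a pure translation of (2, 3), the recovered H maps the point (1, 1) to (3, 4).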
S207: and checking the camera parameters to be processed to obtain a parameter checking result.
In the embodiments of the disclosure, after the to-be-processed camera parameters corresponding to the camera to be calibrated are determined according to the reference camera parameters, the reference keypoint features, and the target keypoint features, the performance of a downstream task can be used as an evaluation index to verify the to-be-processed camera parameters; the resulting outcome may be referred to as the parameter verification result.
For example, the output of a model prediction task downstream of the camera calibration method for the vehicle may be used as the evaluation index. That is, the current camera parameters of the camera to be calibrated may be updated with the to-be-processed camera parameters, a scene image of the vehicle acquired with the updated camera, and model prediction performed on that scene image; whether the precision of the prediction result is better than before calibration then yields the parameter verification result. If the precision is better than before calibration, the result is that the parameters pass verification; if not, the result is that the parameters fail verification, which is not limited here.
S208: and processing the camera parameters to be processed by adopting the parameter verification result to obtain the target camera parameters.
In the embodiment of the disclosure, after the camera parameters to be processed are checked to obtain the parameter checking result, the camera parameters to be processed may be processed by using the parameter checking result to obtain the target camera parameters.
Optionally, in some embodiments, processing the to-be-processed camera parameters using the parameter verification result may proceed as follows: when the verification result indicates that the to-be-processed camera parameters pass verification, the to-be-processed camera parameters are used as the target camera parameters; when the verification result indicates that they fail verification, the current camera parameters of the camera to be calibrated are updated with the to-be-processed camera parameters.
Before the camera calibration method for the vehicle is executed, the camera to be calibrated already has corresponding camera parameters; these may be referred to as the current camera parameters.
That is, in the embodiments of the disclosure, if the verification result indicates that the to-be-processed camera parameters pass verification, they can be used directly as the target camera parameters; if the verification result indicates that they fail verification, the current camera parameters of the camera to be calibrated are updated with the to-be-processed camera parameters, triggering the camera calibration method described in the embodiments of the disclosure to continue calibrating the parameters of the camera to be calibrated, which is not limited here.
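The verify-and-retry control flow of S206–S208 can be sketched as a loop, with `calibrate_step` and `check` standing in for the patent's unspecified calibration and downstream-verification procedures:

```python
def calibrate_until_pass(current_params, calibrate_step, check, max_iters=10):
    # Repeatedly compute candidate parameters; accept them once they pass the
    # downstream check, otherwise adopt them as the new current parameters
    # and retry (bounded by max_iters for safety).
    params = current_params
    for _ in range(max_iters):
        candidate = calibrate_step(params)
        if check(candidate):
            return candidate      # verification passed: target parameters
        params = candidate        # verification failed: update and retry
    return params
```

With toy stand-ins — a step that increments a scalar parameter and a check that passes at 3 — the loop returns 3 after three iterations.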
In the embodiments of the disclosure, the to-be-processed camera parameters corresponding to the camera to be calibrated are determined according to the reference camera parameters, the reference keypoint features, and the target keypoint features; the to-be-processed camera parameters are checked to obtain a parameter verification result; and the to-be-processed camera parameters are processed using that result to obtain the target camera parameters, so that the accuracy of the calibrated parameters can be assessed against downstream task performance.
In this embodiment, an image to be processed and a reference image are acquired, where the image to be processed is obtained by a camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by a reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters. Reference keypoint features are identified from the reference image, and image transformation processing is performed on the image to be processed according to the reference camera parameters, so that the transformed target image can be adapted to the subsequent camera parameter calibration task. The target image is input into the deep learning model to obtain a plurality of initial keypoint features, and the target keypoint features are determined according to the reference camera parameters, the reference keypoint features, and the plurality of initial keypoint features, so that the determination of the target keypoint features can be effectively improved. The to-be-processed camera parameters are then determined according to the reference camera parameters, the reference keypoint features, and the target keypoint features, and are verified to obtain the target camera parameters, so that the camera parameters can be evaluated based on downstream performance and the calibration effect can be effectively improved.
Fig. 3 is a schematic flowchart of a camera calibration method for a vehicle according to another embodiment of the disclosure.
As shown in fig. 3, in some embodiments, a camera calibration method for a vehicle according to the embodiments of the present disclosure includes:
S301: Acquire an image to be processed and a reference image, where the image to be processed is obtained by a camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by a reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters.
S302: reference keypoint features are identified from a reference image.
S303: and performing image transformation processing on the image to be processed according to the reference camera parameters to obtain a target image.
S304: and inputting the target image into the deep learning model to obtain a plurality of initial key point features output by the deep learning model.
For the description of S301 to S304, reference may be made to the above embodiments, which are not described herein again.
S305: and identifying and obtaining the key point features to be processed corresponding to the reference key point features from the plurality of initial key point features.
It can be understood that, since the reference image and the image to be processed are both obtained by imaging the same target area, to-be-processed keypoint features corresponding to the reference keypoint features exist among the plurality of initial keypoint features, which is not limited here.
In the embodiments of the disclosure, after the target image is input into the deep learning model to obtain a plurality of initial keypoint features, the keypoint features corresponding to the reference keypoint features may be identified from among them; these may be referred to as the to-be-processed keypoint features.
S306: and determining the target key point characteristics according to the key point characteristics to be processed, the reference key point characteristics and the reference camera parameters.
In the embodiment of the present disclosure, after the to-be-processed keypoint features corresponding to the reference keypoint features are obtained by identifying the multiple initial keypoint features, the target keypoint features may be determined according to the to-be-processed keypoint features, the reference keypoint features, and the reference camera parameters.
That is to say, in the embodiments of the disclosure, the target keypoint features may be determined from among the to-be-processed keypoint features according to the to-be-processed keypoint features, the reference keypoint features, and the reference camera parameters; the camera parameters of the camera to be calibrated may then be calibrated based on the target keypoint features, as described in the following embodiments.
Optionally, in some embodiments, the target keypoint features are determined as follows: a target loss value is determined according to the reference camera parameters, the to-be-processed keypoint features, and the reference keypoint features, and when the target loss value is less than or equal to a loss threshold, the to-be-processed keypoint feature corresponding to that target loss value is used as a target keypoint feature.
That is to say, a loss function may be constructed from the reference camera parameters, the to-be-processed keypoint features, and the reference keypoint features, with its loss value used as the target loss value; the target loss value is then compared with a predetermined loss threshold, and when it is less than or equal to the threshold, the corresponding to-be-processed keypoint feature is used as the target keypoint feature.
Optionally, in some embodiments, the target loss value is determined as follows: a first similarity value between the to-be-processed keypoint features and the reference keypoint features is determined according to the reference camera parameters, the to-be-processed keypoint features, and the reference keypoint features; a second similarity value between the reference keypoint features and the other keypoint features (the initial keypoint features other than the to-be-processed keypoint features) is determined likewise; and the sum of the first and second similarity values is used as the target loss value. In this way, the determination of the target loss value is effectively improved: it characterizes both the similarity between the to-be-processed keypoint features and the reference keypoint features and the similarity between the other keypoint features and the reference keypoint features, effectively improving its referability.
The similarity value between the to-be-processed keypoint features and the reference keypoint features may be referred to as the first similarity value; it may be, for example, the Euclidean distance or the cosine similarity between the two, which is not limited here.
In the embodiments of the disclosure, the other keypoint features may be determined by drawing a circle of radius r centred on the keypoint feature and taking the keypoint features falling outside that circle as the other keypoint features, which is not limited here.
The similarity value between the other keypoint features and the reference keypoint features may be referred to as the second similarity value; it may likewise be, for example, the Euclidean distance or the cosine similarity between the two, which is not limited here.
That is to say, in the embodiments of the disclosure, the first similarity value between the to-be-processed keypoint features and the reference keypoint features may be determined according to the reference camera parameters, the to-be-processed keypoint features, and the reference keypoint features; this may be expressed as:

[Equation 1 — rendered as an image in the original publication; computes loss_pos from feat1, feat2, and param over indices i = 1…H, j = 1…W]

where loss_pos denotes the first similarity value, feat1 the reference keypoint feature, feat2 the to-be-processed keypoint feature, param the reference camera parameters, i and j the summation indices, and H and W the height and width of the keypoint feature map, respectively.
In the embodiments of the disclosure, the second similarity value between the reference keypoint features and the other keypoint features (the initial keypoint features other than the to-be-processed keypoint features) may be determined according to the reference camera parameters, the to-be-processed keypoint features, and the reference keypoint features; this may be expressed as:

[Equation 2 — rendered as an image in the original publication; computes loss_neg from feat1, the other keypoint features, and param over indices i = 1…H, j = 1…W]

where loss_neg denotes the second similarity value, feat1 the reference keypoint feature, the other keypoint features are those falling outside a circle of radius r centred on the keypoint feature, param the reference camera parameters, i and j the summation indices, and H and W the height and width of the keypoint feature map, respectively.
After loss_pos and loss_neg are determined, the sum of the first similarity value and the second similarity value may be used as the target loss value; this may be expressed as:

loss = loss_pos + loss_neg

where loss denotes the target loss value.
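Since the exact formulas are rendered as images in the original publication, the following is only a hedged numpy sketch of the combined loss, using mean squared difference for loss_pos and a dot-product similarity over the outside-the-circle region for loss_neg (both are stand-in choices, not the patent's actual formulas):

```python
import numpy as np

def target_loss(feat1, feat2, center, r):
    # feat1: reference keypoint feature map (H x W)
    # feat2: to-be-processed keypoint feature map (H x W)
    # center, r: keypoint location (x, y) and radius of the exclusion circle
    H, W = feat1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    outside = (xs - center[0]) ** 2 + (ys - center[1]) ** 2 > r ** 2
    # loss_pos: mean squared difference between the two maps (stand-in)
    loss_pos = np.mean((feat1 - feat2) ** 2)
    # loss_neg: mean similarity to features outside the circle (stand-in)
    loss_neg = np.mean(feat1[outside] * feat2[outside]) if outside.any() else 0.0
    return loss_pos + loss_neg
```

Identical all-zero maps give a loss of 0, while identical all-one maps give 0 for loss_pos plus a loss_neg of 1 from the region outside the circle.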
S307: and determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics.
For the description of S307, reference may be made to the foregoing embodiments, and details are not repeated herein.
In the embodiments of the disclosure, an image to be processed and a reference image are acquired, where the image to be processed is obtained by a camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by a reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters. Reference keypoint features are identified from the reference image, image transformation processing is performed on the image to be processed according to the reference camera parameters to obtain a target image, and the target image is input into the deep learning model to obtain a plurality of initial keypoint features. The to-be-processed keypoint features corresponding to the reference keypoint features are identified from among the initial keypoint features, the target keypoint features are determined according to the to-be-processed keypoint features, the reference keypoint features, and the reference camera parameters, and the target camera parameters corresponding to the camera to be calibrated are determined according to the reference camera parameters, the reference keypoint features, and the target keypoint features. Because the parameters of the camera to be calibrated are calibrated in combination with the reference image acquired by the reference camera, the calibration effect of the camera parameters can be effectively guaranteed while the computational complexity is effectively reduced.
Fig. 4 is a schematic structural diagram of a camera calibration device for a vehicle according to an embodiment of the present disclosure.
As shown in fig. 4, in some embodiments, a camera calibration apparatus 40 for a vehicle of an example of the present disclosure includes:
the acquiring module 401 is configured to acquire an image to be processed and a reference image, where the image to be processed is obtained by imaging a target area in a driving scene by a camera to be calibrated, the reference image is obtained by imaging the target area in the driving scene by a reference camera, and the reference camera has corresponding reference camera parameters;
a first identification module 402, configured to identify a reference keypoint feature from a reference image;
a second identification module 403, configured to identify target keypoint features from the image to be processed according to the reference keypoint features and the reference camera parameters;
the determining module 404 is configured to determine, according to the reference camera parameters, the reference key point features, and the target key point features, target camera parameters corresponding to the camera to be calibrated.
In some embodiments of the present disclosure, as shown in fig. 5, fig. 5 is a schematic structural diagram of a camera calibration apparatus for a vehicle according to another embodiment of the present disclosure, and the second identification module 403 includes:
the first processing submodule 4031 is used for performing image transformation processing on an image to be processed according to the reference camera parameters to obtain a target image;
the second processing sub-module 4032 is used for inputting the target image into the deep learning model to obtain a plurality of initial key point features output by the deep learning model;
the first determining submodule 4033 is configured to determine the target keypoint features according to the reference camera parameters, the reference keypoint features, and the multiple initial keypoint features.
In some embodiments of the present disclosure, the first determination submodule 4033 is further operable to:
identifying and obtaining key point features to be processed corresponding to the reference key point features from the plurality of initial key point features;
and determining the target key point characteristics according to the key point characteristics to be processed, the reference key point characteristics and the reference camera parameters.
In some embodiments of the present disclosure, the first determination submodule 4033 is further configured to:
determining a target loss value according to the reference camera parameters, the key point characteristics to be processed and the reference key point characteristics;
and if the target loss value is less than or equal to the loss threshold value, taking the key point feature to be processed corresponding to the target loss value as the target key point feature.
In some embodiments of the present disclosure, the first determination submodule 4033 is further operable to:
determining a first similarity value between the key point feature to be processed and the reference key point feature according to the reference camera parameter, the key point feature to be processed and the reference key point feature;
determining a second similarity value between other key point features except the key point features to be processed in the initial key point features and the reference key point features according to the reference camera parameters, the key point features to be processed and the reference key point features;
and taking the sum of the first similarity value and the second similarity value as a target loss value.
In some embodiments of the present disclosure, the determining module 404 includes:
the second determining submodule 4041 is configured to determine, according to the reference camera parameters, the reference key point features, and the target key point features, to-be-processed camera parameters corresponding to the camera to be calibrated;
the third processing submodule 4042 is configured to perform verification processing on the camera parameters to be processed to obtain a parameter verification result;
the fourth processing sub-module 4043 is configured to process the camera parameters to be processed by using the parameter verification result to obtain the target camera parameters.
In some embodiments of the present disclosure, the fourth processing submodule 4043 is further configured to:
if the verification result indicates that the to-be-processed camera parameters pass verification, use the to-be-processed camera parameters as the target camera parameters;
if the verification result indicates that the to-be-processed camera parameters fail verification, update the current camera parameters of the camera to be calibrated with the to-be-processed camera parameters.
It should be noted that the foregoing explanation of the embodiment of the camera calibration method for a vehicle also applies to the camera calibration device for a vehicle of this embodiment, and details are not repeated here.
In the embodiments of the disclosure, an image to be processed and a reference image are acquired, where the image to be processed is obtained by a camera to be calibrated imaging a target area in a driving scene, the reference image is obtained by a reference camera imaging the same target area, and the reference camera has corresponding reference camera parameters. Reference keypoint features are identified from the reference image, target keypoint features are identified from the image to be processed according to the reference keypoint features and the reference camera parameters, and the target camera parameters of the camera to be calibrated are determined according to the reference camera parameters, the reference keypoint features, and the target keypoint features. Because the parameters of the camera to be calibrated are calibrated in combination with the reference image acquired by the reference camera, the calibration effect of the camera parameters can be effectively guaranteed while the computational complexity is effectively reduced.
In order to implement the above embodiments, the present disclosure further provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, where, when the processor executes the program, the camera calibration method for a vehicle set forth in the foregoing embodiments of the disclosure is implemented.
In order to achieve the above-mentioned embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a camera calibration method for a vehicle as proposed in the previous embodiments of the present disclosure.
In order to implement the above embodiments, the present disclosure further provides a computer program product which, when its instructions are executed by a processor, performs the camera calibration method for a vehicle set forth in the foregoing embodiments of the present disclosure.
FIG. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device 12 shown in fig. 6 is only an example and should not bring any limitations to the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus, to name a few.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive").
Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card or modem) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the camera calibration method for a vehicle described in the foregoing embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It should be noted that, in the description of the present disclosure, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present disclosure, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by a program instructing related hardware. The program may be stored in a computer readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (16)

1. A camera calibration method for a vehicle, comprising:
acquiring an image to be processed and a reference image, wherein the image to be processed is obtained by imaging a target area in a driving scene by a camera to be calibrated, the reference image is obtained by imaging the target area in the driving scene by a reference camera, and the reference camera has corresponding reference camera parameters;
identifying reference key point features from the reference image;
identifying and obtaining target key point features from the image to be processed according to the reference key point features and the reference camera parameters;
and determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics.
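The four steps of claim 1 can be sketched as follows. This is a minimal illustration, not the patented method: the function names are hypothetical, the keypoint "detector" is a stub, and the camera parameters are reduced to a 2-D offset purely so the data flow of the claim is visible.

```python
# Illustrative sketch of claim 1. All names are hypothetical, and
# "camera parameters" are modelled as a simple (dx, dy) offset; the
# claim itself does not fix a parameter model or a key point detector.

def identify_key_points(image):
    # Stand-in detector: an "image" here is already a list of (x, y)
    # key point coordinates.
    return list(image)

def calibrate(image_to_be_processed, reference_image, reference_params):
    # Identify reference key point features from the reference image.
    ref_kps = identify_key_points(reference_image)
    # Identify target key point features from the image to be processed.
    tgt_kps = identify_key_points(image_to_be_processed)
    # Determine target camera parameters from the reference parameters
    # and the matched key point sets (here: the mean offset between
    # matched points, added to the reference offset).
    n = min(len(ref_kps), len(tgt_kps))
    dx = sum(t[0] - r[0] for r, t in zip(ref_kps, tgt_kps)) / n
    dy = sum(t[1] - r[1] for r, t in zip(ref_kps, tgt_kps)) / n
    return (reference_params[0] + dx, reference_params[1] + dy)
```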
2. The method of claim 1, wherein said identifying and obtaining the target key point features from the image to be processed according to the reference key point features and the reference camera parameters comprises:
performing image transformation processing on the image to be processed according to the reference camera parameters to obtain a target image;
inputting the target image into a deep learning model to obtain a plurality of initial key point features output by the deep learning model;
and determining the target key point features according to the reference camera parameters, the reference key point features and the plurality of initial key point features.
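Claim 2's transform-then-detect step can be sketched as below. Modelling the "image transformation processing" as a 3x3 homography applied to point coordinates, and the deep learning model as a pass-through stub, are illustrative assumptions; the claim covers any transform derived from the reference camera parameters and any learned detector.

```python
# Sketch of claim 2: transform according to the reference camera
# parameters (here: a 3x3 homography in row-major nested lists), then
# obtain a plurality of initial key point features from a (stubbed)
# deep learning model.

def apply_homography(pt, H):
    # Project a 2-D point through a 3x3 homography.
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def initial_key_points(points, H, detector=lambda pts: pts):
    # Warp the coordinates, then let the "model" emit initial key points.
    warped = [apply_homography(p, H) for p in points]
    return detector(warped)
```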
3. The method of claim 2, wherein said determining the target key point features according to the reference camera parameters, the reference key point features and the plurality of initial key point features comprises:
identifying and obtaining key point features to be processed corresponding to the reference key point features from the plurality of initial key point features;
and determining the target key point features according to the key point features to be processed, the reference key point features and the reference camera parameters.
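The correspondence step of claim 3 can be sketched as a nearest-neighbour search. Nearest-neighbour matching in pixel coordinates is an assumption made here for illustration; the claim does not fix the rule by which a candidate "corresponds to" a reference key point feature.

```python
import math

# Sketch of claim 3: from the plurality of initial key point features,
# identify the key point feature to be processed that corresponds to a
# given reference key point feature (assumed: nearest neighbour).

def key_point_to_be_processed(reference_kp, initial_kps):
    return min(initial_kps,
               key=lambda kp: math.hypot(kp[0] - reference_kp[0],
                                         kp[1] - reference_kp[1]))
```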
4. The method of claim 3, wherein said determining the target key point features according to the key point features to be processed, the reference key point features, and the reference camera parameters comprises:
determining a target loss value according to the reference camera parameters, the key point features to be processed and the reference key point features;
and if the target loss value is less than or equal to a loss threshold value, taking the to-be-processed key point feature corresponding to the target loss value as the target key point feature.
5. The method of claim 4, wherein said determining a target loss value according to the reference camera parameters, the key point features to be processed and the reference key point features comprises:
determining a first similarity value between the key point feature to be processed and the reference key point feature according to the reference camera parameters, the key point feature to be processed and the reference key point feature;
determining second similarity values between the reference key point features and key point features, among the plurality of initial key point features, other than the key point features to be processed, according to the reference camera parameters, the key point features to be processed and the reference key point features;
taking the sum of the first similarity value and the second similarity value as the target loss value.
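The loss of claims 4-5 can be sketched as below: a first similarity value between the candidate and the reference key point, plus second similarity values between the remaining initial key points and the reference key point, compared against a loss threshold. Using Euclidean distance as the similarity measure is an assumption; the claims leave the metric open.

```python
import math

# Sketch of claims 4-5. Euclidean distance stands in for the
# unspecified similarity measure.

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def target_loss(candidate, other_initial_kps, reference_kp):
    first = distance(candidate, reference_kp)
    second = sum(distance(kp, reference_kp) for kp in other_initial_kps)
    return first + second  # claim 5: sum of first and second values

def accept_as_target(candidate, other_initial_kps, reference_kp, loss_threshold):
    # Claim 4: the candidate becomes the target key point feature only
    # if its loss is less than or equal to the loss threshold.
    loss = target_loss(candidate, other_initial_kps, reference_kp)
    return candidate if loss <= loss_threshold else None
```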
6. The method of claim 1, wherein said determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point features, and the target key point features comprises:
determining the camera parameters to be processed corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics;
verifying the camera parameters to be processed to obtain a parameter verification result;
and processing the camera parameters to be processed by adopting the parameter verification result to obtain the target camera parameters.
7. The method of claim 6, wherein said processing the camera parameters to be processed by adopting the parameter verification result to obtain the target camera parameters comprises:
if the parameter verification result indicates that the camera parameters to be processed pass the verification, taking the camera parameters to be processed as the target camera parameters; and
if the parameter verification result indicates that the camera parameters to be processed fail the verification, updating current camera parameters of the camera to be calibrated by adopting the camera parameters to be processed.
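The verify-then-branch logic of claims 6-7 can be sketched as below. Modelling verification as a reprojection-error check against matched key points, with camera parameters again reduced to a 2-D offset, is an illustrative assumption; the claims do not specify the verification criterion.

```python
import math

# Sketch of claims 6-7: verify the camera parameters to be processed,
# then either promote them to target parameters (pass) or use them to
# update the camera's current parameters (fail).

def verify(params, ref_kps, tgt_kps, max_error=2.0):
    # Assumed criterion: worst-case residual between reference key
    # points shifted by the candidate offset and the target key points.
    dx, dy = params
    worst = max(math.hypot(t[0] - (r[0] + dx), t[1] - (r[1] + dy))
                for r, t in zip(ref_kps, tgt_kps))
    return worst <= max_error

def process_params(params_to_be_processed, current_params, passed):
    # Returns (target_params, current_params) after applying claim 7.
    if passed:
        return params_to_be_processed, current_params
    # Verification failed: update the camera's current parameters with
    # the parameters to be processed (no target parameters yet).
    return None, params_to_be_processed
```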
8. A camera calibration device for a vehicle, comprising:
an acquisition module for acquiring an image to be processed and a reference image, wherein the image to be processed is obtained by imaging a target area in a driving scene by a camera to be calibrated, the reference image is obtained by imaging the target area in the driving scene by a reference camera, and the reference camera has corresponding reference camera parameters;
a first identification module for identifying reference key point features from the reference image;
the second identification module is used for identifying and obtaining target key point characteristics from the image to be processed according to the reference key point characteristics and the reference camera parameters;
and the determining module is used for determining target camera parameters corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics.
9. The apparatus of claim 8, wherein the second identification module comprises:
the first processing submodule is used for carrying out image transformation processing on the image to be processed according to the reference camera parameters so as to obtain a target image;
the second processing submodule is used for inputting the target image into a deep learning model so as to obtain a plurality of initial key point features output by the deep learning model;
and the first determining submodule is used for determining the target key point features according to the reference camera parameters, the reference key point features and the plurality of initial key point features.
10. The apparatus of claim 9, wherein the first determination submodule is further configured to:
identifying and obtaining key point features to be processed corresponding to the reference key point features from the plurality of initial key point features;
and determining the target key point features according to the key point features to be processed, the reference key point features and the reference camera parameters.
11. The apparatus of claim 10, wherein the first determination submodule is further configured to:
determining a target loss value according to the reference camera parameters, the key point features to be processed and the reference key point features;
and if the target loss value is less than or equal to a loss threshold value, taking the key point feature to be processed corresponding to the target loss value as the target key point feature.
12. The apparatus of claim 11, wherein the first determination submodule is further configured to:
determining a first similarity value between the key point feature to be processed and the reference key point feature according to the reference camera parameters, the key point feature to be processed and the reference key point feature;
determining second similarity values between the reference key point features and key point features, among the plurality of initial key point features, other than the key point features to be processed, according to the reference camera parameters, the key point features to be processed and the reference key point features;
taking the sum of the first similarity value and the second similarity value as the target loss value.
13. The apparatus of claim 8, wherein the determining module comprises:
the second determining submodule is used for determining the camera parameters to be processed corresponding to the camera to be calibrated according to the reference camera parameters, the reference key point characteristics and the target key point characteristics;
the third processing submodule is used for carrying out verification processing on the camera parameters to be processed so as to obtain a parameter verification result;
and the fourth processing submodule is used for processing the camera parameters to be processed by adopting the parameter verification result so as to obtain the target camera parameters.
14. The apparatus of claim 13, wherein the fourth processing submodule is further configured to:
if the parameter verification result indicates that the camera parameters to be processed pass the verification, take the camera parameters to be processed as the target camera parameters; and
if the parameter verification result indicates that the camera parameters to be processed fail the verification, update current camera parameters of the camera to be calibrated by adopting the camera parameters to be processed.
15. An electronic device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the camera calibration method for a vehicle according to any one of claims 1 to 7.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a camera calibration method for a vehicle according to any one of claims 1 to 7.
CN202211293635.6A 2022-10-21 2022-10-21 Camera calibration method and device for vehicle, electronic equipment and storage medium Active CN115359132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211293635.6A CN115359132B (en) 2022-10-21 2022-10-21 Camera calibration method and device for vehicle, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115359132A true CN115359132A (en) 2022-11-18
CN115359132B CN115359132B (en) 2023-03-24

Family

ID=84008084


Country Status (1)

Country Link
CN (1) CN115359132B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473259A (en) * 2019-07-31 2019-11-19 深圳市商汤科技有限公司 Pose determines method and device, electronic equipment and storage medium
CN111488855A (en) * 2020-04-24 2020-08-04 上海眼控科技股份有限公司 Fatigue driving detection method, device, computer equipment and storage medium
WO2020220809A1 (en) * 2019-04-29 2020-11-05 北京字节跳动网络技术有限公司 Action recognition method and device for target object, and electronic apparatus
CN114170324A (en) * 2021-12-09 2022-03-11 深圳市商汤科技有限公司 Calibration method and device, electronic equipment and storage medium
CN114187624A (en) * 2021-11-09 2022-03-15 北京百度网讯科技有限公司 Image generation method, image generation device, electronic equipment and storage medium
CN114332977A (en) * 2021-10-14 2022-04-12 北京百度网讯科技有限公司 Key point detection method and device, electronic equipment and storage medium
US20220219708A1 (en) * 2021-01-14 2022-07-14 Ford Global Technologies, Llc Multi-degree-of-freedom pose for vehicle navigation



Similar Documents

Publication Publication Date Title
CN107729935B (en) The recognition methods of similar pictures and device, server, storage medium
US20230214989A1 (en) Defect detection method, electronic device and readable storage medium
CN112559341A (en) Picture testing method, device, equipment and storage medium
US9928408B2 (en) Signal processing
CN113255516A (en) Living body detection method and device and electronic equipment
CN110298302B (en) Human body target detection method and related equipment
CN111062927A (en) Method, system and equipment for detecting image quality of unmanned aerial vehicle
CN113158773B (en) Training method and training device for living body detection model
CN114817933A (en) Method and device for evaluating robustness of business prediction model and computing equipment
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
JP2004030694A (en) Digital video texture analytic method
Bhat et al. Investigating inconsistencies in prnu-based camera identification
CN110969640A (en) Video image segmentation method, terminal device and computer-readable storage medium
CN115359132B (en) Camera calibration method and device for vehicle, electronic equipment and storage medium
US11507670B2 (en) Method for testing an artificial intelligence model using a substitute model
Doan et al. Image tampering detection based on a statistical model
CN109934185B (en) Data processing method and device, medium and computing equipment
CN116071804A (en) Face recognition method and device and electronic equipment
KR102405168B1 (en) Method and apparatus for generating of data set, computer-readable storage medium and computer program
CN114639056A (en) Live content identification method and device, computer equipment and storage medium
CN115393756A (en) Visual image-based watermark identification method, device, equipment and medium
CN115004245A (en) Target detection method, target detection device, electronic equipment and computer storage medium
CN111274899B (en) Face matching method, device, electronic equipment and storage medium
CN112825145B (en) Human body orientation detection method and device, electronic equipment and computer storage medium
CN112883973A (en) License plate recognition method and device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant