CN116563387A - Training method and device of calibration model, storage medium and electronic equipment


Info

Publication number
CN116563387A
Authority
CN
China
Prior art keywords
attribute
calibration
point cloud
sample
image
Prior art date
Legal status
Pending
Application number
CN202310484027.1A
Other languages
Chinese (zh)
Inventor
田值
马际洲
许爽
聂琼
初祥祥
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202310484027.1A
Publication of CN116563387A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The specification discloses a training method and apparatus for a calibration model, a storage medium, and an electronic device. A sample image and a sample point cloud are obtained, a sample calibration relation is determined by the calibration model, the attribute of the sample point cloud is determined according to the sample calibration relation, the attribute of the predicted point cloud corresponding to the sample image is determined from the sample image, and a loss is then determined from the difference between the attribute of the predicted point cloud and the attribute of the sample point cloud, so as to train the calibration model. Based on the calibration model in this specification, an accurate calibration relation can be determined from only one frame of image and one frame of point cloud data, without multiple iterations, which ensures calibration efficiency.

Description

Training method and device of calibration model, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a training method and apparatus for a calibration model, a storage medium, and an electronic device.
Background
At present, with the development of unmanned driving technology, the driving safety of unmanned devices is increasingly important. Methods that fuse image data with radar data are widely applied in scenarios such as obstacle detection and classification. A prerequisite for fusing image data with radar data is calibration between the radar sensor and the image sensor.
In the prior art, edge detection is typically performed separately on the acquired point cloud data and image data to obtain point cloud boundary points and image boundary points, where the point cloud boundary points are the boundary points of a target object in the point cloud data and the image boundary points are the boundary points of the same target object in the image data. Then, according to an initial calibration relation, the point cloud boundary points and the image boundary points are brought into the same coordinate system and matched against each other to determine mutually matched pairs of boundary points. Finally, the calibration relation is updated according to the mutually matched point cloud boundary points and image boundary points, and matched pairs are determined again under the updated calibration relation. This repeats until the distance between mutually matched point cloud boundary points and image boundary points is smaller than a distance threshold, at which point the most recently determined calibration relation is taken as the calibration relation between the radar sensor and the image sensor.
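To make the background concrete, the following is a minimal, schematic sketch of such an iterative calibration loop; the helper functions (project, match, refine) and the data layout are illustrative assumptions, not the exact prior-art procedure.

```python
# Schematic sketch of the iterative prior-art calibration loop described
# above. project, match and refine are hypothetical helpers: project maps
# point cloud boundary points into the image coordinate system under the
# current calibration, match pairs them with image boundary points, and
# refine updates the calibration from the matched pairs.
def iterative_calibration(cloud_edges, image_edges, calib, project, match,
                          refine, distance_threshold):
    while True:
        projected = [project(p, calib) for p in cloud_edges]    # same coordinate system
        pairs = match(projected, image_edges)                   # mutually matched pairs
        if max(dist for _, _, dist in pairs) < distance_threshold:
            return calib                                        # matched distances small enough
        calib = refine(calib, pairs)                            # update the calibration relation
```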
However, determining the calibration relation between the radar sensor and the image sensor in this way requires many iterations before accurate calibration parameters are obtained, so the calibration efficiency of the prior art is poor.
Disclosure of Invention
The present disclosure provides a method and apparatus for training a calibration model, a storage medium, and an electronic device, so as to partially solve the foregoing problems in the prior art.
The technical solution adopted in this specification is as follows:
This specification provides a training method for a calibration model, comprising the following steps:
determining a sample image acquired by a first device and a sample point cloud acquired by a second device;
inputting the sample image and the sample point cloud into a calibration model to be trained to obtain a sample calibration relation output by the calibration model;
determining, according to the sample calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under a first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as a first attribute;
predicting, according to the sample image, an attribute of the projection result of the predicted point cloud corresponding to the sample image under the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as a second attribute;
and training the calibration model according to the difference between the first attribute and the second attribute, where the calibration model is used to determine the calibration relation between an image sensor and a radar sensor.
Optionally, the method further comprises:
projecting, according to a preset initial calibration relation, the sample point cloud into the first coordinate system where the sample image is located, to obtain a projection result of the sample point cloud in the first coordinate system;
combining the projection result of the sample point cloud in the first coordinate system with the sample image to obtain a combined result;
inputting the sample image and the sample point cloud into the calibration model to be trained specifically includes:
taking the combined result as input, and inputting it into the calibration model to be trained.
Optionally, determining, according to the sample calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as the first attribute, specifically includes:
determining a target attribute from the attributes of the sample point cloud, where the attributes include at least one of a point cloud intensity attribute, a point cloud coordinate attribute, and a point cloud depth attribute;
determining, according to the sample calibration relation and the sample point cloud, the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation;
and performing feature extraction, in the feature extraction mode of the target attribute, on the projection result of the sample point cloud under the first coordinate system corresponding to the first device, to obtain the first attribute.
Optionally, predicting, according to the sample image, an attribute of the projection result of the predicted point cloud corresponding to the sample image under the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as the second attribute, specifically includes:
inputting the sample image into a pre-trained prediction model to obtain, as the second attribute, an attribute of the projection result of the predicted point cloud corresponding to the sample image, output by the prediction model, under the first coordinate system;
where the prediction model is trained in advance on image data and point cloud data acquired by a first device and a second device whose calibration relation is the standard calibration relation.
Optionally, training the calibration model according to the difference between the first attribute and the second attribute specifically includes:
determining a first loss based on the difference between the first attribute and the second attribute;
determining a real calibration relation between the second device that acquired the sample point cloud and the first device that acquired the sample image;
determining, according to the real calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the real calibration relation, as a third attribute;
determining a second loss based on the difference between the second attribute and the third attribute;
and training the calibration model with minimizing the sum of the first loss and the second loss as the optimization target.
Optionally, predicting, according to the sample image, an attribute of the projection result of the predicted point cloud corresponding to the sample image under the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as the second attribute, specifically includes:
inputting the sample image into a pre-trained prediction model to obtain, as the second attribute, an attribute of the projection result of the predicted point cloud corresponding to the sample image, output by the prediction model, under the first coordinate system;
and training the calibration model according to the difference between the first attribute and the second attribute includes:
training the calibration model and the prediction model according to the difference between the first attribute and the second attribute.
Optionally, the method further comprises:
in response to a calibration request, determining a target point cloud acquired by the radar sensor and a target image acquired by the image sensor;
and inputting the target point cloud and the target image into the pre-trained calibration model, to obtain the calibration relation, output by the calibration model, between the radar sensor that acquired the target point cloud and the image sensor that acquired the target image.
This specification provides a training device for a calibration model, the device comprising:
a sample determination module for determining a sample image acquired by a first device and a sample point cloud acquired by a second device;
the relation determining module is used for inputting the sample image and the sample point cloud into a calibration model to be trained to obtain a sample calibration relation output by the calibration model;
the first determining module is used for determining, according to the sample calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as a first attribute;
the second determining module is used for predicting, according to the sample image, an attribute of the projection result of the predicted point cloud corresponding to the sample image under the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as a second attribute;
and the training module is used for training the calibration model according to the difference between the first attribute and the second attribute, where the calibration model is used to determine the calibration relation between an image sensor and a radar sensor.
This specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above training method of the calibration model.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the training method of the calibration model described above when executing the program.
At least one of the technical solutions adopted in this specification can achieve the following beneficial effects:
In the training method of the calibration model provided in this specification, a sample image and a sample point cloud are obtained, a sample calibration relation is determined by the calibration model, the attribute of the sample point cloud is determined according to the sample calibration relation, the attribute of the predicted point cloud corresponding to the sample image is determined from the sample image, and a loss is then determined from the difference between the attribute of the predicted point cloud and the attribute of the sample point cloud, so as to train the calibration model.
It can be seen that the training method in this specification does not need to perform edge detection on the sample image and the sample point cloud to determine point cloud boundary points and image boundary points, so even in situations where edge detection errors would be large, an accurate calibration relation can still be obtained with the trained calibration model. Based on the calibration model, an accurate calibration relation can be determined from only one frame of image and one frame of point cloud data, without multiple iterations, which ensures calibration efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of this specification, illustrate exemplary embodiments of this specification and, together with the description, serve to explain it; they do not unduly limit this specification. In the drawings:
FIG. 1 is a flow chart of a training method of a calibration model provided in the present specification;
FIG. 2 is a flow chart of a training method of the calibration model provided in the present specification;
FIG. 3 is a schematic diagram of a training device for calibration models provided herein;
FIG. 4 is a schematic diagram of the electronic device corresponding to FIG. 1 provided in this specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
It should be noted that all actions of acquiring signals, information or data in this specification are performed in compliance with the corresponding data protection regulations and with the authorization given by the owner of the corresponding device.
As described above, at present, edge detection is performed separately on point cloud data and image data to obtain point cloud boundary points and image boundary points, and mutually matched point cloud boundary points and image boundary points are then determined through an initial calibration relation. The initial calibration relation is updated based on the difference between the mutually matched boundary points, so that matched boundary points can be determined again according to the updated calibration relation. These steps are repeated until the distance between mutually matched point cloud boundary points and image boundary points is smaller than a distance threshold, and the most recently determined calibration relation is taken as the calibration relation between the radar sensor and the image sensor. Accurate calibration parameters can only be determined after many iterations, and when the determined boundary points are inaccurate, the calibration parameters determined by iteration are also not accurate enough.
This specification provides a new training method for a calibration model, which requires neither edge detection on the point cloud data and the image data nor the determination of mutually matched point cloud boundary points and image boundary points from an edge detection result, and which can determine an accurate calibration relation without multiple iterations.
Fig. 1 is a flow chart of a training method of a calibration model provided in the present specification, specifically including the following steps:
s100: a sample image acquired by a first device and a sample point cloud acquired by a second device are determined.
The training method of the calibration model provided in this specification may be executed by an electronic device, such as a server, used for sensor calibration. For convenience of description, this specification takes a server that calibrates an image sensor against a point cloud sensor as an example to describe the training process of the calibration model.
Generally, a model training method comprises three stages: a sample determination stage, a sample processing stage, and a training stage. The calibration model is used to determine the calibration relation between the image sensor and the point cloud sensor, so the server may determine image data collected by the image sensor and point cloud data collected by the point cloud sensor as training samples.
Specifically, the server may receive a training request, where the training request carries a model identifier of a model to be trained, and a sample identifier of a sample required by a model corresponding to the model identifier.
Then, the server can determine a calibration model to be trained from the models stored by the server according to the model identification.
Finally, according to the sample identifier, the server can determine, from the data it stores, image data and point cloud data of the same area acquired at the same moment by the first device and the second device, as the sample image acquired by the first device and the sample point cloud acquired by the second device.
Here, image data and point cloud data acquired at the same moment may be image data and point cloud data whose acquisition times differ by less than a preset time threshold; similarly, the same area may mean that the difference between the first area corresponding to the image data and the second area corresponding to the point cloud data is smaller than a preset area threshold, where the first area and the second area are real-world areas. A small sketch of this pairing rule follows.
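As a small illustration, the pairing rule above might be expressed as follows; the frame attributes and threshold names are assumptions for the sketch, not terms from the patent.

```python
# Sketch of the "same moment, same area" pairing rule. timestamp, region and
# region_gap (a distance between the two real-world areas) are hypothetical.
def is_matching_pair(img_frame, pc_frame, time_threshold, area_threshold, region_gap):
    same_moment = abs(img_frame.timestamp - pc_frame.timestamp) < time_threshold
    same_area = region_gap(img_frame.region, pc_frame.region) < area_threshold
    return same_moment and same_area
```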
Of course, the server may instead determine in advance the model structure and the initial model parameters of the calibration model to be trained, then determine a sample image and a sample point cloud for training the calibration model from mutually matched image data and point cloud data, and take the device that acquired the sample image as the first device and the device that acquired the sample point cloud as the second device. The first device and the second device may be acquisition devices mounted on an unmanned device.
S102: and inputting the sample image and the sample point cloud into a calibration model to be trained to obtain a sample calibration relation output by the calibration model.
In one or more embodiments provided herein, the training process of the model includes a sample determination phase, a sample processing phase, and a training phase, as previously described. Thus, after determining the sample, the server may process the sample.
Specifically, the calibration model in the present specification is used to determine a calibration relationship between the first device and the second device.
The server can input the sample image and the sample point cloud into the calibration model to obtain a sample calibration relation output by the calibration model. The sample calibration relation is a calibration relation between a first device for acquiring the sample image and a second device for acquiring the sample point cloud.
The calibration relation may be a conversion relation between the first coordinate system corresponding to the first device and the second coordinate system corresponding to the second device (e.g., a rotation matrix and a translation matrix, or the quaternions, six-element numbers and the like used to determine them), or it may be the extrinsic parameters between the image sensor that acquires the image data and the radar sensor that acquires the point cloud data. The specific type of the calibration relation can be set as needed, and this specification does not limit it.
That is, the calibration relation may also be expressed as quaternions, six-element numbers, or the like, from which the corresponding rotation matrix and translation matrix are determined.
The specific form of the calibration relation, and how conversion is performed based on it, can be set as needed; this specification does not limit them.
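As one possible illustration of such a calibration relation, the sketch below expands a quaternion plus a translation vector into a 4x4 extrinsic transform; the representation and names are assumptions, since the specification deliberately leaves the concrete form open.

```python
# Sketch: one common parameterization of a calibration relation between a
# radar (lidar) frame and a camera frame: quaternion + translation -> 4x4.
import numpy as np
from scipy.spatial.transform import Rotation


def extrinsic_from_quat(quat_xyzw, translation):
    """quat_xyzw: (4,) rotation quaternion; translation: (3,); returns (4, 4)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(quat_xyzw).as_matrix()  # rotation matrix
    T[:3, 3] = translation                                 # translation vector
    return T
```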
S104: determining, according to the sample calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as a first attribute.
In one or more embodiments provided herein, the server may train the calibration model after processing the training samples. However, if the loss were determined directly from the difference between the real calibration relation between the first device and the second device and the determined sample calibration relation, the loss would contain little information, and it would be difficult to train an accurate calibration model from such a loss.
In this case, if the attributes of the point cloud corresponding to the sample image can be predicted from the features of the image itself, and the loss is then determined from the difference between the predicted attributes and the attributes of the point cloud determined according to the sample calibration relation, the determined loss can characterize how reliable the sample calibration relation is. The server may therefore determine the attributes of the sample point cloud according to the sample calibration relation.
Specifically, the server needs to compare the attribute obtained through the sample calibration relation with the attribute obtained from the sample image, and the coordinate system of the attribute obtained from the sample image is the first coordinate system corresponding to the sample image, so the server needs to determine the attribute of the sample point cloud in the first coordinate system.
Then, assuming that the calibration relation between the first device and the second device is the sample calibration relation, the server can project the sample point cloud acquired by the second device into the first coordinate system according to the sample calibration relation to obtain a projection result. That is, with the second device held fixed, a device whose calibration relation with the second device is the sample calibration relation is taken as the designated device, and the sample point cloud is projected into the designated coordinate system of that designated device. The projection result of the sample point cloud in the designated coordinate system is then the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation.
The server may determine an attribute of the projection result of the point cloud data in the first coordinate system as the first attribute. The attribute may be any attribute of the point cloud data, such as the point cloud intensity attribute, the point cloud coordinate attribute, or the point cloud depth attribute.
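The projection and attribute extraction of S104 could look like the following sketch, which keeps per-pixel intensity and depth as the first attribute. The camera intrinsics K, the extrinsic transform T (e.g. from the sketch above), and the two-channel attribute layout are assumptions for illustration.

```python
# Sketch: project a sample point cloud into the first (image) coordinate
# system under a candidate calibration relation T, and record the point
# cloud intensity and depth at each hit pixel as the attribute map.
import numpy as np


def project_attributes(points, intensity, T, K, hw):
    """points: (N, 3); intensity: (N,); T: (4, 4); K: (3, 3); hw: (H, W)."""
    h, w = hw
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    cam = (T @ pts_h.T).T[:, :3]                            # points in the camera frame
    keep = cam[:, 2] > 0                                    # only points in front of the camera
    cam, inten = cam[keep], intensity[keep]
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)               # perspective divide to pixels
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    attr = np.zeros((h, w, 2), dtype=np.float32)            # channel 0: intensity, 1: depth
    attr[uv[ok, 1], uv[ok, 0], 0] = inten[ok]
    attr[uv[ok, 1], uv[ok, 0], 1] = cam[ok, 2]
    return attr
```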
S106: predicting, according to the sample image, an attribute of the projection result of the predicted point cloud corresponding to the sample image under the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as a second attribute.
In one or more embodiments provided in the present disclosure, as described above, the server may predict an attribute of a point cloud corresponding to the sample image based on a feature of the image itself, and determine the loss by using a difference between the predicted attribute and the attribute of the point cloud determined according to the sample calibration relationship. The server can then predict the attributes of the predicted point cloud corresponding to the sample image from the sample image.
Specifically, the server needs to compare the attribute obtained through the sample calibration relationship with the attribute obtained based on the sample image, and the coordinate system where the attribute obtained based on the sample image is located is the first coordinate system corresponding to the sample image, so that the server needs to predict the attribute of the projection result of the prediction point cloud corresponding to the sample image under the first coordinate system according to the sample image.
Then, assuming that the calibration relation between the first device and the second device is the standard calibration relation, the server predicts, from the sample image, the predicted point cloud corresponding to the sample image, and predicts the attribute of the projection result of the predicted point cloud under the first coordinate system according to the standard calibration relation. That is, with the first device held fixed, a device whose calibration relation with the first device is the standard calibration relation is taken as the specific device; the predicted point cloud corresponding to the image data is determined from the image data and projected into the first coordinate system of the first device. The server can thereby obtain the attribute of the projection result of the predicted point cloud under the first coordinate system.
The predicted point cloud may be understood as the point cloud data, corresponding to the sample image, that a specific device, whose calibration relation with the first device is the standard calibration relation, would acquire for the same area at the same moment as the first device. The standard calibration relation characterizes the calibration relation between the specific device and the first device and may be preset.
The server may determine an attribute of the projection result of the predicted point cloud in the first coordinate system as the second attribute. Again, the attribute may be any attribute of the point cloud data, such as the point cloud intensity attribute, the point cloud coordinate attribute, or the point cloud depth attribute.
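A prediction model of the kind used in S106 could, for example, be a small convolutional network that maps the image to a dense two-channel attribute map; this architecture is purely an assumption, as the specification does not fix one.

```python
# Sketch: a minimal image -> attribute-map prediction model in PyTorch.
import torch
import torch.nn as nn


class AttributePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # channel 0: intensity, 1: depth
        )

    def forward(self, image):   # image: (B, 3, H, W)
        return self.net(image)  # predicted attribute map: (B, 2, H, W)
```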
S108: training the calibration model according to the difference between the first attribute and the second attribute, where the calibration model is used to determine the calibration relation between the image sensor and the radar sensor.
In one or more embodiments provided herein, as described above, after determining the attribute of the predicted point cloud corresponding to the sample image and the attribute of the sample point cloud determined based on the sample calibration relationship, the server may determine the loss based on the difference between the first attribute and the second attribute.
Specifically, taking the first attribute and the second attribute as point cloud intensity attributes as an example: the first attribute may be the point cloud intensity of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, and the second attribute is the point cloud intensity of the projection result of the predicted point cloud under the first coordinate system when the calibration relation between the first device and the second device is the standard calibration relation.
Then, the server may determine the difference between the point cloud intensity of the projection result of the sample point cloud and the point cloud intensity of the projection result of the predicted point cloud, and determine the loss according to this difference.
Finally, the server can adjust model parameters of the calibration model according to the determined loss so as to train the calibration model.
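Put together, one training step of S108 might look like the sketch below. project_attributes_torch stands for a hypothetical differentiable variant of the projection shown earlier, and the model call signatures are assumptions.

```python
# Sketch: one training step for the calibration model. The loss is the gap
# between the first attribute (projection under the sample calibration
# relation) and the second attribute (predicted from the sample image).
import torch
import torch.nn.functional as F


def train_step(calibration_model, predictor, image, points, intensity, K, optimizer):
    sample_calib = calibration_model(image, points)      # sample calibration relation
    first_attr = project_attributes_torch(points, intensity, sample_calib, K)
    with torch.no_grad():
        second_attr = predictor(image)                   # second attribute (fixed predictor)
    loss = F.l1_loss(first_attr, second_attr)            # difference between the attributes
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```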
In addition, during driving, the unmanned device can acquire sensor data at a preset frequency, where the sensor data are the data required for sensor calibration and include at least image data and point cloud data. The unmanned device is provided with an image sensor and a radar sensor, i.e., a first device and a second device, which may be set up identically to, or differently from, the first device and the second device corresponding to the training samples used during calibration model training.
The unmanned device can then receive a calibration request and, based on the received calibration request, determine a target image acquired by the image sensor and a target point cloud acquired by the radar sensor. The target image and the target point cloud may be image data and point cloud data of the same area acquired by the image sensor and the radar sensor at the same moment.
The server can input the determined target point cloud and the target image into a pre-trained calibration model to obtain the calibration relation between the radar sensor for acquiring the target point cloud and the image sensor for acquiring the target image, which are output by the calibration model.
Further, to avoid the situation where the calibration relation determined from a single frame of image data and a single frame of point cloud data is not accurate enough, the server may also determine several data pairs, where each data pair contains one frame of target point cloud and one frame of target image corresponding to each other. Each data pair is then input separately into the pre-trained calibration model to obtain the calibration relation, output by the calibration model, corresponding to that data pair. Finally, the target calibration relation between the image sensor and the radar sensor is determined from these calibration relations: for the calibration relation corresponding to each data pair, the server can determine the error of that calibration relation against all the determined data pairs, and finally determine the target calibration relation based on the error corresponding to each calibration relation, as in the sketch below.
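The multi-pair refinement just described might be realized as follows, where reprojection_error is a hypothetical scoring function (for example, the attribute gap a candidate calibration produces on a data pair).

```python
# Sketch: pick, among the per-pair calibration relations output by the
# trained model, the one with the lowest total error over all data pairs.
def select_target_calibration(calibration_model, pairs, reprojection_error):
    candidates = [calibration_model(img, pts) for img, pts in pairs]  # one relation per pair

    def total_error(calib):
        return sum(reprojection_error(calib, img, pts) for img, pts in pairs)

    return min(candidates, key=total_error)  # target calibration relation
```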
Further, after determining the calibration relation, the server may determine, from the acquired sensor data and the calibration relation, the projection of the point cloud data under the first coordinate system, fuse the projection with the image data in the sensor data to obtain a fusion result, and perform obstacle detection on the fusion result to determine the position of an obstacle. How the image data and the point cloud data are fused based on the calibration relation, once it has been determined, can be set as needed, and this specification does not limit it.
Based on the training method of the calibration model provided in fig. 1, a sample image and a sample point cloud are obtained, a sample calibration relation is determined by the calibration model, the attribute of the sample point cloud is determined according to the sample calibration relation, the attribute of the predicted point cloud corresponding to the sample image is determined from the sample image, and a loss is then determined from the difference between these attributes so as to train the calibration model. This training method needs no edge detection on the sample image and the sample point cloud to determine point cloud boundary points and image boundary points, so even where edge detection errors would be large, an accurate calibration relation can be obtained with the trained calibration model. Based on the calibration model, an accurate calibration relation can be determined from only one frame of image and one frame of point cloud data, without multiple iterations, which ensures calibration efficiency.
Further, in general, the calibration model may determine a calibration relationship between the second device that collects the sample point cloud and the first device that collects the sample image based on a gap between the sample point cloud and the sample image in the same coordinate system. Thus, when the sample point cloud and the sample image are input into the calibration model, the sample image and the sample point cloud under the same coordinate system can be determined as input data of the calibration model.
Specifically, the server stores an initial calibration relationship. Then, the server can project the sample image and the sample point cloud into the same coordinate system according to the pre-stored initial calibration relation, then combine the sample point cloud and the sample image in the same coordinate system, and take the combined result as input data of the calibration model.
Further, since in this specification the model needs to be trained based on the attributes of the predicted point cloud and of the sample point cloud in the first coordinate system, the input of the model can be determined in the model input stage from the sample point cloud and the sample image in the first coordinate system.
Specifically, the server may project the sample point cloud into the first coordinate system where the image data is located according to the preset initial calibration relation, to obtain the projection result of the sample point cloud in the first coordinate system.
Then, the server can combine the projection result of the sample point cloud in the first coordinate system with the sample image to obtain a combined result.
Finally, the server may take the combined result as the input of the calibration model.
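The combined input could be built as in this sketch, stacking the projected attribute channels onto the RGB image; initial_T, K and project_attributes are the assumed helpers from the earlier sketches.

```python
# Sketch: combine the projection result under a preset initial calibration
# relation with the sample image to form the calibration model's input.
import numpy as np


def build_model_input(image, points, intensity, initial_T, K):
    """image: (H, W, 3) RGB; returns (H, W, 5): RGB + intensity + depth."""
    proj = project_attributes(points, intensity, initial_T, K, image.shape[:2])
    return np.concatenate([image, proj], axis=-1)  # channel-wise combination
```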
In addition, as previously described, the attributes of the point cloud may include the point cloud intensity attribute, the point cloud coordinate attribute, and the point cloud depth attribute. When the first attribute and the second attribute are determined, at least one attribute of the point cloud may be determined as the first attribute or the second attribute.
Specifically, the server may determine the target attribute from each attribute of the sample point cloud, where each attribute includes at least one of a point cloud intensity attribute, a point cloud coordinate attribute, and a point cloud depth attribute.
Then, the server can determine, according to the sample calibration relation and the sample point cloud, the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation.
Finally, the server performs feature extraction on this projection result in the feature extraction mode of the target attribute, to obtain the first attribute.
Further, in this specification, the process of determining the attribute of the predicted point cloud according to the sample image may also be implemented by a prediction model.
Specifically, a pre-trained prediction model is deployed in the server. The prediction model is trained in advance on image data and point cloud data acquired by a first device and a second device whose calibration relation is the standard calibration relation.
The server can input the sample image into the prediction model to obtain the attribute of the projection result of the prediction point cloud corresponding to the sample image output by the prediction model under the first coordinate system as the second attribute.
The prediction model is obtained by training as follows:
image data and point cloud data acquired by a first device and a second device whose calibration relation is the standard calibration relation are obtained; the image data is taken as a training sample; the attributes of the point cloud data are determined according to the standard calibration relation and the point cloud data and taken as the label of the training sample; and the prediction model is trained according to the training sample and its label.
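That pre-training procedure might look like the sketch below: the image is the sample, and the attribute map projected under the known standard calibration relation is the label. project_attributes_torch and the data loader layout are assumptions.

```python
# Sketch: supervised pre-training of the prediction model against labels
# obtained by projecting the paired point cloud under the standard
# calibration relation standard_T.
import torch
import torch.nn.functional as F


def pretrain_predictor(predictor, loader, standard_T, K, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(epochs):
        for image, points, intensity in loader:  # data under the standard calibration
            label = project_attributes_torch(points, intensity, standard_T, K)
            loss = F.l1_loss(predictor(image), label)  # supervised attribute loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```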
Further, during model training, the server may determine the loss based not only on the gap between the first attribute and the second attribute, but also on the gap between the second attribute and the third attribute derived based on the true calibration relationship.
Specifically, the server may determine the difference between the first attribute and the second attribute, and determine a first loss from it. The difference between the first attribute and the second attribute may be determined using a cross-entropy loss function or a contrastive loss function.
Next, the server may determine the real calibration relation between the second device that acquired the sample point cloud and the first device that acquired the sample image.
Then, the server can determine, according to the real calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the real calibration relation, as a third attribute. For the step of determining the third attribute, refer to the description of determining the first attribute; it is not repeated in this specification.
The server may then determine a second loss based on the difference between the second attribute and the third attribute. Likewise, this difference may be determined using a cross-entropy loss function or a contrastive loss function. How exactly the first loss and the second loss are determined can be set as needed, and this specification does not limit it.
Finally, the server can adjust the model parameters of the calibration model with minimizing the sum of the first loss and the second loss as the optimization target, so as to train the calibration model.
In addition, the calibration model and the prediction model can be trained in a combined training mode:
firstly, the server can determine a sample image and a sample point cloud, and input the sample image and the sample point cloud into a calibration model to be trained to obtain a sample calibration relation output by the calibration model.
Meanwhile, the server can input the sample image into a prediction model to be trained to obtain the attribute of the prediction point cloud output by the prediction model as a second attribute.
The server may then determine the first attribute based on the sample calibration relationship and determine a third attribute based on the true calibration relationship.
Finally, a first loss is determined from the first attribute and the second attribute, a second loss is determined from the second attribute and the third attribute, and the prediction model and the calibration model are trained with minimizing the sum of the first loss and the second loss as the optimization target, as shown in fig. 2.
Fig. 2 is a schematic flow chart of a training method of the calibration model provided in the present specification. The server determines sample point clouds and sample images, and inputs the sample point clouds and the sample images into the calibration model to obtain a sample calibration relation. Then, the server may determine a first attribute according to the sample point cloud and the sample calibration relationship, and determine a third attribute according to the sample point cloud and the real calibration relationship. The server may then determine a second attribute from the sample image via a predictive model. Finally, the server may determine a first loss based on the gap between the first attribute and the second attribute, determine a second loss based on the gap between the second attribute and the third attribute, and train the calibration model and the predictive model based on the first loss and the second loss.
In the scenario where the prediction model and the calibration model are trained jointly, the standard calibration relation obviously cannot be preset manually or given concretely in advance. It can instead be obtained by adjusting the prediction model based on the loss, and the attributes of the predicted point cloud corresponding to the sample image are determined based on the calibration relation obtained by this adjustment. How the standard calibration relation is determined, and whether it can be characterized by functions, formulas, and the like, can be set as needed; this specification does not limit this.
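A joint training step combining the first and second losses could be sketched as follows; one optimizer covers the parameters of both models, and project_attributes_torch is again the assumed differentiable projection.

```python
# Sketch: joint training of the calibration model and the prediction model.
# First loss: first attribute vs. second attribute; second loss: second
# attribute vs. third attribute (projection under the real calibration).
import itertools
import torch
import torch.nn.functional as F


def joint_step(calib_model, predictor, image, points, intensity, real_T, K, optimizer):
    sample_T = calib_model(image, points)
    first = project_attributes_torch(points, intensity, sample_T, K)   # sample relation
    second = predictor(image)                                          # predicted attributes
    third = project_attributes_torch(points, intensity, real_T, K)     # real relation
    loss = F.l1_loss(first, second) + F.l1_loss(second, third)         # first + second loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# The optimizer would span both models, e.g.:
# optimizer = torch.optim.Adam(
#     itertools.chain(calib_model.parameters(), predictor.parameters()), lr=1e-4)
```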
Based on the same idea as the training method of the calibration model provided above for one or more embodiments of this specification, this specification further provides a corresponding training device for the calibration model, as shown in fig. 3.
FIG. 3 shows a training device for a calibration model provided in this specification, including:
a sample determination module 200 for determining a sample image acquired by a first device and a sample point cloud acquired by a second device.
The relationship determining module 202 is configured to input the sample image and the sample point cloud into a calibration model to be trained, and obtain a sample calibration relationship output by the calibration model.
The first determining module 204 is configured to determine, according to the sample calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as a first attribute.
The second determining module 206 is configured to predict, according to the sample image, an attribute of the projection result of the predicted point cloud corresponding to the sample image under the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as a second attribute.
The training module 208 is configured to train the calibration model according to a difference between the first attribute and the second attribute, where the calibration model is used to determine a calibration relationship between the image sensor and the radar sensor.
Optionally, the sample determining module 200 is configured to project the sample point cloud according to a preset initial calibration relation to obtain the projection result of the sample point cloud in the first coordinate system, combine the projection result of the sample point cloud in the first coordinate system with the sample image to obtain a combined result, and input the combined result into the calibration model to be trained.
Optionally, the first determining module 204 is configured to determine a target attribute from the attributes of the sample point cloud, where the attributes include at least one of a point cloud intensity attribute, a point cloud coordinate attribute, and a point cloud depth attribute; determine, according to the sample calibration relation and the sample point cloud, the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation; and perform feature extraction on that projection result in the feature extraction mode of the target attribute, to obtain the first attribute.
Optionally, the second determining module 206 is configured to input the sample image into a pre-trained prediction model, obtain an attribute of a projection result of a prediction point cloud corresponding to the sample image output by the prediction model under the first coordinate system, as a second attribute, where the prediction model is pre-trained according to image data and point cloud data acquired by a first device and a second device whose calibration relationship is a standard calibration relationship.
Optionally, the training module 208 is configured to determine a first loss according to the difference between the first attribute and the second attribute; determine the real calibration relation between the second device that acquired the sample point cloud and the first device that acquired the sample image; determine, according to the real calibration relation and the sample point cloud, an attribute of the projection result of the sample point cloud under the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the real calibration relation, as a third attribute; determine a second loss according to the difference between the second attribute and the third attribute; and train the calibration model with minimizing the sum of the first loss and the second loss as the optimization target.
Optionally, the second determining module 206 is configured to input the sample image into a pre-trained prediction model, obtain an attribute of a projection result of a prediction point cloud corresponding to the sample image output by the prediction model under the first coordinate system, and use the attribute as a second attribute, and the training module 208 is configured to train the calibration model and the prediction model according to a difference between the first attribute and the second attribute.
Optionally, the training module 208 is configured to determine, in response to a calibration request, a target point cloud acquired by a radar sensor and a target image acquired by an image sensor, input the target point cloud and the target image as input into the calibration model that is pre-trained, and obtain a calibration relationship between the radar sensor acquiring the target point cloud and the image sensor acquiring the target image output by the calibration model.
The present specification also provides a computer readable storage medium storing a computer program operable to perform the above-described training method of the calibration model provided in fig. 1.
This specification also provides a schematic structural diagram of the electronic device shown in fig. 4. As shown in fig. 4, at the hardware level the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and runs it to implement the training method of the calibration model described above with respect to fig. 1. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from this specification; that is, the execution subject of the processing flows is not limited to logic units and may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD) (e.g., a field programmable gate array (FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a given logic method flow can be readily obtained merely by lightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functionality in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module, or unit set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described as being divided into various units by function. Of course, when implementing the present specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
This specification may be described in the general context of computer-executable instructions, such as program modules, executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. This specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief; for the relevant parts, refer to the description of the method embodiments.
The foregoing is merely an embodiment of the present specification and is not intended to limit it. Various modifications and variations of the present specification will occur to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present specification shall be included in the scope of the claims of the present specification.

Claims (10)

1. A method of training a calibration model, the method comprising:
determining a sample image acquired by a first device and a sample point cloud acquired by a second device;
inputting the sample image and the sample point cloud into a calibration model to be trained to obtain a sample calibration relation output by the calibration model;
determining, according to the sample calibration relation and the sample point cloud, an attribute of a projection result of the sample point cloud in a first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as a first attribute;
predicting, according to the sample image, an attribute of a projection result of a predicted point cloud corresponding to the sample image in the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as a second attribute;
and training the calibration model according to the difference between the first attribute and the second attribute, wherein the calibration model is used for determining a calibration relation between an image sensor and a radar sensor.
2. The method of claim 1, wherein the method further comprises:
projecting the sample point cloud into the first coordinate system in which the sample image is located according to a preset initial calibration relation, to obtain a projection result of the sample point cloud in the first coordinate system;
combining the projection result of the sample point cloud in the first coordinate system with the sample image to obtain a combined result;
wherein inputting the sample image and the sample point cloud into the calibration model to be trained specifically comprises:
taking the combined result as the input to the calibration model to be trained.
3. The method of claim 1, wherein determining, according to the sample calibration relation and the sample point cloud, the attribute of the projection result of the sample point cloud in the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as the first attribute, specifically comprises:
determining a target attribute from among the attributes of the sample point cloud, wherein the attributes comprise at least one of a point cloud intensity attribute, a point cloud coordinate attribute, and a point cloud depth attribute;
determining, according to the sample calibration relation and the sample point cloud, the projection result of the sample point cloud in the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation;
and extracting features of the projection result of the sample point cloud in the first coordinate system corresponding to the first device using the feature extraction method of the target attribute, to obtain the first attribute.
4. The method of claim 1, wherein predicting, according to the sample image, the attribute of the projection result of the predicted point cloud corresponding to the sample image in the first coordinate system when the calibration relation between the first device and the second device is the standard calibration relation, as the second attribute, specifically comprises:
inputting the sample image into a pre-trained prediction model to obtain, as the second attribute, the attribute of the projection result in the first coordinate system of the predicted point cloud corresponding to the sample image output by the prediction model;
wherein the prediction model is pre-trained on image data and point cloud data acquired by a first device and a second device whose calibration relation is the standard calibration relation.
5. The method of claim 1, wherein training the calibration model according to the difference between the first attribute and the second attribute specifically comprises:
determining a first loss according to the difference between the first attribute and the second attribute;
determining a real calibration relation between the second device that acquired the sample point cloud and the first device that acquired the sample image;
determining, according to the real calibration relation and the sample point cloud, an attribute of a projection result of the sample point cloud in the first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the real calibration relation, as a third attribute;
determining a second loss according to the difference between the second attribute and the third attribute;
and training the calibration model with minimizing the sum of the first loss and the second loss as the optimization target.
6. The method of claim 1, wherein predicting, according to the sample image, the attribute of the projection result of the predicted point cloud corresponding to the sample image in the first coordinate system when the calibration relation between the first device and the second device is the standard calibration relation, as the second attribute, specifically comprises:
inputting the sample image into a pre-trained prediction model to obtain, as the second attribute, the attribute of the projection result in the first coordinate system of the predicted point cloud corresponding to the sample image output by the prediction model;
wherein training the calibration model according to the difference between the first attribute and the second attribute specifically comprises:
jointly training the calibration model and the prediction model according to the difference between the first attribute and the second attribute.
7. The method of claim 1, wherein the method further comprises:
determining, in response to a calibration request, a target point cloud acquired by a radar sensor and a target image acquired by an image sensor;
and taking the target point cloud and the target image as input to the pre-trained calibration model, to obtain the calibration relation, output by the calibration model, between the radar sensor that acquired the target point cloud and the image sensor that acquired the target image.
8. A training device for a calibration model, the device comprising:
a sample determining module, configured to determine a sample image acquired by a first device and a sample point cloud acquired by a second device;
a relation determining module, configured to input the sample image and the sample point cloud into a calibration model to be trained, to obtain a sample calibration relation output by the calibration model;
a first determining module, configured to determine, according to the sample calibration relation and the sample point cloud, an attribute of a projection result of the sample point cloud in a first coordinate system corresponding to the first device when the calibration relation between the first device and the second device is the sample calibration relation, as a first attribute;
a second determining module, configured to predict, according to the sample image, an attribute of a projection result of a predicted point cloud corresponding to the sample image in the first coordinate system when the calibration relation between the first device and the second device is a standard calibration relation, as a second attribute;
and a training module, configured to train the calibration model according to the difference between the first attribute and the second attribute, wherein the calibration model is used for determining a calibration relation between an image sensor and a radar sensor.
9. A computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the program.
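
For illustration only, the following sketches show how the claimed steps might look in code. They are minimal PyTorch sketches under simple assumptions (a pinhole camera, a 4x4 homogeneous extrinsic matrix, and depth or intensity as the target attribute of claim 3); every function and variable name is hypothetical, and none of this is taken from the patented implementation. This first sketch covers the projection behind claims 1 and 3: project the sample point cloud into the first coordinate system under a candidate calibration relation and read one attribute off the projection result. Note that the integer rasterization here is not differentiable through the pixel indices; a real trainable implementation would need a differentiable projection.

import torch

def first_attribute(points, intensity, calib, K, image_size, target="depth"):
    """Project an (N, 3) point cloud into the image plane under `calib`
    and scatter one per-point attribute (the target attribute of claim 3)
    into an (H, W) map used as the attribute of the projection result.

    calib: (4, 4) homogeneous extrinsic matrix (point cloud -> camera).
    K:     (3, 3) camera intrinsics of the first device (assumed known).
    """
    H, W = image_size
    ones = torch.ones(points.shape[0], 1)
    pts_hom = torch.cat([points, ones], dim=1)        # (N, 4)
    pts_cam = (calib @ pts_hom.T).T[:, :3]            # into camera frame
    z = pts_cam[:, 2].clamp(min=1e-6)                 # depth per point
    uv = (K @ pts_cam.T).T                            # pinhole projection
    u = (uv[:, 0] / z).long().clamp(0, W - 1)
    v = (uv[:, 1] / z).long().clamp(0, H - 1)
    value = z if target == "depth" else intensity     # pick the attribute
    attr_map = torch.zeros(H, W)
    attr_map[v, u] = value   # later points overwrite earlier ones; points
    return attr_map          # behind the camera are not filtered here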
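
Claim 2 builds the model input by projecting the point cloud with a preset initial calibration relation and combining the projection with the sample image. A sketch of that input construction and of a placeholder calibration regressor follows, reusing first_attribute() from the sketch above; the network architecture and the unconstrained 3x4 output head are assumptions (a real model would constrain the output to a rigid transform, for example via an SE(3) exponential map).

import torch
import torch.nn as nn

class CalibrationModel(nn.Module):
    """Toy regressor from the combined image+projection input to a
    4x4 extrinsic matrix (architecture is a placeholder assumption)."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 12 raw entries of a 3x4 transform; not constrained to SE(3).
        self.head = nn.Linear(64, 12)

    def forward(self, x):                      # x: (B, C+1, H, W)
        out = self.head(self.backbone(x))      # (B, 12)
        T = torch.eye(4).repeat(x.shape[0], 1, 1)
        T[:, :3, :] = out.view(-1, 3, 4)
        return T                               # (B, 4, 4)

def model_input(image, points, intensity, init_calib, K):
    """Claim 2: project the point cloud with a preset initial calibration
    and stack the projection onto the image channels."""
    H, W = image.shape[-2:]
    init_proj = first_attribute(points, intensity, init_calib, K, (H, W))
    return torch.cat([image, init_proj[None]], dim=0)  # (C+1, H, W)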
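
Claims 4 to 6 define the objective: a first loss between the first attribute (under the predicted calibration) and the second attribute (the pre-trained prediction model's output), a second loss between the second attribute and the third attribute (under the real calibration), and joint training of both models with minimizing the sum as the optimization target. A sketch of one training step under those assumptions, with an L1 difference standing in for the unspecified loss and the helpers from the sketches above:

import itertools
import torch
import torch.nn.functional as F

def joint_optimizer(calib_model, pred_model, lr=1e-4):
    """Claim 6: one optimizer over both models' parameters."""
    return torch.optim.Adam(
        itertools.chain(calib_model.parameters(), pred_model.parameters()),
        lr=lr)

def training_step(calib_model, pred_model, image, points, intensity,
                  init_calib, true_calib, K, optimizer):
    """One optimization step over the two losses of claim 5."""
    H, W = image.shape[-2:]
    x = model_input(image, points, intensity, init_calib, K)    # claim 2
    sample_calib = calib_model(x[None])[0]                      # claim 1
    first = first_attribute(points, intensity, sample_calib, K, (H, W))
    second = pred_model(image[None])[0, 0]                      # claim 4
    with torch.no_grad():  # third attribute uses the fixed real calibration
        third = first_attribute(points, intensity, true_calib, K, (H, W))
    loss = F.l1_loss(first, second) + F.l1_loss(second, third)  # claim 5
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()           # updates both models jointly (claim 6)
    return loss.item()

# Example wiring with the hypothetical modules defined above:
calib_model = CalibrationModel()
pred_model = torch.nn.Sequential(        # image -> (1, H, W) attribute map
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1))
optimizer = joint_optimizer(calib_model, pred_model)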
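
Claim 7 is the inference path: one frame of point cloud and one frame of image in, calibration relation out, with no iterative refinement. A usage sketch, with random tensors standing in for real sensor frames and reusing calib_model and model_input from the sketches above (trained weights assumed):

import torch

# Hypothetical one-frame inputs standing in for real sensor data.
target_image = torch.rand(3, 256, 512)          # image sensor frame
target_points = torch.rand(4096, 3)             # radar/LiDAR frame
target_intensity = torch.rand(4096)
K = torch.tensor([[400., 0., 256.],
                  [0., 400., 128.],
                  [0., 0., 1.]])
init_calib = torch.eye(4)

calib_model.eval()
with torch.no_grad():
    x = model_input(target_image, target_points, target_intensity,
                    init_calib, K)
    calibration = calib_model(x[None])[0]       # (4, 4) estimate in one shot
print(calibration)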
CN202310484027.1A 2023-04-28 2023-04-28 Training method and device of calibration model, storage medium and electronic equipment Pending CN116563387A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310484027.1A CN116563387A (en) 2023-04-28 2023-04-28 Training method and device of calibration model, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN116563387A true CN116563387A (en) 2023-08-08

Family

ID=87493973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310484027.1A Pending CN116563387A (en) 2023-04-28 2023-04-28 Training method and device of calibration model, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116563387A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination