CN113850916A - Model training and point cloud missing completion method, device, equipment and medium - Google Patents

Info

Publication number
CN113850916A
Authority
CN
China
Prior art keywords
point cloud
cloud data
training
missing
network
Prior art date
Legal status
Pending
Application number
CN202111129999.6A
Other languages
Chinese (zh)
Inventor
卢丽华
魏辉
李茹杨
赵雅倩
李仁刚
Current Assignee
Inspur Electronic Information Industry Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd filed Critical Inspur Electronic Information Industry Co Ltd
Priority application CN202111129999.6A (published as CN113850916A)
Publication of CN113850916A
PCT application PCT/CN2022/078359 (published as WO2023045252A1)
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a model training method, a point cloud missing completion method, an apparatus, an electronic device, and a computer-readable storage medium. The model training method includes: acquiring training missing point cloud data; inputting the training missing point cloud data into an initial model to obtain training repair point cloud data, and adjusting parameters of the initial model based on the training repair point cloud data and the original point cloud data corresponding to the training missing point cloud data; and, if the training completion condition is detected to be met, determining the initial model as the point cloud completion model. The initial model includes a target reconstruction network and an initial generation network. The target reconstruction network includes a target coding network that performs contrastive learning on the training missing point cloud data: the training missing point cloud data is input into the target coding network to obtain input features, the input features are input into the initial generation network to obtain missing point cloud data, and the missing point cloud data is used to generate the training repair point cloud data. This improves the accuracy of the processed point cloud data obtained after completion.

Description

Model training and point cloud missing completion method, device, equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for model training and point cloud missing completion, an electronic device, and a computer-readable storage medium.
Background
Three-dimensional reconstruction technology reconstructs three-dimensional objects in a virtual world and is the basis of three-dimensional visual technologies such as VR/AR (Virtual Reality / Augmented Reality). In recent years, with the development of sensors, deep learning, and related techniques, the three-dimensional point cloud has become a mainstream representation of three-dimensional reconstruction results. However, due to mutual occlusion between objects, technical limitations of hardware devices, and other factors, point-cloud-based reconstruction results often contain holes or missing shape structures. Existing research has proposed point-based completion methods, which process point cloud data directly to obtain point cloud features, predict the complete three-dimensional point cloud or the missing point cloud through a fully-connected or folding-based decoder, and thereby repair and complete the reconstruction result. Compared with voxel model representations, taking the point cloud directly as input reduces the input data volume and the parameter scale of the neural network, which greatly improves training speed. However, the related art extracts features only from the missing input point cloud itself, so data restoration can only be performed from the perspective of the data that is present, which reduces the accuracy of the completion data generated by the model.
Therefore, how to improve the accuracy of completed point cloud data is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, an object of the present application is to provide a model training method, a point cloud missing completion method, an apparatus, an electronic device, and a computer-readable storage medium that improve the accuracy of the processed point cloud data obtained after completion.
In order to solve the above technical problem, the present application provides a model training method, including:
acquiring training missing point cloud data;
inputting the training missing point cloud data into an initial model to obtain training repairing point cloud data, and adjusting parameters of the initial model based on the training repairing point cloud data and original point cloud data corresponding to the training missing point cloud data;
if the condition that the training is finished is detected to be met, determining the initial model as a point cloud completion model;
the initial model comprises a target reconstruction network and an initial generation network, the target reconstruction network comprises a target coding network, the target coding network utilizes the training missing point cloud data to perform contrast learning, the training missing point cloud data is input into the target coding network to obtain input characteristics, the input characteristics are input into the initial generation network to obtain missing point cloud data, and the missing point cloud data is used for generating the training repairing point cloud data.
Optionally, the generating process of the initial model includes:
performing learning training on an initial reconstruction network by using the training missing point cloud data to obtain the target reconstruction network;
and combining the target reconstruction network and the initial generation network to obtain the initial model.
Optionally, the performing learning training on the initial reconstruction network by using the training missing point cloud data to obtain the target reconstruction network includes:
determining an anchor point cloud from the training missing point cloud data;
inputting the training missing point cloud data into the initial reconstruction network based on the anchor point cloud to obtain target data; wherein the target data comprises the input features and reconstructed point cloud data;
obtaining a contrast learning loss value by using the input characteristics, obtaining a reconstruction loss value by using the reconstruction point cloud data, and performing parameter adjustment on the initial reconstruction network by using the contrast learning loss value and the reconstruction loss value;
and if the condition that the pre-training completion condition is met is detected, determining the initial reconstruction network as the target reconstruction network.
Optionally, the inputting the training missing point cloud data into the initial reconstruction network to obtain target data includes:
inputting the training missing point cloud data into an initial coding network in the initial reconstruction network to obtain the input characteristic;
inputting the input features into an initial decoding network in the initial reconstruction network to obtain the reconstruction point cloud data;
correspondingly, the performing parameter adjustment on the initial reconstruction network by using the contrast learning loss value and the reconstruction loss value includes:
generating a first loss value using the comparison learning loss value and the reconstruction loss value;
and utilizing the first loss value to carry out parameter adjustment on the initial reconstruction network.
Optionally, the initial coding network comprises a plurality of feature extraction blocks, each feature extraction block comprising a multi-layer perceptron and a downsampling layer based on farthest point sampling; the initial decoding network comprises a plurality of multi-layer perceptrons and a plurality of upsampling layers.
Optionally, the acquiring training missing point cloud data includes:
acquiring a plurality of original missing point clouds as the original point cloud data;
respectively performing missing processing of different degrees on each original missing point cloud to obtain the training missing point cloud data, wherein the missing processing is cropping processing.
Optionally, the initial generation network includes a missing point cloud generation network and a correction network, and the generation process of the training and repairing point cloud data includes:
inputting the input features into the missing point cloud generation network to obtain the missing point cloud data;
inputting the missing point cloud data and output data output by the target reconstruction network into the correction network to obtain the training and repairing point cloud data;
the missing point cloud generating network comprises a missing point cloud modulating module and a folding decoding module, and the inputting of the input features into the missing point cloud generating network to obtain the missing point cloud data comprises the following steps:
inputting the input features into the missing point cloud modulation module to obtain missing point cloud features;
and inputting the missing point cloud characteristics and the input characteristics into the folding and decoding module to obtain the missing point cloud data.
Optionally, the adjusting the parameters of the initial model based on the original point cloud data corresponding to the training repair point cloud data and the training missing point cloud data includes:
obtaining a corrected reconstruction loss value by using the training restoration point cloud data and the original point cloud data;
obtaining a missing reconstruction loss value by using the missing point cloud data and the missing point cloud true value data;
generating a second loss value using the corrected reconstruction loss value and the missing reconstruction loss value;
performing parameter adjustment on the initial model by using the second loss value;
and the missing point cloud true value data is difference data of the training missing point cloud data and the corresponding original point cloud data.
The application also provides a point cloud missing completion method, which comprises the following steps:
acquiring point cloud data to be completed;
and inputting the point cloud data to be completed into the point cloud completion model to obtain processed point cloud data.
The application also provides a model training device, including:
the first acquisition module is used for acquiring training missing point cloud data;
the training module is used for inputting the training missing point cloud data into an initial model to obtain training repairing point cloud data, and adjusting parameters of the initial model based on the training repairing point cloud data and original point cloud data corresponding to the training missing point cloud data;
the determining module is used for determining the initial model as a point cloud completion model if the condition that the training is completed is detected to be met;
the initial model comprises a target reconstruction network and an initial generation network, the target reconstruction network comprises a target coding network, the target coding network utilizes the training missing point cloud data to perform contrast learning, the training missing point cloud data is input into the target coding network to obtain input characteristics, the input characteristics are input into the initial generation network to obtain missing point cloud data, and the missing point cloud data is used for generating the training repairing point cloud data.
The present application further provides a point cloud missing completion apparatus, including:
the second acquisition module, configured to acquire point cloud data to be completed;
and the completion processing module, configured to input the point cloud data to be completed into the point cloud completion model to obtain processed point cloud data.
The present application further provides an electronic device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is used for executing the computer program to realize the model training method and/or the point cloud missing completion method.
The present application further provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned model training method, and/or the above-mentioned point cloud missing complementing method.
According to the model training method, training missing point cloud data are obtained; inputting the training missing point cloud data into an initial model to obtain training repairing point cloud data, and adjusting parameters of the initial model based on the training repairing point cloud data and original point cloud data corresponding to the training missing point cloud data; if the condition that the training is completed is detected to be met, determining the initial model as a point cloud completion model; the initial model comprises a target reconstruction network and an initial generation network, the target reconstruction network comprises a target coding network, the target coding network utilizes training missing point cloud data to perform contrast learning, the training missing point cloud data are input into the target coding network to obtain input characteristics, the input characteristics are input into the initial generation network to obtain missing point cloud data, and the missing point cloud data are used for generating training repair point cloud data.
It can be seen that, in this application, the initial model includes a target reconstruction network and an initial generation network. The target coding network in the target reconstruction network performs contrastive learning: by taking a given piece of training missing point cloud data as an anchor, it can learn the global structure from the perspective of other training missing point cloud data with different missing conditions, so that the extracted input features reflect the complete structure of the object rather than only the data that is present. The initial generation network is used to generate missing point cloud data, that is, to infer the missing part of the training missing point cloud data from the corresponding input features; during training, it learns to extract missing point cloud features from the input features. When the initial model meets the training completion condition, it is determined as the point cloud completion model. The point cloud completion model can capture a global structure carrying local-area information and accurately predict the missing point cloud according to the condition of the input data, which improves the accuracy of the completed point cloud data and solves the problem of low data accuracy in the related art.
In addition, the present application further provides a point cloud missing completion method, a model training apparatus, a point cloud missing completion apparatus, an electronic device, and a computer-readable storage medium, which have the same beneficial effects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or the related art, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a model training method provided in an embodiment of the present application;
fig. 2 is a specific structure diagram of a point cloud completion model according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a model training apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a point cloud missing completion apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a model training method according to an embodiment of the present disclosure. The method comprises the following steps:
s101: and acquiring training missing point cloud data.
The training missing point cloud data is incomplete three-dimensional point cloud data used for model training. Each piece of training missing point cloud data corresponds to one piece of original point cloud data, which can serve as label data during training: loss values are calculated against the original point cloud data so that the model being trained can recognize its difference from the label data and learn the ability to predict the missing part of an incomplete three-dimensional point cloud.
As for the method of acquiring the training missing point cloud data, in one embodiment the training missing point cloud data and its corresponding original point cloud data may be acquired from an existing data set, where the original point cloud data is usually ground-truth point cloud data, i.e., the complete point cloud of an object. In practical applications, however, ground-truth point cloud data is difficult to obtain, small in quantity, and insufficiently accurate, so a model trained with such data as the original point cloud data may produce completion results that differ from the real result. To solve this problem, in another embodiment, three-dimensional point cloud data that already has missing parts may be used as the original point cloud data, and further missing processing is performed on it to obtain the training missing point cloud data. Specifically, this may include the following steps:
step 11: obtaining a plurality of original missing point clouds.
Step 12: and respectively carrying out deletion processing of different degrees on each original deletion point cloud to obtain training deletion point cloud data.
The original missing point cloud is incomplete three-dimensional point cloud data serving as the original point cloud data; this embodiment does not limit the specific number of original missing point clouds. Missing processing is processing that makes three-dimensional point cloud data incomplete, and may specifically be cropping processing, i.e., selecting and deleting part of the content of the original missing point cloud. It can be understood that cropping causes part of the content of the original missing point cloud to be lost, yielding training missing point cloud data with more missing content than the original missing point cloud. It can also be understood that after one original missing point cloud undergoes missing processing of different degrees, multiple corresponding pieces of training missing point cloud data are obtained, and the specific form of each depends on the degree of the missing processing.
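The cropping step above can be sketched as follows. This is only an illustrative sketch: the patent does not specify how the cropped region is selected, so here the points nearest a random viewpoint are removed (a common way to simulate occlusion), and all function names are our own.

```python
import numpy as np

def crop_point_cloud(points: np.ndarray, ratio: float, seed: int = 0) -> np.ndarray:
    """Remove the `ratio` fraction of points closest to a random viewpoint,
    simulating an occlusion-style crop of the input cloud."""
    rng = np.random.default_rng(seed)
    viewpoint = rng.normal(size=3)
    viewpoint /= np.linalg.norm(viewpoint)               # random unit direction
    dist = np.linalg.norm(points - viewpoint, axis=1)
    order = np.argsort(dist)
    keep = order[int(ratio * len(points)):]              # drop the nearest points
    return points[keep]

# One original (already incomplete) cloud yields several training clouds
# with different degrees of missing content.
original = np.random.default_rng(1).uniform(-1.0, 1.0, size=(2048, 3))
training_set = [crop_point_cloud(original, r) for r in (0.1, 0.25, 0.5)]
```

The points removed by each crop are exactly the "missing truth value data" described below, so a real data pipeline would typically return them alongside the cropped cloud.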
It can be understood that while the missing processing generates the training missing point cloud data, the content missing between the training missing point cloud data and the original missing point cloud can also be determined; this content may be called the missing truth value data. In this application, the initial model can not only extract features of the incomplete three-dimensional point cloud and learn its global structure, but also predict the missing point cloud (i.e., the missing part) to obtain a predicted missing point cloud. Therefore, in one embodiment, the missing truth value data may also be used as label data for part of the initial model during training, so that the model can make accurate predictions.
S102: inputting the training missing point cloud data into an initial model to obtain training repairing point cloud data, and adjusting parameters of the initial model based on the training repairing point cloud data and original point cloud data corresponding to the training missing point cloud data.
S103: and if the condition that the training is completed is detected to be met, determining the initial model as a point cloud completion model.
The specific content and form of the training completion condition are not limited; it may be, for example, a training-round condition, a training-duration condition, a model-accuracy condition, or any other optional condition.
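Since the patent deliberately leaves the completion condition open, one simple realization is a disjunction of the three example conditions it names; the following sketch (with hypothetical parameter names) illustrates this.

```python
def training_finished(epoch=None, max_epochs=None,
                      elapsed_s=None, max_seconds=None,
                      accuracy=None, target_accuracy=None) -> bool:
    """Return True as soon as any configured completion condition is met:
    a training-round condition, a training-duration condition, or a
    model-accuracy condition (all optional, as the patent leaves them open)."""
    if max_epochs is not None and epoch is not None and epoch >= max_epochs:
        return True
    if max_seconds is not None and elapsed_s is not None and elapsed_s >= max_seconds:
        return True
    if target_accuracy is not None and accuracy is not None and accuracy >= target_accuracy:
        return True
    return False
```

A training loop would call this check once per round and, on True, freeze the initial model as the point cloud completion model.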
It should be noted that the initial model includes two parts: a target reconstruction network and an initial generation network. The target reconstruction network is at least used for extracting features of the training missing point cloud data; in addition, it may generally also reconstruct the data from the extracted features, removing noise in the training missing point cloud data through the reconstruction. The target reconstruction network includes a target coding network, which is the network that extracts the features. It can be understood that if the target reconstruction network does not perform the data reconstruction step, the target reconstruction network is simply the target coding network. The initial generation network is the network that generates the missing point cloud data and uses it to generate the training repair point cloud data.
In the related art, the data parts and the label parts of the training data used for model training correspond one to one. In that case, a model can learn the global structure of the training data only from the overall appearance of the data part, and feature extraction is performed based on that structure. Because the obtained global structure depends on the degree to which the data part is missing relative to the label part, it is generally not accurate enough. A better global structure is therefore needed, from which input features that reflect the training missing point cloud data more accurately can be obtained.
Specifically, in this application, the target coding network performs contrastive learning by taking pieces of training missing point cloud data in turn as anchor point clouds. The anchor point cloud is the point cloud serving as the learning reference for contrastive learning: training missing point cloud data corresponding to the same original point cloud data as the anchor is a positive sample, and training missing point cloud data corresponding to other original point cloud data is a negative sample. During training, after training missing point cloud data is input into the target coding network, the corresponding input features are obtained. The input features are input into the initial generation network to obtain the missing point cloud data, which is used to generate the training repair point cloud data.
In one embodiment, in order to improve the convergence speed of the initial model, the initial model may be built from a pre-trained target reconstruction network. Pre-training essentially determines the parameters of the target reconstruction network, so during initial model training it is only fine-tuned on that basis while the parameters of the initial generation network are adjusted. Specifically, the generation process of the initial model includes:
step 21: and comparing, learning and training the initial reconstruction network by using the training missing point cloud data to obtain a target reconstruction network.
Step 22: and combining the target reconstruction network and the initial generation network to obtain an initial model.
The initial reconstruction network is an untrained reconstruction network; it can be pre-trained with the training missing point cloud data, and this pre-training is itself contrastive learning training, yielding the target reconstruction network. The target reconstruction network and the initial generation network are combined to obtain the initial model. Because the target reconstruction network has essentially converged after pre-training, an initial model built from it converges faster in subsequent training than an initial model built from a completely untrained reconstruction network used in place of the target reconstruction network.
Specifically, the process of performing comparison learning training on the initial reconstruction network by using the training missing point cloud data to obtain the target reconstruction network may include the following steps:
step 31: and determining an anchor point cloud from the training missing point cloud data.
Step 32: and inputting the training missing point cloud data into an initial reconstruction network based on the anchor point cloud to obtain target data.
Step 33: and obtaining a contrast learning loss value by using the input characteristics, obtaining a reconstruction loss value by using the reconstruction point cloud data, and performing parameter adjustment on the initial reconstruction network by using the contrast learning loss value and the reconstruction loss value.
Step 34: and if the condition that the pre-training completion condition is met is detected, determining the initial reconstruction network as a target reconstruction network.
In this embodiment, the initial reconstruction network not only performs feature extraction on the input training missing point cloud data but also reconstructs the data from the extracted features, so as to remove noise in the training missing point cloud data. The target data thus includes the input features and the reconstructed point cloud data: the input features are the features obtained by feature extraction on the input training missing point cloud data, and the reconstructed point cloud data is the data obtained by reconstruction using the input features.
In this embodiment, P_in may be used to represent original point cloud data, S_in the set of original point cloud data, and S_S the set of training missing point cloud data. When training the initial reconstruction network, any piece of training missing point cloud data may be selected as the anchor point cloud P_S; the training missing point cloud data in S_S corresponding to the same original point cloud data as P_S are then taken as positive samples, and the other training missing point cloud data as negative samples. For example, if the original point cloud data corresponding to the selected anchor is an airplane point cloud, then the training missing point cloud data in S_S corresponding to that airplane (i.e., the incomplete airplane point clouds obtained from it) are positive samples, while training missing point cloud data not corresponding to the airplane (e.g., an incomplete chair point cloud) are negative samples. It can be understood that when a positive or negative sample is input into the initial reconstruction network, its sample type (positive or negative) needs to be declared.
After the target data is obtained, the corresponding loss values are calculated from the input features and the reconstructed point cloud data respectively. Specifically, a contrastive learning loss value is obtained from the input features; this loss value is used for parameter adjustment of the feature extraction part. A reconstruction loss value is calculated from the reconstructed point cloud data; this loss value is used for parameter adjustment of the data reconstruction part. After the two loss values are obtained, they are used together to adjust the parameters of the initial reconstruction network, and when the pre-training completion condition is met, the initial reconstruction network is determined as the target reconstruction network. The specific content and form of the pre-training completion condition are not limited; it may be, for example, a training-round condition, a training-duration condition, or any other optional condition.
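The patent does not give the concrete form of the contrastive learning loss value. An InfoNCE-style loss over the encoder's global features is one standard choice and could look like the following sketch; the function name, the temperature value, and the 128-dimensional feature size are all illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positives, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss over encoder features: positives are
    features of crops of the same original cloud as the anchor, negatives
    are features of crops of other clouds."""
    def sim(a, b):  # cosine similarity between two feature vectors
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(np.array([sim(anchor, p) for p in positives]) / temperature)
    neg_sum = np.exp(np.array([sim(anchor, n) for n in negatives]) / temperature).sum()
    # one InfoNCE term per positive sample, averaged
    return float(np.mean(-np.log(pos / (pos + neg_sum))))

rng = np.random.default_rng(0)
anchor = rng.normal(size=128)                                         # hypothetical global feature
positives = [anchor + 0.05 * rng.normal(size=128) for _ in range(2)]  # same object, different crops
negatives = [rng.normal(size=128) for _ in range(8)]                  # other objects

loss_aligned = info_nce_loss(anchor, positives, negatives)
loss_mixed = info_nce_loss(anchor, negatives[:2], positives + negatives[2:])
```

Minimizing this loss pulls features of crops of the same object together and pushes other objects away: with well-aligned positives the loss is near zero, while treating negatives as positives (`loss_mixed`) yields a much larger value.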
Specifically, the process of inputting the training missing point cloud data into the initial reconstruction network to obtain the target data may include the following steps:
step 41: inputting the training missing point cloud data into an initial coding network in an initial reconstruction network to obtain input characteristics.
Step 42: and inputting the input characteristics into an initial decoding network in the initial reconstruction network to obtain reconstruction point cloud data.
Correspondingly, the process of adjusting the parameters of the initial reconstruction network by using the contrast learning loss value and the reconstruction loss value may include the following steps:
step 43: generating a first loss value using the comparison learning loss value and the reconstruction loss value.
Step 44: and utilizing the first loss value to carry out parameter adjustment on the initial reconstruction network.
In this embodiment, the initial reconstruction network includes an initial coding network and an initial decoding network. The initial coding network is used to perform feature extraction on the training missing point cloud data to obtain the input features, and the initial decoding network is used to decode the input features so as to complete data reconstruction and obtain the reconstructed point cloud data. The contrast learning loss value and the reconstruction loss value are combined to obtain a first loss value, which is then used to perform parameter adjustment on the whole initial reconstruction network. The specific manner of generating the first loss value is not limited in this embodiment; in one implementation, for example, the first loss value may be obtained by adding the two.
In one embodiment, the initial coding network may use PointNet++ (a network structure for processing point clouds) as its basic framework. It includes several feature extraction blocks, each of which contains an MLP (Multi-Layer Perceptron) and a downsampling layer. The MLP is used to optimize the extracted point cloud features, and the downsampling layers perform FPS (Farthest Point Sampling)-based downsampling on the point cloud, producing point clouds at multiple resolutions from fine to coarse so that local features of the point cloud are learned at multiple scales; finally, a pooling layer in the initial coding network performs pooling to obtain the global features of the point cloud. The initial decoding network includes a plurality of MLPs for feature dimension transformation and a plurality of upsampling layers for upsampling, and can iteratively reconstruct the shape of the input point cloud. With the initial coding network and initial decoding network of this structure working together, noise in the input point cloud can be better removed and the shape of the input point cloud optimized.
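The FPS-based downsampling mentioned above can be sketched as follows — a minimal NumPy implementation of greedy farthest point sampling (the function name is ours; in an actual encoder this would run once per feature extraction block):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily select k points, each time taking the point farthest
    from the set already chosen (points: (N, 3) array)."""
    n = points.shape[0]
    chosen = [0]                        # seed with the first point
    dist = np.full(n, np.inf)           # distance to nearest chosen point
    for _ in range(k - 1):
        d = np.linalg.norm(points - points[chosen[-1]], axis=1)
        dist = np.minimum(dist, d)
        chosen.append(int(np.argmax(dist)))
    return points[chosen]
```

On a line of points, for example, the two farthest-apart endpoints are selected first, which is what gives the coarse-to-fine multi-resolution behaviour described above.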
It can be understood that, for one piece of original point cloud data, there are a plurality of pieces of training missing point cloud data with different missing conditions, all of which share the same global structure. However, each piece of training missing point cloud data, being a different local part of the same original point cloud data, has a limited receptive field; by training the initial coding network with contrast learning, the point cloud global structure learned by the initial coding network contains information from different local areas.
The above process is illustrated by way of example: training missing point cloud data of the type "airplane" is input into the initial reconstruction network, and the initial coding network obtains local detail features representing each part of the airplane and a global structure feature representing the whole, i.e., the global structure. The global structures of the positive and negative samples for contrast learning are obtained in the same way and used as the input of the initial decoding network to obtain the reconstructed point cloud data. The contrast learning loss and the reconstruction loss are then calculated and minimized to update the network parameters, continuously optimizing the extraction of the local and global features of the input point cloud. Specifically, L_cl may be used to represent the contrast learning loss value and L_in the reconstruction loss value. The InfoNCE loss may be adopted as the loss function of the contrast learning loss value, calculated as:

L_cl = -\log \frac{\sum_{v^+ \in V^+} \exp(v \cdot v^+ / \tau)}{\sum_{v^+ \in V^+} \exp(v \cdot v^+ / \tau) + \sum_{v^- \in V^-} \exp(v \cdot v^- / \tau)}

where v represents the feature of the anchor point cloud, v^+ represents the input feature of a positive sample, v^- represents the input feature of a negative sample, V^+ represents the set of input features of all positive samples, V^- represents the set of input features of all negative samples, and τ is a constant (the temperature).
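The InfoNCE loss can be sketched numerically as follows — a minimal NumPy version for a single anchor (dot-product similarity and the default τ are assumptions; features would normally be L2-normalised beforehand):

```python
import numpy as np

def info_nce(v, v_pos, v_neg, tau=0.07):
    """InfoNCE for one anchor feature v (D,), positive features v_pos
    (P, D), and negative features v_neg (N, D); lower is better."""
    pos = np.exp(v_pos @ v / tau)       # similarities to positives
    neg = np.exp(v_neg @ v / tau)       # similarities to negatives
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))
```

Minimizing this loss pulls the anchor's feature toward those of its positive samples and pushes it away from the negatives, which is how the global structure learned by the coding network comes to contain information from different local areas.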
Meanwhile, the reconstruction loss may be calculated as:

L_{in} = \frac{1}{|S_1|} \sum_{x \in S_1} \min_{y \in S_2} \|x - y\|_2 + \frac{1}{|S_2|} \sum_{y \in S_2} \min_{x \in S_1} \|x - y\|_2

where S_1 is the reconstructed point cloud data, S_2 is the original point cloud data corresponding to the training missing point cloud data, and x and y represent the points therein.
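The reconstruction loss above measures the average nearest-neighbour distance between the two point sets in both directions (a symmetric Chamfer distance); a brute-force NumPy sketch:

```python
import numpy as np

def chamfer_distance(s1, s2):
    """Symmetric Chamfer distance between point sets s1 (N, 3) and
    s2 (M, 3): mean nearest-neighbour distance in both directions."""
    d = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=-1)  # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

The pairwise distance matrix makes this O(N·M) in memory, so a practical implementation would batch or use a spatial index, but the value computed is the same.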
Based on the above embodiment, in a feasible implementation manner, the initial generation network may directly splice the generated missing point cloud data and the training missing point cloud data (or reconstructed point cloud data obtained through reconstruction) to obtain training and repairing point cloud data. In another embodiment, the data obtained by direct splicing may be rough three-dimensional point cloud data, and the initial generation network may further optimize the rough three-dimensional point cloud data to obtain training repair point cloud data. Specifically, the initial generation network includes a missing point cloud generation network and a correction network, and the generation process of training and repairing point cloud data may include the following steps:
step 51: inputting the input features into a missing point cloud generation network to obtain missing point cloud data.
Step 52: and inputting the missing point cloud data and output data output by the target reconstruction network into a correction network to obtain training and repairing point cloud data.
The missing point cloud generation network is a network used for generating corresponding missing point cloud data according to the input features. The correction network refers to a network for performing shape correction on output data (which may be training missing point cloud data without reconstruction or reconstructed point cloud data after reconstruction). The specific structures of the missing point cloud generation network and the correction network are not limited, and can be set according to the requirements.
For example, in an embodiment, the missing point cloud generating network includes a missing point cloud modulating module and a folding decoding module, and the process of inputting the input features into the missing point cloud generating network to obtain the missing point cloud data may include the following steps:
step 53: and inputting the input features into a missing point cloud modulation module to obtain the missing point cloud features.
Step 54: and inputting the missing point cloud characteristics and the input characteristics into a folding and decoding module to obtain missing point cloud data.
Specifically, the missing point cloud generation network includes a plurality of decoding modules, and each decoding module includes a missing point cloud modulation module and a folding-based decoding layer (i.e., a folding decoding module). The missing point cloud modulation module transforms the input features through an MLP (Multi-Layer Perceptron) to obtain the learned missing point cloud features. The folding-based decoding layer processes a randomly sampled two-dimensional grid, the learned missing point cloud features, and the input features to obtain the missing point cloud data. By increasing the density of the two-dimensional grid layer by layer, missing point clouds of higher resolution can be predicted.
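A folding-based decoding layer as described above can be sketched like this (a single linear map stands in for the layer's MLP; the shapes, seed, and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def folding_layer(grid_2d, feature, w, b):
    """Tile the feature vector onto every 2-D grid point and map each
    concatenated [u, v, feature] row to a 3-D coordinate."""
    n = grid_2d.shape[0]
    tiled = np.repeat(feature[None, :], n, axis=0)      # (n, F)
    x = np.concatenate([grid_2d, tiled], axis=1)        # (n, 2 + F)
    return x @ w + b                                    # (n, 3)

grid = rng.uniform(-1.0, 1.0, size=(64, 2))   # randomly sampled 2-D grid
feat = rng.standard_normal(8)                 # learned missing point cloud feature
w = rng.standard_normal((10, 3))
b = np.zeros(3)
missing_points = folding_layer(grid, feat, w, b)        # predicted (64, 3) cloud
```

Densifying `grid` layer by layer is what raises the resolution of the predicted missing point cloud.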
In this embodiment, the specific process by which the correction network obtains the training repair point cloud data is not limited. In one implementation, the correction network fuses the reconstructed point cloud data and the missing point cloud data, and then obtains a rough three-dimensional point cloud by FPS sampling. The correction network includes a plurality of MLPs and a folding-based correction layer. The input rough three-dimensional point cloud is processed by the MLPs to obtain point cloud features, and a two-dimensional grid is randomly sampled from a two-dimensional plane of fixed size. The sampled two-dimensional grid, the point cloud features, and the three-dimensional coordinates of the point cloud are input into the folding-based correction layer, which optimizes the rough three-dimensional point cloud to obtain the training repair point cloud data.
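The fusion-then-sampling step of the correction network might look like the following sketch (uniform random subsampling stands in for the FPS step to keep the example short; the function name is ours):

```python
import numpy as np

def fuse_and_downsample(reconstructed, missing, k, seed=0):
    """Concatenate the reconstructed and predicted missing clouds, then
    subsample k points to form the rough three-dimensional point cloud."""
    fused = np.concatenate([reconstructed, missing], axis=0)
    rng = np.random.default_rng(seed)
    idx = rng.choice(fused.shape[0], size=k, replace=False)
    return fused[idx]
```

The resulting rough cloud is what the correction layer then refines into the training repair point cloud data.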
It will be appreciated that the process of adjusting the parameters of the initial model based on the training repair point cloud data and the training missing point cloud data in the presence of the correction network may include the steps of:
step 61: and obtaining a corrected reconstruction loss value by using the training and repairing point cloud data and the original point cloud data.
Step 62: and obtaining a missing reconstruction loss value by using the missing point cloud data and the missing point cloud true value data.
And step 63: a second loss value is generated using the corrected reconstruction loss value and the missing reconstruction loss value.
Step 64: performing parameter adjustment on the initial model by using the second loss value, wherein the missing point cloud true value data is the difference data between the training missing point cloud data and the corresponding original point cloud data. L_r may be used to represent the corrected reconstruction loss value and L_c the missing reconstruction loss value; L_r and L_c are calculated in the same manner as L_in.
Referring to fig. 2, fig. 2 is a specific structure diagram of a point cloud completion model according to an embodiment of the present application. The incomplete three-dimensional point cloud is the input point cloud data: the training missing point cloud data during training, or the point cloud data to be completed when the trained model is used. The input point cloud reconstruction network based on contrast learning is the target reconstruction network, the missing point cloud decoding modulation network is the missing point cloud generation network, and the rough point cloud prediction correction network is the correction network. Module 1 is the target coding network (or initial coding network) for feature coding, module 2 is the initial decoding network for fully connected decoding, module 3 is the folding decoding module for folding decoding, module 4 is the correction network for rough point cloud correction, and module 5 is the missing point cloud modulation module for missing point cloud modulation to generate the missing point cloud features. For the calculation of each loss value in fig. 2, reference may be made to the above process, and details are not repeated here.
It can be understood that after model training is completed, the trained model can be used to complete point cloud data. Therefore, the present application further provides a point cloud missing completion method, which may include the following steps:
step 71: and acquiring point cloud data to be complemented.
Step 72: and inputting the point cloud data to be supplemented into the point cloud supplementing model to obtain the processed point cloud data.
By applying the model training method provided by the embodiments of the present application, the initial model includes a target reconstruction network and an initial generation network. Through contrast learning over training missing point cloud data with different missing conditions of the same original point cloud data, the target reconstruction network learns a global structure that contains information from different local areas. The initial generation network is used to generate the missing point cloud data, inferring the missing part lost by the training missing point cloud data based on the input features corresponding to it; during training, it learns to extract the missing point cloud features from the input features. When the initial model meets the training completion condition, it is determined to be the point cloud completion model. The point cloud completion model can acquire a global structure carrying local area information and accurately predict the missing point cloud according to the input data, thereby improving the accuracy of the completed point cloud data and solving the problem of low data accuracy in the related art.
The following describes a model training apparatus provided in an embodiment of the present application, and the model training apparatus described below and the model training method described above may be referred to correspondingly.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application, including:
a first obtaining module 110, configured to obtain training missing point cloud data;
the training module 120 is configured to input the training missing point cloud data into the initial model to obtain training repair point cloud data, and adjust parameters of the initial model based on the training repair point cloud data and the original point cloud data corresponding to the training missing point cloud data;
a determining module 130, configured to determine that the initial model is a point cloud completion model if it is detected that the training completion condition is met;
the initial model comprises a target reconstruction network and an initial generation network, the target reconstruction network comprises a target coding network, the target coding network utilizes training missing point cloud data to perform contrast learning, the training missing point cloud data are input into the target coding network to obtain input characteristics, the input characteristics are input into the initial generation network to obtain missing point cloud data, and the missing point cloud data are used for generating training repair point cloud data.
Optionally, comprising:
the pre-training module is used for performing learning training on the initial reconstruction network by using the training missing point cloud data to obtain a target reconstruction network;
and the combination module is used for combining the target reconstruction network and the initial generation network to obtain an initial model.
Optionally, a pre-training module comprising:
the anchor point determining unit is used for determining an anchor point cloud from the training missing point cloud data;
the input unit is used for inputting the training missing point cloud data into the initial reconstruction network based on the anchor point cloud to obtain target data; wherein the target data comprises input features and reconstructed point cloud data;
the parameter adjusting unit is used for obtaining a contrast learning loss value by utilizing the input characteristics, obtaining a reconstruction loss value by utilizing the reconstruction point cloud data, and carrying out parameter adjustment on the initial reconstruction network by utilizing the contrast learning loss value and the reconstruction loss value;
and the target reconstruction network determining unit is used for determining the initial reconstruction network as the target reconstruction network if the condition that the pre-training completion condition is met is detected.
Optionally, an input unit comprising:
the characteristic acquisition subunit is used for inputting the training missing point cloud data into an initial coding network in an initial reconstruction network to obtain input characteristics;
the reconstruction subunit is used for inputting the input characteristics into an initial decoding network in the initial reconstruction network to obtain reconstructed point cloud data;
correspondingly, the parameter adjusting unit comprises:
a first loss generation subunit configured to generate a first loss value using the comparison learning loss value and the reconstruction loss value;
and the initial reconstruction network adjusting subunit is used for adjusting the parameters of the initial reconstruction network by using the first loss value.
Optionally, the initial coding network comprises a plurality of feature extraction blocks, each feature extraction block comprising a multi-layer perceptron and a downsampling layer based on a farthest point sample; the initial decoding network includes a plurality of multi-layer perceptrons and a plurality of upsampling layers.
Optionally, the first obtaining module 110 includes:
the original missing acquisition unit is used for acquiring a plurality of original missing point clouds as original point cloud data;
the missing processing unit is used for respectively carrying out missing processing of different degrees on each original missing point cloud to obtain training missing point cloud data; the deletion process is a clipping process.
Optionally, training module 120, comprising:
a missing point cloud generating unit which inputs the input characteristics into a missing point cloud generating network to obtain missing point cloud data;
the correction unit is used for inputting the missing point cloud data and output data output by the target reconstruction network into a correction network to obtain training and repairing point cloud data;
the missing point cloud generating network comprises a missing point cloud modulation module and a folding decoding module, and the missing point cloud generating unit comprises:
the missing feature acquisition subunit is used for inputting the input features into the missing point cloud modulation module to obtain the missing point cloud features;
and the folding decoding subunit is used for inputting the missing point cloud characteristics and the input characteristics into the folding decoding module to obtain missing point cloud data.
Optionally, training module 120, comprising:
the correction reconstruction loss generating unit is used for obtaining a correction reconstruction loss value by utilizing the training restoration point cloud data and the original point cloud data;
the missing reconstruction loss generating unit is used for obtaining a missing reconstruction loss value by utilizing the missing point cloud data and the missing point cloud true value data;
a second loss generation unit for generating a second loss value using the corrected reconstruction loss value and the missing reconstruction loss value;
and the initial model adjusting unit is used for carrying out parameter adjustment on the initial model by utilizing the second loss value.
The point cloud missing complementing device provided in the embodiment of the present application is introduced below, and the point cloud missing complementing device described below and the point cloud missing complementing method described above may be referred to in a corresponding manner.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a point cloud missing completing device according to an embodiment of the present application, including:
a second obtaining module 210, configured to obtain point cloud data to be complemented;
and a completion processing module 220, configured to input the point cloud data to be completed into the point cloud completion model, so as to obtain processed point cloud data.
In the following, the electronic device provided by the embodiment of the present application is introduced, and the electronic device described below and the model training method described above may be referred to correspondingly.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Wherein the electronic device 100 may include a processor 101 and a memory 102, and may further include one or more of a multimedia component 103, an information input/information output (I/O) interface 104, and a communication component 105.
The processor 101 is configured to control the overall operation of the electronic device 100 to complete all or part of the steps in the model training method; the memory 102 is used to store various types of data to support operation at the electronic device 100, such data may include, for example, instructions for any application or method operating on the electronic device 100, as well as application-related data. The Memory 102 may be implemented by any type or combination of volatile and non-volatile Memory devices, such as one or more of Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic or optical disk.
The multimedia component 103 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 102 or transmitted through the communication component 105. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 104 provides an interface between the processor 101 and other interface modules, such as a keyboard, a mouse, and buttons. These buttons may be virtual buttons or physical buttons. The communication component 105 is used for wired or wireless communication between the electronic device 100 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 105 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
The electronic Device 100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, and is configured to perform the model training method according to the above embodiments.
In the following, a computer-readable storage medium provided by an embodiment of the present application is introduced, and the computer-readable storage medium described below and the model training method described above may be referred to correspondingly.
The present application further provides a computer-readable storage medium having a computer program stored thereon, which, when being executed by a processor, implements the steps of the above-mentioned model training method.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "include", "comprise", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (13)

1. A method of model training, comprising:
acquiring training missing point cloud data;
inputting the training missing point cloud data into an initial model to obtain training repairing point cloud data, and adjusting parameters of the initial model based on the training repairing point cloud data and original point cloud data corresponding to the training missing point cloud data;
if the condition that the training is finished is detected to be met, determining the initial model as a point cloud completion model;
the initial model comprises a target reconstruction network and an initial generation network, the target reconstruction network comprises a target coding network, the target coding network utilizes the training missing point cloud data to perform contrast learning, the training missing point cloud data is input into the target coding network to obtain input characteristics, the input characteristics are input into the initial generation network to obtain missing point cloud data, and the missing point cloud data is used for generating the training repairing point cloud data.
2. The model training method of claim 1, wherein the generation process of the initial model comprises:
performing learning training on an initial reconstruction network by using the training missing point cloud data to obtain the target reconstruction network;
and combining the target reconstruction network and the initial generation network to obtain the initial model.
3. The model training method of claim 2, wherein the learning training of the initial reconstruction network using the training missing point cloud data to obtain the target reconstruction network comprises:
determining an anchor point cloud from the training missing point cloud data;
inputting the training missing point cloud data into the initial reconstruction network based on the anchor point cloud to obtain target data; wherein the target data comprises the input features and reconstructed point cloud data;
obtaining a contrast learning loss value by using the input characteristics, obtaining a reconstruction loss value by using the reconstruction point cloud data, and performing parameter adjustment on the initial reconstruction network by using the contrast learning loss value and the reconstruction loss value;
and if the condition that the pre-training completion condition is met is detected, determining the initial reconstruction network as the target reconstruction network.
4. The model training method of claim 3, wherein the inputting the training missing point cloud data into the initial reconstruction network to obtain target data comprises:
inputting the training missing point cloud data into an initial coding network in the initial reconstruction network to obtain the input characteristic;
inputting the input features into an initial decoding network in the initial reconstruction network to obtain the reconstruction point cloud data;
correspondingly, the performing parameter adjustment on the initial reconstruction network by using the contrast learning loss value and the reconstruction loss value includes:
generating a first loss value using the comparison learning loss value and the reconstruction loss value;
and utilizing the first loss value to carry out parameter adjustment on the initial reconstruction network.
5. The model training method of claim 4, wherein the initial coding network comprises a plurality of feature extraction blocks, each of the feature extraction blocks comprising a multi-layer perceptron and a downsampling layer based on a farthest point sample; the initial decoding network includes a plurality of multi-layer perceptrons and a plurality of upsampling layers.
6. The model training method of claim 1, wherein the obtaining training missing point cloud data comprises:
acquiring a plurality of original missing point clouds as the original point cloud data;
respectively carrying out deletion processing of different degrees on each original deletion point cloud to obtain training deletion point cloud data; the missing processing is clipping processing.
7. The model training method of claim 1, wherein the initial generation network comprises a missing point cloud generation network and a correction network, and the generation process of the training repair point cloud data comprises:
inputting the input features into the missing point cloud generation network to obtain the missing point cloud data;
and inputting the missing point cloud data and the output data of the target reconstruction network into the correction network to obtain the training repair point cloud data;
wherein the missing point cloud generation network comprises a missing point cloud modulation module and a folding decoding module, and the inputting the input features into the missing point cloud generation network to obtain the missing point cloud data comprises:
inputting the input features into the missing point cloud modulation module to obtain missing point cloud features;
and inputting the missing point cloud features and the input features into the folding decoding module to obtain the missing point cloud data.
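Claim 7's folding decoding module follows the general idea of folding-based decoders: a fixed 2D grid is concatenated with a tiled feature vector and mapped to 3D coordinates by an MLP. The single untrained linear layer below stands in for that MLP stack; this is a shape-level sketch under assumed dimensions, not the patented module:

```python
import numpy as np

def folding_decode(feature: np.ndarray, grid_size: int = 4, seed: int = 0) -> np.ndarray:
    """Fold a 2D grid into 3D points conditioned on a global feature vector."""
    rng = np.random.default_rng(seed)
    u, v = np.meshgrid(np.linspace(-1, 1, grid_size), np.linspace(-1, 1, grid_size))
    grid = np.stack([u.ravel(), v.ravel()], axis=1)        # (M, 2) fixed 2D grid
    tiled = np.repeat(feature[None, :], grid.shape[0], 0)  # (M, C) feature per grid point
    x = np.concatenate([tiled, grid], axis=1)              # (M, C + 2)
    # One random linear "folding" layer standing in for the trained MLP stack.
    w = rng.standard_normal((x.shape[1], 3)) * 0.1
    return np.tanh(x @ w)                                  # (M, 3) folded 3D points
```

The point of the construction is that every grid cell is deformed by the same conditioned mapping, so the decoder outputs a structured surface rather than an unordered blob.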
8. The model training method of claim 7, wherein the adjusting parameters of the initial model based on the training repair point cloud data and the original point cloud data corresponding to the training missing point cloud data comprises:
obtaining a corrected reconstruction loss value by using the training repair point cloud data and the original point cloud data;
obtaining a missing reconstruction loss value by using the missing point cloud data and missing point cloud true value data;
generating a second loss value by using the corrected reconstruction loss value and the missing reconstruction loss value;
and performing parameter adjustment on the initial model by using the second loss value;
wherein the missing point cloud true value data is the difference data between the training missing point cloud data and the corresponding original point cloud data.
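Claim 8 forms the second loss from a corrected reconstruction loss and a missing reconstruction loss, but does not name the distance used. A symmetric Chamfer distance is a standard choice for comparing unordered point sets and is used here only as an assumption, as is the weighting factor `alpha`:

```python
import numpy as np

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise distances
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def second_loss(repaired, original, missing_pred, missing_gt, alpha: float = 1.0) -> float:
    """Assumed combination of the two reconstruction terms from claim 8."""
    corrected = chamfer(repaired, original)       # corrected reconstruction loss
    missing = chamfer(missing_pred, missing_gt)   # missing reconstruction loss
    return corrected + alpha * missing
```

Here `missing_gt` would be the missing point cloud true value data, i.e. the set of points present in the original cloud but removed from the training missing cloud.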
9. A point cloud missing completion method, characterized by comprising:
acquiring point cloud data to be completed;
and inputting the point cloud data to be completed into the point cloud completion model trained by the model training method according to any one of claims 1 to 8 to obtain processed point cloud data.
10. A model training apparatus, comprising:
the first acquisition module is used for acquiring training missing point cloud data;
the training module is used for inputting the training missing point cloud data into an initial model to obtain training repair point cloud data, and adjusting parameters of the initial model based on the training repair point cloud data and the original point cloud data corresponding to the training missing point cloud data;
the determining module is used for determining the initial model as a point cloud completion model when it is detected that a training completion condition is met;
wherein the initial model comprises a target reconstruction network and an initial generation network, the target reconstruction network comprises a target coding network obtained by performing contrast learning with the training missing point cloud data, the training missing point cloud data is input into the target coding network to obtain input features, the input features are input into the initial generation network to obtain missing point cloud data, and the missing point cloud data is used for generating the training repair point cloud data.
11. A point cloud missing completion apparatus, characterized by comprising:
the second acquisition module is used for acquiring point cloud data to be completed;
and the completion processing module is used for inputting the point cloud data to be completed into the point cloud completion model trained by the model training method according to any one of claims 1 to 8 to obtain processed point cloud data.
12. An electronic device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the model training method according to any one of claims 1 to 8 and/or the point cloud missing completion method according to claim 9.
13. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the model training method according to any one of claims 1 to 8 and/or the point cloud missing completion method according to claim 9.
CN202111129999.6A 2021-09-26 2021-09-26 Model training and point cloud missing completion method, device, equipment and medium Pending CN113850916A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111129999.6A CN113850916A (en) 2021-09-26 2021-09-26 Model training and point cloud missing completion method, device, equipment and medium
PCT/CN2022/078359 WO2023045252A1 (en) 2021-09-26 2022-02-28 Model training method and apparatus, point cloud missing completion method and apparatus, and device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111129999.6A CN113850916A (en) 2021-09-26 2021-09-26 Model training and point cloud missing completion method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN113850916A true CN113850916A (en) 2021-12-28

Family

ID=78979820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111129999.6A Pending CN113850916A (en) 2021-09-26 2021-09-26 Model training and point cloud missing completion method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN113850916A (en)
WO (1) WO2023045252A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524123B (en) * 2023-04-20 2024-02-13 深圳市元甪科技有限公司 Three-dimensional electrical impedance tomography image reconstruction method and related equipment
CN116721399B (en) * 2023-07-26 2023-11-14 之江实验室 Point cloud target detection method and device for quantitative perception training
CN116910912B (en) * 2023-07-28 2024-04-30 小米汽车科技有限公司 Method, device, equipment and storage medium for generating three-dimensional model of vehicle
CN116882035B (en) * 2023-09-07 2023-11-21 湖南省国土资源规划院 Space object recognition and modeling method based on artificial intelligence and related equipment
CN117671131A (en) * 2023-10-20 2024-03-08 南京邮电大学 Industrial part three-dimensional point cloud repairing method and device based on deep learning
CN117115366B (en) * 2023-10-25 2024-02-13 中国科学院自动化研究所 Environmental model reconstruction method, system and equipment based on unmanned system three-dimensional perception
CN117173069A (en) * 2023-11-01 2023-12-05 湖北工业大学 Slope structural surface point cloud data complement method
CN117807875B (en) * 2023-12-28 2024-05-28 上海强华实业股份有限公司 Three-dimensional data reverse reconstruction and dimension measurement system and method for quartz device
CN117975202A (en) * 2024-04-01 2024-05-03 之江实验室 Model training method, service execution method, device, medium and equipment
CN118038085B (en) * 2024-04-09 2024-06-07 无锡学院 Point cloud key point detection method and device based on twin network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10839551B2 (en) * 2018-04-20 2020-11-17 Streem, Inc. Augmentation of 3-D point clouds with subsequently captured data
CN111582105A (en) * 2020-04-28 2020-08-25 清华大学 Unsupervised point cloud feature learning method and unsupervised point cloud feature learning device based on local global bidirectional reasoning
CN113205104A (en) * 2021-04-23 2021-08-03 广西大学 Point cloud completion method based on deep learning
CN113850916A (en) * 2021-09-26 2021-12-28 浪潮电子信息产业股份有限公司 Model training and point cloud missing completion method, device, equipment and medium

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045252A1 (en) * 2021-09-26 2023-03-30 浪潮电子信息产业股份有限公司 Model training method and apparatus, point cloud missing completion method and apparatus, and device and medium
CN114331821B (en) * 2021-12-29 2023-09-22 中国人民解放军火箭军工程大学 Image conversion method and system
CN114331821A (en) * 2021-12-29 2022-04-12 中国人民解放军火箭军工程大学 Image conversion method and system
CN114820465A (en) * 2022-04-06 2022-07-29 合众新能源汽车有限公司 Point cloud detection model training method and device, electronic equipment and storage medium
CN114820465B (en) * 2022-04-06 2024-04-26 合众新能源汽车股份有限公司 Point cloud detection model training method and device, electronic equipment and storage medium
CN114758078B (en) * 2022-05-17 2024-03-15 北京大学深圳研究生院 Point cloud data processing method and device, electronic equipment and storage medium
CN114758078A (en) * 2022-05-17 2022-07-15 北京大学深圳研究生院 Point cloud data processing method and device, electronic equipment and storage medium
WO2024007616A1 (en) * 2022-07-06 2024-01-11 山东海量信息技术研究院 Point cloud completion method and apparatus, and device and medium
CN115422264A (en) * 2022-11-02 2022-12-02 苏州浪潮智能科技有限公司 Time sequence data processing method, device and equipment and readable storage medium
WO2024093207A1 (en) * 2022-11-02 2024-05-10 苏州元脑智能科技有限公司 Time series data processing method and apparatus, device, and nonvolatile readable storage medium
CN115422264B (en) * 2022-11-02 2023-05-05 苏州浪潮智能科技有限公司 Time sequence data processing method, device, equipment and readable storage medium
CN115994936A (en) * 2023-03-23 2023-04-21 季华实验室 Point cloud fusion model acquisition method and device, electronic equipment and storage medium
CN116758096A (en) * 2023-07-03 2023-09-15 强联智创(北京)科技有限公司 Aneurysm segmentation method, electronic device, and storage medium
CN116957991B (en) * 2023-09-19 2023-12-15 北京渲光科技有限公司 Three-dimensional model completion method
CN116957991A (en) * 2023-09-19 2023-10-27 北京渲光科技有限公司 Three-dimensional model complement method and three-dimensional model complement model generation method
CN117422847A (en) * 2023-10-27 2024-01-19 神力视界(深圳)文化科技有限公司 Model repairing method, device, electronic equipment and computer storage medium
CN117807378A (en) * 2023-12-01 2024-04-02 太极计算机股份有限公司 Intelligent wind power data restoration method and device

Also Published As

Publication number Publication date
WO2023045252A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
CN113850916A (en) Model training and point cloud missing completion method, device, equipment and medium
US11798132B2 (en) Image inpainting method and apparatus, computer device, and storage medium
US10713818B1 (en) Image compression with recurrent neural networks
CN111860235B (en) Method and system for generating high-low-level feature fused attention remote sensing image description
CN111476719B (en) Image processing method, device, computer equipment and storage medium
CN113705779A (en) Recurrent neural networks for data item generation
WO2019111118A1 (en) Robust gradient weight compression schemes for deep learning applications
CN108665055B (en) Method and device for generating graphic description
US20220207370A1 (en) Inferring device, training device, inferring method, and training method
CN114179816A (en) Vehicle speed prediction device and method
RU2745010C1 (en) Methods for reconstruction of depth map and electronic computer device for their implementation
US20230153965A1 (en) Image processing method and related device
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN113723603A (en) Method, device and storage medium for updating parameters
CN110807428A (en) Coal sample identification method and device, server and storage medium
CN114066899A (en) Image segmentation model training method, image segmentation device, image segmentation equipment and image segmentation medium
CN116993926B (en) Single-view human body three-dimensional reconstruction method
KR101678453B1 (en) Image processing apparatus and method
CN115937516B (en) Image semantic segmentation method and device, storage medium and terminal
CN117315387A (en) Industrial defect image generation method
US20230267652A1 (en) Generating artistic content from a text prompt or a style image utilizing a neural network model
CN115082624A (en) Human body model construction method and device, electronic equipment and storage medium
CN112748089B (en) Phase unwrapping method and device in Doppler optical coherence tomography
CN113392845A (en) Deep learning remote sensing image semantic segmentation method and system based on U-NET
CN113505650A (en) Method, device and equipment for extracting topographic feature line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination