WO2023044605A1 - Three-dimensional reconstruction method and apparatus for brain structures in extreme environments, and readable storage medium

Info

Publication number: WO2023044605A1
Authority: WIPO (PCT)
Application number: PCT/CN2021/119610
Other languages: English (en), French (fr)
Inventors: 王书强, 胡博闻, 申妍燕
Original assignee: 深圳先进技术研究院
Prior art keywords: point cloud, model, module, information
Application filed by 深圳先进技术研究院
Publication of WO2023044605A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images

Definitions

  • the present application belongs to the field of image processing, and in particular relates to a method, device and readable storage medium for three-dimensional reconstruction of brain structures in extreme environments.
  • images of the brain can be collected through optical sensing devices, and the images can be reconstructed to obtain the three-dimensional structure of the brain, so as to assist doctors in observing the lesion and completing the operation.
  • the light environment of the optical sensing device and possible visual pollution during the operation can cause the image collected by the optical sensing device to be a partial two-dimensional image (that is, one showing part of the brain instead of the complete brain), which leads to an incomplete reconstructed three-dimensional structure and affects the doctor's observation and judgment.
  • the embodiment of the present application provides a method, device, equipment and readable storage medium for three-dimensional reconstruction of brain structures in extreme environments, which can solve the problem of how to reconstruct a complete three-dimensional structure based on incomplete images.
  • the embodiment of the present application provides a three-dimensional reconstruction method, including: acquiring a two-dimensional image of the target object, the two-dimensional image presenting a partial region of the target object; converting the two-dimensional image into a first point cloud, the first point cloud being used to describe the three-dimensional structure of the partial region; and processing the first point cloud through a trained point cloud completion model to obtain a second point cloud, which is used to describe the overall three-dimensional structure of the target object; wherein the point cloud completion model includes N levels of point cloud compression modules and N levels of point cloud expansion modules, the N levels of point cloud compression modules are used to compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules are used to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.
  • the input information of the first-layer point cloud compression module is the first point cloud; for two adjacent layers of point cloud compression modules, the output of the previous layer is the input of the next layer; the input of the first-layer point cloud expansion module is the output of the (N-1)-th and N-th layer point cloud compression modules; the input of the m-th layer point cloud expansion module is the output of the (N-m)-th layer point cloud compression module and the output of the (m-1)-th layer point cloud expansion module; the input of the N-th layer point cloud expansion module is the output of the (N-1)-th layer point cloud expansion module; and the output of the N-th layer point cloud expansion module is the second point cloud, 2 ≤ m < N, m being an integer.
  • the point cloud expansion module includes multiple dynamic information gate modules, with multiple fully connected layers arranged between them; a dynamic information gate module is used to perform attention-mechanism computation on its input information.
  • the point cloud compression module is a PointNet++ network structure.
  • converting the two-dimensional image into the first point cloud includes: converting the two-dimensional image into the first point cloud by using a trained point cloud reconstruction model.
  • the point cloud reconstruction model includes a sequentially connected ResNet encoder and graph convolutional neural network; the graph convolutional neural network includes multiple alternately arranged groups of graph convolution modules and branch modules; the graph convolution modules are used to adjust the position coordinates of points, and the branch modules are used to expand the number of points.
  • the training method of the point cloud completion model and the point cloud reconstruction model is: construct an integrated initial completion model, which includes an initial point cloud reconstruction model, a discriminator, and an initial point cloud completion model; perform adversarial training on the integrated initial model based on a preset loss function and training set, so as to train the initial point cloud reconstruction model into the point cloud reconstruction model and the initial point cloud completion model into the point cloud completion model; the training set includes multiple two-dimensional image samples, each presenting a partial region of an object sample, together with the partial-region point cloud sample and overall point cloud sample corresponding to each two-dimensional image sample; and the loss function includes loss calculations based on relative entropy, chamfer distance, and earth mover's distance.
  • the embodiment of the present application provides a three-dimensional reconstruction device, including:
  • An acquisition unit configured to acquire a two-dimensional image of the target object, where the two-dimensional image presents a partial area of the target object;
  • a conversion unit configured to convert the two-dimensional image into a first point cloud, and the first point cloud is used to describe the three-dimensional structure of the partial region;
  • a completion unit configured to process the first point cloud through the trained point cloud completion model to obtain a second point cloud, the second point cloud being used to describe the overall three-dimensional structure of the target object; wherein the point cloud completion model includes N levels of point cloud compression modules and N levels of point cloud expansion modules, the N levels of point cloud compression modules are used to compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules are used to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.
  • the embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the method of the above-mentioned first aspect or any optional manner of the first aspect is implemented.
  • the embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the method of the above-mentioned first aspect or any optional manner of the first aspect is implemented.
  • an embodiment of the present application provides a computer program product, which, when the computer program product is run on a terminal device, causes the terminal device to execute the method described in the first aspect or any optional manner of the first aspect.
  • after the incomplete two-dimensional image is converted into the first point cloud, the point cloud completion model is used to complete the first point cloud to obtain the second point cloud. Since the point cloud completion model can extract compressed information at different resolutions from the first point cloud and expand the point cloud based on that compressed information, a second point cloud that describes the overall three-dimensional structure of the target object can be reconstructed. Therefore, using the method provided in the present application, the complete three-dimensional structure of the target object can be reconstructed from an incomplete image of the target object.
  • FIG. 1 is a schematic flowchart of an embodiment of a three-dimensional reconstruction method provided by an embodiment of the present application
  • Fig. 2 is a schematic diagram of the network structure of an integrated completion model provided by the present application.
  • Fig. 3 is a schematic diagram of the network structure of a point cloud expansion module provided by the present application.
  • Fig. 4 is a schematic diagram of the network structure of an integrated initial completion model provided by the present application.
  • Fig. 5 is a schematic structural diagram of a three-dimensional reconstruction device provided by the present application.
  • FIG. 6 is a schematic structural diagram of a terminal device provided by the present application.
  • a point cloud is a data structure that describes the shape of a specific object in three-dimensional space, expressed as a collection of scattered points. For example, given a three-dimensional structure of the brain, its point cloud can be written as a matrix Y of size y × 3, where y is the number of points and 3 is the number of coordinates per point. In particular, arbitrarily exchanging two rows of the matrix merely exchanges the storage locations of two points in the point cloud; by the unorderedness of a set, all properties of the point cloud remain unchanged. Point clouds have the advantages of low space complexity, a simple storage form, and high computational performance.
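  • As a minimal illustration of this unorderedness (a NumPy sketch for illustration, not part of the patent):

    import numpy as np

    # A brain point cloud as a y x 3 matrix: y points, 3 coordinates per point.
    Y = np.array([[0.0, 1.0, 2.0],
                  [3.0, 4.0, 5.0],
                  [6.0, 7.0, 8.0]])

    # Exchanging two rows only changes where points are stored, not which points exist.
    Y_swapped = Y[[1, 0, 2]]

    # Set-level properties are unchanged: both matrices describe the same point set.
    print({tuple(p) for p in Y} == {tuple(p) for p in Y_swapped})  # True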
  • compared with the flat space of two-dimensional images, point clouds contain more spatial structure information, which can provide doctors with more visual information and thereby help them make better diagnosis and treatment decisions. Therefore, reconstructing two-dimensional images into accurate and clear point clouds can provide doctors with additional visual and diagnostic information and assist real-time decision-making.
  • two-dimensional images of the brain include, for example, magnetic resonance imaging (MRI) images and computed tomography (CT) images.
  • the two-dimensional image collected by the optical sensing device may be a defective image that shows two-dimensional information of a partial region of the brain rather than complete two-dimensional information. This leads to an incomplete reconstruction of the three-dimensional structure of the brain, which affects the doctor's observation and judgment.
  • this application provides a 3D reconstruction method.
  • after the incomplete two-dimensional image is converted into the first point cloud, the trained point cloud completion model completes the first point cloud to obtain the second point cloud. Since the point cloud completion model can extract compressed information at different resolutions from the first point cloud and expand the point cloud based on that compressed information, a second point cloud that describes the overall three-dimensional structure of the target object can be reconstructed. The complete three-dimensional structure of the target object is thus reconstructed from an incomplete image of the target object.
  • the execution subject of the method may be an image data acquisition device, such as a positron emission tomography (PET) device, a CT device, an MRI device, a diffusion tensor imaging (DTI) device, a functional magnetic resonance imaging (fMRI) device, a camera, or another terminal device. It may also be a control device, computer, robot, mobile terminal, or other terminal device that acquires two-dimensional images from an image data acquisition device.
  • the method includes:
  • the target object may be a human body or an animal body, or an organ of a human body or an animal body, such as a brain, a heart, a lung, and the like. It can also be other living or non-living bodies.
  • the two-dimensional image can be a PET image, an MRI image, a CT image, a DTI image, an fMRI image, or an image taken by a camera.
  • the 2D image presents a partial region of the target object, that is, due to extreme environments or special reasons, the captured 2D image only contains the 2D structural information of the partial region.
  • a neural network model can be used to convert a 2D image to a point cloud, or a traditional algorithm based on a depth camera can be used to convert a 2D image to a point cloud.
  • the neural network model can be a point cloud reconstruction model as shown in Figure 2, including a ResNet encoder and a graph convolutional neural network.
  • the ResNet encoder is used to quantize the two-dimensional image into a feature vector with mean μ and variance σ that obeys a Gaussian distribution; a 96-dimensional encoded feature vector z is then randomly drawn from this distribution and passed to the graph convolutional neural network.
  • the encoded feature vector serves as the initial point cloud input to the graph convolutional neural network; its point count is 1 and its coordinate dimension is 96.
  • the graph convolutional neural network includes multiple alternately arranged branch modules and graph convolution modules. A branch module maps one point into multiple points, so through multiple branch modules one initial point can gradually be expanded to the target number of points.
  • the graph convolution module is used to adjust the position coordinates of each point. Multiple graph convolution modules raise or reduce the coordinate dimension of each input point, gradually reducing it from 96 dimensions to 3 dimensions. Thus, through alternately arranged graph convolution modules and branch modules, the graph convolutional neural network finally generates a first point cloud with a specific number of points, each with 3-dimensional position coordinates.
  • the branch module obeys formula (1) (given in the Description below): it copies the coordinates of each point in the upper layer into n points. If the upper layer has a points and the coordinates of each point are copied into n, the branch module of this layer expands the number of points to a × n and passes the a × n point coordinates to the next layer.
  • if the graph convolutional neural network contains b branch modules (b ≥ 1, b an integer) that all share the same expansion factor n, then after the ResNet encoder feeds one initial point into the graph convolutional neural network, each branch module copies the coordinates of each point into n, and the predicted first point cloud finally generated by the graph convolutional neural network contains n^b points.
  • the expansion factor of each branch module can also differ. For example, if the expansion factor of the first-layer branch module is 5, the single initial point produced by the ResNet encoder is expanded into 5 points; if the expansion factor of the second-layer branch module is 10, the second layer, after receiving the 5 points, expands them into 50 points.
  • the graph convolution module obeys formula (2) (given in the Description below).
  • the encoded feature information of the two-dimensional image can be effectively extracted by the ResNet encoder, which guides the graph convolutional neural network to construct the first point cloud accurately, so that a two-dimensional image with limited information can be reconstructed into a first point cloud with richer and more accurate information.
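  • As an illustration of this encoder-plus-branching-generator pattern, the following is a minimal PyTorch sketch (all class and parameter names are illustrative; the real model uses a ResNet backbone and true graph convolutions over ancestor nodes, simplified here to per-point linear layers):

    import torch
    import torch.nn as nn

    class PointCloudReconstructionSketch(nn.Module):
        """Image -> Gaussian latent z (96-dim) -> alternating branch/adjust stages -> points."""
        def __init__(self, latent=96, branches=(5, 10), dims=(96, 48, 3)):
            super().__init__()
            # Stand-in for the ResNet encoder: any backbone producing mu and log-variance.
            self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(256), nn.ReLU())
            self.to_mu = nn.Linear(256, latent)
            self.to_logvar = nn.Linear(256, latent)
            # One coordinate-adjusting layer per stage (graph convolution stand-in).
            self.adjust = nn.ModuleList(
                nn.Linear(dims[i], dims[i + 1]) for i in range(len(branches)))
            self.branches = branches

        def forward(self, image):
            h = self.backbone(image)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # z ~ N(mu, sigma^2)
            pts = z.unsqueeze(1)                       # 1 initial point, 96-dim coordinates
            for n, fc in zip(self.branches, self.adjust):
                pts = pts.repeat_interleave(n, dim=1)  # branch module: copy each point n times
                pts = fc(pts)                          # adjust coordinates: 96 -> 48 -> 3
            return pts

    model = PointCloudReconstructionSketch()
    cloud = model(torch.randn(2, 1, 64, 64))           # -> torch.Size([2, 50, 3])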
  • S103: Process the first point cloud through the trained point cloud completion model to obtain a second point cloud, where the second point cloud is used to describe the overall three-dimensional structure of the target object.
  • the point cloud completion model provided by this application includes N levels of point cloud compression modules and N levels of point cloud expansion modules. The N levels of point cloud compression modules are used to compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules are used to reconstruct the compressed information at the N different resolutions to obtain a second point cloud that can describe the overall three-dimensional structure of the target object, N ≥ 2, N being an integer.
  • the point cloud compression module can be a PointNet++ network structure.
  • PointNet++ is an encoder-decoder network structure with multi-level feature extraction.
  • the encoder obtains global features at different scales through multi-level downsampling.
  • the decoder works by upsampling the point-level features (i.e., the compressed information) at the corresponding resolution.
  • the PointNet++ network structure can fully capture local features, generalizes to complex scenes, and fully retains detail, so that the resulting compressed information preserves more structural information, which benefits the subsequent point cloud expansion modules during point cloud reconstruction.
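  • The multi-level downsampling such an encoder relies on can be pictured with farthest point sampling, the step PointNet++-style set abstraction uses to pick centroids (a NumPy sketch; the grouping and per-neighborhood feature extraction are omitted):

    import numpy as np

    def farthest_point_sampling(points, k):
        """Pick k well-spread points from an (n, 3) array."""
        chosen = [0]                                   # start from an arbitrary point
        dist = np.linalg.norm(points - points[0], axis=1)
        for _ in range(k - 1):
            idx = int(dist.argmax())                   # farthest from all chosen points
            chosen.append(idx)
            dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
        return points[chosen]

    cloud = np.random.rand(2048, 3)                    # e.g. the 2048*3 first point cloud
    centroids = farthest_point_sampling(cloud, 512)    # one level of compression
    print(centroids.shape)                             # (512, 3)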
  • the (N-m)-th layer point cloud compression module and the m-th layer point cloud expansion module are connected through a hierarchical dynamic information pipeline, and the (N-1)-th layer point cloud compression module and the first-layer point cloud expansion module are connected through a hierarchical dynamic information pipeline, where 2 ≤ m < N, m being an integer. In this way, the output of the point cloud compression modules of layers 1 to N-1 can be skip-passed to the point cloud expansion module of the corresponding level, so that it has more prior and feature information during point cloud expansion.
  • that is, in the point cloud completion model, the input of the first-layer point cloud compression module is the first point cloud; for two adjacent layers of point cloud compression modules, the output of the previous layer is the input of the next layer; the input of the first-layer point cloud expansion module is the output of the (N-1)-th and N-th layer point cloud compression modules; the input of the m-th layer point cloud expansion module is the output of the (N-m)-th layer point cloud compression module and the output of the (m-1)-th layer point cloud expansion module; the input of the N-th layer point cloud expansion module is the output of the (N-1)-th layer point cloud expansion module; and the output of the N-th layer point cloud expansion module is the second point cloud.
  • for example, with N = 3, m can take only the value 2, and the network structure of the corresponding point cloud completion model can be as shown in FIG. 2.
  • the 3 layers of point cloud compression modules are labeled point cloud compression module 1, point cloud compression module 2 and point cloud compression module 3, and the 3 layers of point cloud expansion modules are labeled point cloud expansion module 1, point cloud expansion module 2 and point cloud expansion module 3.
  • the input information of the point cloud compression module 1 is the first point cloud
  • the input information of the point cloud compression module 2 is the output information of the point cloud compression module 1
  • the input information of the point cloud compression module 3 is the output information of the point cloud compression module 2.
  • the point cloud compression module 1 and the point cloud expansion module 2 are connected through a hierarchical dynamic information pipeline, and the point cloud compression module 2 and the point cloud expansion module 1 are connected through a hierarchical dynamic information pipeline.
  • the input information of the point cloud expansion module 1 is the output information of the point cloud compression module 3 and the point cloud compression module 2
  • the input information of the point cloud expansion module 2 is the output information of the point cloud compression module 1 and the point cloud expansion module 1
  • the input information of the expansion module 3 is the output information of the point cloud expansion module 2.
  • the above-mentioned point cloud expansion module may include multiple dynamic information gate modules, with multiple fully connected layers arranged between them; a dynamic information gate module performs attention-mechanism computation on its input information.
  • the network structure of the above-mentioned point cloud expansion module may be shown in FIG. 3 (in FIG. 3, AGB represents the dynamic information gate module).
  • the input information of a dynamic information gate module consists of two attention sets, K and R. K and R may be the outputs of two point cloud compression modules, the outputs of a point cloud compression module and a point cloud expansion module, or the output of the same point cloud expansion module or the same fully connected layer (in which case K and R are identical).
  • for example, the K and R of the first dynamic information gate module in point cloud expansion module 1 are the outputs of point cloud compression module 2 and point cloud compression module 3, respectively; the K and R of the first dynamic information gate module in point cloud expansion module 2 are the outputs of point cloud compression module 1 and point cloud expansion module 1, respectively; the K and R of the first dynamic information gate module in point cloud expansion module 3 are both the output of point cloud expansion module 2; and the K and R of the second and third dynamic information gate modules in each point cloud expansion module are the output of the fully connected layer connected to that dynamic information gate module.
  • the dynamic information gate module includes 4 fully connected layers, attention gating, and a softmax module. The four fully connected layers are F1, F2, F3 and F4. The attention score between any pair of elements in the attention sets K and R can then be calculated by formula (3) (given in the Description below).
  • ki denotes the i-th element in K, rj denotes the j-th element in R, T denotes the matrix transpose, and ai,j denotes the attention score between ki and rj.
  • based on the attention scores, each element in K is updated by formula (4) (given in the Description below). Here K may specifically be the output of the layer preceding the dynamic information gate module, and R may be the output of that preceding layer or compressed information at a different resolution delivered by a hierarchical dynamic information pipeline. The updated K is the output of the dynamic information gate module.
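  • Since formulas (3) and (4) appear only as images in the original, the following PyTorch sketch shows the standard attention pattern they describe, with four fully connected layers and a softmax (the exact gating details are assumptions):

    import torch
    import torch.nn as nn

    class DynamicInformationGate(nn.Module):
        """AGB sketch: attention scores between K and R (formula 3), then K updated
        with an attention-weighted view of R (formula 4)."""
        def __init__(self, dim):
            super().__init__()
            self.F1, self.F2, self.F3, self.F4 = (nn.Linear(dim, dim) for _ in range(4))

        def forward(self, K, R):
            scores = self.F1(K) @ self.F2(R).transpose(-1, -2)  # a[i, j] for k_i, r_j
            a = torch.softmax(scores, dim=-1)
            return K + self.F4(a @ self.F3(R))                  # same size as K

    gate = DynamicInformationGate(dim=256)
    K = torch.randn(1, 256, 256)      # e.g. output of the preceding fully connected layer
    R = torch.randn(1, 256, 256)      # e.g. skip-passed compressed information
    print(gate(K, R).shape)           # torch.Size([1, 256, 256]) -- AGB scale preserved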
  • taking the three-layer point cloud completion model of Figs. 2 and 3 as an example, assume the first point cloud is represented as a 2048*3 matrix describing the three-dimensional structure of a partial region of the target object (for example, the brain shown in Fig. 2).
  • after the 2048*3 matrix is input to the point cloud completion model, point cloud compression module 1 compresses it into a 512*128 matrix and outputs it; point cloud compression module 2 compresses the output of point cloud compression module 1 into a 256*256 matrix; and point cloud compression module 3 compresses the output of point cloud compression module 2 into a 1*512 feature vector and outputs it.
  • then, point cloud expansion module 1 restores the 1*512 feature vector output by point cloud compression module 3 together with the 256*256 matrix output by point cloud compression module 2 into a 256*256 matrix and outputs it; point cloud expansion module 2 restores the 256*256 matrix output by point cloud expansion module 1 together with the 512*128 matrix output by point cloud compression module 1 into a 512*128 matrix and outputs it; and point cloud expansion module 3 restores the 512*128 matrix output by point cloud expansion module 2 into a 2048*3 point cloud capable of describing the overall three-dimensional structure of the brain.
  • in the point cloud expansion module, the fully connected layers adjust both the size and the values of the matrix, while the AGB adjusts only the values; that is, the input and output of the AGB have the same scale. Therefore, if the AGB receives a feature matrix R transmitted by the hierarchical dynamic information pipeline, the fully connected layer can ensure that the input K the AGB receives from the fully connected layer is equal in size to R (equal numbers of rows and columns).
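  • The wiring and matrix sizes of this three-level example can be checked with a small PyTorch sketch (each call below is a stand-in for one module; real compression modules are PointNet++ encoders and real expansion modules are AGB-based):

    import torch
    import torch.nn.functional as F

    def module(inputs, points, dim):
        """Stand-in for one compression/expansion module: fuses its inputs and resizes
        to a (points x dim) matrix so only the wiring and sizes are exercised."""
        x = torch.cat([t.flatten(1) for t in inputs], dim=1)   # fuse inputs
        x = x.view(x.shape[0], 1, -1)
        x = F.interpolate(x, size=points * dim, mode="linear", align_corners=False)
        return x.view(x.shape[0], points, dim)

    first = torch.randn(1, 2048, 3)        # first point cloud (partial structure)

    c1 = module([first], 512, 128)         # compression module 1 -> 512*128
    c2 = module([c1], 256, 256)            # compression module 2 -> 256*256
    c3 = module([c2], 1, 512)              # compression module 3 -> 1*512

    # Hierarchical dynamic information pipelines: c2 skips to expansion module 1,
    # c1 skips to expansion module 2.
    e1 = module([c3, c2], 256, 256)        # expansion module 1
    e2 = module([e1, c1], 512, 128)        # expansion module 2
    e3 = module([e2], 2048, 3)             # expansion module 3 -> second point cloud

    print(e3.shape)                        # torch.Size([1, 2048, 3])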
  • the attention scores inside the point cloud expansion module are calculated by the dynamic information gate modules, so that for compressed information at different resolutions, useless features are merged and the weights of useful features are adjusted, achieving more accurate expansion of the first point cloud.
  • in summary, the point cloud completion model is used to complete the first point cloud to obtain the second point cloud. Since the point cloud completion model can extract compressed information at different resolutions from the first point cloud and expand the point cloud based on that compressed information, a second point cloud that describes the overall three-dimensional structure of the target object can be reconstructed. This solves the problem that the three-dimensional structure of a target object reconstructed from an incomplete image is itself incomplete.
  • the point cloud completion model and the point cloud reconstruction model in this application can be trained independently; an integrated initial completion model can also be constructed for joint training.
  • the integrated initial completion model may include an initial point cloud reconstruction model, a discriminator, and an initial point cloud completion model.
  • adversarial training is carried out on the integrated initial completion model according to a preset loss function and training set, so as to train the initial point cloud reconstruction model into the point cloud reconstruction model and the initial point cloud completion model into the point cloud completion model.
  • the training set includes a plurality of two-dimensional image samples representing partial regions of the object sample, and a partial region point cloud sample and an overall point cloud sample corresponding to each two-dimensional image sample.
  • for example, to train a model that can reconstruct the three-dimensional structure of the brain, the object samples are the brains of multiple different individuals.
  • after the 3D MRI image of each brain is collected by MRI equipment, it is preprocessed by cleaning and denoising and by removing the skull and neck bones; the preprocessed 3D MRI image of the brain is then sliced from different angles, and 2D slice images near the best plane are selected.
  • an extreme environment is simulated to visually pollute each two-dimensional slice image so that it shows only part of the brain, yielding a two-dimensional image sample.
  • the two-dimensional image sample can be expressed as I_(H×W), where H and W represent the length and width of the two-dimensional image sample, respectively.
  • a point cloud sample of the whole brain is obtained according to the three-dimensional MRI image.
  • during training, the two-dimensional image sample is first input into the initial point cloud reconstruction model to obtain a predicted third point cloud; the third point cloud is the point cloud, predicted by the initial point cloud reconstruction model, of the partial region presented in the two-dimensional image sample.
  • illustratively, I_(H×W) is input into the ResNet encoder of the initial point cloud reconstruction model, which converts I_(H×W) into a Gaussian distribution vector with a specific mean μ and variance σ; a 96-dimensional encoded feature vector z ~ N(μ, σ^2) is randomly drawn from the vector and passed to the graph convolutional neural network, which reconstructs the encoded feature vector into the third point cloud.
  • the KL divergence of the ResNet encoder can be calculated by formula (5) (given in the Description below), where L_KL is the KL divergence, X is the total number of Q or P values, Q(x) is the x-th probability distribution obtained by the encoder from the encoded feature vector z, and P(x) is the x-th preset probability distribution.
  • afterwards, the third point cloud and the point cloud sample are input into the discriminator for discrimination, and the third point cloud is input into the initial point cloud completion model to obtain a reconstructed fourth point cloud; the fourth point cloud is the overall point cloud of the object sample predicted by the initial point cloud completion model.
  • the loss functions used during training include loss calculations based on relative entropy, chamfer distance, and earth mover's distance. Illustratively, they include a first loss function for training the initial point cloud reconstruction model, a second loss function for training the discriminator, and a third loss function for training the initial point cloud completion model; the initial point cloud reconstruction model, the discriminator, and the initial point cloud completion model are trained separately.
  • ⁇ 1 and ⁇ 2 are constants;
  • L KL is the KL divergence of the ResNet encoder;
  • Z is the distribution of encoded feature vectors generated by the ResNet encoder;
  • z is the encoded feature vector;
  • G(z) is the third point cloud;
  • D( G(z)) represents the value obtained after the third point cloud is input to the discriminator;
  • E( ⁇ ) represents the expectation;
  • L CD is the chamfer distance between the third point cloud and some area point cloud samples (Chamfer Distance, CD ), the chamfering distance can be expressed as formula (7):
  • Y is the partial-region point cloud sample, i.e., the coordinate matrix of all points in that sample, and y is one point coordinate vector in the matrix Y; Y′ is the coordinate matrix of all points in the third point cloud, and y′ is one point coordinate vector in the matrix Y′.
  • for example, if Y is a v × 3 matrix composed of v point coordinates, then y is the 1 × 3 coordinate vector corresponding to one point in Y.
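  • A common implementation of this symmetric chamfer distance (a PyTorch sketch; whether the two sums are averaged or summed varies by convention):

    import torch

    def chamfer_distance(Y, Yp):
        """Match every point to its nearest neighbour in the other set, both ways."""
        d = torch.cdist(Y, Yp)             # pairwise Euclidean distances
        return d.min(-1).values.mean() + d.min(-2).values.mean()

    Y  = torch.randn(2048, 3)              # partial-region point cloud sample
    Yp = torch.randn(2048, 3)              # third point cloud predicted by the generator
    print(chamfer_distance(Y, Yp))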
  • the second loss function L_D of the discriminator is derived from the earth mover's distance (Earth Mover Distance, EMD) loss function L_EMD and can be expressed as formula (8) (reproduced in the Description below).
  • in formula (8), Ŷ denotes samples drawn along the linear interpolation between the third point cloud and the partial-region point cloud sample; E(·) is the expectation; D(G(z)) is the value obtained after the third point cloud G(z) is input to the discriminator; D(Y) is the value obtained after the partial-region point cloud sample Y is input to the discriminator; R is the distribution of partial-region point cloud samples; λ_gp is a constant; and ∇ is the gradient operator.
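  • Read this way, formula (8) follows the familiar WGAN-GP pattern; a sketch in PyTorch (D is any critic network; all names are illustrative):

    import torch

    def discriminator_loss(D, real_Y, fake_Y, lambda_gp=10.0):
        """Critic gap plus gradient penalty on points interpolated linearly
        between real and generated point clouds, as in formula (8)."""
        eps = torch.rand(real_Y.size(0), 1, 1, device=real_Y.device)
        interp = (eps * real_Y + (1 - eps) * fake_Y).requires_grad_(True)
        grad = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)[0]
        penalty = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
        return D(fake_Y).mean() - D(real_Y).mean() + lambda_gp * penalty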
  • the third loss function of the initial point cloud completion model is given by formula (9) (reproduced in the Description below).
  • the point cloud reconstruction model and point cloud completion model obtained by training the integrated initial completion model can be used together as a complete integrated completion model (i.e., the model shown in Fig. 2) to reconstruct an overall point cloud from an incomplete two-dimensional image. They can also be used independently.
  • during training, the integrated initial completion model computes its loss by fusing relative entropy, chamfer distance, and earth mover's distance, which guarantees a global Nash equilibrium in the space and improves the global perceptual field of the trained integrated completion model, enhancing the model's generalization, computability, and accuracy.
  • the trained integrated completion model alternately uses graph convolution modules and branch modules to expand the number of points and adjust their position coordinates, so that the first point cloud obtained by the graph convolutional neural network is more accurate.
  • for this first point cloud, hierarchically designed point cloud compression modules extract compressed information at different resolutions, and hierarchical dynamic information pipelines skip-pass the compressed information to the point cloud expansion modules of the corresponding levels, so that the point cloud expansion modules, by computing internal attention, merge useless features and adjust the weights of useful features, thereby achieving point cloud expansion and reconstructing a point cloud that describes the overall three-dimensional structure of the target object.
  • below, some known high-efficiency networks are substituted for parts of the integrated completion model provided by this application and trained; the resulting models are compared with the experimental data of the integrated completion model provided by this application to further illustrate its effect.
  • No-D denotes the model obtained after the discriminator is removed during training, and PointOutNet denotes the model obtained after replacing the point cloud reconstruction model in the integrated completion model with the PointOutNet structure.
  • FC denotes the model obtained after replacing the point cloud completion model in the integrated completion model with a fully connected network, FN the model after replacing it with the FoldingNet network, and TN the model after replacing it with the TopNet network.
  • when several regions of the same sample are randomly sampled (for example, region 1, region 2, region 3) and reconstructed using the integrated completion model and the other three models in Table 2, the reconstruction error between each region's reconstruction result and the entire sample gives the point-to-point error data. The point-to-point error of the integrated completion model is about 2, that of the FC model about 5, that of the FN model about 4.5, and that of the TN model about 3.
  • the execution subject of the above three-dimensional reconstruction method using the trained point cloud reconstruction model and point cloud completion model may be the same terminal device as the one that trains the point cloud reconstruction model and point cloud completion model, or a different terminal device.
  • the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • Fig. 5 shows a structural diagram of an embodiment of a 3D reconstruction device provided in an embodiment of the present application.
  • the three-dimensional reconstruction device may include:
  • the acquiring unit 501 is configured to acquire a two-dimensional image of a target object, where the two-dimensional image presents a partial area of the target object.
  • a conversion unit 502 configured to convert the two-dimensional image into a first point cloud, where the first point cloud is used to describe the three-dimensional structure of the partial region.
  • the completion unit 503 is configured to process the first point cloud through the trained point cloud completion model to obtain a second point cloud, and the second point cloud is used to describe the overall three-dimensional structure of the target object.
  • the point cloud completion model includes N levels of point cloud compression modules and N levels of point cloud expansion modules; the N levels of point cloud compression modules are used to compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules are used to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.
  • the input of the first-layer point cloud compression module is the first point cloud; for two adjacent layers of point cloud compression modules, the output of the previous layer is the input of the next layer; the input of the first-layer point cloud expansion module is the output of the (N-1)-th and N-th layer point cloud compression modules; the input of the m-th layer point cloud expansion module is the output of the (N-m)-th layer point cloud compression module and of the (m-1)-th layer point cloud expansion module; the input of the N-th layer point cloud expansion module is the output of the (N-1)-th layer point cloud expansion module; and the output of the N-th layer point cloud expansion module is the second point cloud, 2 ≤ m < N, m being an integer.
  • the point cloud expansion module includes a plurality of dynamic information gate modules, with multiple fully connected layers arranged between them; a dynamic information gate module performs attention-mechanism computation on its input information.
  • the point cloud compression module is a PointNet++ network structure.
  • the conversion unit 502 converting the two-dimensional image into the first point cloud includes: converting the two-dimensional image into the first point cloud using a trained point cloud reconstruction model.
  • the point cloud reconstruction model includes a sequentially connected ResNet encoder and graph convolutional neural network; the graph convolutional neural network includes multiple alternately arranged groups of graph convolution modules and branch modules; the graph convolution modules are used to adjust the position coordinates of points, and the branch modules are used to expand the number of points.
  • the point cloud completion model and the point cloud reconstruction model are trained as follows:
  • construct an integrated initial completion model, which includes an initial point cloud reconstruction model, a discriminator, and an initial point cloud completion model;
  • perform adversarial training on the integrated initial completion model according to a preset loss function and training set, so as to train the initial point cloud reconstruction model into the point cloud reconstruction model and the initial point cloud completion model into the point cloud completion model;
  • wherein the training set includes a plurality of two-dimensional image samples each presenting a partial region of an object sample, together with the partial-region point cloud sample and overall point cloud sample corresponding to each two-dimensional image sample, and the loss function includes loss calculations based on relative entropy, chamfer distance, and earth mover's distance.
  • FIG. 6 shows a schematic block diagram of a terminal device provided by an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
  • the terminal device 6 of this embodiment includes: a processor 60 , a memory 61 , and a computer program 62 stored in the memory 61 and operable on the processor 60 .
  • when the processor 60 executes the computer program 62, the steps in the above embodiments of the three-dimensional reconstruction method are implemented, for example, steps S101 to S103 shown in FIG. 1.
  • alternatively, when the processor 60 executes the computer program 62, it realizes the functions of the modules/units in the above device embodiments, such as the functions of the acquisition unit 501, conversion unit 502, and completion unit 503 shown in FIG. 5.
  • the computer program 62 can be divided into one or more modules/units, and the one or more modules/units are stored in the memory 61 and executed by the processor 60 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer program 62 in the terminal device 6 .
  • FIG. 6 is only an example of the terminal device 6 and does not constitute a limitation on the terminal device 6; it may include more or fewer components than shown, or combine certain components, or have different components.
  • the terminal device 6 may also include an input and output device, a network access device, a bus, and the like.
  • the processor 60 can be a central processing unit (Central Processing Unit, CPU), and can also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or memory of the terminal device 6.
  • the memory 61 can also be an external storage device of the terminal device 6, such as a plug-in hard disk equipped on the terminal device 6, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash card (Flash Card), etc. Further, the memory 61 may also include both an internal storage unit of the terminal device 6 and an external storage device.
  • the memory 61 is used to store the computer program and other programs and data required by the terminal device 6 .
  • the memory 61 can also be used to temporarily store data that has been output or will be output.
  • the embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • an embodiment of the present application provides a computer program product; when the computer program product runs on a terminal device, the terminal device can implement the steps in the foregoing method embodiments.
  • the term "if" may be construed, depending on the context, as "when", "once", "in response to determining", or "in response to detecting".
  • similarly, the phrase "if determined" or "if [the described condition or event] is detected" may be construed, depending on the context, to mean "once determined", "in response to the determination", "once [the described condition or event] is detected", or "in response to detection of [the described condition or event]".
  • references to "one embodiment" or "some embodiments" and the like in the specification of the present application mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • the phrases "in one embodiment", "in some embodiments", "in other embodiments", and the like appearing in various places in this specification do not necessarily all refer to the same embodiment; rather, they mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • the disclosed device/network device and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A three-dimensional reconstruction method and apparatus for brain structures in extreme environments, and a readable storage medium, relating to the field of image processing and capable of reconstructing a complete three-dimensional structure from an incomplete image. The method comprises: acquiring a two-dimensional image of a target object, the two-dimensional image presenting a partial region of the target object (S101); converting the two-dimensional image into a first point cloud, the first point cloud being used to describe the three-dimensional structure of the partial region (S102); and processing the first point cloud through a trained point cloud completion model to obtain a second point cloud, the second point cloud being used to describe the overall three-dimensional structure of the target object (S103); wherein the point cloud completion model comprises N levels of point cloud compression modules and N levels of point cloud expansion modules, the N levels of point cloud compression modules are used to compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules are used to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.

Description

Three-dimensional reconstruction method and apparatus for brain structures in extreme environments, and readable storage medium

Technical Field

The present application belongs to the field of image processing, and in particular relates to a three-dimensional reconstruction method and apparatus for brain structures in extreme environments, and a readable storage medium.

Background

With the continuous development of medical technology, minimally invasive surgery and robot-guided interventional techniques have gradually been adopted in brain surgery; their smaller surgical wounds and shorter recovery times give patients a better treatment experience. However, because the lesion cannot be observed directly during surgery, the doctor cannot operate directly on the surgical target.

At present, images of the brain can be collected by an optical sensing device and reconstructed to obtain the three-dimensional structure of the brain, assisting the doctor in observing the lesion and completing the operation. However, the light environment of the optical sensing device and the visual pollution that may occur during surgery (for example, local bleeding) mean that the image collected by the optical sensing device may be an incomplete image (that is, a two-dimensional image presenting a partial region of the brain rather than the complete brain), so the reconstructed three-dimensional structure is also incomplete, which affects the doctor's observation and judgment.

Technical Problem

Embodiments of the present application provide a three-dimensional reconstruction method, apparatus and device for brain structures in extreme environments, and a readable storage medium, which can solve the problem of how to reconstruct a complete three-dimensional structure from an incomplete image.

Technical Solution

In a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method, comprising: acquiring a two-dimensional image of a target object, the two-dimensional image presenting a partial region of the target object; converting the two-dimensional image into a first point cloud, the first point cloud being used to describe the three-dimensional structure of the partial region; and processing the first point cloud through a trained point cloud completion model to obtain a second point cloud, the second point cloud being used to describe the overall three-dimensional structure of the target object; wherein the point cloud completion model comprises N levels of point cloud compression modules and N levels of point cloud expansion modules, the N levels of point cloud compression modules are used to compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules are used to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.

In a possible implementation, the input of the first-layer point cloud compression module is the first point cloud; for two adjacent layers of point cloud compression modules, the output of the previous layer is the input of the next layer; the input of the first-layer point cloud expansion module is the output of the (N-1)-th and N-th layer point cloud compression modules; the input of the m-th layer point cloud expansion module is the output of the (N-m)-th layer point cloud compression module and the output of the (m-1)-th layer point cloud expansion module; the input of the N-th layer point cloud expansion module is the output of the (N-1)-th layer point cloud expansion module; and the output of the N-th layer point cloud expansion module is the second point cloud, 2 ≤ m < N, m being an integer.

In a possible implementation, the point cloud expansion module comprises multiple dynamic information gate modules, with multiple fully connected layers arranged between the dynamic information gate modules; a dynamic information gate module performs attention-mechanism computation on its input information.

In a possible implementation, the point cloud compression module is a PointNet++ network structure.

In a possible implementation, converting the two-dimensional image into the first point cloud comprises: converting the two-dimensional image into the first point cloud using a trained point cloud reconstruction model.

In a possible implementation, the point cloud reconstruction model comprises a sequentially connected ResNet encoder and graph convolutional neural network; the graph convolutional neural network comprises multiple alternately arranged groups of graph convolution modules and branch modules; the graph convolution modules are used to adjust the position coordinates of points, and the branch modules are used to expand the number of points.

In a possible implementation, the point cloud completion model and the point cloud reconstruction model are trained as follows: construct an integrated initial completion model comprising an initial point cloud reconstruction model, a discriminator and an initial point cloud completion model; perform adversarial training on the integrated initial model according to a preset loss function and a training set, so as to train the initial point cloud reconstruction model into the point cloud reconstruction model and the initial point cloud completion model into the point cloud completion model; wherein the training set comprises multiple two-dimensional image samples, each presenting a partial region of an object sample, together with the partial-region point cloud sample and overall point cloud sample corresponding to each two-dimensional image sample; and the loss function comprises loss calculations based on relative entropy, chamfer distance and earth mover's distance.
In a second aspect, an embodiment of the present application provides a three-dimensional reconstruction apparatus, comprising:

an acquisition unit for acquiring a two-dimensional image of a target object, the two-dimensional image presenting a partial region of the target object;

a conversion unit for converting the two-dimensional image into a first point cloud, the first point cloud being used to describe the three-dimensional structure of the partial region; and

a completion unit for processing the first point cloud through a trained point cloud completion model to obtain a second point cloud, the second point cloud being used to describe the overall three-dimensional structure of the target object; wherein the point cloud completion model comprises N levels of point cloud compression modules and N levels of point cloud expansion modules, the N levels of point cloud compression modules are used to compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules are used to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.

In a third aspect, an embodiment of the present application provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the method described in the first aspect or any optional manner of the first aspect is implemented.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the method described in the first aspect or any optional manner of the first aspect is implemented.

In a fifth aspect, an embodiment of the present application provides a computer program product; when the computer program product runs on a terminal device, the terminal device executes the method described in the first aspect or any optional manner of the first aspect.

It can be understood that, for the beneficial effects of the second to fifth aspects, reference may be made to the relevant description of the first aspect above, which is not repeated here.

Beneficial Effects

With the three-dimensional reconstruction method, apparatus, device and readable storage medium for brain structures in extreme environments provided by the present application, after the incomplete two-dimensional image is converted into the first point cloud, the point cloud completion model is used to complete the first point cloud to obtain the second point cloud. Since the point cloud completion model can extract compressed information at different resolutions from the first point cloud and expand the point cloud based on that compressed information, a second point cloud describing the overall three-dimensional structure of the target object can be reconstructed. Therefore, the method provided by the present application can reconstruct the complete three-dimensional structure of a target object from an incomplete image of it.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of an embodiment of a three-dimensional reconstruction method provided by an embodiment of the present application;

Fig. 2 is a schematic diagram of the network structure of an integrated completion model provided by the present application;

Fig. 3 is a schematic diagram of the network structure of a point cloud expansion module provided by the present application;

Fig. 4 is a schematic diagram of the network structure of an integrated initial completion model provided by the present application;

Fig. 5 is a schematic structural diagram of a three-dimensional reconstruction apparatus provided by the present application;

Fig. 6 is a schematic structural diagram of a terminal device provided by the present application.
Embodiments of the Present Invention

A point cloud is a data structure that describes the shape of a specific object in three-dimensional space, expressed as a collection of scattered points. For example, given the three-dimensional structure of a brain, its point cloud can be written as a matrix Y of size y × 3, where y is the number of points and 3 is the number of coordinates per point. In particular, arbitrarily exchanging two rows of the matrix merely exchanges the storage locations of two points in the point cloud; by the unorderedness of a set, all properties of the point cloud remain unchanged. Point clouds have the advantages of low space complexity, a simple storage form and high computational performance. Compared with the flat space of two-dimensional images, point clouds contain more spatial structure information and can provide doctors with more visual information, thereby helping doctors carry out better diagnosis and treatment. Therefore, reconstructing two-dimensional images into accurate and clear point clouds can provide doctors with additional visual and diagnostic information and assist real-time decision-making.

For example, when observing and analyzing the brain, two-dimensional images of the brain (for example, magnetic resonance imaging (MRI) images, computed tomography (CT) images, and so on) can be collected by an optical sensing device and then reconstructed to obtain a point cloud of the brain, helping the doctor observe the three-dimensional structure of the brain and locate the lesion. However, in some extreme environments, such as an abnormal light environment or local bleeding of the brain during surgery, the two-dimensional image collected by the optical sensing device may be an incomplete image presenting two-dimensional information of only a partial region of the brain rather than complete two-dimensional information. The reconstructed three-dimensional structure of the brain is then also incomplete, which affects the doctor's observation and judgment.

To address this problem of incomplete three-dimensional structures reconstructed from incomplete images, the present application provides a three-dimensional reconstruction method: after the incomplete two-dimensional image is converted into a first point cloud, a point cloud completion model is used to complete the first point cloud to obtain a second point cloud. Since the point cloud completion model can extract compressed information at different resolutions from the first point cloud and expand the point cloud based on that compressed information, a second point cloud describing the overall three-dimensional structure of the target object can be reconstructed, thereby reconstructing the complete three-dimensional structure of the target object from its incomplete image.

The three-dimensional reconstruction method provided by the present application is illustrated below with reference to specific embodiments.

Referring to Fig. 1, a flowchart of an embodiment of a three-dimensional reconstruction method provided by an embodiment of the present application: the execution subject of the method may be an image data acquisition device, such as a positron emission tomography (PET) device, a CT device, an MRI device, a diffusion tensor imaging (DTI) device, a functional magnetic resonance imaging (fMRI) device, a camera, or another terminal device. It may also be a control device, computer, robot, mobile terminal or other terminal device that acquires two-dimensional images from an image data acquisition device. As shown in Fig. 1, the method includes:
S101: Acquire a two-dimensional image of the target object, the two-dimensional image presenting a partial region of the target object.

The target object may be a human or animal body, or an organ of a human or animal body, such as the brain, heart or lungs; it may also be another living or non-living body. The two-dimensional image may be a PET image, an MRI image, a CT image, a DTI image, an fMRI image, or an image taken by a camera.

In the examples of this application, the two-dimensional image presents a partial region of the target object; that is, due to an extreme environment or other special reasons, the captured two-dimensional image contains only the two-dimensional structural information of that partial region.

S102: Convert the two-dimensional image into a first point cloud, the first point cloud being used to describe the three-dimensional structure of the partial region.

For example, a neural network model can be used to convert the two-dimensional image into a point cloud, or a traditional depth-camera-based algorithm can be used.

In one example, the neural network model may be the point cloud reconstruction model shown in Fig. 2, including a ResNet encoder and a graph convolutional neural network.

The ResNet encoder is used to quantize the two-dimensional image into a feature vector with mean μ and variance σ that obeys a Gaussian distribution; a 96-dimensional encoded feature vector z is then randomly drawn and passed to the graph convolutional neural network. The encoded feature vector serves as the initial point cloud input to the graph convolutional neural network, with a point count of 1 and a coordinate dimension of 96.

The graph convolutional neural network includes multiple alternately arranged branch modules and graph convolution modules. A branch module maps one point into multiple points, so multiple branch modules can gradually expand 1 initial point to the target number of points. The graph convolution modules adjust the position coordinates of each point, raising or reducing the coordinate dimension of each input point so as to gradually reduce it from 96 dimensions to 3 dimensions. Thus, through alternately arranged graph convolution modules and branch modules, the graph convolutional neural network finally generates a first point cloud with a specific number of points, each with 3-dimensional position coordinates.
The branch module obeys formula (1). [Formula (1) appears in the original only as an image (PCTCN2021119610-appb-000001); per the accompanying symbol definitions, it maps p_i^l, the i-th point in layer l of the graph convolutional neural network, to the corresponding copied points p_i^(l+1), p_(i+1)^(l+1), ..., p_(i+n)^(l+1) in layer l+1.]
That is, in this embodiment, the branch module can copy the coordinates of each point in the upper layer into n points. If the upper layer has a points (i ∈ a) and the coordinates of each point are copied into n, the branch module of this layer can expand the number of points to a × n and pass the a × n point coordinates to the next layer. If the graph convolutional neural network includes b branch modules (l ∈ b, b ≥ 1, b a positive integer) and every branch module has the same expansion factor n, then after the ResNet encoder inputs one initial point into the graph convolutional neural network, each branch module copies the coordinates of each point into n, and the predicted first point cloud finally generated by the graph convolutional neural network contains n^b points.

Of course, the expansion factor of each branch module can also differ. For example, if the expansion factor of the first-layer branch module is 5, the single initial point input by the ResNet encoder can be expanded into 5 points; if the expansion factor of the second-layer branch module is 10, the second layer, after receiving the 5 points, can expand them into 50 points.
The graph convolution module obeys formula (2). [Formula (2) appears in the original only as an image (PCTCN2021119610-appb-000006); per the accompanying symbol definitions: K^l denotes the K perceptrons in layer l; F^l is a fully connected layer expressing the mapping between layer-l nodes and layer-(l+1) nodes; A(p_i^l) denotes the set of all nodes of layers 1 to l-1 (i.e., the ancestor nodes) corresponding to the i-th node p_i^l in layer l; S is a sparse matrix; u_j denotes the feature distribution from each ancestor node of the layer-l nodes to the layer-(l+1) nodes, with q_j the j-th ancestor node; b^l is a bias parameter; and σ(·) denotes the activation function.]
In this example, the ResNet encoder effectively extracts the encoded feature information of the two-dimensional image, and this encoded feature information guides the graph convolutional neural network to construct the first point cloud accurately, so that a two-dimensional image containing limited information can be reconstructed into a first point cloud that is richer and more accurate.

S103: Process the first point cloud through the trained point cloud completion model to obtain a second point cloud, the second point cloud being used to describe the overall three-dimensional structure of the target object.

The point cloud completion model provided by the present application includes N levels of point cloud compression modules and N levels of point cloud expansion modules. The N levels of point cloud compression modules compress the first point cloud to obtain compressed information at N different resolutions, and the N levels of point cloud expansion modules reconstruct the compressed information at the N different resolutions to obtain a second point cloud that can describe the overall three-dimensional structure of the target object, N ≥ 2, N being an integer.

In one example, the point cloud compression module may be a PointNet++ network structure. PointNet++ is an encoder-decoder network structure with multi-level feature extraction. The encoder obtains global features at different scales through multi-level downsampling, and the decoder upsamples the point-level features (i.e., the compressed information) at the corresponding resolution.

The PointNet++ network structure can fully capture local features, generalizes to complex scenes, and fully retains detail, so that the resulting compressed information preserves more structural information, which benefits the subsequent point cloud expansion modules during point cloud reconstruction.

In one example, the (N-m)-th layer point cloud compression module and the m-th layer point cloud expansion module are connected through a hierarchical dynamic information pipeline, and the (N-1)-th layer point cloud compression module and the first-layer point cloud expansion module are connected through a hierarchical dynamic information pipeline, where 2 ≤ m < N, m being an integer. In this way, the output of the point cloud compression modules of layers 1 to N-1 can be skip-passed to the point cloud expansion module of the corresponding level, so that it has more prior and feature information during point cloud expansion.

That is, in the point cloud completion model, the input of the first-layer point cloud compression module is the first point cloud; for two adjacent layers of point cloud compression modules, the output of the previous layer is the input of the next layer; the input of the first-layer point cloud expansion module is the output of the (N-1)-th and N-th layer point cloud compression modules; the input of the m-th layer point cloud expansion module is the output of the (N-m)-th layer point cloud compression module and the output of the (m-1)-th layer point cloud expansion module; the input of the N-th layer point cloud expansion module is the output of the (N-1)-th layer point cloud expansion module; and the output of the N-th layer point cloud expansion module is the second point cloud.

For example, with N = 3, m can take only the value 2, and the network structure of the corresponding point cloud completion model can be as shown in Fig. 2. The 3 layers of point cloud compression modules are labeled point cloud compression module 1, point cloud compression module 2 and point cloud compression module 3, and the 3 layers of point cloud expansion modules are labeled point cloud expansion module 1, point cloud expansion module 2 and point cloud expansion module 3.

The input of point cloud compression module 1 is the first point cloud, the input of point cloud compression module 2 is the output of point cloud compression module 1, and the input of point cloud compression module 3 is the output of point cloud compression module 2. Point cloud compression module 1 and point cloud expansion module 2 are connected through a hierarchical dynamic information pipeline, as are point cloud compression module 2 and point cloud expansion module 1. The input of point cloud expansion module 1 is the output of point cloud compression module 3 and point cloud compression module 2; the input of point cloud expansion module 2 is the output of point cloud compression module 1 and point cloud expansion module 1; and the input of point cloud expansion module 3 is the output of point cloud expansion module 2.
In one example, the above point cloud expansion module may include multiple dynamic information gate modules with multiple fully connected layers arranged between them; a dynamic information gate module performs attention-mechanism computation on its input information.

Illustratively, taking 3 dynamic information gate modules as an example, the network structure of the point cloud expansion module may be as shown in Fig. 3 (where AGB denotes the dynamic information gate module). The input of a dynamic information gate module consists of two attention sets, K and R. K and R may be the outputs of two point cloud compression modules, the outputs of a point cloud compression module and a point cloud expansion module, or the output of the same point cloud expansion module or the same fully connected layer (in which case K and R are identical).

For example, the K and R of the first dynamic information gate module in point cloud expansion module 1 are the outputs of point cloud compression module 2 and point cloud compression module 3, respectively; the K and R of the first dynamic information gate module in point cloud expansion module 2 are the outputs of point cloud compression module 1 and point cloud expansion module 1, respectively; the K and R of the first dynamic information gate module in point cloud expansion module 3 are both the output of point cloud expansion module 2; and the K and R of the second and third dynamic information gate modules in each point cloud expansion module are the output of the fully connected layer connected to that dynamic information gate module.
As shown in Fig. 3, the dynamic information gate module includes 4 fully connected layers, attention gating and a softmax module. The four fully connected layers are F1, F2, F3 and F4. The attention score between any pair of elements in the attention sets K and R can then be calculated by formula (3). [Formula (3) appears in the original only as an image (PCTCN2021119610-appb-000013); it computes, via the fully connected layers and a softmax, the attention score a_i,j between k_i, the i-th element of K, and r_j, the j-th element of R, where T denotes the matrix transpose.]

Based on the attention scores, each element in K is then updated by formula (4). [Formula (4) appears in the original only as an image (PCTCN2021119610-appb-000014); it updates each element of K with an attention-weighted combination of R using the scores a_i,j.] Here K may specifically be the output of the layer preceding the dynamic information gate module, and R may be the output of that preceding layer or compressed information at a different resolution output by a hierarchical dynamic information pipeline. The updated K is the output of the dynamic information gate module.
Illustratively, taking the three-layer point cloud completion model of Figs. 2 and 3 as an example, assume the first point cloud is represented as a 2048*3 matrix describing the three-dimensional structure of a partial region of the target object (for example, the brain shown in Fig. 2). After the 2048*3 matrix is input to the point cloud completion model, point cloud compression module 1 compresses it into a 512*128 matrix and outputs it, point cloud compression module 2 compresses the output of point cloud compression module 1 into a 256*256 matrix and outputs it, and point cloud compression module 3 compresses the output of point cloud compression module 2 into a 1*512 feature vector and outputs it.

Next, point cloud expansion module 1 restores the 1*512 feature vector output by point cloud compression module 3 together with the 256*256 matrix output by point cloud compression module 2 into a 256*256 matrix and outputs it; point cloud expansion module 2 restores the 256*256 matrix output by point cloud expansion module 1 together with the 512*128 matrix output by point cloud compression module 1 into a 512*128 matrix and outputs it; and point cloud expansion module 3 restores the 512*128 matrix output by point cloud expansion module 2 into a 2048*3 point cloud capable of describing the overall three-dimensional structure of the brain.

In the point cloud expansion module, the fully connected layers adjust both the size and the values of the matrix, while the AGB adjusts only the values; that is, the input and output of the AGB have the same scale. Therefore, if the AGB receives a feature matrix R transmitted by the hierarchical dynamic information pipeline, the fully connected layer can ensure that the input K the AGB receives from the fully connected layer is equal in size to R (equal numbers of rows and columns).

In this embodiment of the application, the dynamic information gate modules calculate the attention scores inside the point cloud expansion module, so that for compressed information at different resolutions, useless features are merged and the weights of useful features are adjusted, achieving more accurate expansion of the first point cloud.

In summary, with the three-dimensional reconstruction method provided by the present application, after the incomplete two-dimensional image is converted into the first point cloud, the point cloud completion model completes the first point cloud to obtain the second point cloud. Since the point cloud completion model can extract compressed information at different resolutions from the first point cloud and expand the point cloud based on that compressed information, a second point cloud describing the overall three-dimensional structure of the target object can be reconstructed. This solves the problem that the three-dimensional structure of a target object reconstructed from its incomplete image is itself incomplete.
The training methods of the point cloud completion model and point cloud reconstruction model mentioned above are illustrated below.

The point cloud completion model and the point cloud reconstruction model in this application can each be trained independently; an integrated initial completion model can also be constructed for joint training.

Illustratively, as shown in Fig. 4, the integrated initial completion model may include an initial point cloud reconstruction model, a discriminator and an initial point cloud completion model. Adversarial training is performed on the integrated initial completion model according to a preset loss function and training set, so as to train the initial point cloud reconstruction model into the point cloud reconstruction model and the initial point cloud completion model into the point cloud completion model.

The training set includes multiple two-dimensional image samples, each presenting a partial region of an object sample, together with the partial-region point cloud sample and overall point cloud sample corresponding to each two-dimensional image sample.

For example, to train a model capable of reconstructing the three-dimensional structure of the brain, the object samples are the brains of multiple different individuals. After the 3D MRI image of each brain is collected by MRI equipment, it is preprocessed by cleaning and denoising and by removing the skull and neck bones; the preprocessed 3D MRI image of the brain is then sliced from different angles, and two-dimensional slice images near the best plane are selected. An extreme environment is simulated to visually pollute each two-dimensional slice image so that it presents only part of the brain, yielding a two-dimensional image sample. The two-dimensional image sample can be expressed as I_(H×W), where H and W denote the length and width of the sample, respectively.

Correspondingly, a point cloud sample of the whole brain is obtained from the 3D MRI image.

During training, the two-dimensional image sample is first input into the initial point cloud reconstruction model to obtain a predicted third point cloud. The third point cloud is the point cloud, predicted by the initial point cloud reconstruction model, of the partial region presented in the two-dimensional image sample.

Illustratively, I_(H×W) is input into the ResNet encoder of the initial point cloud reconstruction model, which converts I_(H×W) into a Gaussian distribution vector with a specific mean μ and variance σ; a 96-dimensional encoded feature vector z ~ N(μ, σ^2) is randomly drawn from the vector and passed to the graph convolutional neural network, which reconstructs the encoded feature vector into the third point cloud.
The KL divergence of the ResNet encoder can be calculated by formula (5) (the original renders the formula as an image; the standard discrete form matching the definitions below is):

    L_KL = Σ_(x=1..X) Q(x) · log(Q(x) / P(x))      (5)

where L_KL is the KL divergence; X is the total number of Q or P values; Q(x) is the x-th probability distribution obtained by the encoder from the encoded feature vector z; and P(x) is the x-th preset probability distribution.
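As a quick numerical check of formula (5) (an illustrative NumPy snippet, not part of the patent):

    import numpy as np

    Q = np.array([0.2, 0.5, 0.3])   # distributions obtained from the encoded vector z
    P = np.array([1/3, 1/3, 1/3])   # preset target distributions
    L_KL = float(np.sum(Q * np.log(Q / P)))
    print(L_KL)                     # ~0.069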
Afterwards, the third point cloud and the point cloud sample are input into the discriminator for discrimination, and the third point cloud is input into the initial point cloud completion model to obtain a reconstructed fourth point cloud. The fourth point cloud is the overall point cloud of the object sample as predicted by the initial point cloud completion model.
The loss functions used during training include loss computations based on relative entropy, the Chamfer distance and the Earth Mover's Distance. Illustratively, they include a first loss function for training the initial point cloud reconstruction model, a second loss function for training the discriminator, and a third loss function for training the initial point cloud completion model. The initial point cloud reconstruction model, the discriminator and the initial point cloud completion model are trained separately.
The first loss function L_{E,G} is given by formula (6):
L_{E,G} = λ_1 L_KL + λ_2 L_CD + E_{z~Z}[D(G(z))]      (6)
where λ_1 and λ_2 are constants; L_KL is the KL divergence of the ResNet encoder; Z is the distribution of the encoded feature vectors generated by the ResNet encoder; z denotes an encoded feature vector; G(z) is the third point cloud; D(G(z)) is the value obtained by feeding the third point cloud into the discriminator; E(·) denotes expectation; and L_CD is the Chamfer distance (CD) between the third point cloud and the partial-region point cloud sample, which can be expressed as formula (7):
L_CD = (1/|Y|) Σ_{y∈Y} min_{y′∈Y′} ‖y − y′‖_2 + (1/|Y′|) Σ_{y′∈Y′} min_{y∈Y} ‖y′ − y‖_2      (7)
In formula (7), Y is the partial-region point cloud sample, i.e., the coordinate matrix of all points in that sample, and y is one point coordinate vector in matrix Y; Y′ is the coordinate matrix of all points in the third point cloud, and y′ is one point coordinate vector in matrix Y′. Illustratively, if Y is a v×3 matrix composed of v point coordinates, then y is the 1×3 coordinate vector of one point in Y.
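A direct PyTorch realization of this Chamfer distance (using mean aggregation over each cloud; the original may use sums) could look like:

```python
import torch

def chamfer_distance(Y: torch.Tensor, Yp: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between Y (B, N, 3) and Yp (B, M, 3), per Eq. (7)."""
    d = torch.cdist(Y, Yp)                      # (B, N, M) pairwise Euclidean distances
    term_y = d.min(dim=2).values.mean(dim=1)    # each y to its nearest y'
    term_yp = d.min(dim=1).values.mean(dim=1)   # each y' to its nearest y
    return (term_y + term_yp).mean()
```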
The second loss function L_D of the discriminator is derived from the Earth Mover's Distance (EMD) loss function L_EMD and can be expressed as formula (8), shown here in the standard gradient-penalty form consistent with the definitions below:
L_D = E_{z~Z}[D(G(z))] − E_{Y~R}[D(Y)] + λ_gp E_{Ŷ}[(‖∇_Ŷ D(Ŷ)‖_2 − 1)^2]      (8)

In formula (8), Ŷ denotes a sample drawn on the line segment between the third point cloud and the partial-region point cloud sample, which can be expressed as

Ŷ = εY + (1 − ε)G(z),  ε ~ U[0, 1]

E(·) denotes expectation; D(G(z)) is the value obtained by feeding the third point cloud G(z) into the discriminator; D(Y) is the value obtained by feeding the partial-region point cloud sample Y into the discriminator; R is the distribution of the partial-region point cloud samples; λ_gp is a constant; and ∇ is the gradient operator.
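The reconstructed formula (8) is the familiar gradient-penalty objective of Wasserstein-style adversarial training; a sketch of its computation, assuming a discriminator D that maps a (B, N, 3) cloud to a (B,) score, is:

```python
import torch

def discriminator_loss(D, real, fake, lambda_gp: float = 10.0):
    """WGAN-GP style loss per the Eq. (8) reconstruction; real/fake: (B, N, 3)."""
    eps = torch.rand(real.size(0), 1, 1, device=real.device)
    inter = (eps * real + (1 - eps) * fake).requires_grad_(True)   # \hat{Y}
    grads = torch.autograd.grad(D(inter).sum(), inter, create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()     # gradient penalty
    return D(fake).mean() - D(real).mean() + lambda_gp * gp
```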
The third loss function of the initial point cloud completion model can be given by formula (9):
L_P = λ_3 L_EMD + λ_4 L_CD      (9)
where λ_3 and λ_4 are constants, and L_EMD can be expressed as formula (10):
L_EMD = min_{φ: Y→Y′} (1/|Y|) Σ_{y∈Y} ‖y − φ(y)‖_2      (10)

In formula (10), φ: Y → Y′ is a bijective function.
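For equally sized clouds, the minimum over bijections in formula (10) can be computed exactly with the Hungarian algorithm; the O(n³) sketch below is practical only for small clouds, and large clouds typically rely on approximate solvers:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def earth_mover_distance(Y: np.ndarray, Yp: np.ndarray) -> float:
    """Exact EMD between two (N, 3) clouds via optimal one-to-one matching."""
    cost = np.linalg.norm(Y[:, None, :] - Yp[None, :, :], axis=-1)  # (N, N) distances
    rows, cols = linear_sum_assignment(cost)                        # optimal bijection phi
    return float(cost[rows, cols].mean())
```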
It can be understood that the point cloud reconstruction model and the point cloud completion model obtained by training the integrated completion initial model can be used together as a complete integrated completion model (i.e., the model shown in Fig. 2) that reconstructs an overall point cloud from an incomplete two-dimensional image, or they can be used separately and independently.
During training, the integrated completion initial model computes its losses by fusing relative entropy, the Chamfer distance and the Earth Mover's Distance, which guarantees a global Nash equilibrium in the space, broadens the global perceptual field of the trained integrated completion model, and strengthens the model's generalization, computational performance and accuracy.
In the trained integrated completion model, graph convolution modules and branch modules are used alternately to expand the number of points and adjust their position coordinates, making the first point cloud produced by the graph convolutional neural network more accurate. For this first point cloud, hierarchically designed point cloud compression modules extract compressed information at different resolutions, and hierarchical dynamic information pipelines skip-forward the compressed information to the point cloud expansion modules of the corresponding levels, so that, based on the compressed information at different resolutions, the expansion modules merge useless features and reweight useful features by computing internal attention, thereby performing point cloud expansion and reconstructing a point cloud capable of describing the overall three-dimensional structure of the target object.
Below, some known high-performance networks are substituted for parts of the integrated completion model provided by the present application and trained, and the experimental data of the resulting models are compared with those of the integrated completion model to further illustrate its effect.
For substitution of the point cloud reconstruction model part, with the Chamfer distance (CD) as the evaluation metric, the experimental comparison is shown in Table 1:
Table 1

Model | Integrated completion model | No-D | PointOutNet
CD (×10⁻¹) | 4.461 | 5.309 | 5.492
Here No-D denotes the model trained after removing the discriminator from the training process, and PointOutNet denotes the model obtained by replacing the point cloud reconstruction model in the integrated completion model with a PointOutNet structure.
For substitution of the point cloud completion model part, again with CD as the evaluation metric, the experimental comparison is shown in Table 2:
Table 2

Model | Integrated completion model | FC | FN | TN
CD (×10⁻¹) | 4.461 | 10.572 | 9.863 | 6.225
Here FC denotes the model obtained by replacing the point cloud completion model in the integrated completion model with a fully connected network, FN denotes the model obtained by replacing it with a FoldingNet network, and TN denotes the model obtained by replacing it with a TopNet network.
Furthermore, several regions (e.g., region 1, region 2 and region 3) were randomly sampled from the same sample and reconstructed with the integrated completion model and the other three models of Table 2; computing the reconstruction error between each region's result and the whole sample yields point-to-point error data. The point-to-point error of the integrated completion model is about 2, that of the FC model about 5, that of the FN model about 4.5, and that of the TN model about 3.
The above experimental comparisons show that, in terms of quantitative analysis, the integrated completion model proposed in the present application outperforms the existing methods in both Chamfer distance and point-to-point error.
It can be understood that the execution subject that performs the above three-dimensional reconstruction method with the trained point cloud reconstruction and completion models and the execution subject that trains those models may be the same terminal device or different terminal devices.
Moreover, the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the three-dimensional reconstruction method described in the above embodiments, Fig. 5 shows a structural diagram of an embodiment of a three-dimensional reconstruction apparatus provided by an embodiment of the present application. Referring to Fig. 5, the three-dimensional reconstruction apparatus may include:
an acquisition unit 501, configured to acquire a two-dimensional image of a target object, the two-dimensional image presenting a partial region of the target object;

a conversion unit 502, configured to convert the two-dimensional image into a first point cloud, the first point cloud describing the three-dimensional structure of the partial region; and

a completion unit 503, configured to process the first point cloud through a trained point cloud completion model to obtain a second point cloud, the second point cloud describing the overall three-dimensional structure of the target object.
The point cloud completion model includes point cloud compression modules of N levels and point cloud expansion modules of N levels; the point cloud compression modules of the N levels compress the first point cloud to obtain compressed information at N different resolutions, and the point cloud expansion modules of the N levels reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.
Optionally, the input information of the 1st level point cloud compression module is the first point cloud; between two adjacent levels of point cloud compression modules, the output information of the former level is the input information of the latter level; the input information of the 1st level point cloud expansion module consists of the output information of the (N-1)-th and N-th level point cloud compression modules; the input information of the m-th level point cloud expansion module consists of the output information of the (N-m)-th level point cloud compression module and the output information of the (m-1)-th level point cloud expansion module; the input information of the N-th level point cloud expansion module is the output information of the (N-1)-th level point cloud expansion module; and the output information of the N-th level point cloud expansion module is the second point cloud, 2 ≤ m < N, m being an integer.
Optionally, the point cloud expansion module includes multiple dynamic information gate modules, with multiple fully connected layers arranged between the multiple dynamic information gate modules, each dynamic information gate module being configured to apply an attention-mechanism computation to the input information of the dynamic information gate.
Optionally, the point cloud compression module is of a PointNet++ network structure.
Optionally, the conversion unit 502 converting the two-dimensional image into the first point cloud includes:

converting the two-dimensional image into the first point cloud with a trained point cloud reconstruction model.
Optionally, the point cloud reconstruction model includes a ResNet encoder and a graph convolutional neural network connected in sequence; the graph convolutional neural network includes multiple alternately arranged groups of graph convolution modules and branch modules, the graph convolution modules being configured to adjust the position coordinates of points, and the branch modules being configured to expand the number of points.
Optionally, the point cloud completion model and the point cloud reconstruction model are trained as follows:
constructing an integrated completion initial model, the integrated completion initial model including an initial point cloud reconstruction model, a discriminator and an initial point cloud completion model; and

performing adversarial training on the integrated completion initial model according to a preset loss function and a training set, so as to train the initial point cloud reconstruction model into the point cloud reconstruction model and the initial point cloud completion model into the point cloud completion model,

where the training set includes multiple two-dimensional image samples each presenting a partial region of an object sample, together with the partial-region point cloud sample and the overall point cloud sample corresponding to each two-dimensional image sample, and the loss function includes loss computations based on relative entropy, the Chamfer distance and the Earth Mover's Distance.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus, modules and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
Fig. 6 shows a schematic block diagram of a terminal device provided by an embodiment of the present application; for ease of description, only the parts relevant to this embodiment are shown.
As shown in Fig. 6, the terminal device 6 of this embodiment includes a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 implements the steps of each of the above three-dimensional reconstruction method embodiments, such as steps S101 to S103 shown in Fig. 1, or implements the functions of the modules/units of each of the above apparatus embodiments, such as the functions of the acquisition unit 501, the conversion unit 502 and the completion unit 503 shown in Fig. 5.
Illustratively, the computer program 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of accomplishing specific functions, the instruction segments describing the execution process of the computer program 62 in the terminal device 6.
Those skilled in the art can understand that Fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device 6 may further include input/output devices, network access devices, buses and the like.
The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used to store the computer program and other programs and data required by the terminal device 6, and may also be used to temporarily store data that has been or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example; in practical applications, the above functions may be allocated to different functional units or modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of each of the above method embodiments.
An embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to implement the steps of each of the above method embodiments.
In the above description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, in order to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.
It should be understood that, when used in the specification and appended claims of the present application, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
It should also be understood that the term "and/or" used in the specification and appended claims of the present application refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in the specification and appended claims of the present application, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as meaning "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In addition, in the description of the specification and appended claims of the present application, the terms "first", "second", "third" and the like are used only to distinguish the descriptions and shall not be understood as indicating or implying relative importance.
References to "one embodiment" or "some embodiments" and the like in the specification of the present application mean that a particular feature, structure or characteristic described in connection with that embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments" and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "comprising", "including", "having" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
The above embodiments are only intended to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent substitutions for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.
In the above embodiments, the descriptions of the respective embodiments each have their own emphasis; for parts not detailed or recorded in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art can appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled professionals may use different methods for each particular application to implement the described functions, but such implementation should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative; for example, the division of the modules or units is only a logical functional division, and there may be other ways of division in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Claims (11)

  1. A three-dimensional reconstruction method, characterized by comprising:
    acquiring a two-dimensional image of a target object, the two-dimensional image presenting a partial region of the target object;
    converting the two-dimensional image into a first point cloud, the first point cloud describing the three-dimensional structure of the partial region; and
    processing the first point cloud through a trained point cloud completion model to obtain a second point cloud, the second point cloud describing the overall three-dimensional structure of the target object;
    wherein the point cloud completion model comprises point cloud compression modules of N levels and point cloud expansion modules of N levels, the point cloud compression modules of the N levels being configured to compress the first point cloud to obtain compressed information at N different resolutions, and the point cloud expansion modules of the N levels being configured to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.
  2. The method according to claim 1, characterized in that the input information of the 1st level point cloud compression module is the first point cloud; between two adjacent levels of point cloud compression modules, the output information of the former level is the input information of the latter level; the input information of the 1st level point cloud expansion module is the output information of the (N-1)-th and N-th level point cloud compression modules; the input information of the m-th level point cloud expansion module is the output information of the (N-m)-th level point cloud compression module and the output information of the (m-1)-th level point cloud expansion module; the input information of the N-th level point cloud expansion module is the output information of the (N-1)-th level point cloud expansion module; and the output information of the N-th level point cloud expansion module is the second point cloud, 2 ≤ m < N, m being an integer.
  3. The method according to claim 2, characterized in that the point cloud expansion module comprises multiple dynamic information gate modules, multiple fully connected layers being arranged between the multiple dynamic information gate modules, the dynamic information gate module being configured to apply an attention-mechanism computation to the input information of the dynamic information gate.
  4. The method according to claim 1, characterized in that the point cloud compression module is of a PointNet++ network structure.
  5. The method according to claim 1, characterized in that converting the two-dimensional image into the first point cloud comprises:
    converting the two-dimensional image into the first point cloud with a trained point cloud reconstruction model.
  6. The method according to claim 5, characterized in that the point cloud reconstruction model comprises a ResNet encoder and a graph convolutional neural network connected in sequence, the graph convolutional neural network comprising multiple alternately arranged groups of graph convolution modules and branch modules, the graph convolution module being configured to adjust the position coordinates of points, and the branch module being configured to expand the number of points.
  7. The method according to claim 5, characterized in that the point cloud completion model and the point cloud reconstruction model are trained by:
    constructing an integrated completion initial model, the integrated completion initial model comprising an initial point cloud reconstruction model, a discriminator and an initial point cloud completion model; and
    performing adversarial training on the integrated completion initial model according to a preset loss function and a training set, so as to train the initial point cloud reconstruction model into the point cloud reconstruction model and the initial point cloud completion model into the point cloud completion model;
    wherein the training set comprises multiple two-dimensional image samples each presenting a partial region of an object sample, together with the partial-region point cloud sample and the overall point cloud sample corresponding to each of the two-dimensional image samples, and the loss function comprises loss computations based on relative entropy, the Chamfer distance and the Earth Mover's Distance.
  8. A three-dimensional reconstruction apparatus, characterized by comprising:
    an acquisition unit, configured to acquire a two-dimensional image of a target object, the two-dimensional image presenting a partial region of the target object;
    a conversion unit, configured to convert the two-dimensional image into a first point cloud, the first point cloud describing the three-dimensional structure of the partial region; and
    a completion unit, configured to process the first point cloud through a trained point cloud completion model to obtain a second point cloud, the second point cloud describing the overall three-dimensional structure of the target object;
    wherein the point cloud completion model comprises point cloud compression modules of N levels and point cloud expansion modules of N levels, the point cloud compression modules of the N levels being configured to compress the first point cloud to obtain compressed information at N different resolutions, and the point cloud expansion modules of the N levels being configured to reconstruct the compressed information at the N different resolutions to obtain the second point cloud, N ≥ 2, N being an integer.
  9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 7.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
  11. A computer program product, characterized in that, when the computer program product runs on a terminal device, the terminal device is caused to execute the method according to any one of claims 1 to 7.
PCT/CN2021/119610 2021-09-22 2021-09-22 Three-dimensional reconstruction method and apparatus for brain structures in extreme environments, and readable storage medium WO2023044605A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/119610 2021-09-22 2021-09-22 Three-dimensional reconstruction method and apparatus for brain structures in extreme environments, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2023044605A1 (zh)

Family

ID=85719775

Country Status (1)

Country Link
WO (1) WO2023044605A1 (zh)

Legal Events

Date Code Title Description
121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21957750; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)