CN117725966B - Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment - Google Patents


Info

Publication number
CN117725966B
CN117725966B (Application CN202410179959.XA)
Authority
CN
China
Prior art keywords
point cloud
sketch
sequence
primitive
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410179959.XA
Other languages
Chinese (zh)
Other versions
CN117725966A (en)
Inventor
孙文愈
张楠
刘向东
江平
幺宝刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanyi Digital Technology Co ltd
International Digital Economy Academy IDEA
Original Assignee
Shenzhen Wanyi Digital Technology Co ltd
International Digital Economy Academy IDEA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wanyi Digital Technology Co ltd, International Digital Economy Academy IDEA filed Critical Shenzhen Wanyi Digital Technology Co ltd
Priority to CN202410179959.XA priority Critical patent/CN117725966B/en
Publication of CN117725966A publication Critical patent/CN117725966A/en
Application granted granted Critical
Publication of CN117725966B publication Critical patent/CN117725966B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 — Road transport of goods or passengers
    • Y02T10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T10/40 — Engine management systems

Landscapes

  • Image Generation (AREA)

Abstract

The application discloses a training method of a sketch sequence reconstruction model, a geometric model reconstruction method and equipment, wherein the training method comprises: acquiring a training data set; inputting training point cloud data in the training data set into a preset network model, and determining predicted sketch sequence parameters and a predicted point cloud boundary through the preset network model; and determining a target loss term based on the predicted sketch sequence parameters, the predicted point cloud boundary, the marked sketch sequence parameters and the marked point cloud boundary, and optimizing the preset network model based on the target loss term to obtain the sketch sequence reconstruction model. By combining the point cloud segmentation task with the sketch sequence prediction task, the application remarkably improves the network's extraction of deep point cloud information and raises parameter prediction accuracy; it solves the problems of unclear boundaries in reconstruction from a three-dimensional point cloud to a sketch sequence and the inability to convert a three-dimensional point cloud directly into a modeling sequence, greatly improving the intelligence and efficiency of reverse engineering.

Description

Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment
Technical Field
The application relates to the technical field of CAD reverse engineering, in particular to a training method of a sketch sequence reconstruction model, a geometric model reconstruction method and equipment.
Background
Computer Aided Design (CAD) is a basic software system for industrial product design, development and manufacture, and has been widely used in fields such as machinery, construction, aviation, shipbuilding and automobiles. A two-dimensional sketch sequence is the basis of three-dimensional design and one of the cores of a CAD software system. In reverse engineering, recovering a CAD modeling sequence from a three-dimensional point cloud is an indispensable key technology; the sketch sequence, as the basis of the three-dimensional modeling sequence, is a vital link in recovering an accurate three-dimensional modeling sequence.
Traditional recovery of CAD modeling sequences from point cloud data is generally based on RANSAC, the Hough transform and similar algorithms. However, these algorithms require good initial conditions, produce insufficiently clear edges, and cannot convert their results directly into CAD modeling sequences, which limits the accuracy of recovering sketch sequences directly from point clouds and, in turn, the accuracy of reverse engineering.
There is thus a need for improvement in the art.
Disclosure of Invention
The application aims to solve the technical problem of providing a training method of a sketch sequence reconstruction model, a geometric model reconstruction method and equipment aiming at the defects of the prior art.
In order to solve the technical problem, a first aspect of the present application provides a training method for a sketch sequence reconstruction model, where the training method for a sketch sequence reconstruction model includes:
Acquiring a training data set, wherein the training data set comprises a plurality of training data, and each training data in the plurality of training data comprises training point cloud data, and a marked sketch sequence parameter and a marked point cloud boundary corresponding to the training point cloud data;
Inputting the training point cloud data into a preset network model, and determining predicted sketch sequence parameters and predicted point cloud boundaries through the preset network model;
And determining a target loss term based on the predicted sketch sequence parameter, the predicted point cloud boundary, the marked sketch sequence parameter and the marked point cloud boundary, and optimizing the preset network model based on the target loss term to obtain a sketch sequence reconstruction model.
The training method of the sketch sequence reconstruction model, wherein the acquiring the training data set specifically comprises the following steps:
Acquiring a plurality of entity three-dimensional modeling sequence data;
reconstructing the solid three-dimensional modeling sequence data into modeling entities for each solid three-dimensional modeling sequence data; determining a three-dimensional point cloud model according to the modeling entity, and projecting the three-dimensional point cloud model to a preset plane to obtain training point cloud data; determining a modeling sequence implicit field according to the training point cloud data, and determining a labeling point cloud boundary based on the modeling sequence implicit field; vectorizing sketch sequence parameters in the solid three-dimensional modeling sequence data to obtain labeled sketch sequence parameters so as to obtain training data;
and taking the obtained set formed by all training data as a training data set.
The training method of the sketch sequence reconstruction model, wherein the determining a three-dimensional point cloud model according to the modeling entity, and projecting the three-dimensional point cloud model to a preset plane to obtain training point cloud data specifically comprises:
Dividing the modeling entity to obtain a plurality of stretching entities;
sampling the mesh model of each stretching entity to obtain a three-dimensional point cloud model corresponding to the stretching entity;
and projecting the three-dimensional point cloud model corresponding to the stretching entity to a sketch plane according to a stretching axis to obtain two-dimensional training point cloud data.
The training method of the sketch sequence reconstruction model, wherein vectorizing the sketch sequence parameters in the entity three-dimensional modeling sequence data to obtain labeled sketch sequence parameters specifically comprises:
selecting sketch sequence parameters corresponding to the training point cloud data from the entity three-dimensional modeling sequence data, and reading primitive types and primitive attribute parameters of each primitive contained in the sketch sequence parameters;
Constructing a primitive vector corresponding to each primitive based on the primitive type and the primitive attribute parameter of each primitive, wherein the vector dimensions of the primitive vectors corresponding to each primitive are the same;
selecting a starting primitive from the primitives, taking the primitive vector corresponding to the starting primitive as a starting vector element, and arranging the primitive vectors in a column direction according to a preset sequence to obtain an initial vector matrix;
and adding a start indicator before the forefront vector element of the initial vector matrix and adding a stop indicator after the last vector element of the initial vector matrix to obtain the marked sketch sequence parameter.
The training method of the sketch sequence reconstruction model, wherein the construction of the primitive vector corresponding to each primitive based on the primitive type and the primitive attribute parameter of each primitive specifically comprises the following steps:
Converting the primitive types and primitive attribute parameters of each primitive into initial primitive vectors according to a preset format;
And compensating the element positions without corresponding parameter information in each initial primitive vector with a preset value to obtain the primitive vector of each primitive.
The training method of the sketch sequence reconstruction model, wherein the preset format comprises parameter information corresponding to the primitive type, parameter information corresponding to the primitive position parameters, parameter information corresponding to the arc midpoint parameter, and parameter information corresponding to the circle radius parameter.
The training method of the sketch sequence reconstruction model, wherein the preset network model comprises a backbone point cloud encoder, a backbone point cloud decoder, a Transformer encoder and a Transformer decoder; the backbone point cloud encoder is connected to the backbone point cloud decoder and the Transformer encoder respectively, the Transformer encoder is connected to the Transformer decoder, the Transformer decoder is used for determining the predicted sketch sequence parameters, and the backbone point cloud decoder is used for determining the predicted point cloud boundary.
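The wiring just described can be sketched in PyTorch. The layer sizes, the per-point feature extractor, and the learned decoder queries below are hypothetical stand-ins for the unspecified backbone, chosen only to illustrate how one point cloud encoder can feed both a segmentation head and a Transformer branch:

```python
import torch
import torch.nn as nn

class SketchSeqNet(nn.Module):
    """Hypothetical wiring of the described preset network model: one
    point cloud encoder feeds both a point cloud decoder (predicted
    point cloud boundary) and a Transformer encoder-decoder (predicted
    sketch sequence parameters)."""
    def __init__(self, d_model=64, seq_len=8, param_dim=10):
        super().__init__()
        # Stand-in per-point backbone encoder (the patent does not name one).
        self.point_encoder = nn.Sequential(
            nn.Linear(2, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        # Backbone point cloud decoder: one boundary logit per point.
        self.point_decoder = nn.Linear(d_model, 1)
        self.tf_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.tf_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        # Learned queries, one per predicted sequence step (an assumption).
        self.query = nn.Parameter(torch.randn(seq_len, d_model))
        self.param_head = nn.Linear(d_model, param_dim)

    def forward(self, pts):                  # pts: (B, N, 2) planar points
        feat = self.point_encoder(pts)       # (B, N, d_model) shared features
        boundary = self.point_decoder(feat)  # (B, N, 1) boundary logits
        memory = self.tf_encoder(feat)       # features for the sequence branch
        queries = self.query.unsqueeze(0).expand(pts.size(0), -1, -1)
        seq = self.param_head(self.tf_decoder(queries, memory))
        return seq, boundary                 # (B, seq_len, param_dim), (B, N, 1)
```

A single forward pass returns both outputs, so one backward pass can optimize the joint segmentation-plus-sequence loss described below.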
The training method of the sketch sequence reconstruction model, wherein the predicted sketch sequence parameters comprise predicted sketch sequence parameters output by the output layer of the Transformer decoder and predicted sketch sequence parameters output by at least one intermediate layer of the Transformer decoder.
The training method of the sketch sequence reconstruction model, wherein determining the target loss term based on the predicted sketch sequence parameter, the predicted point cloud boundary, the marked sketch sequence parameter and the marked point cloud boundary specifically comprises:
Determining a sequence parameter loss term based on the predicted sketch sequence parameter and the annotated sketch sequence parameter;
Determining a segmentation loss term based on the predicted point cloud boundary and the marked point cloud boundary;
a target penalty term is determined based on the sequence parameter penalty term and the segmentation penalty term.
The training method of the sketch sequence reconstruction model, wherein the marked sketch sequence parameters comprise primitive types and primitive attribute parameters, and determining the sequence parameter loss items based on the predicted sketch sequence parameters and the marked sketch sequence parameters specifically comprises the following steps:
For each predicted sketch sequence parameter, calculating a type loss term between the predicted primitive type in the predicted sketch sequence parameter and the primitive type in the labeled sketch sequence parameter, and a parameter loss term between the predicted primitive attribute parameters in the predicted sketch sequence parameter and the labeled primitive attribute parameters in the labeled sketch sequence parameter;
and determining a sequence parameter loss term according to the calculated type loss term and the parameter loss term.
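A minimal sketch of such a combined loss follows. The concrete forms — cross-entropy for the primitive-type term, an L1 penalty for the attribute-parameter term, and a weighted sum with the segmentation term — are assumptions, since the patent does not fix them:

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for one primitive-type prediction.
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def sequence_param_loss(type_logits, type_labels, pred_params, true_params):
    """Type loss term (cross-entropy over primitive classes) plus
    parameter loss term (L1 over attribute parameters), accumulated
    over all sequence steps."""
    type_loss = sum(cross_entropy(l, t) for l, t in zip(type_logits, type_labels))
    param_loss = float(np.abs(np.asarray(pred_params) - np.asarray(true_params)).sum())
    return type_loss + param_loss

def target_loss(seq_loss, seg_loss, w_seg=1.0):
    # Weighted combination of the sequence-parameter and segmentation terms;
    # the weight w_seg is a hypothetical hyperparameter.
    return seq_loss + w_seg * seg_loss
```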
In the training method of the sketch sequence reconstruction model, in the process of inputting the training point cloud data into a preset network model and determining the predicted sketch sequence parameters and the predicted point cloud boundary through the preset network model, the method further comprises the following steps:
performing a noise-adding operation on the marked sketch sequence parameters of the training point cloud data to obtain noised marked sketch sequence parameters;
And inputting the noised marked sketch sequence parameters into the preset network model, and outputting denoised marked sketch sequence parameters through the preset network model.
The target loss term comprises a denoising loss term, where the denoising loss term is determined based on the denoised marked sketch sequence parameters and the marked sketch sequence parameters corresponding to the training point cloud data.
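The noise-adding step might look like the following sketch, assuming Gaussian perturbation of the continuous label entries and a mean-squared denoising loss; both choices are illustrative, as the claim specifies neither the noise distribution nor the loss form:

```python
import numpy as np

def add_label_noise(label_matrix, sigma=0.05, seed=0):
    """Perturb the continuous entries of a labeled sketch sequence matrix
    with Gaussian noise (sigma and the distribution are assumptions); the
    network is then trained to map the noised labels back to the clean ones."""
    rng = np.random.default_rng(seed)
    return label_matrix + rng.normal(0.0, sigma, size=label_matrix.shape)

def denoising_loss(denoised, clean):
    # Mean squared error between the denoised output and the clean labels.
    return float(np.mean((np.asarray(denoised) - np.asarray(clean)) ** 2))
```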
The second aspect of the present application provides a geometric model reconstruction method, which applies the sketch sequence reconstruction model obtained by the training method of the sketch sequence reconstruction model described above, and the geometric model reconstruction method comprises:
acquiring point cloud data to be reconstructed;
Inputting the point cloud data to be reconstructed into the sketch sequence reconstruction model, and outputting sketch sequence parameters and boundary point clouds corresponding to the point cloud data to be reconstructed through the sketch sequence reconstruction model;
and constructing a geometric model corresponding to the point cloud data based on the sketch sequence parameters and the boundary point cloud.
The geometric model reconstruction method, wherein the constructing the geometric model corresponding to the point cloud data based on the sketch sequence parameter and the boundary point cloud specifically includes:
correcting the sketch sequence parameters by taking the boundary point cloud as an implicit field grid point to obtain target sketch sequence parameters;
And converting the target sketch sequence parameters into modeling sequences, and importing the modeling sequences into modeling software to obtain a geometric model corresponding to the point cloud data.
The geometric model reconstruction method further comprises a preprocessing process for the point cloud data to be reconstructed after the point cloud data to be reconstructed are acquired, wherein the preprocessing process specifically comprises the following steps:
acquiring the minimum coordinate value and the maximum coordinate value of the point cloud data to be reconstructed;
Subtracting the minimum coordinate value from each point coordinate in the point cloud data to be reconstructed, and dividing the result by the maximum coordinate value, to obtain an adjusted coordinate for each point;
Subtracting the point cloud center of the point cloud data to be reconstructed from the adjusted coordinate of each point to obtain the preprocessed coordinate of that point.
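Read as a min-shift, max-scale, then re-centering, the three preprocessing steps can be sketched as follows (taking the scale factor to be the maximum coordinate value of the shifted cloud is an assumption about the claim's intent):

```python
import numpy as np

def preprocess_point_cloud(points):
    """Steps described above: shift by the minimum coordinate value,
    scale by the maximum coordinate value of the shifted cloud, then
    subtract the point cloud center."""
    shifted = points - points.min(axis=0)   # subtract minimum coordinate
    scaled = shifted / shifted.max()        # divide by maximum coordinate
    return scaled - scaled.mean(axis=0)     # subtract the point cloud center
```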
A third aspect of the application provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps of the training method of a sketch sequence reconstruction model as described above and/or the steps of the geometric model reconstruction method as described above.
A fourth aspect of the present application provides a terminal device, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
The processor, when executing the computer readable program, implements the steps of the training method of a sketch sequence reconstruction model as described above and/or the steps of the geometric model reconstruction method as described above.
The beneficial effects are that: compared with the prior art, the application provides a training method of a sketch sequence reconstruction model, a geometric model reconstruction method and equipment, wherein the training method comprises: acquiring a training data set; inputting training point cloud data in the training data set into a preset network model, and determining predicted sketch sequence parameters and a predicted point cloud boundary through the preset network model; and determining a target loss term based on the predicted sketch sequence parameters, the predicted point cloud boundary, the marked sketch sequence parameters and the marked point cloud boundary, and optimizing the preset network model based on the target loss term to obtain the sketch sequence reconstruction model. By combining the point cloud segmentation task with the sketch sequence prediction task, the application remarkably improves the network's extraction of deep point cloud information and raises parameter prediction accuracy; it solves the problems of unclear boundaries in reconstruction from a three-dimensional point cloud to a sketch sequence and the inability to convert a three-dimensional point cloud directly into a modeling sequence, greatly improving the intelligence and efficiency of reverse engineering.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a training method of a sketch sequence reconstruction model provided by an embodiment of the present application.
Fig. 2 is an exemplary diagram of labeling sketch sequence parameters.
Fig. 3 is a structural schematic diagram of a training method of a sketch sequence reconstruction model provided by an embodiment of the present application.
Fig. 4 is a flowchart of a geometric model reconstruction method according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a training method of a sketch sequence reconstruction model, a geometric model reconstruction method and equipment, and aims to make the purposes, technical schemes and effects of the application clearer and more definite. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "and/or" as used herein includes all or any element and all combination of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that the sequence numbers and sizes of the steps in this embodiment do not imply an order of execution; the execution order of each process is determined by its function and internal logic, and should not be construed as limiting the implementation of the embodiments of the present application.
Through research, it is found that a Computer Aided Design (CAD) system is a basic software system for industrial product design, research, development and manufacture, widely applied in industrial fields such as machinery, construction, aviation, shipbuilding and automobiles; a two-dimensional sketch sequence is the basis of three-dimensional design and one of the cores of a CAD software system. In reverse engineering, recovering a CAD modeling sequence from a three-dimensional point cloud is an indispensable key technology; the sketch sequence, as the basis of the three-dimensional modeling sequence, is a vital link in recovering an accurate three-dimensional modeling sequence.
Traditional recovery of CAD modeling sequences from point cloud data is generally based on RANSAC, the Hough transform and similar algorithms. However, these algorithms require good initial conditions, produce insufficiently clear edges, and cannot convert their results directly into CAD modeling sequences, which limits the accuracy of recovering sketch sequences directly from point clouds and, in turn, the accuracy of reverse engineering.
In order to solve the above problems, an embodiment of the present application provides a training method for a sketch sequence reconstruction model, including: acquiring a training data set; inputting training point cloud data in the training data set into a preset network model, and determining predicted sketch sequence parameters and a predicted point cloud boundary through the preset network model; and determining a target loss term based on the predicted sketch sequence parameters, the predicted point cloud boundary, the marked sketch sequence parameters and the marked point cloud boundary, and optimizing the preset network model based on the target loss term to obtain the sketch sequence reconstruction model. By combining the point cloud segmentation task with the sketch sequence prediction task, the application remarkably improves the network's extraction of deep point cloud information and raises parameter prediction accuracy; it solves the problems of unclear boundaries in reconstruction from a three-dimensional point cloud to a sketch sequence and the inability to convert a three-dimensional point cloud directly into a modeling sequence, greatly improving the intelligence and efficiency of reverse engineering.
The application will be further described by the description of embodiments with reference to the accompanying drawings.
The embodiment provides a training method of a sketch sequence reconstruction model, as shown in fig. 1, the method includes:
S10, acquiring a training data set.
The training data set is used for training the sketch sequence reconstruction model, where the training data set includes a plurality of training point cloud data, and each training point cloud data corresponds to labeled sketch sequence parameters and a labeled point cloud boundary. The training point cloud data corresponds to a stretching entity in a solid three-dimensional model; the labeled sketch sequence parameters reflect the primitive types and primitive attribute information of the primitives contained in the stretching entity, and the labeled point cloud boundary reflects the entity edges corresponding to the stretching entity. In this way, the original large-scale solid three-dimensional modeling sequence data is converted into a tensor-form data format that can be input directly to a deep learning model.
In one implementation manner of the embodiment of the present application, the acquiring a training data set specifically includes:
s11, acquiring a plurality of entity three-dimensional modeling sequence data;
s12, reconstructing the entity three-dimensional modeling sequence data into modeling entities for each entity three-dimensional modeling sequence data; determining a three-dimensional point cloud model according to the modeling entity, and projecting the three-dimensional point cloud model to a preset plane to obtain training point cloud data; determining a modeling sequence implicit field according to the training point cloud data, and determining a labeling point cloud boundary based on the modeling sequence implicit field; vectorizing sketch sequence parameters in the solid three-dimensional modeling sequence data to obtain labeled sketch sequence parameters so as to obtain training data;
s13, taking a set formed by all the obtained training data as a training data set.
Specifically, in step S11, the solid three-dimensional modeling sequence data is modeling data for constructing a geometric model, the geometric model may be reconstructed based on the solid three-dimensional modeling sequence data, for example, the solid three-dimensional modeling sequence data is modeling data of a three-dimensional geometric model drawn by CAD software, and then the three-dimensional geometric model may be reconstructed based on the solid three-dimensional modeling sequence data to obtain a modeled solid.
In step S12, after the modeling entity is reconstructed, a three-dimensional point cloud model may be obtained by sampling the modeling entity. Because the sketch sequence reconstruction model trained in the embodiment of the application performs sketch sequence reconstruction based on a planar point cloud (namely, a two-dimensional point cloud), after the three-dimensional point cloud model is obtained, the three-dimensional point cloud model needs to be projected onto a two-dimensional plane to obtain the training point cloud data. In addition, it should be noted that the modeling entity may be a large-scale geometric model, that is, the modeling entity may include one or more stretching entities: when the modeling entity includes one stretching entity, the stretching entity is sampled directly to obtain one piece of training point cloud data; when the modeling entity includes a plurality of stretching entities, the modeling entity is divided into the plurality of stretching entities, and each stretching entity is then sampled as a modeling entity to obtain one piece of training point cloud data.
In the practice of the present application, a modeling entity comprising a plurality of tensile entities is illustrated. Correspondingly, determining a three-dimensional point cloud model according to the modeling entity, and projecting the three-dimensional point cloud model to a preset plane to obtain training point cloud data specifically includes:
Dividing the modeling entity to obtain a plurality of stretching entities;
sampling the mesh model of each stretching entity to obtain a three-dimensional point cloud model corresponding to the stretching entity;
and projecting the three-dimensional point cloud model corresponding to the stretching entity to a sketch plane according to a stretching axis to obtain two-dimensional training point cloud data.
Specifically, a plurality of stretching entities form the modeling entity, each of the plurality of stretching entities is obtained by stretching, one stretching entity corresponds to one point cloud set, and one point cloud set is formed by N spatial points. Thus, a two-dimensional planar point cloud can be obtained by projecting the three-dimensional point cloud model corresponding to the stretching entity onto the sketch plane. That is, when the three-dimensional point cloud model is projected, the plane perpendicular to the stretching axis, i.e. the sketch plane, may be used as the preset plane. In addition, it should be noted that the mesh model is a data structure used to represent a stretching entity, which represents the shape and surface detail of an object by defining vertices, edges and faces; a three-dimensional point cloud model of the stretching entity can be obtained by sampling the mesh model.
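Projection along the stretching axis amounts to expressing each point in the sketch plane's two in-plane axes. A minimal sketch, assuming the plane origin and orthonormal axes u, v are available from the modeling sequence's sketch plane definition:

```python
import numpy as np

def project_to_sketch_plane(points, origin, u, v):
    """Project 3D points onto the sketch plane spanned by orthonormal
    axes u and v (the stretching axis is their normal), keeping only
    the two in-plane coordinates as 2D training point cloud data."""
    rel = points - origin
    return np.stack([rel @ u, rel @ v], axis=1)  # coordinates along u and v
```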
Further, in step S12, the labeled point cloud boundary consists of the boundary points in the training point cloud data. The labeled point cloud boundary may be annotated manually, or may be obtained by performing edge recognition on the training point cloud data, or by performing edge recognition on the three-dimensional point cloud model of the stretching entity and then projecting the result. In the embodiment of the application, the labeled point cloud boundary is obtained by performing edge recognition on the training point cloud data, where the recognition process may be as follows: taking the training point cloud data as grid points, computing the SDF (signed distance field) implicit field of the modeling sequence data at the grid points according to the SDF field calculation method for solid three-dimensional modeling sequence data, and then screening out the point cloud data whose implicit field value is 0 to obtain the labeled point cloud boundary.
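The boundary-labeling step can be illustrated with a toy implicit field. Here a single circle primitive stands in for the full modeling-sequence SDF, and the zero-level tolerance `eps` is a hypothetical choice:

```python
import numpy as np

def circle_sdf(points, center, radius):
    # Signed distance of 2D points to a circle primitive (negative inside).
    return np.linalg.norm(points - np.asarray(center), axis=1) - radius

def label_point_cloud_boundary(points, center, radius, eps=1e-3):
    """Evaluate the implicit (SDF) field at the training points used as
    grid points, and keep the points whose field value is (numerically)
    zero as the labeled point cloud boundary."""
    sdf = circle_sdf(points, center, radius)
    return points[np.abs(sdf) < eps]
```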
Further, in step S12, after the training point cloud data and the labeled point cloud boundary are obtained, the sketch sequence data in the three-dimensional modeling sequence data corresponding to the training point cloud data is vectorized to obtain the labeled sketch sequence parameters, where the labeled sketch sequence parameters are formed by sequentially arranging the parameters of all primitives constituting the sketch corresponding to the training point cloud data, and the labeled sketch sequence parameters include primitive types and primitive attribute parameters, where the primitive types include, but are not limited to, point, line segment, circle, arc, and the like. The primitive attribute parameters reflect the primitive position and primitive shape; for example, the primitive attribute parameters of a point consist of the x and y coordinates of the point, the primitive attribute parameters of a line segment consist of the start point (x10, y10) and end point (x11, y11), the primitive attribute parameters of an arc consist of the start point (x20, y20), midpoint (x21, y21) and end point (x22, y22), and the primitive attribute parameters of a circle consist of the center (x, y) and radius r.
In one implementation manner of the present application, vectorizing the sketch sequence parameters in the solid three-dimensional modeling sequence data to obtain labeled sketch sequence parameters specifically includes:
selecting sketch sequence parameters corresponding to the training point cloud data from the entity three-dimensional modeling sequence data, and reading primitive types and primitive attribute parameters of each primitive contained in the sketch sequence parameters;
Constructing a primitive vector corresponding to each primitive based on the primitive type and the primitive attribute parameter of each primitive, wherein the vector dimensions of the primitive vectors corresponding to each primitive are the same;
Selecting a starting primitive from the primitives, taking the primitive vector corresponding to the starting primitive as a starting vector element, and arranging the primitive vectors according to a preset sequence to obtain an initial vector matrix;
and adding a start indicator before the forefront vector element of the initial vector matrix and adding a stop indicator after the last vector element of the initial vector matrix to obtain the marked sketch sequence parameter.
Specifically, when the entity three-dimensional modeling sequence data corresponds to one set of training point cloud data, the sketch sequence parameters are read directly from the entity three-dimensional modeling sequence data. When the entity three-dimensional modeling sequence data corresponds to several sets of training point cloud data, the sketch sequence parameters used to construct the stretching entity corresponding to each set of training point cloud data are selected from the entity three-dimensional modeling sequence data. The sketch sequence parameters comprise the primitive types and primitive attribute parameters of all primitives forming the stretching entity corresponding to the training point cloud data, where the primitive attribute parameters comprise position parameters and drawing parameters.
It should be noted that, since some primitives can express their drawing parameters directly through their position parameters, the primitive attribute parameters of some primitives may include only position parameters, with the drawing parameters left null. For example, when the primitive is a point, the position parameter is the x, y coordinates of the point and the drawing parameter is null; when the primitive is a line segment, the position parameters are the start point (x10, y10) and end point (x11, y11) of the line segment, and the drawing parameter is null; when the primitive is an arc, the position parameters are the start point (x20, y20) and end point (x22, y22), and the drawing parameter is the midpoint (x21, y21); when the primitive is a circle, the position parameter is the center (x, y), and the drawing parameter is the radius.
After the primitive types and primitive attribute parameters of the primitives are read, the primitive type and primitive attribute parameters of each primitive can be converted into a primitive vector according to a preset format, where the preset format may be set according to the parameter information included in the primitive types and primitive attribute parameters. Specifically, the preset format may be: primitive type, position parameter, midpoint parameter (a drawing parameter), radius parameter (a drawing parameter). The ordering of the drawing parameters is not specifically limited here; the radius parameter could equally be placed before the midpoint parameter. The format above, with the midpoint parameter before the radius parameter, is used as the example.
Further, when converting the primitive type and primitive attribute parameters into a primitive vector, a type value may be preset for each primitive type, for example, 1 for a point, 2 for a line segment, 3 for an arc, and 4 for a circle. Each primitive type can then be converted directly into its corresponding type value to obtain a vector element. Since the position parameters and drawing parameters included in the primitive attribute parameters are already numerical, they can be used directly as vector elements; finally, all vector elements are arranged according to the preset format to obtain the primitive vector. It should be noted that, to make the vector dimensions of the primitive vectors identical across primitives, the primitive type and primitive attribute parameters of each primitive are first converted into an initial primitive vector according to the preset format, and any parameter slots left unfilled in the initial primitive vector are padded with a preset value. The preset value can be set according to actual requirements; it only needs to be a value that is never used as a position parameter, a drawing parameter, or a primitive type value. For example, when the position parameters, drawing parameters, and type values are all positive numbers, the preset value may be -1.
After the primitive vectors of the primitives are obtained, the primitive vectors can be stacked along the column direction, each primitive vector forming one row, to obtain the labeling sketch sequence parameter. When determining the labeling sketch sequence parameter in this way, the starting primitive may be selected at random from all primitives, or the primitive drawn first according to the sketch drawing order may be used as the starting primitive. In the embodiment of the application, the primitive drawn first according to the sketch drawing order is used as the starting primitive, and the primitive vectors are stacked along the column direction according to a preset order to obtain the initial vector matrix, where the preset order may be the counter-clockwise or the clockwise drawing order of the sketch; the embodiment of the application adopts the counter-clockwise drawing order. In addition, it should be noted that, since the primitive vectors are arranged in the counter-clockwise drawing order of the sketch, the vector elements representing the position parameters in each primitive vector need only record the end position of the primitive, with the end position recorded in the previous row serving as the primitive's start position; this reduces the data volume of the sketch sequence parameters and improves the calculation speed of the sketch sequence reconstruction model.
Of course, in practical application, the vector elements used for representing the position parameters in the primitive vector may include the position information of the start position and the end position at the same time.
After the initial vector matrix is obtained, it can be used directly as the labeling sketch sequence parameter, or a start indicator can be added before it and a termination indicator added after it, so that the sketch sequence can be quickly identified through the start and termination indicators. In the embodiment of the application, a start indicator is added before the initial vector matrix and a termination indicator is added after it, and the initial vector matrix with the start and termination indicators added is used as the parameter representing the sketch sequence; that is, the three-dimensional sketch sequence is vectorized into a data structure suitable for training and inference of a network model.
In a specific implementation of the present application, as shown in fig. 2, the sketch sequence parameter is stored in the form of a vector matrix, where the first row of the vector matrix is the start indicator, the last row is the termination indicator, and each primitive vector occupies one row of the matrix. That is, when the primitive vectors are arranged according to the preset order, each primitive vector is used as a row and the rows are stacked along the column direction; a start indicator row is then added as the first row of the resulting initial vector matrix and a termination indicator row is added as its last row, yielding the labeling sketch sequence parameter.
The preset format comprises parameter information corresponding to the primitive type, the primitive position parameter, the arc midpoint parameter, and the circle radius parameter. Correspondingly, as shown in fig. 2, a primitive vector converted according to the preset format includes four components: the primitive type occupies one vector element position, the primitive position parameter occupies two, the arc midpoint parameter occupies two, and the circle radius parameter occupies one, so the vector dimension of the primitive vector is 6. In addition, the primitive position parameters record only the end point of each primitive, all primitives are arranged according to the counter-clockwise drawing order of the sketch, and unused parameter slots are filled with -1. Of course, in practical application, the preset format may further include other parameter information, determined according to the parameter information items included in the primitive attribute data.
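The fixed 6-dimensional layout described above can be sketched as follows. The type codes and the -1 padding follow the examples in the text; the numeric values chosen for the start and termination indicator rows are assumptions for illustration.

```python
import numpy as np

# Type codes follow the text's example: point=1, line=2, arc=3, circle=4.
TYPE = {"point": 1, "line": 2, "arc": 3, "circle": 4}
PAD = -1.0                # filler for parameter slots a primitive does not use
START, END = -2.0, -3.0   # sentinel values for indicator rows (assumed, not from the text)

def primitive_vector(ptype, end_xy=(PAD, PAD), mid_xy=(PAD, PAD), radius=PAD):
    """Fixed 6-dim layout: [type, end_x, end_y, arc_mid_x, arc_mid_y, radius]."""
    return [float(TYPE[ptype]), *end_xy, *mid_xy, float(radius)]

def sketch_sequence(primitives):
    """Stack primitive vectors row-wise and wrap with start/termination indicator rows."""
    rows = [[START] * 6] + [primitive_vector(**p) for p in primitives] + [[END] * 6]
    return np.array(rows)

# A triangle drawn counter-clockwise: each line segment stores only its end point;
# the start point is implied by the previous row.
seq = sketch_sequence([
    {"ptype": "line", "end_xy": (1.0, 0.0)},
    {"ptype": "line", "end_xy": (0.0, 1.0)},
    {"ptype": "line", "end_xy": (0.0, 0.0)},
])
```

Each row carries the full 6 slots even when the primitive uses fewer, which is what keeps all primitive vectors the same dimension.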
In addition, in practical applications, the process of generating the training data set may be packaged as a data loader, with step S10 executed by the data loader. The data loader can inherit from the DataLoader module in PyTorch, which thus realizes the acquisition of the training data set, i.e., the conversion from entity three-dimensional modeling sequence data to training point cloud data, labeling point cloud boundary, and labeling sketch sequence parameter.
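A hedged sketch of how such a loader might look in PyTorch, using random tensors in place of samples actually converted from modeling sequence data; the class name and tensor shapes are assumptions for illustration.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SketchSequenceDataset(Dataset):
    """Illustrative Dataset: each sample is (point cloud, boundary mask, sketch sequence).
    Real samples would be derived from entity modeling sequence data; random tensors
    stand in here."""
    def __init__(self, n_samples=8, n_points=256, seq_len=10):
        self.clouds = torch.randn(n_samples, n_points, 3)
        self.boundary = torch.randint(0, 2, (n_samples, n_points)).float()
        self.sketch = torch.randn(n_samples, seq_len, 6)

    def __len__(self):
        return self.clouds.shape[0]

    def __getitem__(self, idx):
        return self.clouds[idx], self.boundary[idx], self.sketch[idx]

loader = DataLoader(SketchSequenceDataset(), batch_size=4, shuffle=True)
clouds, boundary, sketch = next(iter(loader))
```

The DataLoader handles batching and shuffling, so the training loop only ever sees ready-made (cloud, boundary, sequence) batches.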
S20, inputting the training point cloud data into a preset network model, and determining predicted sketch sequence parameters and predicted point cloud boundaries through the preset network model.
Specifically, the preset network model is built based on deep learning; its input is the training point cloud data and its outputs are the predicted sketch sequence parameters and the predicted point cloud boundary. The preset network model serves as the initial network model of the sketch sequence reconstruction model, which is obtained by training the preset network model. That is, the model structure of the preset network model is the same as that of the sketch sequence reconstruction model; they differ only in model parameters, where the parameters of the preset network model are initial parameters and those of the sketch sequence reconstruction model have been trained on the training data set. The model structure is therefore described here taking the preset network model as an example.
As shown in fig. 3, the preset network model may include a backbone point cloud encoder, a backbone point cloud decoder, a Transformer encoder, and a Transformer decoder, where the backbone point cloud encoder is connected to both the backbone point cloud decoder and the Transformer encoder, and the Transformer encoder is connected to the Transformer decoder. The backbone point cloud encoder extracts features from the training point cloud data; the Transformer encoder encodes the point cloud feature sequence extracted by the backbone point cloud encoder to obtain encoding features; the Transformer decoder determines the predicted sketch sequence parameters based on the encoding features; and the backbone point cloud decoder determines the predicted point cloud boundary based on the point cloud feature sequence. By combining the sketch sequence parameter prediction task and the point cloud boundary segmentation task through the backbone point cloud decoder and the Transformer decoder, the embodiment of the application helps the backbone point cloud encoder learn deeper features better suited to sketch sequence parameter prediction, which improves the accuracy of the sketch sequence parameters predicted by the trained sketch sequence reconstruction model. In addition, the embodiment of the application learns the primitive types and primitive attribute parameters of each primitive directly, without preprocessing them (e.g., discretization or integerization), thereby reducing the operation flow and improving prediction efficiency.
It should be noted that the backbone point cloud encoder and backbone point cloud decoder may use existing point cloud feature extraction networks; the specific networks are not limited herein. As one example, the backbone point cloud encoder may be the encoder in PointNet++ and the backbone point cloud decoder the decoder in PointNet++.
Further, the predicted sketch sequence parameters may include those output by the output layer of the preset network model, and may further include those output by one or more intermediate network layers of the preset network model. In one implementation, the predicted sketch sequence parameters include the parameters output by the output layer and by all intermediate layers; constructing the loss term from the predictions of the output layer and all intermediate layers helps the preset network model learn features better suited to sketch sequence parameter prediction.
Furthermore, it should be noted that when the preset network model determines the predicted sketch sequence parameters through the Transformer decoder, since the Transformer decoder includes several cascaded Transformer decoding modules, the predicted sketch sequence parameters include those output by the last Transformer decoding module (i.e., the output layer) and those output by at least one intermediate Transformer decoding module (i.e., an intermediate layer). In one exemplary implementation, the predicted sketch sequence parameters include those output by all Transformer decoding modules. In addition, for each intermediate Transformer decoding module, its output serves both as a predicted sketch sequence parameter and as the input of the next Transformer decoding module.
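The two-branch architecture described above can be sketched as follows. This is a rough skeleton under stated assumptions: simple per-point MLPs stand in for the PointNet++ backbone, learned queries drive the Transformer decoder, and all dimensions and module names are illustrative, not the patent's implementation.

```python
import torch
import torch.nn as nn

class SketchReconNet(nn.Module):
    """Skeleton of the two-branch model: a point cloud encoder feeds both a
    boundary-segmentation head (standing in for the backbone decoder) and a
    Transformer encoder/decoder that emits the predicted sketch sequence."""
    def __init__(self, d_model=128, seq_len=10, vec_dim=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, d_model), nn.ReLU(),
                                     nn.Linear(d_model, d_model))   # per-point features
        self.seg_head = nn.Linear(d_model, 1)                        # boundary logits branch
        self.tf_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.tf_dec = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.query = nn.Parameter(torch.randn(seq_len, d_model))     # learned sequence queries
        self.out = nn.Linear(d_model, vec_dim)                       # one primitive vector per step

    def forward(self, cloud):                                        # cloud: (B, N, 3)
        feat = self.encoder(cloud)                                   # (B, N, d_model)
        seg = self.seg_head(feat).squeeze(-1)                        # (B, N) boundary logits
        mem = self.tf_enc(feat)                                      # encoded point features
        tgt = self.query.unsqueeze(0).expand(cloud.size(0), -1, -1)
        seq = self.out(self.tf_dec(tgt, mem))                        # (B, seq_len, vec_dim)
        return seq, seg

model = SketchReconNet()
seq, seg = model(torch.randn(2, 64, 3))
```

The point of the shared encoder is visible in the shapes: both the sequence branch and the segmentation branch consume the same per-point feature tensor.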
S30, determining a target loss item based on the predicted sketch sequence parameter, the predicted point cloud boundary, the marked sketch sequence parameter and the marked point cloud boundary, and optimizing the preset network model based on the target loss item to obtain a sketch sequence reconstruction model.
Specifically, the target loss term is used for optimizing model parameters of a preset network model, wherein the target loss term comprises a sequence parameter loss term and a segmentation loss term. The sequence parameter loss item is used for supervising the sketch sequence parameter prediction task, the segmentation loss item is used for supervising the point cloud boundary segmentation task, and therefore the target loss item is determined by combining the sequence parameter loss item and the segmentation loss item, and the sketch sequence parameter prediction task and the point cloud boundary segmentation task can be combined to improve the model performance of the sketch sequence reconstruction model obtained through training.
In one implementation manner, the determining the target loss item based on the predicted sketch sequence parameter, the predicted point cloud boundary, the labeled sketch sequence parameter, and the labeled point cloud boundary specifically includes:
Determining a sequence parameter loss term based on the predicted sketch sequence parameter and the annotated sketch sequence parameter;
Determining a segmentation loss term based on the predicted point cloud boundary and the marked point cloud boundary;
a target penalty term is determined based on the sequence parameter penalty term and the segmentation penalty term.
Specifically, the sequence parameter loss term is obtained by comparing the predicted sketch sequence parameters with the labeling sketch sequence parameters, and the segmentation loss term by comparing the predicted point cloud boundary with the labeling point cloud boundary; the target loss term is then obtained by combining the segmentation loss term and the sequence parameter loss term. The sequence parameter loss term can be computed by a direct loss calculation between the predicted and labeling sketch sequence parameters, or by separate loss calculations on the primitive types and the primitive attribute parameters included in the sketch sequence parameters. The segmentation loss term can be obtained by mapping the predicted boundary probability of each point to between 0 and 1 through a Sigmoid, and feeding the result together with the corresponding boundary label (0 or 1) into a cross entropy loss function; it can also be calculated in other ways. Of course, in practical application, the segmentation loss term may also use other loss functions, for example an L1 loss function.
In the embodiment of the application, the marked sketch sequence parameters comprise primitive types and primitive attribute parameters, and the sequence parameter loss items are obtained by respectively calculating losses of the primitive types and the primitive attribute parameters included in the sketch sequence parameters. Correspondingly, the determining the sequence parameter loss item based on the predicted sketch sequence parameter and the marked sketch sequence parameter specifically comprises the following steps:
For each of the predicted sketch sequence parameters, calculating a type loss term for the predicted primitive type in the predicted sketch sequence parameter and the primitive type in the labeling sketch sequence parameter, and a parameter loss term for the predicted primitive attribute parameter in the predicted sketch sequence parameter and the labeling primitive attribute parameter in the labeling sketch sequence parameter;
and determining a sequence parameter loss term according to the calculated type loss term and the parameter loss term.
Specifically, in the labeling sketch sequence parameter and the predicted sketch sequence parameter, the primitive types of the primitives are located in the same column, so a labeling primitive type vector and a predicted primitive type vector can be obtained by extracting that column, and the type loss term can then be calculated from them through a first preset loss function (for example, a cross entropy loss function). Similarly, the labeling primitive attribute parameters and the predicted primitive attribute parameters can be extracted from the labeling and predicted sketch sequence parameters respectively, and the parameter loss term calculated through a second preset loss function (for example, an L1 loss function). Finally, the sequence parameter loss term is obtained as the sum, or a weighted sum, of the type loss term and the parameter loss term.
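A minimal sketch of these loss computations, assuming the predicted sequence carries type logits followed by attribute values; the column layout, the unit weights, and the -1 padding mask are illustrative assumptions, not the patent's exact formulation.

```python
import torch
import torch.nn.functional as F

def sequence_losses(pred_seq, gt_seq, pred_seg, gt_seg, n_types=5):
    """Type loss: cross entropy over the type-logit columns against the label type
    column. Parameter loss: L1 over attribute columns, masked so -1 padding slots
    do not contribute. Segmentation loss: sigmoid + binary cross entropy on boundary
    logits (BCE-with-logits fuses the two)."""
    # pred_seq: (B, S, n_types + 5) -> first n_types entries are type logits,
    # remaining 5 are attribute parameters; gt_seq: (B, S, 6) in the labeled layout.
    type_loss = F.cross_entropy(
        pred_seq[..., :n_types].reshape(-1, n_types),
        gt_seq[..., 0].long().reshape(-1))
    mask = (gt_seq[..., 1:] != -1).float()          # ignore padded parameter slots
    param_loss = (torch.abs(pred_seq[..., n_types:] - gt_seq[..., 1:]) * mask).sum() \
        / mask.sum().clamp(min=1)
    seg_loss = F.binary_cross_entropy_with_logits(pred_seg, gt_seg)
    return type_loss, param_loss, seg_loss

pred_seq = torch.randn(2, 4, 10)                    # 5 type logits + 5 params per step
gt_seq = torch.cat([torch.randint(0, 5, (2, 4, 1)).float(),
                    torch.randn(2, 4, 5)], dim=-1)
pred_seg = torch.randn(2, 64)
gt_seg = torch.randint(0, 2, (2, 64)).float()
t, p, s = sequence_losses(pred_seq, gt_seq, pred_seg, gt_seg)
total = 1.0 * t + 1.0 * p + 1.0 * s                 # weighted sum; weights assumed equal
```

Masking the padded slots matters: without it, the network would be penalized for whatever it predicts in slots the label marks as unused.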
This completes the description of one implementation of the training process for the sketch sequence reconstruction model. In practical application, the sketch sequence reconstruction model can be trained through this implementation as is, or a denoising task can be added during training to strengthen the model's learning of the labeling information; this accelerates that learning and thereby speeds up the training of the sketch sequence reconstruction model.
Based on this, in an implementation manner of the embodiment of the present application, in a training process of the preset network model based on training point cloud data, the method further includes:
executing noise adding operation on the marked sketch sequence parameters of the training point cloud data to obtain noise marked sketch sequence parameters;
And inputting the noise marking sketch sequence parameters into the preset network model, and outputting noise removing marking sketch sequence parameters through the preset network model.
Specifically, the noise in performing the noise adding operation is randomly generated, and the noise added for each of the annotation sketch sequence parameters may be the same or different. In the embodiment of the application, in the training process, the noise added to each marked sketch sequence parameter is different from each other, so that the diversity of noise data carried in a training data set can be enriched, and the sketch sequence reconstruction model can cope with various noise conditions.
In one implementation, the noise adding operation includes a type noise operation on the primitive type and/or a parameter noise operation on the primitive attribute parameters. That is, when adding noise to the labeling sketch sequence parameters, the type noise operation alone may be performed on the primitive type, the parameter noise operation alone may be performed on the primitive attribute parameters, or both operations may be performed. In an exemplary implementation, noise is added to the primitive type and the primitive attribute parameters at the same time, so as to increase the difference between the noised labeling sketch sequence parameters and the original primitive types and primitive attribute parameters. When performing the type noise operation, the primitive type may be replaced with a noise type according to a probability; when performing the parameter noise operation, one or more attribute parameter values in the primitive attribute parameters may be perturbed.
Further, after the noise labeling sketch sequence parameters are obtained, they are used as an input item of the preset network model, and the corresponding denoised labeling sketch sequence parameters are output through the preset network model. For example, as shown in fig. 3, when the preset network model adopts the above model structure, the noise labeling sketch sequence parameters may be used as an input item of the Transformer decoder, and the denoised labeling sketch sequence parameters are output through the Transformer decoder.
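The noise adding operation on a labeled sequence might be sketched as follows; the flip probability, the noise scale, and the convention of leaving -1 padding untouched are illustrative assumptions.

```python
import torch

def add_noise(seq, n_types=4, type_prob=0.2, param_std=0.05):
    """Noised copy of a labeled sketch sequence (rows = primitives, 6 columns):
    column 0 (type code) is replaced by a random type with probability `type_prob`,
    and small Gaussian noise perturbs the non-padded attribute columns."""
    noisy = seq.clone()
    flip = torch.rand(seq.shape[0]) < type_prob
    noisy[flip, 0] = torch.randint(1, n_types + 1, (int(flip.sum()),)).float()
    mask = seq[:, 1:] != -1                     # leave -1 padding slots untouched
    noisy[:, 1:] = torch.where(mask,
                               seq[:, 1:] + param_std * torch.randn_like(seq[:, 1:]),
                               seq[:, 1:])
    return noisy

# Two line-segment rows in the 6-dim layout; columns 3-5 are unused (-1 padded).
seq = torch.tensor([[2.0, 1.0, 0.0, -1.0, -1.0, -1.0],
                    [2.0, 0.0, 1.0, -1.0, -1.0, -1.0]])
noisy = add_noise(seq)
```

Because the noise is drawn fresh per call, each labeled sequence receives different noise across the training run, matching the diversity goal described above.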
When the noise labeling sketch sequence parameters are used as an input item of the preset network model, a denoising loss term is built from the noise labeling sketch sequence parameters and the denoised labeling sketch sequence parameters when determining the target loss term, and this denoising loss term becomes part of the target loss term. That is, the target loss term includes a denoising loss term, determined from the denoised labeling sketch sequence parameters and the labeling sketch sequence parameters corresponding to the training point cloud data. Based on this, the target loss term may be a weighted sum of the sequence parameter loss term (comprising the type loss term and the parameter loss term), the segmentation loss term, and the denoising loss term, i.e., the target loss term $L_{total}$ can be expressed as:

$$L_{total} = \lambda_{1} L_{type} + \lambda_{2} L_{param} + \lambda_{3} L_{seg} + \lambda_{4} L_{denoise}$$

where $L_{type}$ denotes the type loss term, $L_{param}$ the parameter loss term, $L_{seg}$ the segmentation loss term, $L_{denoise}$ the denoising loss term, and $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$, $\lambda_{4}$ are weighting coefficients.
In summary, the present embodiment provides a training method for a sketch sequence reconstruction model: training point cloud data is input into a preset network model, which determines predicted sketch sequence parameters and a predicted point cloud boundary; a target loss term is then determined based on the predicted sketch sequence parameters, the predicted point cloud boundary, the labeling sketch sequence parameters, and the labeling point cloud boundary, and the preset network model is optimized based on the target loss term to obtain the sketch sequence reconstruction model. By combining the point cloud segmentation task and the sketch sequence prediction task, the application significantly improves the network's extraction of deep point cloud information and the precision of parameter prediction, solves the problems that boundaries are unclear in the reconstruction from three-dimensional point cloud to sketch sequence and that a three-dimensional point cloud cannot be directly converted into a modeling sequence, and greatly improves the efficiency of reverse engineering. Meanwhile, the application adopts sketch sequence parameters comprising primitive types and primitive attribute parameters as the sketch representation and outputs them through the sketch sequence reconstruction model, so the geometric model can be reconstructed directly from the sketch sequence parameters, greatly improving the intelligence of reverse engineering.
Based on the above training method of the sketch sequence reconstruction model, this embodiment provides a geometric model reconstruction method that applies a sketch sequence reconstruction model obtained by any of the training methods described above. As shown in fig. 4, the geometric model reconstruction method includes:
B10, acquiring point cloud data to be reconstructed;
B20, inputting the point cloud data to be reconstructed into the sketch sequence reconstruction model, and outputting sketch sequence parameters and boundary point clouds corresponding to the point cloud data to be reconstructed through the sketch sequence reconstruction model;
and B30, constructing a geometric model corresponding to the point cloud data based on the sketch sequence parameters and the boundary point cloud.
Specifically, a complete reconstruction process consists of parameter inference and parameter correction, where parameter inference directly applies the trained sketch sequence reconstruction model to output the sketch sequence parameters and the boundary point cloud. The point cloud data to be reconstructed is a point cloud representation of a geometric model, where the point cloud data is planar point cloud data; for example, it may be the point cloud representation of a three-dimensional model constructed in CAD design software. In addition, in practical application, after the point cloud data to be reconstructed is obtained, it may be preprocessed, where the preprocessing may include standardization, normalization, and the like. In this embodiment, the preprocessing may specifically include: obtaining the minimum coordinate value and the maximum coordinate value of the point cloud data to be reconstructed; subtracting the minimum coordinate value from each point's coordinates and dividing by the maximum coordinate extent to obtain adjusted coordinates for each point; and subtracting the center of the adjusted point cloud from each point's adjusted coordinates to obtain the preprocessed coordinates of each point.
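The preprocessing described above might be sketched as follows; since the source wording on the scale statistic is ambiguous, the overall coordinate extent is used here as an assumption.

```python
import numpy as np

def preprocess(points):
    """Normalization as sketched in the text: shift by the minimum coordinate,
    scale by the maximum extent, then center on the scaled point cloud's centroid.
    Using the overall extent as the divisor is an assumption about the ambiguous
    'maximum coordinate value' in the description."""
    mn = points.min(axis=0)
    scale = (points.max(axis=0) - mn).max()
    scaled = (points - mn) / scale
    return scaled - scaled.mean(axis=0)

pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                [0.0, 2.0, 0.0], [2.0, 2.0, 2.0]])
out = preprocess(pts)
```

After this step the cloud is zero-centered with coordinates bounded by the unit extent, which keeps the network's input range consistent across models of different physical sizes.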
In one implementation manner, the constructing the geometric model corresponding to the point cloud data based on the sketch sequence parameter and the boundary point cloud specifically includes:
correcting the sketch sequence parameters by taking the boundary point cloud as an implicit field grid point to obtain target sketch sequence parameters;
And converting the target sketch sequence parameters into modeling sequences, and importing the modeling sequences into modeling software to obtain a geometric model corresponding to the point cloud data.
Specifically, the sketch sequence parameters output in step B20 are used as initial values and optimized against an objective function; the sketch sequence parameters are corrected so that they coincide with the boundary point cloud, improving their accuracy. The correction process is implemented with stochastic gradient descent in PyTorch: the sketch sequence parameters are taken as the optimization variables, the boundary point cloud is used as the implicit field grid points, the difference between the implicit distance field computed from the sketch sequence parameters and the implicit distance field of the real point cloud is used as the objective function, and the sketch sequence parameters are optimized by gradient descent to obtain the target sketch sequence parameters.
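In miniature, the correction step can be sketched with a single circle primitive: its parameters are the optimization variables, the boundary points are the implicit field grid points, and gradient descent drives the SDF at those points to zero. Restricting to one circle rather than a full sketch sequence is a simplification for illustration; step counts and learning rate are assumptions.

```python
import math
import torch

def refine_circle(boundary_pts, center0, r0, steps=200, lr=0.05):
    """Correction in miniature: treat the predicted parameters (one circle) as
    optimization variables, evaluate the circle's SDF at the boundary points,
    and minimize its square with SGD so the zero level set fits the boundary."""
    center = torch.tensor(center0, requires_grad=True)
    radius = torch.tensor(r0, requires_grad=True)
    opt = torch.optim.SGD([center, radius], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sdf = torch.norm(boundary_pts - center, dim=1) - radius  # circle SDF per point
        loss = (sdf ** 2).mean()          # boundary points should lie on the zero level set
        loss.backward()
        opt.step()
    return center.detach(), radius.detach()

theta = torch.linspace(0, 2 * math.pi, 64)
pts = torch.stack([torch.cos(theta), torch.sin(theta)], dim=1)   # unit circle samples
c, r = refine_circle(pts, [0.1, -0.1], 0.8)                      # deliberately offset start
```

Starting from a perturbed center and radius, the optimization recovers the unit circle the boundary points were sampled from.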
Further, the data storage format of the modeling sequence differs from that of the target sketch sequence parameters; the modeling sequence's format can be determined according to the modeling software used for the geometric model corresponding to the point cloud data to be reconstructed, so that the modeling sequence can be imported into the modeling software and the geometric model reconstructed there. For example, when the modeling software is CAD and the modeling sequence includes sketch data, the target sketch sequence parameters can be converted into sketch data after they are obtained, the sketch data imported into the CAD software, and the geometric model reconstructed by it.
Based on the above training method of the sketch sequence reconstruction model, the present embodiment further provides a computer readable storage medium storing one or more programs that can be executed by one or more processors to implement the steps of the training method of the sketch sequence reconstruction model described in the above embodiments.
Based on the training method of the sketch sequence reconstruction model, the present application further provides a terminal device, as shown in Fig. 5, which comprises at least one processor 20, a display screen 21, and a memory 22, and may further comprise a communication interface 23 and a bus 24, wherein the processor 20, the display screen 21, the memory 22 and the communication interface 23 can communicate with one another via the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 can transmit information. The processor 20 can invoke logic instructions in the memory 22 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 22 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer readable storage medium.
The memory 22, as a computer readable storage medium, can be configured to store software programs and computer executable programs, such as the program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 performs functional applications and data processing, i.e., implements the methods of the embodiments described above, by running the software programs, instructions, or modules stored in the memory 22.
The memory 22 may include a program storage area, which can store an operating system and at least one application program required for functions, and a data storage area, which can store data created according to the use of the terminal device, and the like. In addition, the memory 22 may include high-speed random access memory and may also include nonvolatile memory. For example, any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, may be used, as may a transitory storage medium.
In addition, the specific processes by which the storage medium and the processors in the terminal device load and execute the plurality of instructions are described in detail in the method above and are not repeated here.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, and that such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (16)

1. A training method of a sketch sequence reconstruction model, characterized by comprising the following steps:
Acquiring a training data set, wherein the training data set comprises a plurality of training point cloud data, and a marked sketch sequence parameter and a marked point cloud boundary corresponding to each training point cloud data;
Inputting the training point cloud data into a preset network model, and determining predicted sketch sequence parameters and predicted point cloud boundaries through the preset network model;
Determining a target loss item based on the predicted sketch sequence parameter, the predicted point cloud boundary, the marked sketch sequence parameter and the marked point cloud boundary, and optimizing the preset network model based on the target loss item to obtain a sketch sequence reconstruction model;
the acquiring the training data set specifically comprises the following steps:
Acquiring a plurality of entity three-dimensional modeling sequence data;
Reconstructing the solid three-dimensional modeling sequence data into modeling entities for each solid three-dimensional modeling sequence data; determining a three-dimensional point cloud model according to the modeling entity, and projecting the three-dimensional point cloud model to a preset plane to obtain training point cloud data; determining a modeling sequence implicit field according to the training point cloud data, and determining a labeling point cloud boundary based on the modeling sequence implicit field; vectorizing the sketch sequence parameters in the entity three-dimensional modeling sequence data to obtain marked sketch sequence parameters so as to obtain training data, wherein the sketch sequence parameters comprise primitive types and primitive attribute parameters of all primitives;
and taking the obtained set formed by all training data as a training data set.
2. The method for training a sketch sequence reconstruction model according to claim 1, wherein determining a three-dimensional point cloud model according to the modeling entity, projecting the three-dimensional point cloud model to a preset plane to obtain training point cloud data specifically comprises:
Dividing the modeling entity to obtain a plurality of stretching entities;
sampling the mesh model of each stretching entity to obtain a three-dimensional point cloud model corresponding to the stretching entity;
and projecting the three-dimensional point cloud model corresponding to the stretching entity to a sketch plane according to a stretching axis to obtain two-dimensional training point cloud data.
3. The method according to claim 1, wherein vectorizing the sketch sequence parameters in the solid three-dimensional modeling sequence data to obtain labeled sketch sequence parameters specifically comprises:
selecting sketch sequence parameters corresponding to the training point cloud data from the entity three-dimensional modeling sequence data, and reading primitive types and primitive attribute parameters of each primitive contained in the sketch sequence parameters;
Constructing a primitive vector corresponding to each primitive based on the primitive type and the primitive attribute parameter of each primitive, wherein the vector dimensions of the primitive vectors corresponding to each primitive are the same;
selecting a starting primitive from the primitives, taking the primitive vector corresponding to the starting primitive as a starting vector element, and arranging the primitive vectors in a column direction according to a preset sequence to obtain an initial vector matrix;
And adding a starting point indicator in front of the forefront vector element of the initial vector matrix and adding a termination indicator behind the last vector element of the initial vector matrix to obtain the marked sketch sequence parameter.
4. A training method of a sketch sequence reconstruction model according to claim 3, wherein constructing a primitive vector corresponding to each primitive based on a primitive type and a primitive attribute parameter of each primitive specifically comprises:
Converting the primitive types and primitive attribute parameters of each primitive into initial primitive vectors according to a preset format;
And compensating the element positions that lack corresponding parameter information in each initial primitive vector with a preset value, to obtain the primitive vector of each primitive.
5. The training method of a sketch sequence reconstruction model according to claim 4, wherein the preset format includes parameter information corresponding to a primitive type, parameter information corresponding to a primitive position parameter, parameter information corresponding to a circular arc midpoint parameter, and parameter information corresponding to a circular radius parameter.
6. The method for training a sketch sequence reconstruction model according to claim 1, wherein the preset network model comprises a backbone point cloud encoder, a backbone point cloud decoder, a Transformer encoder and a Transformer decoder, the backbone point cloud encoder being connected with the backbone point cloud decoder and the Transformer encoder respectively, and the Transformer encoder being connected with the Transformer decoder, wherein the Transformer decoder is used for determining the predicted sketch sequence parameters, and the backbone point cloud decoder is used for determining the predicted point cloud boundary.
7. The method according to claim 6, wherein the predicted sketch sequence parameters include predicted sketch sequence parameters output by an output layer of the Transformer decoder and predicted sketch sequence parameters output by at least one intermediate layer of the Transformer decoder.
8. The method for training a sketch sequence reconstruction model according to claim 1, wherein the determining a target loss term based on the predicted sketch sequence parameter, the predicted point cloud boundary, the labeled sketch sequence parameter and the labeled point cloud boundary specifically comprises:
Determining a sequence parameter loss term based on the predicted sketch sequence parameter and the annotated sketch sequence parameter;
Determining a segmentation loss term based on the predicted point cloud boundary and the marked point cloud boundary;
a target penalty term is determined based on the sequence parameter penalty term and the segmentation penalty term.
9. The method according to claim 8, wherein the annotated sketch sequence parameters include primitive types and primitive attribute parameters, and wherein determining a sequence parameter loss term based on the predicted sketch sequence parameters and the annotated sketch sequence parameters specifically includes:
For each predicted sketch sequence parameter, calculating a type loss term between the predicted primitive type in the predicted sketch sequence parameter and the primitive type in the annotated sketch sequence parameter, and a parameter loss term between the predicted primitive attribute parameter in the predicted sketch sequence parameter and the annotated primitive attribute parameter in the annotated sketch sequence parameter;
and determining a sequence parameter loss term according to the calculated type loss term and the parameter loss term.
10. The method for training a sketch sequence reconstruction model according to any one of claims 1-9, wherein, when the training point cloud data is input into the preset network model and the predicted sketch sequence parameters and the predicted point cloud boundary are determined through the preset network model, the method further comprises:
executing noise adding operation on the marked sketch sequence parameters of the training point cloud data to obtain noise marked sketch sequence parameters;
And inputting the noise marking sketch sequence parameters into the preset network model, and outputting noise removing marking sketch sequence parameters through the preset network model.
11. The method of claim 10, wherein the target loss term comprises a denoising loss term, the denoising loss term being determined based on the denoising annotated sketch sequence parameter and the annotated sketch sequence parameter corresponding to the training point cloud data.
12. A geometric model reconstruction method, characterized in that it applies a sketch sequence reconstruction model obtained by the training method of a sketch sequence reconstruction model according to any one of claims 1-11, the geometric model reconstruction method comprising:
acquiring point cloud data to be reconstructed;
Inputting the point cloud data to be reconstructed into the sketch sequence reconstruction model, and outputting sketch sequence parameters and boundary point clouds corresponding to the point cloud data to be reconstructed through the sketch sequence reconstruction model;
and constructing a geometric model corresponding to the point cloud data based on the sketch sequence parameters and the boundary point cloud.
13. The geometric model reconstruction method according to claim 12, wherein the constructing the geometric model corresponding to the point cloud data based on the sketch sequence parameter and the boundary point cloud specifically includes:
correcting the sketch sequence parameters by taking the boundary point cloud as implicit field grid points to obtain target sketch sequence parameters;
and converting the target sketch sequence parameters into a modeling sequence, and importing the modeling sequence into modeling software to obtain the geometric model corresponding to the point cloud data.
14. The geometric model reconstruction method according to claim 12, further comprising a preprocessing process performed on the point cloud data to be reconstructed after the point cloud data to be reconstructed is acquired, wherein the preprocessing process specifically comprises:
Acquiring a minimum coordinate value and a maximum absolute value coordinate value of the point cloud data to be reconstructed;
Subtracting the minimum coordinate value from the point cloud coordinates of the point cloud data to be reconstructed, and dividing the result by the maximum absolute value coordinate value to obtain the adjusted coordinates of the point cloud;
Subtracting the point cloud center of the point cloud data to be reconstructed from the adjusted coordinates of the point cloud to obtain the preprocessed point cloud coordinates of the point cloud.
15. A computer readable storage medium, characterized in that it stores one or more programs executable by one or more processors to implement the steps of the training method of a sketch sequence reconstruction model according to any one of claims 1-11 and/or the steps of the geometric model reconstruction method according to any one of claims 12-14.
16. A terminal device, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
The processor, when executing the computer readable program, implements the steps of the training method of the sketch sequence reconstruction model according to any one of claims 1-11 and/or the steps of the geometric model reconstruction method according to any one of claims 12-14.
CN202410179959.XA 2024-02-18 2024-02-18 Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment Active CN117725966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410179959.XA CN117725966B (en) 2024-02-18 2024-02-18 Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment


Publications (2)

Publication Number Publication Date
CN117725966A (en) 2024-03-19
CN117725966B (en) 2024-06-11

Family

ID=90211103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410179959.XA Active CN117725966B (en) 2024-02-18 2024-02-18 Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment

Country Status (1)

Country Link
CN (1) CN117725966B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935291B (en) * 2024-03-22 2024-06-11 粤港澳大湾区数字经济研究院(福田) Training method, sketch generation method, terminal and medium for sketch generation model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622479A (en) * 2012-03-02 2012-08-01 浙江大学 Reverse engineering computer-aided design (CAD) modeling method based on three-dimensional sketch
CN115965833A (en) * 2022-12-26 2023-04-14 中山大学·深圳 Point cloud sequence recognition model training and recognition method, device, equipment and medium
CN116721207A (en) * 2023-05-30 2023-09-08 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method, device, equipment and storage medium based on transducer model
CN116863091A (en) * 2023-06-30 2023-10-10 中水珠江规划勘测设计有限公司 Method and device for creating three-dimensional model of earth-rock dam and extracting engineering quantity
CN117058384A (en) * 2023-08-22 2023-11-14 山东大学 Method and system for semantic segmentation of three-dimensional point cloud
CN117150664A (en) * 2023-07-25 2023-12-01 北京航空航天大学 CAD model analysis method, device, equipment and storage medium
CN117454495A (en) * 2023-12-25 2024-01-26 北京飞渡科技股份有限公司 CAD vector model generation method and device based on building sketch outline sequence

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4200739A1 (en) * 2020-08-20 2023-06-28 Siemens Industry Software Inc. Method and system for providing a three-dimensional computer aided-design (cad) model in a cad environment
EP4092558A1 (en) * 2021-05-21 2022-11-23 Dassault Systèmes Parameterization of cad model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds; Yujia Liu et al.; arXiv:2312.04962v1; 2023-12-07; pp. 1-12 *
Research progress on geometric model reconstruction methods for building point clouds; Du Jianli et al.; Journal of Remote Sensing; 2019-03-31; Vol. 23, No. 3; pp. 374-391 *


Similar Documents

Publication Publication Date Title
CN111627065B (en) Visual positioning method and device and storage medium
CN117725966B (en) Training method of sketch sequence reconstruction model, geometric model reconstruction method and equipment
JP7194685B2 (en) Convolutional neural network based on octree
CN113706686B (en) Three-dimensional point cloud reconstruction result completion method and related assembly
CN113705588A (en) Twin network target tracking method and system based on convolution self-attention module
CN111476719A (en) Image processing method, image processing device, computer equipment and storage medium
KR102305230B1 (en) Method and device for improving accuracy of boundary information from image
CN113516133B (en) Multi-modal image classification method and system
CN113222964B (en) Method and device for generating coronary artery central line extraction model
CN116152611B (en) Multistage multi-scale point cloud completion method, system, equipment and storage medium
CN112132739A (en) 3D reconstruction and human face posture normalization method, device, storage medium and equipment
CN111598111A (en) Three-dimensional model generation method and device, computer equipment and storage medium
CN116543388B (en) Conditional image generation method and related device based on semantic guidance information
CN115908908A (en) Remote sensing image gathering type target identification method and device based on graph attention network
CN111707262A (en) Point cloud matching method, medium, terminal and device based on closest point vector projection
CN114638866A (en) Point cloud registration method and system based on local feature learning
CN112825199A (en) Collision detection method, device, equipment and storage medium
CN117422823A (en) Three-dimensional point cloud characterization model construction method and device, electronic equipment and storage medium
CN115222947B (en) Rock joint segmentation method and device based on global self-attention transformation network
CN116912296A (en) Point cloud registration method based on position-enhanced attention mechanism
CN116597071A (en) Defect point cloud data reconstruction method based on K-nearest neighbor point sampling capable of learning
CN116110102A (en) Face key point detection method and system based on auxiliary thermodynamic diagram
CN112837420B (en) Shape complement method and system for terracotta soldiers and horses point cloud based on multi-scale and folding structure
CN111860824A (en) Data processing method and related product
CN114842153A (en) Method and device for reconstructing three-dimensional model from single two-dimensional wire frame diagram and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant