CN115982864B - Reconstruction method for assembly coordination boundary characteristics of large composite material component

Reconstruction method for assembly coordination boundary characteristics of large composite material component

Info

Publication number
CN115982864B
CN115982864B (application CN202310273356.1A)
Authority
CN
China
Prior art keywords
boundary
point cloud
point
feature
cloud data
Prior art date
Legal status
Active
Application number
CN202310273356.1A
Other languages
Chinese (zh)
Other versions
CN115982864A (en)
Inventor
汪俊
曾航彬
陈红华
李子宽
张沅
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202310273356.1A
Publication of CN115982864A
Application granted
Publication of CN115982864B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of assembly coordination boundary feature reconstruction and solves the technical problem that, in the prior art, the assembly coordination boundary features of large composite material components are reconstructed with low precision and accuracy. It specifically relates to a method for reconstructing the assembly coordination boundary features of a large composite material component: S1, point cloud data of the assembly butt joint of the large composite material component are acquired; S2, the point cloud data are preprocessed and divided into boundary feature points and non-boundary feature points; S3, a multi-scale deep learning feature encoding network model is constructed; S4, the multi-scale deep learning feature encoding network model is trained by regression against a loss function; S5, the trained multi-scale deep learning feature encoding network model is used to complete the reconstruction of the assembly coordination boundary features of the large composite material component. The invention reconstructs the assembly coordination boundary features of large composite material components more faithfully, thereby improving the precision and accuracy of their assembly.

Description

Reconstruction method for assembly coordination boundary characteristics of large composite material component
Technical Field
The invention relates to the technical field of assembly coordination boundary feature reconstruction, in particular to a reconstruction method of assembly coordination boundary features of a large composite material component.
Background
With the development of aviation technology, the performance requirements on aviation equipment structures are steadily increasing. Composite materials offer light weight, high specific strength, structural designability, a high overall cost-effectiveness ratio, and other advantages, and are therefore widely applied in the aerospace industry.
However, composite components have characteristics and requirements during assembly that differ from those of conventional metal structures. First, composite members exhibit two kinds of geometric deviation: during the curing process, factors such as material anisotropy, non-uniform resin shrinkage, and the difference in thermal expansion coefficient between the mold material and the part material cause the molded composite member to buckle and deform, producing significant part manufacturing deviations;
in addition, composite panels are integrally formed through a skin-stringer co-curing process, and as their size keeps growing the whole remains weak in rigidity, so the assembly conditions and assembly adjustment forces strongly influence the shape of the panel.
Because of these two geometric deviations, shim compensation and forced assembly are two common assembly deformation control techniques during the assembly of composite aircraft panels. In the prior art, however, the assembly coordination boundary features of large composite components are reconstructed with low precision and accuracy, so the assembly coordination boundary feature line cannot be reconstructed accurately, which lowers the digital assembly level of the aircraft and the assembly accuracy of aircraft parts.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a method for reconstructing the assembly coordination boundary features of a large composite material component, which solves the technical problem that such features are reconstructed with low precision and accuracy in the prior art.
To solve this technical problem, the invention provides the following technical solution: a method for reconstructing the assembly coordination boundary features of a large composite member, the method comprising the following steps:
S1, acquiring point cloud data of the assembly butt joint of the large composite member;
S2, preprocessing the point cloud data and dividing it into boundary feature points and non-boundary feature points;
S3, constructing a multi-scale deep learning feature encoding network model;
S4, training the multi-scale deep learning feature encoding network model by regression against the loss function;
S5, using the trained multi-scale deep learning feature encoding network model to complete the reconstruction of the assembly coordination boundary features of the large composite material component.
Further, in step S2, the point cloud data is preprocessed and divided into boundary feature points and non-boundary feature points, and the specific process includes the following steps:
S21, denoising the acquired point cloud data and setting a feature distance threshold ε, where ε bounds the distance from a point in the point cloud data to the adjacent boundary feature;
S22, constructing a theoretical digital model of the large composite member and downsampling it into a point cloud;
S23, locating the boundary feature positions: for a point P in the point cloud data, finding the boundary feature point Q′ in the adjacent theoretical digital-model point cloud;
S24, projecting the point P onto the boundary feature point Q′, and calculating the distance D between the point P and the boundary feature point Q′;
S25, judging whether the point P is a boundary feature point according to the distance threshold ε;
if the distance D is smaller than the threshold ε, the point P is marked as a boundary feature point.
Further, in step S2, the criterion for dividing boundary feature points from non-boundary feature points is the comparison between the distance from a point in the point cloud data to the adjacent boundary feature and the distance threshold ε.
Further, in step S3, the multi-scale deep learning feature encoding network model is composed of a feature extraction module, a category prediction module and a displacement prediction module;
the feature extraction module adopts a point-wise backbone network;
the category prediction module and the displacement prediction module each consist of a multi-layer MLP and an activation function, and are used to predict the class of each point in the point cloud and the displacement vectors of the boundary feature points.
Further, in step S4, the multi-scale deep learning feature encoding network model is trained by regression against the loss function, and the specific process includes the following steps:
S41, normalizing the point cloud data to obtain the processed point cloud data;
S42, inputting the processed point cloud data into the feature extraction module to obtain a first multi-scale fusion feature;
S43, inputting the first multi-scale fusion feature into the category prediction module and the displacement prediction module respectively, and obtaining, as predicted values, the binary classification label of each point in the point cloud data and the predicted displacement vectors from the boundary feature points to the adjacent boundary features;
S44, constructing a loss function from the predicted values and the true values, wherein the true values are the positions of the boundary feature point cloud in the theoretical digital-model point cloud and the actual displacement vectors;
S45, obtaining the trained multi-scale deep learning feature encoding network model with the parameters optimized by the stochastic gradient descent algorithm.
Further, in step S41, the point cloud data is normalized to the space of the unit sphere and centered on the origin.
Further, step S42 specifically includes:
the feature extraction module extracts local features from the processed point cloud data point by point, obtains a global feature through max pooling, and finally splices the local and global features together to obtain the first multi-scale fusion feature.
Further, in step S5, the reconstruction of the assembly coordination boundary features of the large composite member is completed by using the trained multi-scale deep learning feature encoding network model, and the specific process includes the following steps:
s51, inputting the preprocessed point cloud data into a trained multi-scale deep learning feature coding network model to obtain a second multi-scale fusion feature;
s52, respectively inputting the second multi-scale fusion features into a category prediction module and a displacement prediction module to obtain boundary feature points and displacement vectors reaching adjacent boundary features;
and S53, the boundary characteristic points are displaced to corresponding boundary characteristics according to the displacement vectors, and reconstruction of the assembly coordination boundary characteristics of the large composite material component is completed.
By means of the above technical solution, the invention provides a method for reconstructing the assembly coordination boundary features of a large composite member, which has at least the following beneficial effects:
1. The invention accurately reconstructs the assembly coordination boundary feature line from real measured point cloud data of large composite components, provides a machining basis for high-precision processing of large composite components with mutual assembly relationships at the assembly stage, and is of great significance for improving the digital assembly level of the aircraft and the assembly precision of aircraft parts;
2. The invention extracts multi-scale fusion features of the point cloud with a deep learning network and predicts the boundary feature points together with their displacement vectors to the adjacent boundary features, so the assembly coordination boundary features of large composite components are reconstructed more faithfully, thereby improving the precision and accuracy of the assembly of large composite components.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of a method for rebuilding assembly coordination boundary features of a large composite member of the present invention;
FIG. 2 is a network frame diagram of a multi-scale deep learning feature encoding network model of the present invention;
FIG. 3 is an exemplary graph of the results of the assembly coordination boundary feature reconstruction of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention may become more readily apparent, the invention is described in further detail below with reference to the appended drawings and the detailed description, so that the process of applying the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in a method of implementing an embodiment described above may be implemented by a program to instruct related hardware, and thus the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Background overview
In recent years, with the development of aviation technology, the performance requirements on aviation equipment structures have been steadily increasing. Composite materials offer light weight, high specific strength, structural designability, a high overall cost-effectiveness ratio, and other advantages, and are widely applied in the aerospace industry. However, composite components have characteristics and requirements during assembly that differ from those of conventional metal structures. First, composite members exhibit two kinds of geometric deviation: during the curing process, factors such as material anisotropy, non-uniform resin shrinkage, and the difference in thermal expansion coefficient between the mold material and the part material cause the molded composite member to buckle and deform, producing significant part manufacturing deviations; in addition, composite panels are integrally formed through a skin-stringer co-curing process, and as their size keeps growing the whole remains weak in rigidity, so the assembly conditions and assembly adjustment forces strongly influence the shape of the panel.
Because of these two geometric deviations, shim compensation and forced assembly are two common assembly deformation control techniques during the assembly of composite aircraft panels. A composite member is formed by laying up multiple plies of material and is an anisotropic heterogeneous material; it is brittle, has poor impact and pressure resistance and low interlaminar strength, and is easily damaged under external force; even when undamaged, residual stresses affect its service life and reliability.
Meanwhile, with the rapid development of computer network integration, digital simulation, automatic control, and related technologies, modern aircraft assembly is moving toward digitization, automation, and flexibility. Digital measurement technology has been adopted successively by the major domestic aircraft manufacturers in the assembly process, mainly for part pose adjustment and assembly quality inspection during the assembly of large aircraft components. Point cloud data, the main data form produced by digital measurement, is widely used in the aviation field, for example in error analysis between the measured point cloud of an aircraft part and its theoretical digital model, virtual assembly based on measured point cloud data of aircraft parts, and reverse reconstruction based on measured point cloud data of aircraft parts. The feature lines of point cloud data are the surface contour feature lines of the actual measured object; as carriers of the object's important geometric features, they have important applications in the aviation field, for example: describing salient features of aircraft parts, constraint modeling of aircraft parts, determining the trimming line of aircraft panel parts, and inspecting the assembly accuracy of aircraft parts.
However, in the prior art the assembly coordination boundary features of large composite components are reconstructed with low precision and accuracy, so the assembly coordination boundary feature line cannot be reconstructed accurately, which lowers the digital assembly level of the aircraft and the assembly accuracy of aircraft parts.
In order to solve the technical problems described above, the present embodiment provides a method for reconstructing the assembly coordination boundary features of a large composite member, which addresses the low precision and accuracy of such reconstruction in the prior art.
Referring to fig. 1-3, a specific implementation of the present embodiment is shown. In this embodiment, multi-scale fusion features of the point cloud are extracted by a deep learning network, and the boundary feature points and their displacement vectors to the adjacent boundary features are predicted, so that the assembly coordination boundary features of the large composite component are reconstructed more faithfully, improving the precision and accuracy of its assembly.
Referring to fig. 1, the present embodiment provides a method for reconstructing the assembly coordination boundary features of a large composite member, which includes the following steps:
S1, acquiring point cloud data of the assembly butt joint of the large composite member. In this embodiment a three-dimensional scanner acquires the point cloud data of the assembly butt joint, the butt joint being the seam where several large composite members meet after assembly; the scanner captures the point cloud of this seam, from which the assembly coordination boundary feature line can be accurately reconstructed after processing.
S2, preprocessing the point cloud data and dividing it into boundary feature points and non-boundary feature points, the dividing criterion being the comparison between the distance from a point in the point cloud data to the adjacent boundary feature and the distance threshold.
In step S2, the point cloud data is preprocessed and divided into boundary feature points and non-boundary feature points, and the specific process includes the following steps:
S21, denoising the acquired point cloud data and setting a feature distance threshold ε, where ε bounds the distance from a point in the point cloud data to the adjacent boundary feature;
S22, constructing a theoretical digital model of the large composite member according to its theoretical design drawings, and downsampling the model into a point cloud;
S23, roughly locating the boundary feature positions in the measured point cloud according to the downsampled theoretical digital-model point cloud, computing the normal vector n_Q of a point Q among the boundary feature points, and, for a point P in the point cloud data, finding the boundary feature point Q′ in the adjacent theoretical digital-model point cloud;
S24, projecting the point P onto the boundary feature point Q′, and calculating the distance D between the point P and the boundary feature point Q′;
the projection point P′ of the point P and the distance D are computed as:

P′ = P - ((P - Q) · n_Q) n_Q

D = ‖P′ - Q′‖

where Q is a point among the boundary feature points and n_Q is the normal vector of Q.
S25, judging whether the point P is a boundary feature point according to the distance threshold ε;
if the distance D is smaller than the threshold ε, the point P is marked as a boundary feature point.
Specifically, these steps avoid processing the entire point cloud: the preprocessing focuses only on points near the boundary features in the downsampled theoretical digital-model point cloud, which reduces the amount of computation. The feature distance threshold ε controls the number of boundary feature points and is generally set to 0.03 times the diagonal length of the whole point cloud.
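As a concrete illustration of steps S21 to S25, the following is a minimal NumPy sketch of the labeling procedure. The function name, the use of a SciPy k-d tree for the nearest-neighbor search, and the shortcut of letting the nearest boundary feature point serve as both Q and Q′ are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def label_boundary_points(scan_pts, boundary_pts, boundary_normals, eps_scale=0.03):
    """Label each measured point as boundary (1) or non-boundary (0), steps S21-S25.

    scan_pts:         (N, 3) denoised measured point cloud
    boundary_pts:     (M, 3) boundary feature points sampled from the
                      theoretical digital model (step S22)
    boundary_normals: (M, 3) unit normals n_Q at those points
    eps_scale:        threshold as a fraction of the cloud's diagonal (step S21)
    """
    # S21: feature distance threshold from the diagonal length of the cloud
    diag = np.linalg.norm(scan_pts.max(axis=0) - scan_pts.min(axis=0))
    eps = eps_scale * diag

    # S23: for each point P, the nearest boundary feature point (used as both Q and Q')
    tree = cKDTree(boundary_pts)
    _, idx = tree.query(scan_pts)
    q = boundary_pts[idx]
    n_q = boundary_normals[idx]

    # S24: project P onto the local plane at Q, then measure the distance to Q'
    offset = ((scan_pts - q) * n_q).sum(axis=1, keepdims=True)
    p_proj = scan_pts - offset * n_q
    d = np.linalg.norm(p_proj - q, axis=1)

    # S25: points closer than the threshold are boundary feature points
    return (d < eps).astype(np.int64)
```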
S3, constructing a multi-scale deep learning feature encoding network model;
referring to fig. 2, the multi-scale deep learning feature encoding network model is composed of a feature extraction module, a category prediction module and a displacement prediction module;
the feature extraction module adopts a point-by-point backhaul network, such as DGCNN and PCPNet,
Figure SMS_23
representing a first multi-scale fusion feature output by a point-by-point backhaul network;
the category prediction module and the displacement prediction module are respectively composed of multi-layer MLP and an activation function, wherein the multi-layer MLP is shown in figure 2
Figure SMS_24
And->
Figure SMS_25
The multi-layer activation functions are shown as Dis and Cla in fig. 2, and the activation functions corresponding to the at least two layers of MLPs are both known means, and are not described in detail herein, and are used for predicting the category of the point cloud and the displacement vector of the boundary feature point.
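A minimal PyTorch sketch of the two prediction heads described above; the feature dimension, the hidden width, and the sigmoid on the category branch are illustrative assumptions, since the description does not fix them.

```python
import torch
import torch.nn as nn

class PredictionHeads(nn.Module):
    """Category (Cla) and displacement (Dis) heads over per-point backbone features."""

    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        # Cla branch: multi-layer MLP -> boundary / non-boundary probability
        self.cla = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )
        # Dis branch: multi-layer MLP -> 3-D displacement vector per point
        self.dis = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feats):
        # feats: (N, feat_dim) first multi-scale fusion feature
        prob = self.cla(feats).squeeze(-1)  # (N,) boundary probability
        disp = self.dis(feats)              # (N, 3) displacement to the adjacent boundary
        return prob, disp
```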
S4, training the multi-scale deep learning feature encoding network model by regression against the loss function;
in step S4, the multi-scale deep learning feature encoding network model is trained by regression according to the loss function, and the specific process includes the following steps:
s41, carrying out normalization processing on point cloud data to obtain processed point cloud data;
in step S41, the point cloud data is normalized into the space of the unit sphere centered on the origin; normalization removes the units of the data and converts it into dimensionless pure values, so that indicators of different units or orders of magnitude can be compared and weighted conveniently;
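A minimal sketch of this normalization, assuming the point cloud is held as an (N, 3) NumPy array:

```python
import numpy as np

def normalize_to_unit_sphere(pts):
    """Center the cloud at the origin and scale it into the unit sphere (step S41)."""
    centered = pts - pts.mean(axis=0)                # move the centroid to the origin
    radius = np.linalg.norm(centered, axis=1).max()  # farthest point from the origin
    return centered / radius                         # dimensionless, radius <= 1
```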
s42, inputting the processed point cloud data into a feature extraction module to obtain a first multi-scale fusion feature;
in step S42, specifically, the feature extraction module extracts local features in the processed point cloud data point by point, obtains global features through maximum pooling, and finally splices the local features and the global features together to obtain a first multi-scale fusion feature.
Local features carry rich feature information but have a small receptive field and cannot relate well to surrounding information; global features have a large receptive field but carry less rich, less discriminative feature information. The first multi-scale fusion feature, which splices the two together, combines the advantages of both: a large receptive field, rich information, and the ability to relate to surrounding context.
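The splicing of local and global features can be sketched as follows; holding the per-point features in a single (N, C) matrix is an assumption for illustration.

```python
import torch

def fuse_local_global(local_feats):
    """Splice per-point local features with a max-pooled global feature (step S42).

    local_feats: (N, C) per-point features from the point-wise backbone network.
    Returns:     (N, 2C) first multi-scale fusion feature.
    """
    global_feat = local_feats.max(dim=0).values                   # (C,) max pooling
    global_rep = global_feat.unsqueeze(0).expand_as(local_feats)  # copy to every point
    return torch.cat([local_feats, global_rep], dim=1)            # (N, 2C) fusion
```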
S43, inputting the first multi-scale fusion features into the category prediction module and the displacement prediction module respectively, and obtaining, as predicted values, the binary classification label of each point in the point cloud data and the predicted displacement vectors from the boundary feature points to the adjacent boundary features;
specifically, in this step the concatenated first multi-scale fusion features are input into the category prediction module and the displacement prediction module respectively; through an MLP and an activation function, the network outputs the binary classification label (1 or 0) of each point in the point cloud data and the predicted displacement vector from each boundary feature point to the adjacent boundary feature, where points whose classification label is 1 are the boundary feature points and only boundary feature points undergo regression of the displacement vector to the adjacent boundary feature.
S44, constructing the loss function from the predicted values and the true values, wherein the true values are the positions of the boundary feature point cloud in the theoretical digital-model point cloud and the actual displacement vectors;
the position of the boundary characteristic point cloud is three-dimensional space coordinates of the boundary characteristic point cloud, and the actual displacement vector
Figure SMS_26
For the point in the actual point cloud +.>
Figure SMS_27
Projection points in boundary features->
Figure SMS_28
A connection between them.
The constructed loss function L is as follows:

L = λ1 · L_cla + λ2 · L_dis

L_cla = Σ_i φ(-s_i · ŷ_i), with s_i = 2·y_i - 1

L_dis = Σ_{i : y_i = 1} ‖d_i - d̂_i‖²

where λ1 and λ2 are two coefficients balancing the classification label prediction and the displacement vector prediction; L_cla and L_dis are the loss functions of the classification prediction and the displacement prediction respectively; y_i and ŷ_i are, respectively, the point cloud label obtained from the downsampled theoretical digital model and the predicted classification label; the condition y_i = 1 expresses the judgment of the point cloud label, so that only boundary feature points take part in the displacement prediction; d_i and d̂_i are the actual and predicted displacement vectors from the boundary feature point to the adjacent boundary feature; and φ is the common classical function φ(x) = log(1 + e^x), where e is the base of the natural logarithm.
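A sketch of the joint loss under the reconstruction above; the softplus classification term and the masked squared-error displacement term are assumptions consistent with the description, not a verbatim transcription of the patent's formulas.

```python
import torch
import torch.nn.functional as F

def boundary_loss(scores, disp_pred, labels, disp_true, lam_cla=1.0, lam_dis=1.0):
    """Joint loss: classification term plus masked displacement regression.

    scores:    (N,) raw class scores; labels: (N,) float 0/1 ground truth
    disp_pred: (N, 3) predicted displacement; disp_true: (N, 3) actual vectors
    lam_cla, lam_dis: the two balancing coefficients
    """
    # classification: softplus form log(1 + e^x) of the signed score
    signs = 2.0 * labels - 1.0                 # map {0, 1} -> {-1, +1}
    l_cla = F.softplus(-signs * scores).mean()

    # displacement: regress only on boundary feature points (label == 1)
    mask = labels > 0.5
    if mask.any():
        l_dis = ((disp_pred[mask] - disp_true[mask]) ** 2).sum(dim=1).mean()
    else:
        l_dis = scores.new_zeros(())
    return lam_cla * l_cla + lam_dis * l_dis
```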
S45, obtaining the trained multi-scale deep learning feature encoding network model with the parameters optimized by the stochastic gradient descent algorithm.
S5, completing the reconstruction of the assembly coordination boundary features of the large composite member using the trained multi-scale deep learning feature encoding network model.
In step S5, the reconstruction of the assembly coordination boundary features of the large composite member is completed using the trained multi-scale deep learning feature encoding network model, and the specific process includes the following steps:
s51, inputting the preprocessed point cloud data into a trained multi-scale deep learning feature coding network model to obtain a second multi-scale fusion feature;
s52, respectively inputting the second multi-scale fusion features into a category prediction module and a displacement prediction module to obtain boundary feature points and displacement vectors reaching adjacent boundary features;
and S53, the boundary characteristic points are displaced to corresponding boundary characteristics according to the displacement vectors, and reconstruction of the assembly coordination boundary characteristics of the large composite material component is completed.
Specifically, after network training the boundary feature points and the displacement vectors to the adjacent boundary features are output; the boundary feature points are translated according to the displacement vectors until they land on the corresponding boundary features, which increases the density of the point cloud at the boundary features and improves the final boundary feature reconstruction.
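At inference time, steps S51 to S53 amount to thresholding the category output and translating the selected points by their predicted displacement vectors; a minimal sketch under assumed array shapes:

```python
import numpy as np

def reconstruct_boundary(pts, probs, disps, thresh=0.5):
    """Displace predicted boundary points onto the boundary feature (steps S51-S53).

    pts:   (N, 3) preprocessed measured points
    probs: (N,) boundary probability from the category prediction module
    disps: (N, 3) displacement vectors from the displacement prediction module
    """
    is_boundary = probs > thresh                  # S52: select the boundary feature points
    return pts[is_boundary] + disps[is_boundary]  # S53: move them onto the feature
```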
Referring to fig. 3, an example of a reconstruction result obtained with the method provided by the invention is shown: the upper half of the figure is the boundary feature line of a part before boundary feature reconstruction, and the lower half is the boundary feature line after reconstruction. Comparing the two sets of images shows that the assembly coordination boundary feature line of the real measured point cloud data of a large composite member is reconstructed accurately.
The foregoing has described the invention in detail and applied specific examples to explain its principles and embodiments; the description of the above embodiments is only intended to help understand the method of the invention and its core ideas. Meanwhile, since those of ordinary skill in the art may vary the specific embodiments and the scope of application in accordance with the ideas of the invention, the contents of this description should not be construed as limiting the invention.

Claims (7)

1. A method for reconstructing assembly coordination boundary features of a large composite member, the method comprising the steps of:
s1, acquiring point cloud data of a large composite material member assembly butt joint;
s2, preprocessing point cloud data and dividing boundary characteristic points and non-boundary characteristic points;
s3, constructing a multi-scale deep learning feature coding network model;
s4, carrying out regression according to the loss function to train a multi-scale deep learning feature coding network model;
in step S4, the specific process includes the following steps:
s41, carrying out normalization processing on point cloud data to obtain processed point cloud data;
s42, inputting the processed point cloud data into a feature extraction module to obtain a first multi-scale fusion feature;
s43, inputting the first multi-scale fusion features into a category prediction module and a displacement prediction module respectively, and obtaining binary classification labels of each point in the point cloud data and predicted displacement vectors from boundary feature points to adjacent boundary features as predicted values;
s44, constructing a loss function from the predicted values and the true values, wherein the true values are the positions of the boundary feature point cloud in the theoretical digital-model point cloud and the actual displacement vectors;
s45, obtaining a trained multi-scale deep learning feature coding network model with parameters optimized by a stochastic gradient descent algorithm;
s5, using the trained multi-scale deep learning feature coding network model to complete the reconstruction of the assembly coordination boundary features of the large composite material component.
2. Reconstruction method according to claim 1, characterized in that: in step S2, the point cloud data is preprocessed and divided into boundary feature points and non-boundary feature points, and the specific process includes the following steps:
s21, denoising the acquired point cloud data and setting a feature distance threshold ε, where ε bounds the distance from a point in the point cloud data to the adjacent boundary feature;
s22, constructing a theoretical digital model of the large composite member and downsampling it into a point cloud;
s23, locating the boundary feature positions: for a point P in the point cloud data, finding the boundary feature point Q′ in the adjacent theoretical digital-model point cloud;
s24, projecting the point P onto the boundary feature point Q′, and calculating the distance D between the point P and the boundary feature point Q′;
s25, judging whether the point P is a boundary feature point according to the distance threshold ε;
if the distance D is smaller than the threshold ε, the point P is marked as a boundary feature point.
3. Reconstruction method according to claim 1 or 2, characterized in that: in step S2, the criterion for dividing boundary feature points from non-boundary feature points is the comparison between the distance from a point in the point cloud data to the adjacent boundary feature and the distance threshold ε.
4. Reconstruction method according to claim 1, characterized in that: in step S3, the multi-scale deep learning feature encoding network model is composed of a feature extraction module, a category prediction module and a displacement prediction module;
the feature extraction module adopts a point-wise backbone network;
the category prediction module and the displacement prediction module each consist of a multi-layer MLP and an activation function, and are used to predict the class of each point in the point cloud and the displacement vectors of the boundary feature points.
5. Reconstruction method according to claim 1, characterized in that: the step S41 specifically includes:
the point cloud data is normalized into the space of the unit sphere and centered on the origin.
6. Reconstruction method according to claim 1, characterized in that: step S42 specifically includes:
the feature extraction module extracts local features from the processed point cloud data point by point, obtains a global feature through max pooling, and finally splices the local and global features together to obtain the first multi-scale fusion feature.
7. Reconstruction method according to claim 1, characterized in that: in step S5, the reconstruction of the assembly coordination boundary features of the large composite member is completed using the trained multi-scale deep learning feature coding network model, and the specific process includes the following steps:
s51, inputting the preprocessed point cloud data into a trained multi-scale deep learning feature coding network model to obtain a second multi-scale fusion feature;
s52, respectively inputting the second multi-scale fusion features into a category prediction module and a displacement prediction module to obtain boundary feature points and displacement vectors reaching adjacent boundary features;
and S53, the boundary characteristic points are displaced to corresponding boundary characteristics according to the displacement vectors, and reconstruction of the assembly coordination boundary characteristics of the large composite material component is completed.
CN202310273356.1A 2023-03-21 2023-03-21 Reconstruction method for assembly coordination boundary characteristics of large composite material component Active CN115982864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310273356.1A CN115982864B (en) 2023-03-21 2023-03-21 Reconstruction method for assembly coordination boundary characteristics of large composite material component


Publications (2)

Publication Number Publication Date
CN115982864A CN115982864A (en) 2023-04-18
CN115982864B true CN115982864B (en) 2023-06-27

Family

ID=85974470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310273356.1A Active CN115982864B (en) 2023-03-21 2023-03-21 Reconstruction method for assembly coordination boundary characteristics of large composite material component

Country Status (1)

Country Link
CN (1) CN115982864B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082446A (en) * 2022-07-25 2022-09-20 南京航空航天大学 Method for measuring aircraft skin rivet based on image boundary extraction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111077844B (en) * 2019-12-12 2020-09-22 南京航空航天大学 Part accurate machining method based on measured data feature guidance
CN111814888A (en) * 2020-07-14 2020-10-23 南京航空航天大学苏州研究院 Three-dimensional scanning line point cloud gap step extraction method for aircraft skin butt joint
CN112967249B (en) * 2021-03-03 2023-04-07 南京工业大学 Intelligent identification method for manufacturing errors of prefabricated pier reinforcing steel bar holes based on deep learning
CN114626470B (en) * 2022-03-18 2024-02-02 南京航空航天大学深圳研究院 Aircraft skin key feature detection method based on multi-type geometric feature operator
CN115147834B (en) * 2022-09-06 2023-05-05 南京航空航天大学 Point cloud-based plane feature extraction method, device and equipment for airplane stringer


Also Published As

Publication number Publication date
CN115982864A (en) 2023-04-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant