CN117132638A - Volume data acquisition method based on image scanning - Google Patents


Info

Publication number
CN117132638A
CN117132638A (application CN202311069552.3A)
Authority
CN
China
Prior art keywords
value
image
noise
window
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311069552.3A
Other languages
Chinese (zh)
Inventor
尚晨希 (Shang Chenxi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haiyun Chengeng Culture Communication Co ltd
Original Assignee
Beijing Haiyun Chengeng Culture Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyun Chengeng Culture Communication Co ltd filed Critical Beijing Haiyun Chengeng Culture Communication Co ltd
Priority to CN202311069552.3A
Publication of CN117132638A
Legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G06T 7/40 - Analysis of texture
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
    • G06N 3/0442 - Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a volume data acquisition method based on image scanning, comprising the following steps: scanning a target object with an image scanning device to acquire image data of the object's surface; preprocessing the image data with a filter window to obtain contour information of the target object; and inputting the contour information into a trained three-dimensional reconstruction network to obtain volume data. The three-dimensional reconstruction network is trained as a self-encoder neural network with a 2D-3D attention mechanism. The invention makes the contour and texture of the target region clearer and achieves high-quality three-dimensional reconstruction of the target object, thereby improving the accuracy of volume data acquisition.

Description

Volume data acquisition method based on image scanning
Technical Field
The invention relates to the technical field of image processing, in particular to a volume data acquisition method based on image scanning.
Background
In many fields, such as medical, industrial and construction, accurate acquisition of volumetric information of an object is critical to design, production and management efforts.
Conventional volumetric data acquisition methods typically require manual measurement and calculation; they are time-consuming and error-prone. An efficient and accurate volumetric data acquisition method is therefore needed to address this problem.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a volume data acquisition method based on image scanning.
In order to achieve the above object, the present invention provides the following solutions:
a volumetric data acquisition method based on image scanning, comprising:
scanning a target object based on an image scanning device to acquire image data of the surface of the target object; the number of the image scanning devices is at least 1;
preprocessing the image data based on a filtering window to obtain contour information of the target object;
inputting the contour information into a trained three-dimensional reconstruction network to obtain volume data;
the construction method of the three-dimensional reconstruction network comprises the following steps:
constructing a neural network based on a self-encoder 2D-3D attention mechanism;
initializing parameters of the neural network;
feeding the input training image forward through the initialized neural network and computing a training loss value on its projection;
and acquiring the variation value of each layer of parameters in the neural network by adopting an error back propagation method, and updating the parameters of the corresponding layer until the training loss value is lower than a preset threshold value or the training times reach a preset value, so as to obtain a trained three-dimensional reconstruction model.
Preferably, the method further comprises:
and processing and analyzing the acquired volume data to generate a volume data report.
Preferably, preprocessing the image data based on a filtering window to obtain contour information of the target object, including:
detecting noise points on the image data by using a filtering window to obtain a noise average value;
when the average value of noise in the filtering window is larger than a preset threshold value, denoising the image data in the corresponding filtering window;
sliding the filter window, returning to the step of denoising the image data in the corresponding filter window when the average value of noise in the filter window is larger than a preset threshold value until the whole image data is traversed to obtain denoising data;
taking any point on the denoising data as a center, taking a neighborhood window, and calculating the gray average value of all pixel points in the neighborhood window;
taking the gray average value of the corresponding pixel point as the output of the central pixel point to obtain average value image data;
calculating the correlation degree between the mean image data and the denoising data, and obtaining an optimal segmentation threshold according to the correlation degree;
and dividing the denoising data by using the optimal dividing threshold value to obtain the divided contour information.
Preferably, detecting noise points on the image data using a filter window to obtain a noise mean value includes:
constructing a noise point detection model from the mean, the median and the gradient of each image point in the filter window. The model assigns each pixel x a noise-likeness value f(x), where u(x) denotes the gray value of pixel x, u_mean(x) the gray mean of all pixels in the filter window centered on x, the gradient mean of pixel x and the horizontal gradient value of pixel x capture local edge strength, and the gray median of all pixels in that window serves as a robust reference;
detecting each image point in the filter window by using the noise point detection model to obtain a similar noise value of each image point;
taking the image points whose noise-likeness value exceeds the detection value as noise points;
and determining the noise average value according to the number of the noise points and the number of the image points in the filtering window.
Preferably, when the average noise value in the filtering window is greater than a preset threshold, denoising the image data in the corresponding filtering window includes:
calculating a pseudo pixel variance from the gray median of all pixel points in the filter window. Over a filter window of size (2n+1)×(2n+1), the pseudo pixel variance of pixel (a,b) is the mean squared deviation of the window's gray values from the window median: σ²(a,b) = (1/(2n+1)²) Σ_{k,l} (x(k,l) − mean(a,b))², where mean(a,b) denotes the gray median of the window centered on (a,b) and x(k,l) the gray value of the pixel at position (k,l);
constructing a window denoising model from the pseudo pixel variance, in which f(a,b) denotes the denoised gray value of pixel (a,b), D is an adjustable coefficient, and x(a,b) the gray value of pixel (a,b) in the filter window.
Preferably, calculating a correlation between the mean image data and the denoising data, and obtaining an optimal segmentation threshold according to the correlation, includes:
extracting gray values at the same position on the denoising data and the mean image data to form a gray array;
constructing a segmentation function by using the gray array;
acquiring a preset segmentation array, and continuously adjusting the preset segmentation array until the value of a segmentation function is maximum;
and taking the segmentation array corresponding to the maximum value of the segmentation function as the optimal segmentation threshold value.
Preferably, the neural network of the self-encoder 2D-3D attention mechanism is constructed from a residual network, a convolutional recurrent neural network and a long short-term memory network.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a volume data acquisition method based on image scanning, which comprises the following steps: scanning a target object based on an image scanning device to acquire image data of the surface of the target object; the number of the image scanning devices is at least 1; preprocessing the image data based on a filtering window to obtain contour information of the target object; inputting the contour information into a trained three-dimensional reconstruction network to obtain volume data; the construction method of the three-dimensional reconstruction network comprises the following steps: constructing a neural network based on a self-encoder 2D-3D attention mechanism; initializing parameters of the neural network; performing feedforward conduction on the input training image in the initialized neural network and calculating a training loss value of the projection of the input training image; and acquiring the variation value of each layer of parameters in the neural network by adopting an error back propagation method, and updating the parameters of the corresponding layer until the training loss value is lower than a preset threshold value or the training times reach a preset value, so as to obtain a trained three-dimensional reconstruction model. According to the invention, the filtering window is utilized to filter the image data, so that the background area of the image of the target object can be stripped off, the outline and texture of the target area are clearer, and the attention mechanism is introduced into the self-encoder network, thereby realizing high-quality three-dimensional reconstruction of the target object, and further improving the accuracy of volume data acquisition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a volume data acquisition method based on image scanning, which comprises the steps of firstly filtering image data by utilizing a filtering window, stripping a background area of an image of a target object, enabling the outline and texture of the target area to be clearer, introducing an attention mechanism into a self-encoder network, realizing high-quality three-dimensional reconstruction of the target object, and improving the accuracy of volume data acquisition.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Fig. 1 is a flowchart of a method provided by an embodiment of the present invention, and as shown in fig. 1, the present invention provides a method for acquiring volumetric data based on image scanning, including:
step 100: scanning a target object based on an image scanning device to acquire image data of the surface of the target object; the number of the image scanning devices is at least 1;
step 200: preprocessing the image data based on a filtering window to obtain contour information of the target object;
step 300: inputting the contour information into a trained three-dimensional reconstruction network to obtain volume data;
the construction method of the three-dimensional reconstruction network comprises the following steps:
step 301: constructing a neural network based on a self-encoder 2D-3D attention mechanism;
step 302: initializing parameters of the neural network;
step 303: feeding the input training image forward through the initialized neural network and computing a training loss value on its projection;
step 304: and acquiring the variation value of each layer of parameters in the neural network by adopting an error back propagation method, and updating the parameters of the corresponding layer until the training loss value is lower than a preset threshold value or the training times reach a preset value, so as to obtain a trained three-dimensional reconstruction model.
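Steps 301 to 304 amount to a standard gradient-descent training loop with two stopping criteria: the loss falls below a preset threshold, or the iteration count reaches a preset value. The patent does not disclose the network or loss in code form, so the sketch below illustrates the same loop on a deliberately tiny one-parameter linear model; every name in it is illustrative rather than taken from the patent.

```python
def train(xs, ys, lr=0.01, loss_threshold=1e-4, max_epochs=1000):
    """Toy training loop mirroring steps 302-304: initialize, feed forward,
    compute the loss, back-propagate, stop on threshold or epoch budget."""
    w = 0.0  # step 302: initialize the (single) parameter
    loss = float("inf")
    for epoch in range(max_epochs):          # step 304: preset iteration budget
        preds = [w * x for x in xs]          # step 303: feed-forward pass
        loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
        if loss < loss_threshold:            # step 304: loss-threshold stop
            break
        # step 304: error back-propagation (analytic gradient of the squared loss)
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
        w -= lr * grad
    return w, loss
```

With xs = [1, 2, 3] and ys = [2, 4, 6] the loop recovers w close to 2; the real network replaces the scalar w with per-layer parameters, each updated from its back-propagated variation value.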
Specifically, the present embodiment can be applied in the medical, industrial and construction fields: in medicine, to measure the volume of organs, tumors and the like, assisting diagnosis and treatment; in industry, to measure the volume of parts and products, assisting production and quality control; and in construction, to measure the volume of buildings, land and the like, assisting design and planning.
Preferably, the method further comprises:
and processing and analyzing the acquired volume data to generate a volume data report.
Specifically, the acquired volume data can be processed and analyzed, for example to generate reports or to compare the volumes of different objects.
Further, the three-dimensional reconstruction model in this embodiment comprises a feature extraction network, a three-dimensional model generation (decoding) network and an attention mechanism network. The feature extraction network extracts features from the raw data: the input image has a resolution of 64×64; a residual network yields rich low-dimensional features; a convolutional long short-term memory (ConvLSTM) network links the features spatially and temporally; a 5×5 convolution kernel produces a 32×32×64 feature map; and finally several convolution layers with 3×3, 4×4 and 5×5 kernels control the dimensionality of the implicit vector, yielding a series of 1×1×512 implicit vectors. These implicit vectors are the input to both the decoding section and the long short-term memory network.
The three-dimensional model generation network uses three-dimensional convolution kernels to produce a volumetric model, mirroring the image encoding part: a ConvLSTM first yields 3×3×3 feature maps with 512 channels, then three convolution layers with 4×4×4, 5×5×5 and 6×6×6 kernels produce the final 32×32×32 single-channel three-dimensional model. Combining convolution and ConvLSTM layers in this way preserves the precision of the three-dimensional model.
The attention mechanism network takes the raw data and the implicit vector produced by the encoding part, continuously updates the long short-term memory network, and finally outputs the viewing angle of the next image to acquire. Feeding the image for that angle back into the network speeds up the three-dimensional reconstruction, so a high-precision result is reached with as few images as possible. Concretely, the input image and the implicit vector are combined and fed into the LSTM, which updates its hidden state; a fully connected layer then predicts the next frame to input, and that image is fed into the image encoding network again, closing the loop.
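The sizes quoted above (a 64×64 input, 32×32 spatial features after a 5×5 convolution, a 32³ voxel output) follow standard convolution arithmetic. The helper below checks those shape claims; the stride and padding values are my assumptions, since the patent states only the kernel sizes.

```python
def conv_out(n, k, s=1, p=0):
    """Output size of a convolution along one axis: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s=1, p=0):
    """Output size of a transposed convolution along one axis: (n - 1)*s - 2p + k."""
    return (n - 1) * s - 2 * p + k

# A 5x5 kernel with stride 2 and padding 2 halves the 64x64 input to 32x32,
# consistent with a 32x32x64 feature map (stride and padding assumed here).
assert conv_out(64, k=5, s=2, p=2) == 32
# In the decoder, a 4x4 transposed convolution with stride 2 upsamples 3 -> 8;
# repeated upsampling of this kind reaches the 32^3 voxel grid.
assert deconv_out(3, k=4, s=2, p=0) == 8
```

The same two formulas let a reader verify any stride/padding combination against the feature-map sizes stated in the text.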
Preferably, preprocessing the image data based on a filtering window to obtain contour information of the target object, including:
detecting noise points on the image data by using a filtering window to obtain a noise average value;
when the average value of noise in the filtering window is larger than a preset threshold value, denoising the image data in the corresponding filtering window;
sliding the filter window, returning to the step of denoising the image data in the corresponding filter window when the average value of noise in the filter window is larger than a preset threshold value until the whole image data is traversed to obtain denoising data;
taking any point on the denoising data as a center, taking a neighborhood window, and calculating the gray average value of all pixel points in the neighborhood window;
taking the gray average value of the corresponding pixel point as the output of the central pixel point to obtain average value image data;
calculating the correlation degree between the mean image data and the denoising data, and obtaining an optimal segmentation threshold according to the correlation degree;
and dividing the denoising data by using the optimal dividing threshold value to obtain the divided contour information.
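The mean-image step in the list above is a plain neighborhood average: every pixel is replaced by the gray mean of a window centered on it. A minimal sketch follows; the window radius and the edge-clamping behavior are my choices, not stated in the patent.

```python
def mean_image(img, r=1):
    """Replace each pixel by the gray mean of its (2r+1)x(2r+1) neighborhood.

    img is a list of equal-length rows of gray values; neighborhoods are
    clipped at the image borders (an assumption of this sketch)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out
```

On a uniform image the output equals the input, while an isolated impulse is spread over its neighborhood, which is what makes the mean image a useful smoothed reference for the correlation step that follows.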
Preferably, detecting noise points on the image data using a filter window to obtain a noise mean value includes:
constructing a noise point detection model from the mean, the median and the gradient of each image point in the filter window. The model assigns each pixel x a noise-likeness value f(x), where u(x) denotes the gray value of pixel x, u_mean(x) the gray mean of all pixels in the filter window centered on x, the gradient mean of pixel x and the horizontal gradient value of pixel x capture local edge strength, and the gray median of all pixels in that window serves as a robust reference;
detecting each image point in the filter window by using the noise point detection model to obtain a similar noise value of each image point;
taking the image points whose noise-likeness value exceeds the detection value as noise points;
and determining the noise average value according to the number of the noise points and the number of the image points in the filtering window.
Furthermore, in the embodiment, the noise point detection model is constructed based on the mean value, the median value and the gradient mean value of each image point in the filtering window, so that the difference between the noise point and the original pixel point can be detected from multiple aspects, and the detection of the noise point is more accurate.
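The exact expression for the noise-likeness value f(x) is not reproduced in this text, so the sketch below substitutes a simple stand-in (distance from the window median) to show the surrounding logic: score every pixel, flag those above a detection value, and report the noise mean as the flagged fraction of the window.

```python
def window_noise_mean(window, detection_value):
    """Fraction of pixels in a window flagged as noise.

    f(v) = |v - median| is a stand-in for the patent's noise-likeness value,
    which additionally uses the window mean and the pixel gradients."""
    flat = [v for row in window for v in row]
    median = sorted(flat)[len(flat) // 2]
    noise_points = [v for v in flat if abs(v - median) > detection_value]
    return len(noise_points) / len(flat)
```

A window whose noise mean exceeds the preset threshold is then handed to the denoising step; for a 3×3 window holding a single impulse among uniform pixels, the function reports a noise mean of 1/9.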
Preferably, when the average noise value in the filtering window is greater than a preset threshold, denoising the image data in the corresponding filtering window includes:
calculating a pseudo pixel variance from the gray median of all pixel points in the filter window. Over a filter window of size (2n+1)×(2n+1), the pseudo pixel variance of pixel (a,b) is the mean squared deviation of the window's gray values from the window median: σ²(a,b) = (1/(2n+1)²) Σ_{k,l} (x(k,l) − mean(a,b))², where mean(a,b) denotes the gray median of the window centered on (a,b) and x(k,l) the gray value of the pixel at position (k,l);
constructing a window denoising model from the pseudo pixel variance, in which f(a,b) denotes the denoised gray value of pixel (a,b), D is an adjustable coefficient, and x(a,b) the gray value of pixel (a,b) in the filter window.
By contrast, a conventional filtering algorithm such as mean filtering averages the pixels of every neighborhood of the image data, whether or not that neighborhood contains noise, so the processed image becomes blurred. The present invention instead locates the noise points with the noise point detection model and filters only those points, smoothing the noise while preserving the original pixel information of the image data. In practical application, the detection value can be set according to the actual situation.
Furthermore, denoising the image data in each filter window with the window denoising model alleviates the loss of feature gradients caused by existing denoising methods (such as median filtering, mean filtering and wavelet denoising), preserves the original pixel information of the image to the greatest extent, and improves the interpretability of the image.
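The pseudo pixel variance measures the spread of a window's gray values about the window median, and the denoising model blends the center pixel with that median under the adjustable coefficient D. The blend form below is an assumption of this sketch (the patent defines only the variable roles): the larger the pseudo variance, the more the output leans on the median.

```python
def pseudo_variance(window):
    """Mean squared deviation of the window's gray values from the window median."""
    flat = [v for row in window for v in row]
    median = sorted(flat)[len(flat) // 2]
    return sum((v - median) ** 2 for v in flat) / len(flat)

def denoise_center(window, D=100.0):
    """Assumed form of the window denoising model f(a, b):
    a variance-weighted blend of the center pixel and the window median."""
    flat = [v for row in window for v in row]
    median = sorted(flat)[len(flat) // 2]
    var = pseudo_variance(window)
    center = window[len(window) // 2][len(window[0]) // 2]
    keep = D / (D + var)  # clean window (var near 0) -> keep the center pixel
    return keep * center + (1 - keep) * median
```

On a uniform window the center pixel passes through unchanged, while an impulse at the center is pulled strongly toward the median, which matches the stated goal of smoothing noise without disturbing clean pixels.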
Preferably, calculating a correlation between the mean image data and the denoising data, and obtaining an optimal segmentation threshold according to the correlation, includes:
extracting gray values at the same position on the denoising data and the mean image data to form a gray array;
constructing a segmentation function by using the gray array;
acquiring a preset segmentation array, and continuously adjusting the preset segmentation array until the value of a segmentation function is maximum;
and taking the segmentation array corresponding to the maximum value of the segmentation function as the optimal segmentation threshold value.
The invention segments the image on the principle of the histogram: from the probability distribution of the image's gray values it derives a globally optimal segmentation threshold, and thresholding with that value separates the background region of the image data from the target region, making the target convenient for practitioners to identify and extract.
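The search described above (adjust a candidate threshold until a segmentation function over the gray-level histogram is maximized) has the same shape as Otsu's method. The sketch below uses Otsu's between-class variance as a stand-in segmentation function; the patent's actual function is built from the mean image and the denoised image instead.

```python
def otsu_threshold(pixels):
    """Histogram-based threshold search: try every gray level and keep the one
    maximizing the between-class variance (a stand-in segmentation function)."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    best_t, best_score = 0, -1.0
    for t in range(1, 256):
        w0 = sum(hist[:t])          # pixels below the candidate threshold
        w1 = total - w0             # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, 256)) / w1
        score = w0 * w1 * (mu0 - mu1) ** 2   # unnormalized between-class variance
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

For a bimodal image (for example half the pixels at gray 10 and half at gray 200) the returned threshold falls between the two modes and cleanly splits background from target.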
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to aid understanding of the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of these teachings likewise fall within the scope of the invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (7)

1. A volumetric data acquisition method based on image scanning, comprising:
scanning a target object based on an image scanning device to acquire image data of the surface of the target object; the number of the image scanning devices is at least 1;
preprocessing the image data based on a filtering window to obtain contour information of the target object;
inputting the contour information into a trained three-dimensional reconstruction network to obtain volume data;
the construction method of the three-dimensional reconstruction network comprises the following steps:
constructing a neural network based on a self-encoder 2D-3D attention mechanism;
initializing parameters of the neural network;
feeding the input training image forward through the initialized neural network and computing a training loss value on its projection;
and acquiring the variation value of each layer of parameters in the neural network by adopting an error back propagation method, and updating the parameters of the corresponding layer until the training loss value is lower than a preset threshold value or the training times reach a preset value, so as to obtain a trained three-dimensional reconstruction model.
2. The image scanning-based volumetric data acquisition method according to claim 1, characterized by further comprising:
and processing and analyzing the acquired volume data to generate a volume data report.
3. The image scanning-based volumetric data acquisition method according to claim 1, wherein preprocessing the image data based on a filter window to obtain profile information of the target object, comprises:
detecting noise points on the image data by using a filtering window to obtain a noise average value;
when the average value of noise in the filtering window is larger than a preset threshold value, denoising the image data in the corresponding filtering window;
sliding the filter window, returning to the step of denoising the image data in the corresponding filter window when the average value of noise in the filter window is larger than a preset threshold value until the whole image data is traversed to obtain denoising data;
taking any point on the denoising data as a center, taking a neighborhood window, and calculating the gray average value of all pixel points in the neighborhood window;
taking the gray average value of the corresponding pixel point as the output of the central pixel point to obtain average value image data;
calculating the correlation degree between the mean image data and the denoising data, and obtaining an optimal segmentation threshold according to the correlation degree;
and dividing the denoising data by using the optimal dividing threshold value to obtain the divided contour information.
4. A volumetric data acquisition method according to claim 3, wherein detecting noise points on the image data using a filter window to obtain a noise mean value comprises:
constructing a noise point detection model from the mean, the median and the gradient of each image point in the filter window. The model assigns each pixel x a noise-likeness value f(x), where u(x) denotes the gray value of pixel x, u_mean(x) the gray mean of all pixels in the filter window centered on x, the gradient mean of pixel x and the horizontal gradient value of pixel x capture local edge strength, and the gray median of all pixels in that window serves as a robust reference;
detecting each image point in the filter window by using the noise point detection model to obtain a similar noise value of each image point;
taking the image points whose noise-likeness value exceeds the detection value as noise points;
and determining the noise average value according to the number of the noise points and the number of the image points in the filtering window.
5. The image scanning-based volumetric data collection method according to claim 3, wherein denoising the image data in the corresponding filter window when the noise mean in the filter window is greater than a preset threshold, comprising:
calculating a pseudo pixel variance according to the gray median of all pixel points in the filter window; the pseudo-pixel-variance formula appears as an image in the original publication and is not reproduced here; its variables are: the pseudo pixel variance of pixel point (a, b) within a filter window region of size (2n+1) × (2n+1); mean(a, b), the gray median of pixel point (a, b) in the filter window; and x(k, l), the gray value of the pixel at position (k, l);
constructing a window denoising model using the pseudo pixel variance; the window denoising model likewise appears as a formula image in the original publication; in it, f(a, b) denotes the gray value of pixel point (a, b) after denoising, D is an adjustable coefficient, and x(a, b) denotes the gray value of pixel point (a, b) in the filter window.
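A sketch of the pseudo pixel variance (spread of the window about its gray median, per the claim's definition of mean(a, b)), paired with a Wiener-style shrinkage toward the median standing in for the unreproduced window denoising model; `D` matches the claim's adjustable coefficient, but the shrinkage form itself is an assumption:

```python
import numpy as np

def pseudo_variance(win):
    """Pseudo pixel variance: mean squared deviation of the window's gray
    values from the window's gray MEDIAN (not its mean), per the claim."""
    med = np.median(win)
    return float(np.mean((win.astype(float) - med) ** 2))

def denoise_pixel(win, D=100.0):
    """Assumed Wiener-style denoising of the window's centre pixel:
    shrink it toward the window median, with shrinkage controlled by
    the pseudo variance and the adjustable coefficient D."""
    med = float(np.median(win))
    centre = float(win[win.shape[0] // 2, win.shape[1] // 2])
    var = pseudo_variance(win)
    gain = max(var - D, 0.0) / var if var > 0 else 0.0
    return med + gain * (centre - med)
```

On a flat window the pseudo variance is zero and the pixel passes through unchanged; a strong outlier centre pixel is pulled most of the way back to the median.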
6. The volume data acquisition method based on image scanning according to claim 5, wherein calculating the correlation between the mean image data and the denoising data and obtaining an optimal segmentation threshold according to the correlation comprises:
extracting gray values at the same position on the denoising data and the mean image data to form a gray array;
constructing a segmentation function by using the gray array;
acquiring a preset segmentation array, and continuously adjusting the preset segmentation array until the value of a segmentation function is maximum;
and taking the segmentation array corresponding to the maximum value of the segmentation function as the optimal segmentation threshold value.
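The claim does not spell out the segmentation function, so the sketch below searches candidate thresholds over the paired denoised/mean gray values and scores each with an Otsu-style between-class variance as a stand-in; averaging the two images into one gray array is also an assumption:

```python
import numpy as np

def optimal_threshold(denoised, mean_img):
    """Exhaustively adjust the threshold and keep the one maximising an
    Otsu-style between-class variance (stand-in for the claim's
    unspecified segmentation function)."""
    # Gray array formed from values at the same positions of both images.
    pairs = (denoised.astype(float).ravel()
             + mean_img.astype(float).ravel()) / 2.0
    best_t, best_score = None, -1.0
    for t in np.unique(pairs)[:-1]:  # exclude max so foreground is non-empty
        fg, bg = pairs[pairs > t], pairs[pairs <= t]
        w_fg, w_bg = len(fg) / len(pairs), len(bg) / len(pairs)
        score = w_fg * w_bg * (fg.mean() - bg.mean()) ** 2
        if score > best_score:
            best_t, best_score = t, score
    return best_t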
7. The volume data acquisition method based on image scanning according to claim 1, wherein the self-encoder 2D-3D attention mechanism neural network is constructed from a residual network, a convolutional recurrent neural network, and a long short-term memory network.
CN202311069552.3A 2023-08-24 2023-08-24 Volume data acquisition method based on image scanning Pending CN117132638A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311069552.3A CN117132638A (en) 2023-08-24 2023-08-24 Volume data acquisition method based on image scanning

Publications (1)

Publication Number Publication Date
CN117132638A true CN117132638A (en) 2023-11-28

Family

ID=88862226


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903292A (en) * 2019-01-24 2019-06-18 西安交通大学 A kind of three-dimensional image segmentation method and system based on full convolutional neural networks
CN110335344A (en) * 2019-06-20 2019-10-15 中国科学院自动化研究所 Three-dimensional rebuilding method based on 2D-3D attention mechanism neural network model
CN115311309A (en) * 2022-09-05 2022-11-08 中科微影(浙江)医疗科技有限公司 Method and system for identifying and extracting focus of nuclear magnetic resonance image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Luo Jing: "Breast Cancer Screening" (乳腺癌筛查), 31 October 2021, Sichuan University Press *

Similar Documents

Publication Publication Date Title
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN109242888B (en) Infrared and visible light image fusion method combining image significance and non-subsampled contourlet transformation
CN107644420B (en) Blood vessel image segmentation method based on centerline extraction and nuclear magnetic resonance imaging system
EP1163644B1 (en) Method and apparatus for image processing
CN104933709B (en) Random walk CT lung tissue image automatic segmentation methods based on prior information
CN111784671A (en) Pathological image focus region detection method based on multi-scale deep learning
CN103440644B (en) A kind of multi-scale image weak edge detection method based on minimum description length
US20070223815A1 (en) Feature Weighted Medical Object Contouring Using Distance Coordinates
US7929741B2 (en) System and method for automated detection of mucus plugs within bronchial tree in MSCT images
US9536318B2 (en) Image processing device and method for detecting line structures in an image data set
CN108932699B (en) Three-dimensional matching harmonic filtering image denoising method based on transform domain
CN113160392B (en) Optical building target three-dimensional reconstruction method based on deep neural network
CN109993797A (en) Door and window method for detecting position and device
CN113570658A (en) Monocular video depth estimation method based on depth convolutional network
CN113052866B (en) Ultrasonic image tongue contour extraction method based on local binary fitting model
CN112819739B (en) Image processing method and system for scanning electron microscope
CN113012127A (en) Cardiothoracic ratio measuring method based on chest medical image
EP4343680A1 (en) De-noising data
CN117132638A (en) Volume data acquisition method based on image scanning
CN116523739A (en) Unsupervised implicit modeling blind super-resolution reconstruction method and device
CN115601535A (en) Chest radiograph abnormal recognition domain self-adaption method and system combining Wasserstein distance and difference measurement
CN113222985B (en) Image processing method, image processing device, computer equipment and medium
CN112396579A (en) Human tissue background estimation method and device based on deep neural network
CN112419283A (en) Neural network for estimating thickness and method thereof
Zhang et al. Insights into local stereo matching: Evaluation of disparity refinement approaches

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination