CN117611542B - Fetal intrauterine craniocerebral image-based detection method and system - Google Patents

Fetal intrauterine craniocerebral image-based detection method and system

Info

Publication number
CN117611542B
Authority
CN
China
Prior art keywords
image
dimensional
pixel
pixels
neighborhood
Prior art date
Legal status
Active
Application number
CN202311568148.0A
Other languages
Chinese (zh)
Other versions
CN117611542A (en)
Inventor
王振宇
涂益建
Current Assignee
Shanghai Healthway Information Technology Co., Ltd.
Original Assignee
Shanghai Healthway Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Healthway Information Technology Co., Ltd.
Priority to CN202311568148.0A
Publication of CN117611542A
Application granted
Publication of CN117611542B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0012: Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/10: Segmentation; edge detection
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume (under G06T 7/60)
    • G06T 2207/10028: Range image; depth image; 3D point clouds (image acquisition modality)
    • G06T 2207/20221: Image fusion; image merging (under G06T 2207/20212 Image combination)
    • G06T 2207/30016: Brain (under G06T 2207/30004 Biomedical image processing)
    • G06T 2210/41: Medical (indexing scheme for image generation or computer graphics)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a detection method and system based on fetal intrauterine craniocerebral images, belonging to the technical field of digital image processing. The method comprises the following steps: acquiring a fetal craniocerebral image, removing noise interference from it, and enhancing the boundaries to obtain a slice medical image; establishing a gray-image data analysis layer, setting pixel marking vectors, assigning an intensity value to each pixel of the slice medical image, and extracting feature vectors of the slice medical image; setting seed pixels and traversing the gray-image data analysis layer through the pixel marking vectors to obtain two-dimensional segmented images; presetting a three-dimensional scene and, with the three-dimensional scene as a reference, superimposing and registering the acquired two-dimensional segmented images into the same coordinate system to obtain a three-dimensional visualization model; and calculating the fetal intrauterine craniocerebral volume from the three-dimensional visualization model. Information transfer between the fetal intrauterine craniocerebral three-dimensional model and the images is thereby realized.

Description

Fetal intrauterine craniocerebral image-based detection method and system
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a fetal intrauterine craniocerebral image-based detection method and system.
Background
With the development of image processing technology, its application in the field of medical image analysis has become increasingly significant, enabling the analysis and processing of medical image information; image segmentation in particular is an important component of medical image processing. The following aspects still need improvement:
(1) In traditional medical diagnosis, the judgment of an expert is almost the decisive factor. Relying on the judgment of doctors, however, presents certain problems: doctors need rich prior knowledge, processing takes a long time, and misdiagnosis is easily caused under high-intensity work. Three-dimensional reconstruction of medical images is currently limited mainly to the examination and measurement of the adult cranium. For gestational ages below 20 weeks, the resolution of the acquired images is often limited, and some finer structures are difficult to display.
(2) Existing image segmentation methods operate on gray levels with high computational complexity, and their results are neither sufficiently consistent nor able to escape local optima. The difference between the fetal cranium and the surrounding tissue in the image is small, making segmentation difficult; earlier manual layer-by-layer segmentation is too inefficient and insufficiently accurate to be popularized and applied.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a detection method and system based on fetal intrauterine craniocerebral images.
the aim of the invention can be achieved by the following technical scheme:
S1: acquiring a fetal craniocerebral image, removing noise interference of the fetal craniocerebral image, and enhancing the boundary to obtain a slice medical image;
S2: establishing a gray image data analysis layer, wherein the gray image data analysis layer comprises a first layer and a second layer; presetting a sliding image, importing the slice medical image into the first image layer, importing the sliding image into the second image layer, correlating pixel values of the first image layer and the second image layer, and setting a pixel mark vector; setting intensity values for pixels of the slice medical image, and extracting feature vectors of the slice medical image;
S3: setting a seed pixel, traversing the sliding image through the pixel marking vector, presetting a time step, calculating the characteristic distance between the seed pixel and the pixels in a neighborhood set in the time step, if the characteristic distance is larger than the intensity value of the pixels in the neighborhood set, developing the pixels in the neighborhood set into the seed and updating the intensity value and the characteristic vector, if the characteristic distance is smaller than or equal to the intensity value of the pixels in the neighborhood set, changing the pixel marking vector and the pixel value of the pixels in the neighborhood set, and obtaining a two-dimensional segmentation image after traversing is completed;
S4: presetting a three-dimensional scene, overlapping and matching the acquired two-dimensional segmentation images into the same coordinate system by taking the three-dimensional scene as a reference to obtain space data, converting a three-dimensional space into a voxel network according to the three-dimensional scene, establishing a three-dimensional surface in the voxel network according to the space data, and merging the three-dimensional surfaces to obtain a three-dimensional visual model;
S5: and calculating the intracranial volume in the uterus of the fetus according to the three-dimensional visual model.
Specifically, step S1 comprises:
S101: normalizing all pixel values of the fetal craniocerebral image, scaling them into the interval [0, 1];
S102: performing gamma correction on the pixel values to obtain nonlinear mapping values, with the calculation formula: f(I) = I^γ, where f(I) is the nonlinear mapping value output after gamma correction, I is a pixel value, and γ is the nonlinear mapping parameter;
S103: performing inverse normalization on the nonlinear mapping values to obtain the luminance image pixel values; applying layered processing and weighted fusion to the luminance image with three different Gaussian filter templates to obtain image detail layers; and adding the detail layers to the luminance image to obtain the slice medical image, with the calculation formulas:
D1 = Iin(x,y) − G1 * Iin(x,y),
D2 = (G1 − G2) * Iin(x,y),
D3 = (G2 − G3) * Iin(x,y),
Inew(x,y) = Iin(x,y) + [1 − w1·sgn(D1)]·D1 + w2·D2 + w3·D3,
where G1, G2, G3 are Gaussian templates with standard deviations 1, 2 and 5, * denotes convolution, D1, D2, D3 are the three detail layers corresponding to G1, G2, G3, Iin(x,y) is the luminance image information, w1 = 0.5, w2 = 0.25, w3 = 0.25, sgn is the sign function, Inew(x,y) is the slice medical image information, and x and y are the abscissa and ordinate of an image pixel.
Specifically, the pixel marking vector comprises a non-empty state, a neighborhood set and a local transition state; the initial value of the non-empty state is 1, and if the pixel corresponding to the pixel marking vector is diffused into a seed, the non-empty state is changed to 0;
The neighborhood set is the set of pixels in the 8 grid cells surrounding the pixel corresponding to the pixel marking vector, and the local transition state is the state of that pixel at the previous time step.
Specifically, the traversal method comprises:
starting diffusion from the seed pixel position into the neighborhood set, and calculating the number of diffused pixels in the neighborhood set of the seed pixel through the pixel marking vectors, with the calculation formula:
E(p) = Σ_{q ∈ N(p)} l(q),
where E is the number of diffused pixels, p is the seed pixel, q is a pixel in the neighborhood set, N(p) is the neighborhood set of p, and l(q) is the marker value of pixel q read from its pixel marking vector;
presetting a neighborhood diffusion threshold and a back-propagation threshold: if the number of diffused pixels is greater than or equal to the neighborhood diffusion threshold, the seed pixel does not diffuse further within the neighborhood; if the number of diffused pixels is greater than or equal to the back-propagation threshold, the seed pixel is replaced by the minimum-intensity pixel in the neighborhood set.
Specifically, step S4 comprises:
S401: extracting feature points of the two-dimensional segmented images; presetting a neighborhood size, selecting feature point pairs within the neighborhood of each feature point to form descriptor point pairs, and forming a binary descriptor set from all descriptor point pairs in the neighborhood;
S402: presetting a feature matching distance; performing an exclusive-OR operation on the descriptor point pairs and counting the number of differing bit values as the Hamming distance; if the Hamming distance is less than twice the feature matching distance, the feature points are judged to be a correct match;
S403: obtaining the translation vector and rotation matrix of the camera coordinate system relative to the three-dimensional scene, and calculating the spatial data of the two-dimensional segmented images from the correctly matched feature points;
S404: optimizing the reprojection error of the spatial data to obtain a sparse point cloud, with the calculation formula:
g(C, X) = Σ_{i=1}^{n} Σ_{j=1}^{m} w_ij · ‖q(C_i, X_j) − P_ij‖²,
where g is the bundle adjustment function, C_i is the two-dimensional image data under view angle i, X_j is the three-dimensional point data of track j, i is the view-angle index, j is the track index, w_ij is the measurement parameter of track j at view angle i, q(C_i, X_j) is the projection point of the spatial data, P_ij is the real point in the two-dimensional image, n is the total number of view angles, and m is the total number of tracks;
S405: combining the sparse point-cloud feature points with adjacent points using the image information to obtain voxel points, expanding them and filtering out erroneous voxel points to obtain three-dimensional surfaces, and merging the three-dimensional surfaces to obtain the three-dimensional visualization model.
Preferably, step S5 comprises:
S501: obtaining, from the three-dimensional visualization model, the vertex coordinates of the voxel network, the voxel unit numbers and the physical-quantity iso-value levels at the network nodes, and interpolating the iso-valued points;
S502: searching for the starting point of a contour line: search for an unused voxel unit having an adjacent edge that is a boundary edge; if such a unit is found, compare its two feature adjacent edges, and if both are boundary edges, take either one as the starting edge and the other as the subsequent edge; otherwise, take the feature adjacent edge lying on the boundary as the starting edge; store the coordinates of the iso-valued points corresponding to the starting edge and the subsequent edge as the first and second points of the contour-line structure, and record the nodes corresponding to the subsequent edge as P1 and P2;
S503: searching for an unused voxel unit that shares the adjacent edge containing nodes P1 and P2, taking its non-shared feature adjacent edge as the next subsequent edge, and repeating the process of S502 until no unused voxel unit sharing an adjacent edge remains; the contour-line structure is stored as a two-dimensional linked list;
S504: calculating the volume of the three-dimensional visualization model from the two-dimensional linked list, with the calculation formulas:
S_i = (1/2) · | Σ_{j=1}^{n} (x_{j−1} − x_{j+1}) · y_j |,
V = Σ_{i=1}^{m} S_i · H,
where S is the calculated area of a closed contour, S_i is the area of the i-th closed contour, x_{j−1} and x_{j+1} are the ordinates of the (j−1)-th and (j+1)-th contour nodes, y_j is the abscissa of the j-th contour node, n is the number of contour nodes, j is the contour node index, V is the volume of the three-dimensional model, i is the closed-contour index, m is the total number of closed contours, and H is the contour interval between adjacent closed contours.
A detection system based on fetal intrauterine craniocerebral images comprises an image acquisition module, an image analysis module, an image segmentation module, a three-dimensional reconstruction module and a volume calculation module;
The image acquisition module is used for acquiring a fetal craniocerebral image, removing noise interference from the fetal craniocerebral image and enhancing the boundaries to obtain a slice medical image;
The image analysis module is used for establishing a gray-image data analysis layer comprising a first layer and a second layer; presetting a sliding image, importing the slice medical image into the first layer, importing the sliding image into the second layer, correlating the pixel values of the two layers, and setting pixel marking vectors; setting an intensity value for each pixel of the slice medical image, and extracting the feature vectors of the slice medical image;
The image segmentation module is used for setting seed pixels and traversing the sliding image through the pixel marking vectors; presetting a time step and calculating, within each time step, the feature distance between a seed pixel and the pixels in its neighborhood set; if the feature distance is greater than the intensity value of a pixel in the neighborhood set, growing that pixel into a seed and updating its intensity value and feature vector; if the feature distance is less than or equal to the intensity value of the pixel, changing the pixel marking vector and the pixel value of that pixel; and obtaining a two-dimensional segmented image after the traversal is completed;
The three-dimensional reconstruction module is used for establishing a three-dimensional scene; with the three-dimensional scene as a reference, superimposing and registering the acquired two-dimensional segmented images into the same coordinate system to obtain spatial data; converting the three-dimensional space into a voxel network according to the three-dimensional scene, establishing three-dimensional surfaces in the voxel network according to the spatial data, and merging the three-dimensional surfaces to obtain a three-dimensional visualization model;
The volume calculation module is used for calculating the fetal intrauterine craniocerebral volume from the three-dimensional visualization model.
The beneficial effects of the invention are as follows:
(1) The fetal craniocerebral image can be segmented quickly and accurately: image segmentation is performed on the pixel intensity differences of the fetal craniocerebral image, initial seed points are calibrated, seed-growth rules are set, and the tissues surrounding the fetal cranium are automatically identified and edited by the system, so that the structure of the fetal cranium is displayed accurately.
(2) A three-dimensional model is constructed from the fetal craniocerebral segmented images, and a visualized model of the fetal cranial surface is obtained by surface reconstruction. Based on the mathematical description of volume measurement on the three-dimensional model, the measured data are combined with the construction requirements of the solid model, realizing communication and information flow between the mathematical model and the images. The shape of the fetal cranium in the uterus is thus reflected intuitively, and the image can be rotated, scaled, moved and displayed in section, making it convenient for doctors to observe and analyze the fetal craniocerebral volume from multiple angles and at multiple levels.
Drawings
The present invention is further described below with reference to the accompanying drawings for the convenience of understanding by those skilled in the art.
Fig. 1 is a flow chart of a fetal intrauterine cranium brain image-based detection method according to the invention.
Detailed Description
In order to further explain the technical means adopted by the invention to achieve the intended aim and their effects, the specific implementation, structure, features and effects of the invention are described in detail below with reference to the accompanying drawing and the preferred embodiments.
Referring to fig. 1, a detection method based on fetal intrauterine craniocerebral images comprises the following steps:
S1: acquiring a fetal craniocerebral image, removing noise interference of the fetal craniocerebral image, and enhancing the boundary to obtain a slice medical image;
S2: establishing a gray image data analysis layer, wherein the gray image data analysis layer comprises a first layer and a second layer; presetting a sliding image, importing the slice medical image into the first image layer, importing the sliding image into the second image layer, correlating pixel values of the first image layer and the second image layer, and setting a pixel mark vector; setting intensity values for pixels of the slice medical image, and extracting feature vectors of the slice medical image;
S3: setting a seed pixel, traversing the sliding image through the pixel marking vector, presetting a time step, calculating the characteristic distance between the seed pixel and the pixels in a neighborhood set in the time step, if the characteristic distance is larger than the intensity value of the pixels in the neighborhood set, developing the pixels in the neighborhood set into the seed and updating the intensity value and the characteristic vector, if the characteristic distance is smaller than or equal to the intensity value of the pixels in the neighborhood set, changing the pixel marking vector and the pixel value of the pixels in the neighborhood set, and obtaining a two-dimensional segmentation image after traversing is completed;
S4: presetting a three-dimensional scene, overlapping and matching the acquired two-dimensional segmentation images into the same coordinate system by taking the three-dimensional scene as a reference to obtain space data, converting a three-dimensional space into a voxel network according to the three-dimensional scene, establishing a three-dimensional surface in the voxel network according to the space data, and merging the three-dimensional surfaces to obtain a three-dimensional visual model;
S5: and calculating the intracranial volume in the uterus of the fetus according to the three-dimensional visual model.
Specifically, step S1 comprises:
S101: normalizing all pixel values of the fetal craniocerebral image, scaling them into the interval [0, 1];
S102: performing gamma correction on the pixel values to obtain nonlinear mapping values, with the calculation formula: f(I) = I^γ, where f(I) is the nonlinear mapping value output after gamma correction, I is a pixel value, and γ is the nonlinear mapping parameter;
S103: performing inverse normalization on the nonlinear mapping values to obtain the luminance image pixel values; applying layered processing and weighted fusion to the luminance image with three different Gaussian filter templates to obtain image detail layers; and adding the detail layers to the luminance image to obtain the slice medical image, with the calculation formulas:
D1 = Iin(x,y) − G1 * Iin(x,y),
D2 = (G1 − G2) * Iin(x,y),
D3 = (G2 − G3) * Iin(x,y),
Inew(x,y) = Iin(x,y) + [1 − w1·sgn(D1)]·D1 + w2·D2 + w3·D3,
where G1, G2, G3 are Gaussian templates with standard deviations 1, 2 and 5, * denotes convolution, D1, D2, D3 are the three detail layers corresponding to G1, G2, G3, Iin(x,y) is the luminance image information, w1 = 0.5, w2 = 0.25, w3 = 0.25, sgn is the sign function, Inew(x,y) is the slice medical image information, and x and y are the abscissa and ordinate of an image pixel.
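As a concrete illustration of S101 to S103, the following Python sketch chains the normalization, gamma correction and three-scale Gaussian detail fusion described above. It is an assumption-laden rendering rather than the patent's implementation: it presumes numpy and scipy are available, and the function name and the sample value gamma = 0.8 are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_slice(img, gamma=0.8):
    """Gamma-correct a luminance image and add back three Gaussian detail layers.

    Sketch of steps S101-S103; gamma=0.8 is an illustrative value, the patent
    only names gamma as a nonlinear mapping parameter."""
    # S101: normalize pixel values into [0, 1]
    lo, hi = float(img.min()), float(img.max())
    norm = (img - lo) / (hi - lo + 1e-12)
    # S102: gamma correction, f(I) = I ** gamma
    mapped = norm ** gamma
    # S103: inverse normalization back to the original intensity range
    lum = mapped * (hi - lo) + lo
    # Gaussian templates with standard deviations 1, 2 and 5 (convolution = filtering)
    g1 = gaussian_filter(lum, 1.0)
    g2 = gaussian_filter(lum, 2.0)
    g3 = gaussian_filter(lum, 5.0)
    d1, d2, d3 = lum - g1, g1 - g2, g2 - g3      # detail layers D1, D2, D3
    w1, w2, w3 = 0.5, 0.25, 0.25                 # fusion weights from the formulas above
    return lum + (1.0 - w1 * np.sign(d1)) * d1 + w2 * d2 + w3 * d3
```

The G1*, G2*, G3* convolutions of the formulas are expressed here through scipy.ndimage.gaussian_filter, which performs the same Gaussian smoothing.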
In this embodiment, images are declared with the hls::stream type and are input and output as video streams; the input image is 1920×1080 with a bit width of 12 bits, and an image with a bit width of 8 bits is output. For the gamma correction algorithm, a Window of size 3×3 is used to access the image pixel data for the operation. The top-level function of the gamma correction algorithm is image_gamma_core_1(): the axi_stream_in &input parameter takes in image data over the AXI-Stream protocol, the axi_stream_out &output parameter sends image data out over the AXI-Stream protocol, and the image data arriving in video-stream format is converted to the Mat data type by calling the AXIvideo2Mat function. A data window of size 3×3 is obtained from the image in RGB format, and once the 3×3 window is obtained, gamma correction is performed using the center-point data within the window.
Specifically, the pixel marking vector comprises a non-empty state, a neighborhood set and a local transition state; the initial value of the non-empty state is 1, and if the pixel corresponding to the pixel marking vector is diffused into a seed, the non-empty state is changed to 0;
The neighborhood set is the set of pixels in the 8 grid cells surrounding the pixel corresponding to the pixel marking vector, and the local transition state is the state of that pixel at the previous time step.
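For concreteness, one possible in-memory layout of such a pixel marking vector is sketched below; the patent does not prescribe field names, so all identifiers here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PixelMark:
    """Pixel marking vector: non-empty state, 8-neighborhood, local transition state."""
    non_empty: int = 1       # 1 initially; set to 0 once the pixel is diffused into a seed
    neighborhood: list = field(default_factory=list)  # (row, col) of the 8 surrounding cells
    local_state: int = 0     # state of this pixel at the previous time step

def neighborhood_of(r, c, rows, cols):
    """Coordinates of the 8 grid cells surrounding pixel (r, c), clipped to the image."""
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols]
```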
Specifically, the traversal method comprises:
starting diffusion from the seed pixel position into the neighborhood set, and calculating the number of diffused pixels in the neighborhood set of the seed pixel through the pixel marking vectors, with the calculation formula:
E(p) = Σ_{q ∈ N(p)} l(q),
where E is the number of diffused pixels, p is the seed pixel, q is a pixel in the neighborhood set, N(p) is the neighborhood set of p, and l(q) is the marker value of pixel q read from its pixel marking vector;
presetting a neighborhood diffusion threshold and a back-propagation threshold: if the number of diffused pixels is greater than or equal to the neighborhood diffusion threshold, the seed pixel does not diffuse further within the neighborhood; if the number of diffused pixels is greater than or equal to the back-propagation threshold, the seed pixel is replaced by the minimum-intensity pixel in the neighborhood set.
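A schematic Python sketch of this diffusion traversal follows. It should be read as one plausible reading of the rules above, not as the patent's algorithm: the feature distance is reduced to an absolute intensity difference, and both thresholds are illustrative values, since the patent leaves these quantities to be preset.

```python
import numpy as np

def grow_region(intensity, seeds, t_diffuse=6, t_backprop=7, n_steps=50):
    """Seed diffusion over an intensity image with neighborhood and
    back-propagation thresholds; t_diffuse and t_backprop are illustrative."""
    rows, cols = intensity.shape
    is_seed = np.zeros((rows, cols), dtype=bool)
    non_empty = np.ones((rows, cols), dtype=np.int8)   # non-empty state of the marking vector
    for r, c in seeds:
        is_seed[r, c] = True
        non_empty[r, c] = 0
    for _ in range(n_steps):                           # preset time steps
        for r, c in list(zip(*np.nonzero(is_seed))):
            nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                    and 0 <= r + dr < rows and 0 <= c + dc < cols]
            # number of already-diffused pixels in the neighborhood set
            e = sum(1 for (nr, nc) in nbrs if non_empty[nr, nc] == 0)
            if e >= t_diffuse:
                continue                               # seed does not diffuse further
            for nr, nc in nbrs:
                # feature distance stands in for the seed/neighbor comparison of S3
                dist = abs(float(intensity[r, c]) - float(intensity[nr, nc]))
                if dist > intensity[nr, nc]:
                    is_seed[nr, nc] = True             # neighbor grows into a seed
                    non_empty[nr, nc] = 0
            if e >= t_backprop:
                # replace the seed with the minimum-intensity pixel of the neighborhood
                nr, nc = min(nbrs, key=lambda p: intensity[p])
                is_seed[r, c], is_seed[nr, nc] = False, True
    return is_seed
```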
Specifically, step S4 comprises:
S401: extracting feature points of the two-dimensional segmented images; presetting a neighborhood size, selecting feature point pairs within the neighborhood of each feature point to form descriptor point pairs, and forming a binary descriptor set from all descriptor point pairs in the neighborhood;
S402: presetting a feature matching distance; performing an exclusive-OR operation on the descriptor point pairs and counting the number of differing bit values as the Hamming distance; if the Hamming distance is less than twice the feature matching distance, the feature points are judged to be a correct match (a matching sketch follows step S405 below);
S403: obtaining the translation vector and rotation matrix of the camera coordinate system relative to the three-dimensional scene, and calculating the spatial data of the two-dimensional segmented images from the correctly matched feature points;
S404: optimizing the reprojection error of the spatial data to obtain a sparse point cloud, with the calculation formula (a numerical sketch of this objective follows the embodiment note below):
g(C, X) = Σ_{i=1}^{n} Σ_{j=1}^{m} w_ij · ‖q(C_i, X_j) − P_ij‖²,
where g is the bundle adjustment function, C_i is the two-dimensional image data under view angle i, X_j is the three-dimensional point data of track j, i is the view-angle index, j is the track index, w_ij is the measurement parameter of track j at view angle i, q(C_i, X_j) is the projection point of the spatial data, P_ij is the real point in the two-dimensional image, n is the total number of view angles, and m is the total number of tracks;
S405: combining the sparse point-cloud feature points with adjacent points using the image information to obtain voxel points, expanding them and filtering out erroneous voxel points to obtain three-dimensional surfaces, and merging the three-dimensional surfaces to obtain the three-dimensional visualization model.
In this embodiment, the dependency libraries are OpenMVG, OpenMVS and OpenCV; the hardware environment is an Intel Core i5-8250U (1.6 GHz); development is done in C++ on the Ubuntu 16.04 platform. A quadtree structure is used to iteratively partition the plane of pixel points to achieve feature equalization, so that the detected feature points are reasonably distributed by the improved quadtree method. Regions with too few extracted feature points or uneven illumination undergo image-enhancement processing, and feature points are re-extracted as a supplement so that enough key points can be obtained from over-exposed or low-light regions. A maximum decomposition depth is set to solve the problem of "over-equalization" caused by too many iterations.
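The bundle-adjustment objective of S404 can be evaluated directly once a projection function is fixed. The following numpy sketch assumes a bare pinhole model, and every name in it is hypothetical:

```python
import numpy as np

def reprojection_error(project, cameras, points3d, observations):
    """Sum of squared reprojection errors g(C, X) = sum_ij w_ij * ||q(C_i, X_j) - P_ij||^2.

    observations maps (view i, track j) -> observed 2-D point P_ij; the
    measurement parameter w_ij is 1 for observed pairs and 0 otherwise."""
    err = 0.0
    for (i, j), p_ij in observations.items():
        q_ij = project(cameras[i], points3d[j])   # projection of track j into view i
        err += float(np.sum((q_ij - np.asarray(p_ij)) ** 2))
    return err

def pinhole_project(cam, x):
    """Toy pinhole projection: cam = (R, t, f), a deliberate simplification."""
    r_mat, t, f = cam
    xc = r_mat @ np.asarray(x) + t                # world -> camera coordinates
    return f * xc[:2] / xc[2]                     # perspective division
```

An actual pipeline built on OpenMVG/OpenMVS, as in this embodiment, would minimize this objective over cameras and points with a nonlinear least-squares solver rather than merely evaluating it.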
Preferably, step S5 comprises:
S501: obtaining, from the three-dimensional visualization model, the vertex coordinates of the voxel network, the voxel unit numbers and the physical-quantity iso-value levels at the network nodes, and interpolating the iso-valued points (a linear-interpolation helper is sketched after this list);
S502: searching for the starting point of a contour line: search for an unused voxel unit having an adjacent edge that is a boundary edge; if such a unit is found, compare its two feature adjacent edges, and if both are boundary edges, take either one as the starting edge and the other as the subsequent edge; otherwise, take the feature adjacent edge lying on the boundary as the starting edge; store the coordinates of the iso-valued points corresponding to the starting edge and the subsequent edge as the first and second points of the contour-line structure, and record the nodes corresponding to the subsequent edge as P1 and P2;
S503: searching for an unused voxel unit that shares the adjacent edge containing nodes P1 and P2, taking its non-shared feature adjacent edge as the next subsequent edge, and repeating the process of S502 until no unused voxel unit sharing an adjacent edge remains; the contour-line structure is stored as a two-dimensional linked list;
S504: calculating the volume of the three-dimensional visualization model from the two-dimensional linked list, with the calculation formulas:
S_i = (1/2) · | Σ_{j=1}^{n} (x_{j−1} − x_{j+1}) · y_j |,
V = Σ_{i=1}^{m} S_i · H,
where S is the calculated area of a closed contour, S_i is the area of the i-th closed contour, x_{j−1} and x_{j+1} are the ordinates of the (j−1)-th and (j+1)-th contour nodes, y_j is the abscissa of the j-th contour node, n is the number of contour nodes, j is the contour node index, V is the volume of the three-dimensional model, i is the closed-contour index, m is the total number of closed contours, and H is the contour interval between adjacent closed contours.
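The iso-point interpolation of S501 is plain linear interpolation along a voxel edge; a small helper with hypothetical argument names:

```python
def iso_point(p0, p1, v0, v1, level):
    """Linearly interpolate the iso-valued point on an edge from p0 (value v0) to p1 (value v1)."""
    t = (level - v0) / (v1 - v0)                  # fraction along the edge where value == level
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```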
In this embodiment, implemented on OpenGL, the NURBS functions in OpenGL are used to fit the surface; contours are then generated by constructing a Delaunay triangular mesh, and the volume of the object is calculated. Non-uniform rational B-spline surfaces, which are easy to construct and control, are used for the representation.
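Assuming each closed contour is stored as ordered node lists (the two-dimensional linked list of S503) and consecutive contours lie a distance H apart, the S504 computation reduces to the shoelace formula summed over slices; a sketch:

```python
import numpy as np

def contour_area(xs, ys):
    """Shoelace area of one closed contour: 0.5 * |sum_j (x_{j-1} - x_{j+1}) * y_j|."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    return 0.5 * abs(np.sum((np.roll(xs, 1) - np.roll(xs, -1)) * ys))

def model_volume(contours, h):
    """Volume from a stack of closed contours with contour interval h.

    Each slice contributes area * h; contours is a list of (xs, ys) pairs."""
    return h * sum(contour_area(xs, ys) for xs, ys in contours)
```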
A detection system based on fetal intrauterine craniocerebral images comprises an image acquisition module, an image analysis module, an image segmentation module, a three-dimensional reconstruction module and a volume calculation module;
The image acquisition module is used for acquiring a fetal craniocerebral image, removing noise interference from the fetal craniocerebral image and enhancing the boundaries to obtain a slice medical image;
The image analysis module is used for establishing a gray-image data analysis layer comprising a first layer and a second layer; presetting a sliding image, importing the slice medical image into the first layer, importing the sliding image into the second layer, correlating the pixel values of the two layers, and setting pixel marking vectors; setting an intensity value for each pixel of the slice medical image, and extracting the feature vectors of the slice medical image;
The image segmentation module is used for setting seed pixels and traversing the sliding image through the pixel marking vectors; presetting a time step and calculating, within each time step, the feature distance between a seed pixel and the pixels in its neighborhood set; if the feature distance is greater than the intensity value of a pixel in the neighborhood set, growing that pixel into a seed and updating its intensity value and feature vector; if the feature distance is less than or equal to the intensity value of the pixel, changing the pixel marking vector and the pixel value of that pixel; and obtaining a two-dimensional segmented image after the traversal is completed;
The three-dimensional reconstruction module is used for establishing a three-dimensional scene; with the three-dimensional scene as a reference, superimposing and registering the acquired two-dimensional segmented images into the same coordinate system to obtain spatial data; converting the three-dimensional space into a voxel network according to the three-dimensional scene, establishing three-dimensional surfaces in the voxel network according to the spatial data, and merging the three-dimensional surfaces to obtain a three-dimensional visualization model;
The volume calculation module is used for calculating the fetal intrauterine craniocerebral volume from the three-dimensional visualization model.
The present invention is not limited to the above embodiments; modifications and variations in detail are possible, and other modifications and variations can be made by those skilled in the art without departing from the scope of the present invention.

Claims (7)

1. A detection method based on fetal intrauterine craniocerebral images, characterized by comprising the following steps:
S1: Acquiring a fetal craniocerebral image, removing noise interference from the fetal craniocerebral image, and enhancing the boundaries to obtain a slice medical image;
S2: Establishing a gray-image data analysis layer, wherein the gray-image data analysis layer comprises a first layer and a second layer; presetting a sliding image, importing the slice medical image into the first layer, importing the sliding image into the second layer, correlating the pixel values of the first layer and the second layer, and setting pixel marking vectors; setting an intensity value for each pixel of the slice medical image, and extracting the feature vectors of the slice medical image;
S3: Setting seed pixels and traversing the sliding image through the pixel marking vectors; presetting a time step, and calculating within each time step the feature distance between a seed pixel and the pixels in its neighborhood set; if the feature distance is greater than the intensity value of a pixel in the neighborhood set, growing that pixel into a seed and updating its intensity value and feature vector; if the feature distance is less than or equal to the intensity value of the pixel, changing the pixel marking vector and the pixel value of that pixel; and obtaining a two-dimensional segmented image after the traversal is completed;
S4: Presetting a three-dimensional scene; with the three-dimensional scene as a reference, superimposing and registering the acquired two-dimensional segmented images into the same coordinate system to obtain spatial data; converting the three-dimensional space into a voxel network according to the three-dimensional scene, establishing three-dimensional surfaces in the voxel network according to the spatial data, and merging the three-dimensional surfaces to obtain a three-dimensional visualization model;
S5: Calculating the fetal intrauterine craniocerebral volume from the three-dimensional visualization model.
2. The method according to claim 1, wherein step S1 specifically comprises:
S101: normalizing all pixel values of the fetal craniocerebral image, scaling them into the interval [0, 1];
S102: performing gamma correction on the pixel values to obtain nonlinear mapping values, with the calculation formula: f(I) = I^γ, where f(I) is the nonlinear mapping value output after gamma correction, I is a pixel value, and γ is the nonlinear mapping parameter;
S103: performing inverse normalization on the nonlinear mapping values to obtain the luminance image pixel values; applying layered processing and weighted fusion to the luminance image with three different Gaussian filter templates to obtain image detail layers; and adding the detail layers to the luminance image to obtain the slice medical image, with the calculation formulas:
D1 = Iin(x,y) − G1 * Iin(x,y),
D2 = (G1 − G2) * Iin(x,y),
D3 = (G2 − G3) * Iin(x,y),
Inew(x,y) = Iin(x,y) + [1 − w1·sgn(D1)]·D1 + w2·D2 + w3·D3,
where G1, G2, G3 are Gaussian templates with standard deviations 1, 2 and 5, * denotes convolution, D1, D2, D3 are the three detail layers corresponding to G1, G2, G3, Iin(x,y) is the luminance image information, w1 = 0.5, w2 = 0.25, w3 = 0.25, sgn is the sign function, Inew(x,y) is the slice medical image information, and x and y are the abscissa and ordinate of an image pixel.
3. The method according to claim 1, wherein the pixel marking vector comprises a non-empty state, a neighborhood set and a local transition state; the initial value of the non-empty state is 1, and if the pixel corresponding to the pixel marking vector is diffused into a seed, the non-empty state is changed to 0;
The neighborhood set is the set of pixels in the 8 grid cells surrounding the pixel corresponding to the pixel marking vector, and the local transition state is the state of that pixel at the previous time step.
4. The method according to claim 1, wherein the traversal specifically comprises: starting diffusion from the seed pixel position into the neighborhood set, and calculating the number of diffused pixels in the neighborhood set of the seed pixel through the pixel marking vectors, with the calculation formula:
E(p) = Σ_{q ∈ N(p)} l(q),
where E is the number of diffused pixels, p is the seed pixel, q is a pixel in the neighborhood set, N(p) is the neighborhood set of p, and l(q) is the marker value of pixel q read from its pixel marking vector;
presetting a neighborhood diffusion threshold and a back-propagation threshold: if the number of diffused pixels is greater than or equal to the neighborhood diffusion threshold, the seed pixel does not diffuse further within the neighborhood; if the number of diffused pixels is greater than or equal to the back-propagation threshold, the seed pixel is replaced by the minimum-intensity pixel in the neighborhood set.
5. The method according to claim 1, wherein step S4 specifically comprises:
S401: extracting feature points of the two-dimensional segmented images; presetting a neighborhood size, selecting feature point pairs within the neighborhood of each feature point to form descriptor point pairs, and forming a binary descriptor set from all descriptor point pairs in the neighborhood;
S402: presetting a feature matching distance; performing an exclusive-OR operation on the descriptor point pairs and counting the number of differing bit values as the Hamming distance; if the Hamming distance is less than twice the feature matching distance, the feature points are judged to be a correct match;
S403: obtaining the translation vector and rotation matrix of the camera coordinate system relative to the three-dimensional scene, and calculating the spatial data of the two-dimensional segmented images from the correctly matched feature points;
S404: optimizing the reprojection error of the spatial data to obtain a sparse point cloud, with the calculation formula:
g(C, X) = Σ_{i=1}^{n} Σ_{j=1}^{m} w_ij · ‖q(C_i, X_j) − P_ij‖²,
where g is the bundle adjustment function, C_i is the two-dimensional image data under view angle i, X_j is the three-dimensional point data of track j, i is the view-angle index, j is the track index, w_ij is the measurement parameter of track j at view angle i, q(C_i, X_j) is the projection point of the spatial data, P_ij is the real point in the two-dimensional image, n is the total number of view angles, and m is the total number of tracks;
S405: combining the sparse point-cloud feature points with adjacent points using the image information to obtain voxel points, expanding them and filtering out erroneous voxel points to obtain three-dimensional surfaces, and merging the three-dimensional surfaces to obtain the three-dimensional visualization model.
6. The method according to claim 1, wherein step S5 specifically comprises:
S501: obtaining, from the three-dimensional visualization model, the vertex coordinates of the voxel network, the voxel unit numbers and the physical-quantity iso-value levels at the network nodes, and interpolating the iso-valued points;
S502: searching for the starting point of a contour line: search for an unused voxel unit having an adjacent edge that is a boundary edge; if such a unit is found, compare its two feature adjacent edges, and if both are boundary edges, take either one as the starting edge and the other as the subsequent edge; otherwise, take the feature adjacent edge lying on the boundary as the starting edge; store the coordinates of the iso-valued points corresponding to the starting edge and the subsequent edge as the first and second points of the contour-line structure, and record the nodes corresponding to the subsequent edge as P1 and P2;
S503: searching for an unused voxel unit that shares the adjacent edge containing nodes P1 and P2, taking its non-shared feature adjacent edge as the next subsequent edge, and repeating the process of S502 until no unused voxel unit sharing an adjacent edge remains; the contour-line structure is stored as a two-dimensional linked list;
S504: calculating the volume of the three-dimensional visualization model from the two-dimensional linked list, with the calculation formulas:
S_i = (1/2) · | Σ_{j=1}^{n} (x_{j−1} − x_{j+1}) · y_j |,
V = Σ_{i=1}^{m} S_i · H,
where S is the calculated area of a closed contour, S_i is the area of the i-th closed contour, x_{j−1} and x_{j+1} are the ordinates of the (j−1)-th and (j+1)-th contour nodes, y_j is the abscissa of the j-th contour node, n is the number of contour nodes, j is the contour node index, V is the volume of the three-dimensional model, i is the closed-contour index, m is the total number of closed contours, and H is the contour interval between adjacent closed contours.
7. A detection system based on fetal intrauterine craniocerebral images, characterized in that it operates according to the method of any one of claims 1-6 and comprises an image acquisition module, an image analysis module, an image segmentation module, a three-dimensional reconstruction module and a volume calculation module;
The image acquisition module is used for acquiring a fetal craniocerebral image, removing noise interference from the fetal craniocerebral image and enhancing the boundaries to obtain a slice medical image;
The image analysis module is used for establishing a gray-image data analysis layer comprising a first layer and a second layer; presetting a sliding image, importing the slice medical image into the first layer, importing the sliding image into the second layer, correlating the pixel values of the two layers, and setting pixel marking vectors; setting an intensity value for each pixel of the slice medical image, and extracting the feature vectors of the slice medical image;
The image segmentation module is used for setting seed pixels and traversing the sliding image through the pixel marking vectors; presetting a time step and calculating, within each time step, the feature distance between a seed pixel and the pixels in its neighborhood set; if the feature distance is greater than the intensity value of a pixel in the neighborhood set, growing that pixel into a seed and updating its intensity value and feature vector; if the feature distance is less than or equal to the intensity value of the pixel, changing the pixel marking vector and the pixel value of that pixel; and obtaining a two-dimensional segmented image after the traversal is completed;
The three-dimensional reconstruction module is used for establishing a three-dimensional scene; with the three-dimensional scene as a reference, superimposing and registering the acquired two-dimensional segmented images into the same coordinate system to obtain spatial data; converting the three-dimensional space into a voxel network according to the three-dimensional scene, establishing three-dimensional surfaces in the voxel network according to the spatial data, and merging the three-dimensional surfaces to obtain a three-dimensional visualization model;
The volume calculation module is used for calculating the fetal intrauterine craniocerebral volume from the three-dimensional visualization model.
CN202311568148.0A 2023-11-23 2023-11-23 Fetal intrauterine craniocerebral image-based detection method and system Active CN117611542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311568148.0A CN117611542B (en) 2023-11-23 2023-11-23 Fetal intrauterine craniocerebral image-based detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311568148.0A CN117611542B (en) 2023-11-23 2023-11-23 Fetal intrauterine craniocerebral image-based detection method and system

Publications (2)

Publication Number Publication Date
CN117611542A CN117611542A (en) 2024-02-27
CN117611542B true CN117611542B (en) 2024-05-28

Family

ID=89945653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311568148.0A Active CN117611542B (en) 2023-11-23 2023-11-23 Fetal intrauterine craniocerebral image-based detection method and system

Country Status (1)

Country Link
CN (1) CN117611542B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974647B (en) * 2024-03-29 2024-06-07 青岛大学 Three-dimensional linkage type measurement method, medium and system for two-dimensional medical image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101760287B1 (en) * 2016-02-05 2017-07-25 한국광기술원 Device and method for medical image segmentation
EP3456265A1 (en) * 2017-09-14 2019-03-20 Koninklijke Philips N.V. Fetal development monitoring
CN110974302A (en) * 2019-10-21 2020-04-10 李胜利 Automatic detection method and system for fetal head volume in ultrasonic image
CN111932513A (en) * 2020-08-07 2020-11-13 深圳市妇幼保健院 Method and system for imaging three-dimensional image of fetal sulcus gyrus in ultrasonic image
CN116993947A (en) * 2023-09-26 2023-11-03 光谷技术有限公司 Visual display method and system for three-dimensional scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8845539B2 (en) * 2010-12-22 2014-09-30 General Electric Company Methods and systems for estimating gestation age of a fetus
US20230005133A1 (en) * 2021-06-24 2023-01-05 Carolyn M Salafia Automated placental measurement

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101760287B1 (en) * 2016-02-05 2017-07-25 한국광기술원 Device and method for medical image segmentation
EP3456265A1 (en) * 2017-09-14 2019-03-20 Koninklijke Philips N.V. Fetal development monitoring
CN110974302A (en) * 2019-10-21 2020-04-10 李胜利 Automatic detection method and system for fetal head volume in ultrasonic image
CN111932513A (en) * 2020-08-07 2020-11-13 深圳市妇幼保健院 Method and system for imaging three-dimensional image of fetal sulcus gyrus in ultrasonic image
CN116993947A (en) * 2023-09-26 2023-11-03 光谷技术有限公司 Visual display method and system for three-dimensional scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Construction of an MRI-based digital three-dimensional model of the body surface of the full-term fetus in vivo; Liu Ping et al.; Chinese Journal of Medical Imaging; 2015-01-31; 23(01); pp. 23-26 *

Also Published As

Publication number Publication date
CN117611542A (en) 2024-02-27

Similar Documents

Publication Publication Date Title
Zhang et al. Image engineering
CN109410168B (en) Modeling method of convolutional neural network for determining sub-tile classes in an image
CN108830913B (en) Semantic level line draft coloring method based on user color guidance
CN117611542B (en) Fetal intrauterine craniocerebral image-based detection method and system
US20210012550A1 (en) Additional Developments to the Automatic Rig Creation Process
CN112613097A (en) BIM rapid modeling method based on computer vision
Xu et al. Pixel-level non-local image smoothing with objective evaluation
CN110176064B (en) Automatic identification method for main body object of photogrammetric generation three-dimensional model
CN110992366B (en) Image semantic segmentation method, device and storage medium
CN110827304B (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolution network and level set method
CN111640116B (en) Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN113177592B (en) Image segmentation method and device, computer equipment and storage medium
Vu et al. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing
US11995786B2 (en) Interactive image editing
CN113450396A (en) Three-dimensional/two-dimensional image registration method and device based on bone features
CN107610219A (en) The thick densification method of Pixel-level point cloud that geometry clue perceives in a kind of three-dimensional scenic reconstruct
CN106548476A (en) Using medical image statistics pulmonary three-dimensional feature Method On Shape
CN113538682B (en) Model training method, head reconstruction method, electronic device, and storage medium
Muñoz-Benavent et al. Impact evaluation of deep learning on image segmentation for automatic bluefin tuna sizing
CN116993947B (en) Visual display method and system for three-dimensional scene
CN113643281A (en) Tongue image segmentation method
CN117501313A (en) Hair rendering system based on deep neural network
CN111369662A (en) Three-dimensional model reconstruction method and system for blood vessels in CT (computed tomography) image
CN113487728B (en) Fish body model determination method and system
Amur et al. Adaptive Numerical Regularization for Variational Denoising Model with Complementary Approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant