WO2018095058A1 - Ultrasound three-dimensional fetal facial contour image processing method and system - Google Patents

Ultrasound three-dimensional fetal facial contour image processing method and system

Info

Publication number
WO2018095058A1
WO2018095058A1 (PCT/CN2017/093457)
Authority
WO
WIPO (PCT)
Prior art keywords
slice
target area
boundary
fetal
frame
Prior art date
Application number
PCT/CN2017/093457
Other languages
English (en)
French (fr)
Inventor
黄柳倩
孙慧
艾金钦
刘旭江
喻美媛
Original Assignee
深圳开立生物医疗科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳开立生物医疗科技股份有限公司
Publication of WO2018095058A1

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data

Definitions

  • the present invention relates to the field of ultrasonic imaging technology, and in particular, to an ultrasonic three-dimensional fetal facial contour image processing method and system.
  • the three-dimensional ultrasound imaging system builds on traditional two-dimensional ultrasound: it acquires a sequence of spatially ordered two-dimensional ultrasound images and, according to the spatial positions at which the data were acquired, reconstructs the volume data through steps such as scan conversion.
  • the three-dimensional ultrasound imaging system provides an extra dimension of spatial information that traditional two-dimensional ultrasound cannot provide, making clinical diagnosis and observation more intuitive and flexible, and making communication between doctors and patients smoother. Because its information is rich and intuitive, the three-dimensional ultrasound imaging system is currently used mainly for observing fetal morphology in the field of obstetrics, especially the face.
  • the front of the fetal face may be blocked by the placenta, umbilical cord, arm, uterine wall, etc., so the acquired three-dimensional volume data may include the placenta, substances suspended in the amniotic fluid, the umbilical cord, uterine tissue, and the like, which occlude the imaging target and make observation difficult.
  • although current ultrasound equipment usually offers an interactive volume-cropping method that lets the examiner crop away the occluding portion, the operation is cumbersome and time-consuming.
  • the invention provides an ultrasound three-dimensional fetal facial contour image processing method and system that crop the volume data using trusted boundary points, automatically cropping away the portion occluding the fetal face, simplifying the examiner's operation and improving the success rate of ultrasound three-dimensional imaging.
  • the present invention adopts the following technical solutions:
  • an ultrasonic three-dimensional fetal facial contour image processing method comprising:
  • the fetal volume data is cropped according to the trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image.
  • the method further includes:
  • the position of the current frame slice target area is corrected.
  • the step of correcting the location of the current frame slice target area includes:
  • the center point coordinate of the current frame slice target area is replaced with the center point coordinate of the target slice target area.
  • the step of performing facial boundary detection on the selected slice to obtain a trusted boundary point includes:
  • the method before the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundary, the method further includes: a step of boundary growth.
  • the step of acquiring multiple trusted boundary points according to the candidate segmentation boundary includes:
  • the maximum value of each column in the voting matrix is counted, and the point corresponding to the maximum value is determined as a trusted boundary point.
  • the step of cropping the fetal volume data according to the trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image includes:
  • the fetal volume data is cropped according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the step of detecting the multi-frame slice of the fetal volume data in a predetermined direction to obtain the target area of each frame slice includes:
  • the target area and its corresponding slice are saved.
  • an ultrasound three-dimensional fetal facial contour image processing system comprising:
  • a target area detecting module configured to detect multi-frame slices of the fetal volume data in a predetermined direction to obtain the target area of each frame slice, wherein the target area includes a fetal head area;
  • a trusted boundary point acquiring module configured to filter out a slice including a target area, and perform face boundary detection on the selected slice to obtain a trusted boundary point
  • a cropping module configured to crop the fetal volume data according to the trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image.
  • the system further includes: a correction module, configured to correct a position of the current frame slice target area when a difference between a position of the current frame slice target area and a position of the adjacent slice target area exceeds a predetermined threshold.
  • a correction module configured to correct a position of the current frame slice target area when a difference between a position of the current frame slice target area and a position of the adjacent slice target area exceeds a predetermined threshold.
  • the correction module is further configured to: traverse all the selected slices to obtain a <frame number, target area> sequence; obtain, for each frame in the <frame number, target area> sequence, the deviation between the position of that frame slice's target area and the position of its neighboring slice's target area; calculate the mean of the deviations; and, when the deviation of the position of the current frame slice's target area from the position of its neighboring slice's target area is greater than the mean, replace the center point coordinates of the current frame slice's target area with the center point coordinates of the target slice's target area.
  • the trusted boundary point obtaining module is further configured to: obtain, in each selected slice, the transition boundary from a darker region to a brighter region; determine the face region of each selected slice according to the gray values and contour shape of the connected region in which the transition boundary lies, and take the upper surface boundary of the face region as a candidate segmentation boundary; and acquire a plurality of trusted boundary points according to the candidate segmentation boundaries.
  • the trusted boundary point acquisition module comprises a boundary growth unit, and the boundary growth unit is used for boundary growth.
  • the trusted boundary point obtaining module is further configured to: construct, according to the candidate segmentation boundaries, a boundary matrix corresponding to each selected slice; superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and count the maximum value of each column in the voting matrix, determining the point corresponding to the maximum value as a trusted boundary point.
  • the cropping module is further configured to: create a cropping template according to the trusted boundary points; and crop the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the target area detecting module is further configured to: detect, by using a preset classifier, the target area of each frame slice of the fetal volume data in a predetermined direction; and save the target area and its corresponding slice.
  • the beneficial effects of the present invention are: multi-frame slices of fetal volume data in a predetermined direction are detected to obtain the target region of each frame slice, wherein the target region includes a fetal head region; the slices including the target area are screened out, and facial boundary detection is performed on the selected slices to obtain trusted boundary points; the fetal volume data is cropped according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
  • by cropping the volume data using the trusted boundary points, the invention automatically crops away the portion occluding the fetal face, simplifying the examiner's operation and improving the success rate of ultrasound three-dimensional imaging.
  • FIG. 1 is a flow chart of a method of an embodiment of an ultrasonic three-dimensional fetal facial contour image processing method provided in an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a multi-frame slice of fetal volume data in a predetermined direction provided in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a process for correcting the position of a current frame slice target area provided in an embodiment of the present invention.
  • Figure 4a is a schematic illustration of a target area prior to correction in accordance with an embodiment of the present invention.
  • Figure 4b is a schematic illustration of the target area of Figure 4a after correction.
  • FIG. 5 is a schematic diagram of a process for performing facial boundary detection on a selected slice to obtain a trusted boundary point according to an embodiment of the present invention.
  • Figure 6a is a schematic illustration of alternative segmentation boundaries provided in an embodiment of the invention.
  • Figure 6b is a schematic illustration of the alternate segmentation boundary of Figure 6a after boundary growth.
  • FIG. 7 is a schematic diagram of a process for acquiring multiple trusted boundary points according to an alternative segmentation boundary according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a process of cropping fetal volume data according to trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image according to an embodiment of the present invention.
  • FIG. 9 is a cropping template provided in an embodiment of the present invention.
  • Figure 10a is a schematic illustration of a sagittal direction slice prior to trimming in accordance with an embodiment of the present invention.
  • Figure 10b is a schematic illustration of the slice of Figure 10a after cropping.
  • FIG. 11 is a block diagram showing the structure of an embodiment of an ultrasonic three-dimensional fetal facial contour image processing system provided in an embodiment of the present invention.
  • FIG. 1 is a flowchart of an ultrasonic three-dimensional fetal facial contour image processing method provided in an embodiment of the present invention. As shown in Figure 1, the method includes:
  • Step S101 detecting a multi-frame slice of the fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head area.
  • the target areas of the multi-frame slices of the fetal volume data in a predetermined direction are detected to acquire the target area of at least one frame slice.
  • the target area includes a fetal head region
  • the predetermined direction may be a facet direction of the fetal volume data, such as a planar direction parallel to the transducer array direction.
  • the 3D/4D imaging examiner uses this plane direction to obtain the sagittal plane of the fetus, and the sagittal plane can also be replaced with a characteristic section such as the coronal plane.
  • the predetermined direction may also traverse the plane directions of the three axes of the volume data; of course, it may also be a predetermined direction determined by another algorithm, which is not repeated here.
  • the target region detection adopts a Histogram of Oriented Gradients (HOG) feature extraction algorithm and an AdaBoost classifier algorithm.
  • the classifier is trained in advance on data of the fetal sagittal head region and configured according to the training result; the preset classifier then automatically locates the sagittal-plane target region on each frame slice of the fetal volume data in the predetermined direction, and the target area and its corresponding slice are saved.
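As a rough illustration of this detection step, the sketch below pairs a minimal HOG feature extractor with a sliding-window search. The cell size, window size, step, and the `classifier` scoring callable are placeholders for illustration only, not the patent's trained AdaBoost model.

```python
import numpy as np

def hog_features(img, cell=8, nbins=9):
    """Minimal HOG sketch: per-cell histograms of gradient orientation,
    weighted by gradient magnitude (block normalisation omitted)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    feats = np.zeros((ch, cw, nbins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            bins = (a / (180.0 / nbins)).astype(int) % nbins
            np.add.at(feats[i, j], bins, m)        # vote magnitude into bins
    return feats.ravel()

def detect_target_area(slice_img, classifier, win=64, step=16):
    """Slide a window over one slice; return the (x, y, w, h) box the
    classifier scores highest, or None if no window scores above zero."""
    best, best_score = None, 0.0
    h, w = slice_img.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            score = classifier(hog_features(slice_img[y:y+win, x:x+win]))
            if score > best_score:
                best, best_score = (x, y, win, win), score
    return best
```

A real implementation would run this per slice and keep the detected box together with its frame number, as the text describes.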
  • Step S102 Filter out a slice including the target area, and perform face boundary detection on the selected slice to obtain a trusted boundary point.
  • not every slice of the fetal volume data in the predetermined direction necessarily contains the target area, so the slices containing the target area must be screened out.
  • the gray values and contour shape of the connected region in which each frame slice's transition boundary lies are analyzed and compared with the pre-stored face gray values and face contour shapes to find the face region of each frame slice, and the upper surface boundary of the face region is taken as the face boundary; each face boundary point is then voted on according to the face boundaries of the frame slices, and the points through which the most face boundaries pass are determined to be the trusted boundary points.
  • Step S103 Crop the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
  • the fetal volume data is cropped according to the cropping template to obtain the ultrasound three-dimensional fetal facial contour image.
  • the automatic cutting of the occlusion portion of the fetal face is realized, which simplifies the operation of the detecting person and improves the three-dimensional imaging rate of the ultrasound.
  • the ultrasound three-dimensional fetal facial contour image processing method of the above embodiment detects multi-frame slices of the fetal volume data in a predetermined direction to acquire the target region of each frame slice; screens out the slices including the target region and performs face boundary detection on the selected slices to obtain trusted boundary points; and crops the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
  • by cropping the volume data using the trusted boundary points, the invention automatically crops away the portion occluding the fetal face, so the operation is simple and quick, errors caused by manual operation are reduced, and the image quality is higher.
  • the method further includes:
  • the position of the current target area is corrected.
  • the adjacent slice refers to a slice that is adjacent to the current frame slice in the slice sequence. For example, as shown in FIG. 2, in the slice sequence along the Z-axis, a region of interest (ROI) can be detected in the slices numbered 1, 3, 5, and 6; the neighboring ROIs of the ROI of the slice numbered 3 are then the ROIs of the slices numbered 1 and 5.
  • the step of correcting the position of the current frame slice target area includes:
  • Step S301 Traverse all the selected slices to obtain a ⁇ frame number, target area> sequence.
  • Step S302 Find the deviation between the position of each frame slice target area in the ⁇ frame number, target area> sequence and the position of the adjacent slice target area.
  • the deviation comprises the X coordinate deviation and the Y coordinate deviation between the center point of each frame slice's target area and the center point of its neighboring slice's target area.
  • the X coordinate deviations and Y coordinate deviations thus obtained are stored respectively in a <frame number, X coordinate deviation from the center point of the neighboring slice's target area> sequence and a <frame number, Y coordinate deviation from the center point of the neighboring slice's target area> sequence.
  • Step S303 Calculate the mean value of the deviation.
  • the mean value includes the mean value of the X coordinate deviation and the mean value of the Y coordinate deviation.
  • the maximum value is first removed from each of the two deviation sequences obtained in the above steps, namely the <frame number, X coordinate deviation from the center point of the neighboring slice's target area> sequence and the <frame number, Y coordinate deviation from the center point of the neighboring slice's target area> sequence; the mean of the X coordinate deviations and the mean of the Y coordinate deviations of the two sequences are then calculated, giving the mean deviation between each frame slice's target area and its neighboring slice's target area.
  • Step S304 When the deviation of the position of the current frame slice target area from the position of the adjacent slice target area is greater than the mean value, the center point coordinate of the current frame slice target area is replaced with the center point coordinate of the target slice target area.
  • the target slice is the slice closest to the current frame slice among those whose target area position deviates from the position of its neighboring slice's target area by less than the mean.
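The correction procedure above (deviation sequence, mean deviation, replacement by the nearest well-detected slice) might be sketched as follows. This simplified version measures each ROI centre against the previous detected frame and omits the outlier-elimination refinement, so the function name and the dict data layout are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def correct_roi_centers(rois):
    """rois: dict {frame_number: (cx, cy)} of detected ROI centre points.
    Frames whose centre deviates from the previous detected frame by more
    than the mean per-axis deviation inherit the centre of the nearest
    frame whose deviation is at or below the mean."""
    frames = sorted(rois)
    centers = {f: np.asarray(rois[f], float) for f in frames}
    # deviation of each frame's centre from its predecessor in the sequence
    dev = {f: np.abs(centers[f] - centers[p])
           for p, f in zip(frames, frames[1:])}
    if not dev:
        return dict(rois)
    stacked = np.array(list(dev.values()))   # rows of (|dx|, |dy|)
    mean_dev = stacked.mean(axis=0)          # mean X and Y deviation
    good = [f for f in dev if np.all(dev[f] <= mean_dev)]
    corrected = dict(rois)
    for f in dev:
        if np.any(dev[f] > mean_dev) and good:
            nearest = min(good, key=lambda g: abs(g - f))  # closest good frame
            corrected[f] = tuple(centers[nearest])
    return corrected
```

The first detected frame has no predecessor and is never corrected here; that, too, is a simplification of the text's neighbour-based rule.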
  • the step of performing facial boundary detection on the selected slices to obtain trusted boundary points includes:
  • Step S501 Obtain a transition boundary from a darker region to a brighter region in the slice selected by each frame.
  • the boundary detection operator is used to find the transition boundary of each frame slice from the darker region to the brighter region.
  • the boundary detection operator here includes, but is not limited to, the Prewitt operator and the Sobel operator.
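A minimal sketch of this step, using a hand-rolled vertical Sobel filter to highlight dark-to-bright transitions and taking the first strong response in each column. The threshold value and the top-down scan order are assumptions for illustration, not values from the patent.

```python
import numpy as np

def sobel_vertical(img):
    """Vertical Sobel response: positive where intensity increases downward
    (dark-above / bright-below transition)."""
    k = np.array([[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]], float)
    h, w = img.shape
    out = np.zeros(img.shape, float)
    pad = np.pad(img.astype(float), 1, mode='edge')
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i+3, j:j+3] * k)
    return out

def transition_boundary(img, thresh=100.0):
    """For each column, the first row (scanning top-down) where the
    dark-to-bright response exceeds `thresh`; -1 where none is found."""
    g = sobel_vertical(img)
    boundary = np.full(img.shape[1], -1, int)
    for j in range(g.shape[1]):
        rows = np.nonzero(g[:, j] > thresh)[0]
        if rows.size:
            boundary[j] = rows[0]
    return boundary
```

The per-column row indices produced here play the role of one slice's candidate segmentation boundary in the later steps.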
  • Step S502 determining a face region of each frame slice according to the gray value and the contour shape of the connected region where the transition boundary is located, and using the upper surface boundary of the face region as an alternative segmentation boundary, as shown in FIG. 6a.
  • the gray value and the contour form of the connected region where the transition boundary of each frame slice is located are analyzed, and the gray value and the contour form are compared with the pre-stored face gray value and the face contour form to find each frame.
  • the sliced face area and the upper surface boundary of the face area is used as an alternative segmentation boundary.
  • Step S503 acquiring a plurality of trusted boundary points according to the candidate segmentation boundary acquired in the above step.
  • Each of the face boundary points is voted according to an alternative segmentation boundary of each frame slice, and the point at which the candidate segmentation boundary passes the most is determined as a trusted boundary point.
  • a boundary growing step is further included before the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundaries of the frame slices.
  • because the starting and ending points of the candidate segmentation boundaries of the frame slices may be inconsistent, which would affect the subsequent acquisition of the trusted boundary, boundary growing is used to obtain the complete face boundary of each slice from left to right, improving the accuracy of the boundary detection.
  • the boundary growing is performed on the gradient image: taking the left and right endpoints of the candidate segmentation boundary as growth points in the lateral direction, boundary points are searched for on the gradient image in the neighborhood of the current growth point, and each boundary point found is added to the current candidate segmentation boundary, yielding the complete face boundary of each frame slice.
  • the contrast effects before and after the boundary growth are shown in Figures 6a and 6b.
  • the step of acquiring a plurality of trusted boundary points according to the candidate segmentation boundary includes:
  • Step S701 Construct a boundary matrix corresponding to the slice selected by each frame according to the candidate segmentation boundary.
  • the boundary matrices are all the same size; the specific size can be determined as the case requires.
  • the background point of the boundary matrix is set to 0, and the boundary point is set to 1. Of course, it can be set to other values as long as the background and boundary points can be distinguished.
  • the boundary points are points corresponding to alternative segmentation boundaries in each frame slice.
  • Step S702 Superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix. The accumulation matrix may be a zero matrix of the same dimensions as the boundary matrices; candidate segmentation boundaries that pass through the same point thereby vote for that point.
  • Step S703 the maximum value of each column in the voting matrix is counted, and the point corresponding to the maximum value is determined as a trusted boundary point.
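Steps S701 to S703 can be sketched directly with arrays: one 0/1 boundary matrix per slice, an accumulation matrix collecting the votes, and a per-column argmax. The 0-background/1-boundary encoding follows the description above; the function name and data layout are assumptions.

```python
import numpy as np

def trusted_boundary_points(boundaries, shape):
    """boundaries: list of per-slice boundary row indices (one array per
    selected slice, -1 = no boundary in that column). Build a 0/1 boundary
    matrix per slice (S701), superimpose them onto an accumulation matrix
    to get the voting matrix (S702), then take the row with the most votes
    in each column as the trusted boundary point (S703)."""
    votes = np.zeros(shape, int)              # accumulation matrix
    for b in boundaries:
        m = np.zeros(shape, int)              # boundary matrix: 1 on boundary
        for col, row in enumerate(b):
            if row >= 0:
                m[row, col] = 1
        votes += m                            # superimpose onto accumulator
    trusted = votes.argmax(axis=0)            # maximum of each column
    return trusted, votes
```

Because the argmax is taken per column, outlier boundaries from a few slices are outvoted wherever most slices agree.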
  • the step of cropping the fetal volume data according to the trusted boundary point to obtain an ultrasound three-dimensional fetal facial contour image includes:
  • Step S801 a cropping template is created according to the trusted boundary point.
  • the main operation in cropping the fetal volume data is making the cropping template: the region enclosed by the voted trusted boundary points, the lower right corner of the image, and the lower left corner of the image is filled to form a closed area, as shown in FIG. 9, which is a cropping template provided in an embodiment of the present invention.
  • Step S802 Crop the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the cropping template obtained in the above step is "ANDed" with the selected frame slices; that is, the data of each frame slice within the white area of the cropping template are retained, and the data in the black portion are deleted.
  • FIGS. 10a and 10b are schematic views of a sagittal-direction slice before and after cropping, respectively, provided in an embodiment of the present invention.
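Steps S801 and S802 amount to building a binary mask from the trusted boundary points and multiplying it into each slice. The sketch below assumes the kept ("white") region runs from the boundary straight down to the bottom of the image, per the closed-area description above; a real template would trace the boundary polygon.

```python
import numpy as np

def make_crop_template(trusted, shape):
    """Fill the closed region below the trusted boundary (from each boundary
    point down toward the bottom corners of the image) with 1; the occluding
    region above the boundary stays 0."""
    mask = np.zeros(shape, np.uint8)
    for col, row in enumerate(trusted):
        mask[row:, col] = 1          # keep from the boundary downward
    return mask

def crop_slice(slice_img, mask):
    """'AND' the template with a slice: keep data where the mask is 1
    (the white region), zero out the occluding part above the face."""
    return slice_img * mask
```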
  • the method further comprises: three-dimensional rendering of the cropped fetal volume data.
  • the cropped volume data is rendered, and the rendering can be performed by a three-dimensional rendering method such as the well-known "ray casting method" to obtain a more intuitive image of the fetal facial contour.
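For completeness, a toy front-to-back compositing ray caster marching along one volume axis. Real ray casting uses transfer functions, trilinear interpolation, and early ray termination; the linear intensity-to-opacity mapping here is purely an illustrative assumption.

```python
import numpy as np

def ray_cast(volume, opacity_scale=0.05):
    """Minimal front-to-back ray casting along axis 0: each voxel's
    intensity (0..255) maps to a small opacity; rays accumulate colour
    and opacity slice by slice."""
    vol = volume.astype(float) / 255.0
    depth, h, w = vol.shape
    color = np.zeros((h, w))
    alpha = np.zeros((h, w))
    for z in range(depth):                 # march all rays one step at a time
        a = vol[z] * opacity_scale         # per-sample opacity
        color += (1.0 - alpha) * a * vol[z] * 255.0
        alpha += (1.0 - alpha) * a
    return color
```

Once the occluders have been cropped away, the first bright structure each ray meets is the facial surface, which is why the rendered contour becomes clean.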
  • face detection is performed on each frame slice that includes the target area to obtain the candidate segmentation boundary of each frame slice, and each face boundary point is voted on using the candidate segmentation boundaries of the slices, yielding the points through which the most candidate segmentation boundaries pass, namely the trusted boundary points.
  • the boundary growing may be performed after the candidate segmentation boundaries are obtained; a cropping template is then created according to the trusted boundary points and used to automatically crop the fetal volume data, cutting away the portion occluding the fetal face, so the operation is simple and quick, errors caused by manual operation are reduced, and the image quality is higher.
  • the following is an embodiment of an ultrasonic three-dimensional fetal facial contour image processing system provided in an embodiment of the present invention.
  • the embodiment of the ultrasonic three-dimensional fetal facial contour image processing system is implemented based on the above-described embodiment of the ultrasonic three-dimensional fetal facial contour image processing method.
  • For an exhaustive description of the ultrasound three-dimensional fetal facial contour image processing system, please refer to the aforementioned embodiment of the ultrasound three-dimensional fetal facial contour image processing method.
  • FIG. 11 is a structural block diagram of an embodiment of an ultrasonic three-dimensional fetal facial contour image processing system provided in an embodiment of the present invention. As shown, the system includes:
  • the target area detecting module 111 is configured to detect a multi-frame slice of the fetal volume data in a predetermined direction to obtain a target area of each frame slice, wherein the target area includes a fetal head area.
  • the cropping module 113 is further configured to: create a cropping template according to the trusted boundary points; and crop the fetal volume data according to the cropping template to obtain an ultrasound three-dimensional fetal facial contour image.
  • the correction module is further configured to: traverse all the selected slices to obtain a <frame number, target area> sequence; obtain, for each frame in the <frame number, target area> sequence, the deviation between the position of that frame slice's target area and the position of its neighboring slice's target area; calculate the mean of the deviations; and, when the deviation of the position of the current frame slice's target area from the position of its neighboring slice's target area is greater than the mean, replace the center point coordinates of the current frame slice's target area with the center point coordinates of the target slice's target area.
  • the trusted boundary point obtaining module is further configured to: obtain a transition boundary from the darker region to the brighter region in the filtered slice of each frame; according to the gray of the connected region where the transition boundary is located And a contour shape, determining a face region of the selected slice of each frame, and using an upper surface boundary of the face region as an alternative segmentation boundary; and acquiring a plurality of trusted boundary points according to the candidate segmentation boundary.
  • the trusted boundary point acquisition module includes a boundary growth unit for boundary growth.
  • the trusted boundary point obtaining module is further configured to: construct, according to the candidate segmentation boundaries, a boundary matrix corresponding to each selected slice; superimpose all the boundary matrices onto the accumulation matrix to obtain a voting matrix; and count the maximum value of each column in the voting matrix, determining the point corresponding to the maximum value as a trusted boundary point.
  • the target area detecting module 111 is further configured to: detect, by using a preset classifier, the target area of each frame slice of the fetal volume data in a predetermined direction; and save the target area and its corresponding slice.
  • system further includes a rendering module for three-dimensional rendering of the cropped fetal volume data.
  • the ultrasound three-dimensional fetal facial contour image processing system of the present embodiment is used to implement the aforementioned ultrasound three-dimensional fetal facial contour image processing method, so for the specific embodiments of the system, reference may be made to the foregoing embodiments of the method. For example, the target area detecting module 111, the trusted boundary point acquiring module 112, and the cropping module 113 are used to implement steps S101, S102, and S103, respectively, of the above-described ultrasound three-dimensional fetal facial contour image processing method.
  • the ultrasound three-dimensional fetal facial contour image processing system detects the slices including the target region to obtain the target area of each frame slice, corrects the position of any target area whose deviation from the position of the neighboring slice's target area exceeds the set value so as to improve detection accuracy, then performs face boundary detection on the slice target areas and, by voting on the face boundary points, obtains the points through which the most face boundaries pass, namely the trusted boundary points; the fetal volume data is cropped using the trusted boundary points, automatically cutting away the portion occluding the fetal face, which simplifies the examiner's operation and improves the success rate of ultrasound three-dimensional imaging.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

An ultrasound three-dimensional fetal facial contour image processing method and system. The method includes: S101: detecting multi-frame slices of fetal volume data in a predetermined direction to obtain the target area of each frame slice, wherein the target area includes the fetal head area; S102: screening out the slices that contain the target area, and performing facial boundary detection on the screened slices to obtain trusted boundary points; S103: cropping the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image. By cropping the volume data with the trusted boundary points, the image processing method automatically crops away the parts occluding the fetal face, simplifying the examiner's operation and improving the success rate of ultrasound three-dimensional imaging.

Description

Ultrasound three-dimensional fetal facial contour image processing method and system
This application claims priority to Chinese patent application No. 201611055976.4, entitled "一种图像转换方法及装置" (an image conversion method and apparatus), filed with the China Patent Office on November 22, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of ultrasound imaging technology, and in particular to an ultrasound three-dimensional fetal facial contour image processing method and system.
Background
Building on traditional two-dimensional ultrasound, a three-dimensional ultrasound imaging system acquires a sequence of spatially ordered two-dimensional ultrasound images and, according to the spatial positions at which the data were acquired, reconstructs the volume data through steps such as scan conversion. The three-dimensional ultrasound imaging system provides an extra dimension of spatial information that traditional two-dimensional ultrasound cannot supply, making clinical diagnosis and observation more intuitive and flexible, and making communication between doctor and patient about the diagnosis smoother. Precisely because its information is rich and intuitive, the three-dimensional ultrasound imaging system is currently used mainly for observing fetal morphology in obstetrics, especially the face. However, owing to the peculiarities of the imaging environment, when the fetal face is rendered in three dimensions it may be blocked in front by the placenta, umbilical cord, an arm, the uterine wall, and so on, so that the acquired three-dimensional volume data may contain the placenta, substances suspended in the amniotic fluid, the umbilical cord, uterine tissue, and the like, which occlude the imaging target and make observation difficult.
Although current ultrasound examination equipment usually offers an interactive volume-cropping method that lets the examiner crop away the occluding parts, the operation is cumbersome and time-consuming.
Summary of the Invention
The present invention provides an ultrasound three-dimensional fetal facial contour image processing method and system that crop the volume data using trusted boundary points, automatically cropping away the portion occluding the fetal face, simplifying the examiner's operation and improving the success rate of ultrasound three-dimensional imaging.
To achieve the above design, the present invention adopts the following technical solutions:
In one aspect, an ultrasound three-dimensional fetal facial contour image processing method is provided, the method comprising:
detecting multi-frame slices of fetal volume data in a predetermined direction to obtain the target area of each frame slice, wherein the target area includes the fetal head area;
screening out the slices that contain the target area, and performing facial boundary detection on the screened slices to obtain trusted boundary points;
cropping the fetal volume data according to the trusted boundary points to obtain an ultrasound three-dimensional fetal facial contour image.
Wherein, after the step of screening out the slices that contain the target area, the method further includes:
when the difference between the position of the current frame slice's target area and the position of a neighboring slice's target area exceeds a predetermined threshold, correcting the position of the current frame slice's target area.
Wherein, the step of correcting the position of the current frame slice's target area includes:
traversing all the screened slices to obtain a <frame number, target area> sequence;
computing, for each frame in the <frame number, target area> sequence, the deviation between the position of that frame slice's target area and the position of its neighboring slice's target area;
calculating the mean of the deviations;
when the deviation of the position of the current frame slice's target area from the position of its neighboring slice's target area is greater than the mean, replacing the center point coordinates of the current frame slice's target area with the center point coordinates of the target slice's target area.
Wherein, the step of performing facial boundary detection on the screened slices to obtain trusted boundary points includes:
obtaining, in each screened slice, the transition boundary from a darker region to a brighter region;
determining the facial region of each screened slice according to the gray values and contour shape of the connected region in which the transition boundary lies, and taking the upper-surface boundary of the facial region as a candidate segmentation boundary;
obtaining a plurality of trusted boundary points according to the candidate segmentation boundaries.
Wherein, before the step of obtaining a plurality of trusted boundary points according to the candidate segmentation boundaries, the method further includes a boundary-growing step.
Wherein, the step of obtaining a plurality of trusted boundary points according to the candidate segmentation boundaries includes:
constructing, according to the candidate segmentation boundaries, boundary matrices in one-to-one correspondence with the screened slices;
superimposing all the boundary matrices onto an accumulation matrix to obtain a voting matrix;
finding the maximum value of each column of the voting matrix, and taking the point corresponding to the maximum value as a trusted boundary point.
其中,所述根据所述可信边界点对胎儿体数据进行裁剪,以得到超声三维胎儿面部轮廓图像的步骤包括:
根据所述可信边界点制作裁剪模板;
根据所述裁剪模块对胎儿体数据进行裁剪,得到超声三维胎儿面部轮廓图像。
其中,所述对胎儿体数据在预定方向的多帧切片进行检测,以获取各帧切片的目标区域的步骤包括:
利用预先设置的分类器检测胎儿体数据在预定方向的各帧切片的所述目标区域;
保存所述目标区域及其对应的切片。
In another aspect, an ultrasonic three-dimensional fetal facial contour image processing system is provided, the system comprising:

a target region detection module, configured to detect multiple frames of slices of fetal volume data in a predetermined direction to obtain a target region of each slice, wherein the target region comprises the fetal head region;

a trusted boundary point acquisition module, configured to screen out the slices that contain the target region and perform facial boundary detection on the screened slices to obtain trusted boundary points;

a cropping module, configured to crop the fetal volume data according to the trusted boundary points to obtain an ultrasonic three-dimensional fetal facial contour image.

The system further comprises a correction module, configured to correct the position of the target region in the current slice when the difference between the position of the target region in the current slice and the position of the target region in a neighboring slice exceeds a predetermined threshold.

The correction module is further configured to traverse all the screened slices to obtain a sequence of <frame number, target region> pairs; compute, for each slice in the sequence, the deviation between the position of its target region and the positions of the target regions in its neighboring slices; compute the mean of the deviations; and, when the deviation of the current slice's target region from its neighboring slices' target regions is greater than the mean, replace the center-point coordinates of the current slice's target region with those of the target slice's target region.

The trusted boundary point acquisition module is further configured to: obtain, in each screened slice, the transition boundary from a darker region to a brighter region; determine the facial region of each screened slice according to the gray values and contour shape of the connected region containing the transition boundary, taking the upper-surface boundary of the facial region as the candidate segmentation boundary; and obtain a plurality of trusted boundary points according to the candidate segmentation boundary.

The trusted boundary point acquisition module comprises a boundary-growing unit configured to perform boundary growing.

The trusted boundary point acquisition module is further configured to: construct, according to the candidate segmentation boundaries, a boundary matrix in one-to-one correspondence with each screened slice; superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and take the maximum of each column of the voting matrix, determining the point corresponding to the maximum as a trusted boundary point.

The cropping module is further configured to create a cropping template according to the trusted boundary points, and to crop the fetal volume data according to the cropping template to obtain the ultrasonic three-dimensional fetal facial contour image.

The target region detection module is further configured to detect the target region in each slice of the fetal volume data in the predetermined direction with a preconfigured classifier, and to save the target region and its corresponding slice.

Compared with the prior art, the beneficial effects of the present invention are as follows: multiple frames of slices of the fetal volume data in a predetermined direction are detected to obtain the target region of each slice, the target region comprising the fetal head region; the slices containing the target region are screened out and facial boundary detection is performed on them to obtain trusted boundary points; and the fetal volume data is cropped according to the trusted boundary points to obtain an ultrasonic three-dimensional fetal facial contour image. By cropping the volume data with the trusted boundary points, the present invention automatically removes the portions occluding the fetal face, simplifying the operator's workflow and increasing the yield of usable three-dimensional ultrasound images.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from the contents of the embodiments and these drawings without creative effort.
Fig. 1 is a flowchart of an embodiment of the ultrasonic three-dimensional fetal facial contour image processing method provided in a specific embodiment of the present invention.

Fig. 2 is a schematic diagram of multiple frames of slices of fetal volume data in a predetermined direction, provided in a specific embodiment of the present invention.

Fig. 3 is a schematic diagram of the process of correcting the position of the target region in the current slice, provided in a specific embodiment of the present invention.

Fig. 4a is a schematic diagram of a target region before correction, provided in a specific embodiment of the present invention.

Fig. 4b is a schematic diagram of the target region of Fig. 4a after correction.

Fig. 5 is a schematic diagram of the process of performing facial boundary detection on the screened slices to obtain trusted boundary points, provided in a specific embodiment of the present invention.

Fig. 6a is a schematic diagram of a candidate segmentation boundary provided in a specific embodiment of the present invention.

Fig. 6b is a schematic diagram of the candidate segmentation boundary of Fig. 6a after boundary growing.

Fig. 7 is a schematic diagram of the process of obtaining a plurality of trusted boundary points according to the candidate segmentation boundary, provided in a specific embodiment of the present invention.

Fig. 8 is a schematic diagram of the process of cropping the fetal volume data according to the trusted boundary points to obtain the ultrasonic three-dimensional fetal facial contour image, provided in a specific embodiment of the present invention.

Fig. 9 is a cropping template provided in a specific embodiment of the present invention.

Fig. 10a is a schematic diagram of a sagittal-direction slice before cropping, provided in a specific embodiment of the present invention.

Fig. 10b is a schematic diagram of the slice of Fig. 10a after cropping.

Fig. 11 is a structural block diagram of an embodiment of the ultrasonic three-dimensional fetal facial contour image processing system provided in a specific embodiment of the present invention.
Detailed Description of the Embodiments

To make the technical problems solved, the technical solutions adopted and the technical effects achieved by the present invention clearer, the technical solutions of the embodiments of the present invention are further described in detail below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present invention.
Please refer to Fig. 1, which is a flowchart of the ultrasonic three-dimensional fetal facial contour image processing method provided in a specific embodiment of the present invention. As shown in Fig. 1, the method comprises:

Step S101: detect multiple frames of slices of the fetal volume data in a predetermined direction to obtain the target region of each slice, wherein the target region comprises the fetal head region.

The target regions of multiple frames of slices of the fetal volume data in the predetermined direction are detected to obtain the target region of at least one slice. In this embodiment, the target region comprises the fetal head region, and the predetermined direction may be a sectional direction of the fetal volume data, such as the plane direction parallel to the transducer array; 3D/4D imaging examiners usually use this plane direction to obtain the fetal sagittal section, and the sagittal section may also be replaced by another section with distinctive features, such as the coronal section. The predetermined direction may also be obtained by traversing the plane directions of the three axes of the volume data, or of course be determined by other algorithms, which are not enumerated one by one here.

In this embodiment, target-region detection uses the Histogram of Oriented Gradients (HOG) feature extraction algorithm together with an AdaBoost classifier. The classifier is trained in advance on data of the sagittal fetal head region and configured according to the training result; the preconfigured classifier automatically locates the sagittal target region on each slice of the fetal volume data in the predetermined direction, and the target region and its corresponding slice are saved.
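As a rough illustration of this detection step, the sketch below computes a simplified gradient-orientation histogram (the core of the HOG feature named above) and scores sliding windows with a plain gradient-energy score as a stand-in for the trained classifier; the window size, cell size and scoring rule are illustrative assumptions, since the patent's AdaBoost model trained on sagittal head-region data is not reproduced here.

```python
import numpy as np

def hog_descriptor(patch, n_bins=9, cell=8):
    """Simplified HOG: per-cell, magnitude-weighted histograms of
    unsigned gradient orientations (block normalization omitted)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    h, w = patch.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

def detect_roi(slice_img, win=32, step=8):
    """Sliding-window search: a trained AdaBoost classifier would score the
    HOG vector of each window; here total gradient energy is a stand-in."""
    best, best_score = None, -1.0
    for y in range(0, slice_img.shape[0] - win + 1, step):
        for x in range(0, slice_img.shape[1] - win + 1, step):
            score = hog_descriptor(slice_img[y:y + win, x:x + win]).sum()
            if score > best_score:
                best, best_score = (x, y, win, win), score
    return best            # (x, y, width, height) of the detected ROI
```

In a real detector the score would be the margin of the trained AdaBoost ensemble over the HOG vector, and overlapping detections would be merged.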
Step S102: screen out the slices that contain the target region, and perform facial boundary detection on the screened slices to obtain trusted boundary points.

Among the multiple frames of slices of the fetal volume data in the predetermined direction, not every slice necessarily contains the target region, so the slices that do contain it need to be screened out.

Facial boundary detection is performed on the slices containing the target region; a boundary-detection operator may be used to find, in each slice, the transition boundary from a darker region to a brighter region. The boundary-detection operators here include, but are not limited to, the Prewitt operator and the Sobel operator.

The gray values and contour shape of the connected region containing the transition boundary of each slice are analyzed and compared with pre-stored facial gray values and facial contour shapes to find the facial region of each slice, and the upper-surface boundary of the facial region is taken as the facial boundary. Each facial boundary point is then voted on according to the facial boundaries of all the slices, and the points crossed by the most facial boundaries are determined to be trusted boundary points.

Step S103: crop the fetal volume data according to the trusted boundary points to obtain the ultrasonic fetal facial contour image.

A cropping template is created from the trusted boundary points obtained in step S102, and the fetal volume data is cropped according to the template to obtain the ultrasonic three-dimensional fetal facial contour image. The portions occluding the fetal face are thereby cropped automatically, which simplifies the operator's workflow and increases the yield of usable three-dimensional ultrasound images.

In the image processing method of the above embodiment, multiple frames of slices of the fetal volume data in a predetermined direction are detected to obtain the target region of each slice; the slices containing the target region are screened out and facial boundary detection is performed on them to obtain trusted boundary points; and the fetal volume data is cropped according to the trusted boundary points to obtain the ultrasonic three-dimensional fetal facial contour image. By cropping the volume data with the trusted boundary points, the present invention automatically removes the portions occluding the fetal face, making the operation simple and fast, reducing the errors introduced by manual operation, and producing higher-quality images.
In one embodiment, after the step of screening out the slices containing the target region, the method further comprises:

when the difference between the position of the target region in the current slice and the position of the target region in a neighboring slice exceeds a predetermined threshold, correcting the position of the target region in the current slice.

After the slices containing the target region are screened out, the positions of those target regions whose deviation from the target regions of neighboring slices reaches the predetermined threshold need to be corrected, to improve the accuracy of fetal facial contour detection. The predetermined threshold is not specifically limited here and may be chosen according to the accuracy required by the actual application. In this embodiment, neighboring slices are the slices positionally adjacent to the current slice. For example, as shown in Fig. 2, in the slice sequence along the Z axis, a target region (ROI, region of interest) can be detected in the slices numbered "1", "3", "5" and "6"; the neighboring ROIs of the ROI of slice "3" are then the ROIs of slices "1" and "5".
Preferably, in one embodiment, as shown in Fig. 3, the step of correcting the position of the target region in the current slice comprises:

Step S301: traverse all the screened slices to obtain a <frame number, target region> sequence.

Step S302: compute, for each slice in the <frame number, target region> sequence, the deviation between the position of its target region and the positions of the target regions in its neighboring slices.

In this embodiment, the deviation comprises the X-coordinate deviation and the Y-coordinate deviation between the center point of each slice's target region and the center points of the target regions of its neighboring slices. The resulting X- and Y-coordinate deviations are saved, respectively, in sequences of <frame number, X-coordinate deviation from the center point of the neighboring slice's target region> and <frame number, Y-coordinate deviation from the center point of the neighboring slice's target region>.

Step S303: compute the mean of the deviations.

In this embodiment, the mean comprises the mean of the X-coordinate deviations and the mean of the Y-coordinate deviations. From the two sequences obtained above, the maximum value is first removed from each deviation sequence; the means of the remaining X-coordinate deviations and Y-coordinate deviations are then computed, giving the mean deviation between each slice's target region and the target regions of its neighboring slices.

Step S304: when the deviation between the position of the current slice's target region and the positions of its neighboring slices' target regions is greater than the mean, replace the center-point coordinates of the current slice's target region with the center-point coordinates of the target slice's target region, where the target slice is the slice at the smallest distance from the current slice whose target-region position deviates from the positions of its neighboring slices' target regions by less than the mean.
In this embodiment, three cases exist:

(1) When the X-coordinate deviation between the center points of the target regions of the current slice and a neighboring slice is greater than the mean X-coordinate deviation, the X coordinate of the center point of the current slice's target region is replaced with the X coordinate of the center point of the target slice's target region.

(2) When the Y-coordinate deviation between the center points of the target regions of the current slice and a neighboring slice is greater than the mean Y-coordinate deviation, the Y coordinate of the center point of the current slice's target region is replaced with the Y coordinate of the center point of the target slice's target region.

(3) When both the X-coordinate deviation and the Y-coordinate deviation between the center points of the target regions of the current slice and a neighboring slice are greater than the respective mean deviations, the X and Y coordinates of the center point of the current slice's target region are both replaced with those of the center point of the target slice's target region. As shown in Figs. 4a and 4b, where the small boxes represent target regions and the large boxes represent slices, Figs. 4a and 4b are schematic comparison diagrams of a facial target region before and after correction, respectively, provided in a specific embodiment of the present invention.
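Steps S301 to S304 can be sketched as follows, assuming the screened ROI centers are given in slice order; the neighbor set (the immediately adjacent slices) and the per-axis tests mirror cases (1) to (3), while the function and variable names are illustrative.

```python
import numpy as np

def correct_centers(centers):
    """Steps S301-S304 sketch: replace outlier ROI centers.

    `centers` holds the (x, y) center of the target region of each
    screened slice, in slice order; neighbors are the adjacent slices.
    """
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    # S302: per-axis deviation of each center from the mean of its neighbors
    dev = np.zeros_like(centers)
    for i in range(n):
        nb = [j for j in (i - 1, i + 1) if 0 <= j < n]
        dev[i] = np.abs(centers[i] - centers[nb].mean(axis=0))
    # S303: mean deviation per axis, after dropping the largest value
    mean_dev = np.array([np.sort(dev[:, k])[:-1].mean() for k in range(2)])
    # S304: replace outlier coordinates with those of the nearest slice
    # whose own deviation stays within the mean (the "target slice")
    corrected = centers.copy()
    for i in range(n):
        for k in range(2):              # cases (1)-(3): test X and Y separately
            if dev[i, k] > mean_dev[k]:
                candidates = sorted((j for j in range(n) if j != i
                                     and np.all(dev[j] <= mean_dev)),
                                    key=lambda j: abs(j - i))
                if candidates:
                    corrected[i, k] = centers[candidates[0], k]
    return corrected
```

For example, in a run of centers near (10, 10) with one ROI jumping to (30, 10), the jump exceeds the mean X deviation and its X coordinate is pulled back to that of the nearest well-behaved slice.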
In one embodiment, as shown in Fig. 5, the step of performing facial boundary detection on the screened slices to obtain trusted boundary points comprises:

Step S501: obtain, in each screened slice, the transition boundary from a darker region to a brighter region.

A boundary-detection operator is used to find, in each slice, the transition boundary from a darker region to a brighter region; the boundary-detection operators here include, but are not limited to, the Prewitt operator and the Sobel operator.

Step S502: determine the facial region of each slice according to the gray values and contour shape of the connected region containing the transition boundary, and take the upper-surface boundary of the facial region as the candidate segmentation boundary, as shown in Fig. 6a.

In this embodiment, the gray values and contour shape of the connected region containing the transition boundary of each slice are analyzed and compared with pre-stored facial gray values and facial contour shapes to find the facial region of each slice, and the upper-surface boundary of the facial region is taken as the candidate segmentation boundary.
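The dark-to-bright transition of step S501 can be sketched with the Sobel operator mentioned above; here the vertical Sobel response is scanned down each column and the first strong positive response (dark above, bright below) is kept. The threshold value and the tiny convolution helper are illustrative assumptions, not part of the patent.

```python
import numpy as np

SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

def convolve2d(img, k):
    """Tiny 'valid' 2-D correlation so the sketch needs only NumPy."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def transition_boundary(img, thresh=100.0):
    """Step S501 sketch: per column, the row of the first strong positive
    vertical gradient, i.e. the first dark-to-bright transition from the
    top (-1 where no transition is found). Column indices are offset by
    one because of the 'valid' convolution."""
    gy = convolve2d(img.astype(float), SOBEL_Y)
    ys = np.full(gy.shape[1], -1, dtype=int)
    for col in range(gy.shape[1]):
        hits = np.flatnonzero(gy[:, col] > thresh)
        if hits.size:
            ys[col] = hits[0] + 1      # +1: center row of the 3x3 kernel
    return ys
```

A production implementation would then keep only connected runs of such transition points whose gray level and contour shape match the pre-stored facial profile.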
Step S503: obtain a plurality of trusted boundary points according to the candidate segmentation boundaries obtained in the above step.

Each facial boundary point is voted on according to the candidate segmentation boundaries of the slices, and the points crossed by the most candidate segmentation boundaries are determined to be trusted boundary points. In one embodiment, before the step of obtaining the plurality of trusted boundary points according to the candidate segmentation boundaries of the slices, the method further comprises a boundary-growing step.

Because the start and end points of the candidate segmentation boundaries may not coincide across slices, which would affect the subsequent acquisition of the trusted boundary, boundary growing is applied to obtain a complete facial boundary running through each slice from left to right, thereby improving the accuracy of boundary detection.

In this embodiment, boundary growing is performed on the gradient image. Taking the left and right end points of the candidate segmentation boundary as growth points in the horizontal direction, boundary points are searched for on the gradient image in the neighborhood of the current growth point, and the boundary points found are appended to the current candidate segmentation boundary, giving the complete facial boundary of each slice. The effect before and after boundary growing is shown in Figs. 6a and 6b.
In one embodiment, as shown in Fig. 7, the step of obtaining a plurality of trusted boundary points according to the candidate segmentation boundaries comprises:

Step S701: construct, according to the candidate segmentation boundaries, a boundary matrix in one-to-one correspondence with each screened slice.

All boundary matrices have the same dimensions; the specific size may be decided as appropriate. In this embodiment, the background points of a boundary matrix are set to 0 and the boundary points to 1. Other values may of course be used, as long as background points and boundary points can be distinguished. The boundary points are the points corresponding to the candidate segmentation boundary in each slice.

Step S702: superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix.

In this embodiment, the accumulation matrix may be a zero matrix of the same dimensions as the boundary matrices. All boundary matrices are superimposed onto one accumulation matrix, so that every candidate segmentation boundary passing through a given point casts a vote for that point.

Step S703: take the maximum of each column of the voting matrix and determine the point corresponding to the maximum as a trusted boundary point.

A candidate segmentation boundary passing through a point votes for that point; since background points are 0 and boundary points are 1, the more 1s accumulate at an element (point), the more boundaries pass through it. By taking the maximum of each column of elements, the point in each column crossed by the most candidate segmentation boundaries (that is, the brightest point) is obtained and determined to be the trusted boundary point of that column. The trusted boundary points of all columns together form the trusted boundary of the ultrasonic three-dimensional fetal facial contour.
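Steps S701 to S703 can be sketched directly in NumPy. For brevity, each candidate segmentation boundary is assumed to be given as the row index of the boundary in every column of its slice (after boundary growing each boundary does span all columns).

```python
import numpy as np

def trusted_boundary(boundary_rows, shape):
    """S701-S703 sketch: build one 0/1 boundary matrix per slice, superimpose
    them into a voting matrix, then take the per-column maximum."""
    vote = np.zeros(shape, dtype=int)          # accumulation matrix (all zeros)
    cols = np.arange(shape[1])
    for rows in boundary_rows:                 # one candidate boundary per slice
        b = np.zeros(shape, dtype=int)         # boundary matrix: background 0
        b[np.asarray(rows), cols] = 1          # boundary points set to 1
        vote += b                              # S702: superimpose the votes
    return vote.argmax(axis=0)                 # S703: most-voted row per column
```

With three boundaries of which two run along row 2 and one along row 3, every column's trusted boundary point lands on row 2, the row crossed by the most boundaries.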
In one embodiment, as shown in Fig. 8, the step of cropping the fetal volume data according to the trusted boundary points to obtain the ultrasonic three-dimensional fetal facial contour image comprises:

Step S801: create a cropping template according to the trusted boundary points.

The main operation in cropping the fetal volume data is creating the cropping template: the voted trusted boundary points, the lower-right corner of the image and the lower-left corner of the image are taken as a closed region and filled. Fig. 9 shows a cropping template provided in a specific embodiment of the present invention.

Step S802: crop the fetal volume data according to the cropping template to obtain the ultrasonic three-dimensional fetal facial contour image.

An AND operation is performed between the cropping template obtained in the above step and each screened slice; that is, the data of each slice coinciding with the white region of the cropping template is retained, and the data in the black region is deleted. Figs. 10a and 10b are schematic diagrams of a sagittal-direction slice before and after cropping, respectively, provided in a specific embodiment of the present invention. Cropping the fetal volume data with the cropping template yields the image of the fetal facial contour, automatically removing the portions occluding the fetal face, simplifying the operation and increasing the image yield.
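A sketch of steps S801 and S802, assuming the trusted boundary is given as one row index per column: everything from the boundary down to the bottom of the image (the closed region through the two bottom corners) forms the white keep-region, and the AND operation reduces to an element-wise product with the 0/1 mask.

```python
import numpy as np

def cropping_template(boundary_rows, shape):
    """S801 sketch: white (1) below the trusted boundary, black (0) above."""
    mask = np.zeros(shape, dtype=np.uint8)
    for col, row in enumerate(boundary_rows):
        mask[row:, col] = 1        # fill down to the bottom corners
    return mask

def crop_slice(slice_img, mask):
    """S802 sketch: the AND operation; white region kept, black removed."""
    return slice_img * mask
```

Applying `crop_slice` to every screened slice removes the occluding material above the facial boundary while leaving the face itself untouched.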
In one embodiment, after the step of cropping the fetal volume data according to the trusted boundary points to obtain the fetal facial contour image, the method further comprises: performing three-dimensional rendering on the cropped fetal volume data.

The cropped volume data is rendered; the rendering may use well-known three-dimensional rendering methods such as ray casting, yielding a more intuitive image of the fetal facial contour.
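The ray casting mentioned here can be illustrated with a minimal front-to-back compositing loop in which each voxel's normalized intensity also serves as its opacity; real renderers add interpolation, gradient shading and early-ray termination, so this is only a sketch of the principle, with an assumed opacity scale.

```python
import numpy as np

def ray_cast(volume, opacity=0.1):
    """Minimal front-to-back ray casting along axis 0 (one ray per pixel)."""
    vol = volume.astype(float)
    vol /= max(vol.max(), 1e-9)             # normalize intensities to [0, 1]
    image = np.zeros(vol.shape[1:])
    transparency = np.ones(vol.shape[1:])   # how much light still gets through
    for slab in vol:                        # march every ray one voxel deeper
        alpha = slab * opacity              # intensity doubles as opacity
        image += transparency * alpha * slab    # emission weighted by visibility
        transparency *= 1.0 - alpha         # attenuate the remaining rays
    return image
```

Because the occluding tissue has already been cropped away, the first bright voxels the rays meet belong to the facial surface, which is why removing the occluders before rendering improves the result.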
In this embodiment, facial detection is performed on the screened slices containing the target region to obtain the candidate segmentation boundary of each slice; each facial boundary point is voted on using the candidate segmentation boundaries of the slices to obtain the points crossed by the most candidate segmentation boundaries, namely the trusted boundary points. To improve the accuracy of boundary detection, boundary growing may also be performed after the candidate segmentation boundaries are obtained. A cropping template is created from the trusted boundary points and used to crop the fetal volume data automatically, so that the portions occluding the fetal face are removed automatically, making the operation simple and fast, reducing the errors introduced by manual operation, and producing higher-quality images.

The following is an embodiment of the ultrasonic three-dimensional fetal facial contour image processing system provided in a specific embodiment of the present invention. The system embodiment is implemented on the basis of the above method embodiment; for anything not fully described here, please refer to the foregoing method embodiment.

Please refer to Fig. 11, which is a structural block diagram of an embodiment of the ultrasonic three-dimensional fetal facial contour image processing system provided in a specific embodiment of the present invention. As shown, the system comprises:

a target region detection module 111, configured to detect multiple frames of slices of fetal volume data in a predetermined direction to obtain a target region of each slice, wherein the target region comprises the fetal head region;

a trusted boundary point acquisition module 112, configured to screen out the slices that contain the target region and perform facial boundary detection on the screened slices to obtain trusted boundary points;

a cropping module 113, configured to create a cropping template according to the trusted boundary points and to crop the fetal volume data according to the template, obtaining the ultrasonic three-dimensional fetal facial contour image.
In one embodiment, the system further comprises a correction module, configured to correct the position of the target region in the current slice when the difference between the position of the target region in the current slice and the position of the target region in a neighboring slice exceeds a predetermined threshold.

The correction module is further configured to traverse all the screened slices to obtain a <frame number, target region> sequence; compute, for each slice in the sequence, the deviation between the position of its target region and the positions of the target regions in its neighboring slices; compute the mean of the deviations; and, when the deviation of the current slice's target region from its neighboring slices' target regions is greater than the mean, replace the center-point coordinates of the current slice's target region with those of the target slice's target region.

In one embodiment, the trusted boundary point acquisition module is further configured to: obtain, in each screened slice, the transition boundary from a darker region to a brighter region; determine the facial region of each screened slice according to the gray values and contour shape of the connected region containing the transition boundary, taking the upper-surface boundary of the facial region as the candidate segmentation boundary; and obtain a plurality of trusted boundary points according to the candidate segmentation boundary.

In one embodiment, the trusted boundary point acquisition module comprises a boundary-growing unit configured to perform boundary growing.

In one embodiment, the trusted boundary point acquisition module is further configured to: construct, according to the candidate segmentation boundaries, a boundary matrix in one-to-one correspondence with each screened slice; superimpose all the boundary matrices onto an accumulation matrix to obtain a voting matrix; and take the maximum of each column of the voting matrix, determining the point corresponding to the maximum as a trusted boundary point.

In one embodiment, the target region detection module 111 is further configured to detect the target region in each slice of the fetal volume data in the predetermined direction with a preconfigured classifier, and to save the target region and its corresponding slice.

In one embodiment, the system further comprises a rendering module, configured to perform three-dimensional rendering on the cropped fetal volume data.

The ultrasonic three-dimensional fetal facial contour image processing system of this embodiment is used to implement the foregoing ultrasonic three-dimensional fetal facial contour image processing method; the specific implementations in the system can therefore be found in the embodiments of the method above. For example, the target region detection module 111, the trusted boundary point acquisition module 112 and the cropping module 113 are used to implement steps S101, S102 and S103, respectively, of the above method, so their specific implementations may refer to the descriptions of the corresponding embodiments and are not repeated here.

The system provided in this embodiment performs facial detection on the screened slices containing the target region to obtain the facial boundary of each slice, and corrects the positions of those target regions whose deviation from the target regions of neighboring slices reaches the set threshold, improving detection accuracy. Facial boundary detection is then performed on the target regions of the slices, and voting on the facial boundary points yields the points crossed by the most facial boundaries, the trusted boundary points, which are used to crop the fetal volume data. The portions occluding the fetal face are thus cropped automatically, simplifying the operator's workflow and increasing the yield of usable three-dimensional ultrasound images. The technical principles of the present invention have been described above with reference to specific embodiments. These descriptions are only intended to explain the principles of the present invention and shall not be construed in any way as limiting its scope of protection. Based on the explanations herein, those skilled in the art can conceive of other specific embodiments of the present invention without creative effort, and all such embodiments fall within the scope of protection of the present invention.

Claims (10)

  1. An ultrasonic three-dimensional fetal facial contour image processing method, characterized by comprising:
    detecting multiple frames of slices of fetal volume data in a predetermined direction to obtain a target region of each slice, wherein the target region comprises the fetal head region;
    screening out the slices that contain the target region, and performing facial boundary detection on the screened slices to obtain trusted boundary points;
    cropping the fetal volume data according to the trusted boundary points to obtain an ultrasonic three-dimensional fetal facial contour image.
  2. The method according to claim 1, characterized in that, after the step of screening out the slices that contain the target region, the method further comprises:
    when the difference between the position of the target region in the current slice and the position of the target region in a neighboring slice exceeds a predetermined threshold, correcting the position of the target region in the current slice.
  3. The method according to claim 2, characterized in that the step of correcting the position of the target region in the current slice comprises:
    traversing all the screened slices to obtain a sequence of <frame number, target region> pairs;
    computing, for each slice in the <frame number, target region> sequence, the deviation between the position of its target region and the positions of the target regions in its neighboring slices;
    computing the mean of the deviations;
    when the deviation between the position of the current slice's target region and the positions of its neighboring slices' target regions is greater than the mean, replacing the center-point coordinates of the current slice's target region with the center-point coordinates of the target slice's target region.
  4. The method according to claim 1, characterized in that the step of performing facial boundary detection on the screened slices to obtain trusted boundary points comprises:
    obtaining, in each screened slice, the transition boundary from a darker region to a brighter region;
    determining the facial region of each screened slice according to the gray values and contour shape of the connected region containing the transition boundary, and taking the upper-surface boundary of the facial region as a candidate segmentation boundary;
    obtaining a plurality of trusted boundary points according to the candidate segmentation boundary.
  5. The method according to claim 4, characterized in that, before the step of obtaining a plurality of trusted boundary points according to the candidate segmentation boundary, the method further comprises a boundary-growing step.
  6. The method according to claim 4, characterized in that the step of obtaining a plurality of trusted boundary points according to the candidate segmentation boundary comprises:
    constructing, according to the candidate segmentation boundaries, a boundary matrix in one-to-one correspondence with each screened slice;
    superimposing all the boundary matrices onto an accumulation matrix to obtain a voting matrix;
    taking the maximum of each column of the voting matrix and determining the point corresponding to the maximum as a trusted boundary point.
  7. The method according to claim 1, characterized in that the step of cropping the fetal volume data according to the trusted boundary points to obtain the ultrasonic three-dimensional fetal facial contour image comprises:
    creating a cropping template according to the trusted boundary points;
    cropping the fetal volume data according to the cropping template to obtain the ultrasonic three-dimensional fetal facial contour image.
  8. The method according to claim 1, characterized in that the step of detecting multiple frames of slices of the fetal volume data in a predetermined direction to obtain the target region of each slice comprises:
    detecting the target region in each slice of the fetal volume data in the predetermined direction with a preconfigured classifier;
    saving the target region and its corresponding slice.
  9. An ultrasonic three-dimensional fetal facial contour image processing system, characterized by comprising:
    a target region detection module, configured to detect multiple frames of slices of fetal volume data in a predetermined direction to obtain a target region of each slice, wherein the target region comprises the fetal head region;
    a trusted boundary point acquisition module, configured to screen out the slices that contain the target region and perform facial boundary detection on the screened slices to obtain trusted boundary points;
    a cropping module, configured to crop the fetal volume data according to the trusted boundary points to obtain an ultrasonic three-dimensional fetal facial contour image.
  10. The system according to claim 9, characterized in that the system further comprises:
    a correction module, configured to correct the position of the target region in the current slice when the difference between the position of the target region in the current slice and the position of the target region in a neighboring slice exceeds a predetermined threshold.
PCT/CN2017/093457 2016-11-22 2017-07-19 Ultrasonic three-dimensional fetal facial contour image processing method and system WO2018095058A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611055976.4A CN106725593B (zh) 2016-11-22 2016-11-22 Ultrasonic three-dimensional fetal facial contour image processing method and system
CN201611055976.4 2016-11-22

Publications (1)

Publication Number Publication Date
WO2018095058A1 true WO2018095058A1 (zh) 2018-05-31

Family

ID=58910667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/093457 WO2018095058A1 (zh) 2016-11-22 2017-07-19 Ultrasonic three-dimensional fetal facial contour image processing method and system

Country Status (2)

Country Link
CN (1) CN106725593B (zh)
WO (1) WO2018095058A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112155603A (zh) * 2020-09-24 2021-01-01 广州爱孕记信息科技有限公司 Method and device for determining weight values of fetal structural features
CN112638267A (zh) * 2018-11-02 2021-04-09 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method and system, storage medium, processor and computer device

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106725593B (zh) 2016-11-22 2020-08-11 深圳开立生物医疗科技股份有限公司 Ultrasonic three-dimensional fetal facial contour image processing method and system
CN108322605A (zh) 2018-01-30 2018-07-24 上海摩软通讯技术有限公司 Intelligent terminal and face unlocking method and system thereof
CN109584368B (zh) 2018-10-18 2021-05-28 中国科学院自动化研究所 Method and device for constructing the three-dimensional structure of a biological sample
CN111281430B (zh) 2018-12-06 2024-02-23 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method, device and readable storage medium
CN109727240B (zh) 2018-12-27 2021-01-19 深圳开立生物医疗科技股份有限公司 Occluding-tissue stripping method for three-dimensional ultrasonic images and related device
CN110706222B (zh) 2019-09-30 2022-04-12 杭州依图医疗技术有限公司 Method and device for detecting bone regions in an image
CN111568471B (zh) 2020-05-20 2021-01-01 杨梅 Shape analysis system for a full-term formed fetus
CN116687442A (zh) 2023-08-08 2023-09-05 汕头市超声仪器研究所股份有限公司 Fetal face imaging method based on three-dimensional volume data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1676104A (zh) * 2004-04-01 2005-10-05 株式会社美蒂森 Apparatus and method for forming a 3D ultrasound image
US20090030314A1 (en) * 2007-07-23 2009-01-29 Sotaro Kawae Ultrasonic imaging apparatus and image processing apparatus
CN102283674A (zh) * 2010-04-15 2011-12-21 通用电气公司 Method and system for determining a region of interest in ultrasound data
US20120078102A1 (en) * 2010-09-24 2012-03-29 Samsung Medison Co., Ltd. 3-dimensional (3d) ultrasound system using image filtering and method for operating 3d ultrasound system
CN104939864A (zh) * 2014-03-28 2015-09-30 日立阿洛卡医疗株式会社 Diagnostic image generation device and diagnostic image generation method
CN106725593A (zh) * 2016-11-22 2017-05-31 深圳开立生物医疗科技股份有限公司 Ultrasonic three-dimensional fetal facial contour image processing method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102949206B (zh) * 2011-08-26 2015-12-02 深圳迈瑞生物医疗电子股份有限公司 Three-dimensional ultrasonic imaging method and device



Also Published As

Publication number Publication date
CN106725593A (zh) 2017-05-31
CN106725593B (zh) 2020-08-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17874549; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.10.2019))
122 Ep: pct application non-entry in european phase (Ref document number: 17874549; Country of ref document: EP; Kind code of ref document: A1)