WO2023186133A1 - A *** and method for puncture path planning - Google Patents

A *** and method for puncture path planning

Info

Publication number
WO2023186133A1
WO2023186133A1, PCT/CN2023/085618, CN2023085618W
Authority
WO
WIPO (PCT)
Prior art keywords
target
path
segmentation
target structure
connected domain
Prior art date
Application number
PCT/CN2023/085618
Other languages
English (en)
French (fr)
Inventor
汪国强
廖明哲
汪珂
方伟
张璟
张天
Original Assignee
武汉联影智融医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202210342911.7A (published as CN116919584A)
Priority claimed from CN202210577448.4A (published as CN117173077A)
Priority claimed from CN202210764219.3A (published as CN117392144A)
Application filed by 武汉联影智融医疗科技有限公司
Publication of WO2023186133A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B17/34 Trocars; Puncturing needles
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling

Definitions

  • This description relates to the field of medical technology, and in particular to a puncture path planning method and system.
  • Puncture biopsy is a method, performed under the guidance of medical imaging equipment, of aspirating a target organ (for example, a diseased organ or an organ to be examined) to obtain a small amount of tissue for pathological examination and diagnosis. It is the main approach to pathological diagnosis and is widely used in clinical scenarios.
  • The planning of the puncture path is crucial in needle biopsy: it requires not only selecting an appropriate puncture needle length, skin entry point, and needle entry angle, but also maintaining a certain safe distance from sensitive tissues (e.g., blood vessels, bones) in and/or around the target organ, so as to avoid complications caused by the puncture.
  • One embodiment of this specification provides a system for puncture path planning, including: at least one storage medium including a set of instructions; and one or more processors in communication with the at least one storage medium. When executing the instructions, the one or more processors are configured to: determine a target point based on a target image; determine candidate paths based on the target point and at least two constraints, wherein, in the process of determining the candidate paths, the path planning conditions are adaptively adjusted based on a first preset condition; and determine a target path based on the candidate paths.
  • determining the target point based on the target image includes: roughly segmenting the target structure in the target image to obtain a target structure mask; determining the positioning information of the target structure mask based on soft connected domain analysis; accurately segmenting the target structure based on the positioning information of the target structure mask; and determining the target point based on the segmentation result.
  • determining the positioning information of the target structure mask based on soft connected domain analysis includes: determining the number of connected domains in the target structure mask; and determining the positioning information of the target structure mask based on the number of connected domains.
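The connected-domain analysis described above can be sketched in code. This is an illustrative reading only, not the patent's implementation: the flood-fill labelling, the restriction to the 2-D case, and the `min_fraction` size threshold used to discard spurious small domains are all assumptions introduced here.

```python
import numpy as np

def connected_domains(mask):
    """Label 4-connected components of a 2-D binary mask (simple flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        count += 1
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                    and mask[y, x] and not labels[y, x]):
                labels[y, x] = count
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count

def locate_mask(mask, min_fraction=0.05):
    """Soft connected-domain analysis: drop domains far smaller than the
    largest one, then return the circumscribing rectangle (y0, x0, y1, x1)
    of the remaining domains."""
    labels, n = connected_domains(mask)
    if n == 0:
        return None
    sizes = [(labels == i).sum() for i in range(1, n + 1)]
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_fraction * max(sizes)]
    kept = np.isin(labels, keep)
    ys, xs = np.nonzero(kept)
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

The returned rectangle corresponds to the circumscribing-rectangle positioning information mentioned in the next bullet.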
  • the positioning information of the target structure mask includes the position of the circumscribing rectangle of the target structure mask; and/or determining the positioning information of the target structure mask includes positioning the target structure mask based on preset structure positioning coordinates.
  • the precise segmentation of the target structure based on the positioning information of the target structure mask includes: performing a preliminary precise segmentation of the target structure to obtain a preliminary precise segmentation result; determining, based on the preliminary precise segmentation result, whether the positioning information of the target structure mask is accurate; if so, using the preliminary precise segmentation result as the target segmentation result; otherwise, determining the target segmentation result of the target structure through an adaptive sliding window method.
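The adaptive sliding window step can be illustrated as follows. This is a hedged sketch of one plausible mechanism: a segmentation whose mask touches the crop border is taken as evidence that the localisation was inaccurate, and the window slides toward the touched border before re-segmenting. The helper names, the border test, and the fixed `step` are assumptions; the actual criterion in the patent may differ.

```python
import numpy as np

def touches_border(mask):
    """Directions (dy, dx) in which the segmented mask touches the crop
    border, i.e. directions the window should slide before re-segmenting."""
    dirs = []
    if mask[0, :].any():
        dirs.append((-1, 0))
    if mask[-1, :].any():
        dirs.append((1, 0))
    if mask[:, 0].any():
        dirs.append((0, -1))
    if mask[:, -1].any():
        dirs.append((0, 1))
    return dirs

def adaptive_window(image_shape, box, segment, step=16, max_slides=8):
    """Re-run `segment(window)` while its output touches the window border;
    `segment` maps a (y0, x0, y1, x1) window to a binary mask of that crop."""
    y0, x0, y1, x1 = box
    mask = segment((y0, x0, y1, x1))
    for _ in range(max_slides):
        dirs = touches_border(mask)
        if not dirs:  # mask fully inside the window: localisation judged accurate
            break
        dy = step * sum(d[0] for d in dirs)
        dx = step * sum(d[1] for d in dirs)
        y0 = max(0, y0 + dy); y1 = min(image_shape[0], y1 + dy)
        x0 = max(0, x0 + dx); x1 = min(image_shape[1], x1 + dx)
        mask = segment((y0, x0, y1, x1))
    return (y0, x0, y1, x1), mask
```

In practice `segment` would wrap the precise-segmentation network; here it can be any callable on window coordinates.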
  • the one or more processors are further configured to: obtain a first segmentation result of the target image based on a first segmentation model; perform skeletonization processing on the first segmentation result to obtain a first blood vessel skeleton set, wherein the first blood vessel skeleton set includes at least one first blood vessel skeleton of a determined type; obtain a second segmentation result of the target image based on a second segmentation model, the second segmentation result including at least one vessel of an undetermined type; fuse the first segmentation result and the second segmentation result to obtain a fusion result; and determine a dangerous area based on the fusion result.
  • determining the dangerous area based on the fusion result includes: performing skeletonization processing on the fusion result to obtain a second blood vessel skeleton of the vessel of the undetermined type; obtaining, as a reference blood vessel skeleton, a first blood vessel skeleton whose minimum spatial distance from the second blood vessel skeleton is less than a second threshold; determining the spatial distances between the second blood vessel skeleton and the reference blood vessel skeleton, and taking the two points with the smallest spatial distance as the closest point group; determining the vessel type of the vessel of the undetermined type based on the closest point group; and determining the dangerous area based on the vessel types in the fusion result.
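The closest-point-group computation between a skeleton of undetermined type and the reference skeletons can be sketched as below. Skeletons are modelled simply as arrays of 3-D points; the function names and the `max_dist` acceptance threshold are illustrative assumptions, not the patent's second threshold.

```python
import numpy as np

def closest_point_group(skel_a, skel_b):
    """Return the pair of points (one per skeleton) with the smallest spatial
    distance, plus that distance; skeletons are (N, 3) arrays of points."""
    a = np.asarray(skel_a, dtype=float)
    b = np.asarray(skel_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return a[i], b[j], float(d[i, j])

def assign_vessel_type(unknown_skel, reference_skels, max_dist):
    """Label an undetermined vessel with the type of the reference skeleton
    whose closest point group lies within `max_dist`, if any."""
    best_type, best_dist = "undetermined", float("inf")
    for vtype, ref in reference_skels.items():
        _, _, dist = closest_point_group(unknown_skel, ref)
        if dist < max_dist and dist < best_dist:
            best_type, best_dist = vtype, dist
    return best_type
```

A vessel left "undetermined" would simply keep its conservative handling when the dangerous area is assembled.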
  • the constraints include: the distance between the path and the dangerous area is greater than a preset distance threshold; the path is located in a slice adjacent to the slice where the target area is located; needle entry points on the portion of the body contour in contact with the bed board are excluded; the puncture depth of the path is less than a preset depth threshold; or the angle between the path and the normal of the flat surface of a flat lesion is within a preset range.
  • determining the candidate paths based on the target point and at least two constraints includes: determining initial paths based on the target point and a first constraint; and determining the candidate paths from the initial paths based on a second constraint. The first constraint includes at least one of the following: the path is located in a slice adjacent to the slice where the target area is located; needle entry points on the portion of the body contour in contact with the bed board are excluded; the puncture depth of the path is less than a preset depth threshold; or the angle between the path and the normal of the flat surface of a flat lesion is within a preset range. The second constraint includes that the distance between the path and the dangerous area is greater than the preset distance threshold.
  • adaptively adjusting the path planning conditions based on the first preset condition includes: when no candidate path meets the path planning conditions, resetting the puncture parameters, where the puncture parameters include at least the length and/or diameter of the puncture needle.
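The adaptive parameter reset can be sketched as a retry loop over available needles. The dictionary keys, the ordering of `needles`, and the clearance formula are assumptions for illustration; the patent only specifies that puncture parameters (needle length and/or diameter) are reset when no candidate path satisfies the planning conditions.

```python
def plan_with_adaptive_needle(paths, needles, danger_threshold):
    """Try each puncture-parameter setting (needle length, diameter) in order
    of preference; when no path satisfies the planning conditions with the
    current setting, reset the parameters to the next needle and retry."""
    for length, diameter in needles:
        candidates = [
            p for p in paths
            if p["depth"] < length                               # within needle reach
            and p["danger_dist"] - diameter / 2 > danger_threshold  # safe clearance
        ]
        if candidates:
            return (length, diameter), candidates
    return None, []  # no admissible path with any available needle
```

Each `p` here is a hypothetical path record with a puncture depth and its minimum distance to the dangerous area.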
  • the candidate paths are divided into coplanar candidate paths and non-coplanar candidate paths; determining the target path based on the candidate paths includes: if the candidate paths include both coplanar and non-coplanar candidate paths, screening the target path based on the shortest puncture depth D1 among the non-coplanar candidate paths, the shortest puncture depth D2 among the coplanar candidate paths with a small angular deflection from the direction perpendicular to the bed board, and the shortest puncture depth D3 among the coplanar candidate paths with a non-small angular deflection; if the candidate paths contain only non-coplanar candidate paths, screening the target path based on D1; and if the candidate paths contain only coplanar candidate paths, screening the target path based on D2 and D3.
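The D1/D2/D3 screening rule can be read, in one plausible interpretation, as comparing the shortest depths of the three path classes. The `small_angle` threshold and the final shortest-depth tie-break are assumptions introduced here; the patent text does not fix them.

```python
def screen_target_path(coplanar, non_coplanar, small_angle=5.0):
    """Shortest depths: D1 over non-coplanar paths, D2 over coplanar paths
    with small deflection from the bed-board normal, D3 over the remaining
    coplanar paths; the target path is the shortest among those that exist."""
    def shortest(paths):
        return min(paths, key=lambda p: p["depth"]) if paths else None

    d1 = shortest(non_coplanar)
    d2 = shortest([p for p in coplanar if abs(p["deflection"]) <= small_angle])
    d3 = shortest([p for p in coplanar if abs(p["deflection"]) > small_angle])
    pool = [p for p in (d1, d2, d3) if p is not None]
    return min(pool, key=lambda p: p["depth"]) if pool else None
```

When only one class of candidate path exists, the comparison degenerates to that class, matching the two special cases in the claim.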
  • One embodiment of this specification provides a system for medical image segmentation, including: at least one storage medium, including a set of instructions; and one or more processors in communication with the at least one storage medium.
  • when executing the instructions, the one or more processors are configured to: obtain a target image; perform rough segmentation on the target structure in the target image to obtain a target structure mask; determine the positioning information of the target structure mask based on soft connected domain analysis; and accurately segment the target structure based on the positioning information of the target structure mask to determine the segmentation result.
  • One embodiment of this specification provides a system for identifying blood vessels in a living body, including: at least one storage medium including a set of instructions; and one or more processors in communication with the at least one storage medium. When executing the instructions, the one or more processors are configured to: obtain a target image of the biological body; obtain a first segmentation result of the target image based on a first segmentation model; perform skeletonization processing on the first segmentation result to obtain a first blood vessel skeleton set, wherein the first blood vessel skeleton set includes at least one first blood vessel skeleton of a determined type; obtain a second segmentation result of the target image based on a second segmentation model, the second segmentation result including at least one vessel of an undetermined type; and fuse the first segmentation result and the second segmentation result to obtain a fusion result.
  • Figure 1 is a schematic diagram of an application scenario of an exemplary puncture path planning system according to some embodiments of this specification
  • Figure 2 is a schematic diagram of hardware and/or software of an exemplary computing device according to some embodiments of this specification;
  • Figure 3 is a module schematic diagram of an exemplary puncture path planning device according to some embodiments of this specification.
  • Figure 4 is a schematic flowchart of an exemplary puncture path planning method according to some embodiments of this specification.
  • Figure 5 is a module schematic diagram of an exemplary image segmentation device according to some embodiments of this specification.
  • Figure 6 is a schematic flowchart of an exemplary image segmentation method according to some embodiments of this specification.
  • Figure 7 is a schematic flowchart of an exemplary determination of positioning information of a target structure mask according to some embodiments of this specification.
  • Figure 8 is a schematic flowchart of an exemplary determination of positioning information of a target structure mask according to other embodiments of this specification.
  • Figure 9 is a schematic diagram of an exemplary determination of positioning information of a target structure mask according to some embodiments of this specification.
  • Figure 10 is a comparative schematic diagram of exemplary coarse segmentation results according to some embodiments of this specification.
  • Figure 11 is a schematic flowchart of an exemplary precise segmentation process according to some embodiments of this specification.
  • Figure 12 is a schematic diagram of positioning information determination of an exemplary target structure mask according to some embodiments of this specification.
  • Figure 13 is a schematic diagram of an exemplary determination of the sliding direction according to some embodiments of this specification.
  • Figure 14 is a schematic diagram of accurate segmentation after an exemplary sliding window according to some embodiments of this specification.
  • Figure 15 is a comparative schematic diagram of exemplary segmentation results according to some embodiments of this specification.
  • Figure 16 is a module schematic diagram of an exemplary vessel identification device according to some embodiments of this specification.
  • Figure 17 is a schematic flowchart of an exemplary vessel identification method according to some embodiments of this specification.
  • Figure 18 is a schematic diagram of exemplary vessel identification results according to some embodiments of the present specification.
  • Figure 19 is a flow diagram of an exemplary vessel type determination according to some embodiments of the present specification.
  • Figure 20 is a schematic flowchart of an exemplary vessel type determination according to other embodiments of this specification.
  • Figure 21 is a schematic diagram of exemplary vessel type determination according to some embodiments of the present specification.
  • Figure 22 is a schematic diagram of exemplary vessel type determination according to other embodiments of the present specification.
  • Figure 23 is a schematic diagram of exemplary model training according to some embodiments of the present specification.
  • Figure 24 is a schematic flowchart of an exemplary puncture path planning method according to some embodiments of this specification.
  • Figure 25 is a schematic diagram of exemplary target determination according to some embodiments of the present specification.
  • Figures 26A-26C are schematic diagrams of exemplary initial path determination according to some embodiments of the present specification.
  • Figure 27 is a schematic diagram of an exemplary candidate path shown in accordance with some embodiments of the present specification.
  • Figure 28 is a schematic diagram of an exemplary puncture path planning method according to other embodiments of this specification.
  • Here, "system" serves as a means of distinguishing between different components, elements, parts, portions, or assemblies at different levels.
  • These words may be replaced by other expressions if they serve the same purpose.
  • the method for identifying blood vessels in a living body can be applied to determine the type of blood vessels in an animal's body.
  • The specific embodiments of the present application are mainly explained by taking the determination of blood vessel types in the human body as an example. However, those of ordinary skill in the art can, without creative effort, apply this description to other similar situations, such as the determination of other vessel types in other blood vessels of the human body or in the blood vessels of other animals (such as dogs, cats, etc.).
  • Embodiments of this specification provide a puncture path planning method that automatically performs organ segmentation on target images, locates the best puncture target point, and adaptively selects the best puncture instrument and puncture path based on the target point and at least two constraints.
  • In this way, the puncture path can be made smarter and more consistent with clinical needs, thereby improving the accuracy and efficiency of puncture biopsy.
  • Figure 1 is a schematic diagram of an application scenario of an exemplary puncture path planning system according to some embodiments of this specification.
  • the puncture path planning system 100 may include an imaging device 110 , an end-execution device 120 , a processing device 130 , a terminal device 140 , a storage device 150 and a network 160 .
  • processing device 130 may be part of imaging device 110 and/or end-effector device 120 .
  • imaging device 110 may be connected to processing device 130 via network 160 .
  • imaging device 110 may be directly connected to processing device 130, as indicated by the dashed bidirectional arrow connecting imaging device 110 and processing device 130.
  • storage device 150 may be connected to processing device 130 directly or through network 160 .
  • the terminal device 140 may be directly connected to the processing device 130 (as shown by the dashed arrow connecting the terminal device 140 and the processing device 130), or may be connected to the processing device 130 through the network 160.
  • the imaging device 110 may scan a target object (scanning object) within the detection area or scanning area to obtain scanning data (for example, a target image) of the target object.
  • the imaging device 110 may use high-energy rays (such as X-rays, gamma rays, etc.) to scan the target object to collect scan data related to the target object, such as three-dimensional images.
  • Target objects can include living or non-living things.
  • target objects may include patients, artificial objects (eg, artificial phantoms), and the like.
  • the target object may include specific parts, organs and/or tissues of the patient (such as the head, ears, nose, mouth, neck, chest, abdomen, liver, gallbladder, pancreas, spleen, kidneys, spine, heart or tumor tissue, etc.).
  • imaging device 110 may include a single-modality scanner and/or a multi-modality scanner.
  • Single-modality scanners may include, for example, an X-ray scanner, a computed tomography (CT) scanner, a magnetic resonance imaging (MRI) scanner, a positron emission computed tomography (PET) scanner, an optical coherence tomography (OCT) scanner, an ultrasound (US) scanner, an intravascular ultrasound (IVUS) scanner, a near-infrared spectroscopy (NIRS) scanner, a far-infrared (FIR) scanner, a digital radiography (DR) scanner (e.g., mobile digital radiography), a digital subtraction angiography (DSA) scanner, a dynamic spatial reconstruction (DSR) scanner, etc.
  • Multi-modality scanners may include, for example, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) scanner, a positron emission tomography-X-ray imaging (PET-X-ray) scanner, a single-photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) scanner, a positron emission tomography-computed tomography (PET-CT) scanner, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) scanner, etc.
  • imaging device 110 may include a medical bed 115 .
  • the medical bed 115 may be used to place a target object so that the target object can be scanned to obtain a target image.
  • the medical bed 115 may include an automated medical bed and/or a mobile treatment bed.
  • medical bed 115 may be independent of imaging device 110 .
  • imaging device 110 may include a display device.
  • the display device may be used to display scan data of the target object (eg, target image, segmented image, puncture path, etc.).
  • the imaging device 110 may also include a rack, a detector, a workbench, a radioactive source, etc. (not shown in the figure).
  • the rack supports the detector and radioactive source.
  • Target objects can be placed on the workbench for scanning.
  • a radioactive source emits radiation towards a target object.
  • a detector can detect radiation emitted from a radioactive source (eg, X-rays).
  • a detector may include one or more detector units.
  • the detector unit may include a scintillation detector (eg, a cesium iodide detector), a gas detector, or the like.
  • the detector unit may include a single row of detectors and/or multiple rows of detectors.
  • the end effector device 120 may be a robot that performs end procedures (eg, ablation, puncture, radioactive seed implantation).
  • the processing device 130 can guide the end-execution device 120 to perform a corresponding operation (eg, puncture operation) through remote operation control.
  • the end effector device 120 may include a robotic arm tip, a functional component (eg, a puncture needle), and a robotic host.
  • the end of the robot arm can be used to carry functional components; the robot host can be the main body of the robot arm, used to drive the end of the robot arm to move to adjust the posture (for example, angle, position, etc.) of the functional components.
  • the processing device 130 can be connected to the robot arm body or the robot arm end through a communication device (eg, network 160), and is used to control the robot arm end to drive functional components (eg, puncture needles, etc.) to perform synchronous operations.
  • the processing device 130 can drive the puncture needle to perform the puncture operation by controlling the rotation, translation, forward advancement, etc. of the end of the robot arm.
  • the end effector device 120 may also include a main hand control device.
  • the main hand control device can be electrically connected to the robot host or the end of the robot arm through a communication device (eg, network 160), and is used to control the end of the robot arm to drive functional components (eg, puncture needles, etc.) to perform puncture operations.
  • the processing device 130 may process data and/or information obtained from the imaging device 110 , the end effector device 120 , the terminal device 140 , the storage device 150 , or other components of the puncture path planning system 100 .
  • the processing device 130 can acquire a target image (e.g., a tomographic image, PET scan image, MR scan image, etc.) from the imaging device 110 and analyze and process it (e.g., perform rough segmentation and precise segmentation of the target structure, and/or carry out vessel identification, vessel type identification, etc.) to determine the target point, determine the target path based on the target point, and so on.
  • processing device 130 may be local or remote.
  • processing device 130 may access information and/or data from imaging device 110, end execution device 120, terminal device 140, and/or storage device 150 through network 160.
  • processing device 130 and imaging device 110 may be integrated. In some embodiments, the processing device 130 and the imaging device 110 may be directly or indirectly connected to jointly implement the methods and/or functions described in this specification.
  • processing device 130 and end execution device 120 may be integrated into one body. In some embodiments, the processing device 130 and the end execution device 120 may be connected directly or indirectly and work together to implement the methods and/or functions described in this specification. In some embodiments, the imaging device 110, the end effector device 120, and the processing device 130 may be integrated into one body, or may be directly or indirectly connected to jointly implement the methods and/or functions described in this specification.
  • processing device 130 may include input devices and/or output devices. Interaction with the user (eg, display of target images, segmented images, target paths, etc.) can be achieved through input devices and/or output devices.
  • input devices and/or output devices may include a display screen, a keyboard, a mouse, a microphone, etc., or any combination thereof.
  • the terminal device 140 may be connected to and/or communicate with the imaging device 110, the end execution device 120, the processing device 130, and/or the storage device 150.
  • the terminal device 140 can obtain the target image after the organ or tissue segmentation is completed from the processing device 130 and display it so that the user can understand the patient information.
  • the terminal device 140 can obtain the identified image of the vessel from the processing device 130 and display it.
  • the terminal device 140 may include a mobile device 141, a tablet computer 142, a notebook computer 143, etc. or any combination thereof.
  • terminal device 140 (or all or part of its functionality) may be integrated into processing device 130 .
  • Storage device 150 may store data, instructions, and/or any other information.
  • storage device 150 may store data obtained from imaging device 110, end execution device 120, and/or processing device 130 (e.g., target images, segmented images, initial paths, candidate paths, target paths, puncture parameters, etc. ).
  • the storage device 150 may store computer instructions and the like for implementing the puncture path planning method.
  • the storage device 150 may include one or more storage components, and each storage component may be an independent device or a part of other devices.
  • storage device 150 may include random access memory (RAM), read only memory (ROM), mass memory, removable memory, volatile read-write memory, etc., or any combination thereof.
  • mass storage may include magnetic disks, optical disks, solid state disks, etc.
  • RAM can include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitor RAM (Z-RAM), etc.
  • ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), digital versatile disc ROM (DVD-ROM), etc.
  • storage device 150 may be implemented on a cloud platform.
  • Network 160 may include any suitable network capable of facilitating the exchange of information and/or data.
  • At least one component of the puncture path planning system 100 (e.g., the imaging device 110, the end effector device 120, the processing device 130, the terminal device 140, the storage device 150) may exchange information and/or data with at least one other component of the puncture path planning system 100 through the network 160.
  • processing device 130 may obtain the target image from imaging device 110 over network 160 .
  • puncture path planning system 100 is provided for illustrative purposes only and is not intended to limit the scope of this description. For those of ordinary skill in the art, various modifications or changes can be made based on the description of this specification. For example, the puncture path planning system 100 may implement similar or different functions on other devices. However, such changes and modifications do not depart from the scope of this specification.
  • Figure 2 is a schematic diagram of hardware and/or software of an exemplary computing device in accordance with some embodiments of this specification.
  • computing device 200 may include processor 210 , memory 220 , input/output interface 230 , and communication port 240 .
  • the processor 210 can execute computational instructions (program code) and perform the functions of the puncture path planning system 100 described herein.
  • the computing instructions may include programs, objects, components, data structures, procedures, modules, and functions (the functions refer to the specific functions described in this application).
  • processor 210 may process images and/or data obtained from any component of puncture path planning system 100.
  • the processor 210 may perform rough segmentation on the target structure in the target image acquired from the imaging device 110 to obtain the target structure mask; determine the positioning information of the target structure mask based on soft connected domain analysis; and based on the target structure mask The positioning information accurately segments the target structure and obtains the segmentation results of the target image, thereby planning the puncture path.
  • the processor 210 may obtain the target image of the biological body from the imaging device 110; obtain the first segmentation result of the target image based on the first segmentation model; obtain the second segmentation result of the target image based on the second segmentation model; and fuse The first segmentation result and the second segmentation result are used to obtain the fusion result.
  • processor 210 may include a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physical processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device, any circuit or processor capable of performing one or more functions, etc., or any combination thereof.
  • Memory 220 may store data/information obtained from any other component of puncture path planning system 100.
  • memory 220 may include bulk memory, removable memory, volatile read and write memory, ROM, etc., or any combination thereof.
  • Input/output interface 230 may be used to input or output signals, data or information. In some embodiments, input/output interface 230 may enable a user to communicate with puncture path planning system 100 . In some embodiments, input/output interface 230 may include input devices and output devices. Communications port 240 may be connected to a network for data communications. The connection may be a wired connection, a wireless connection, or a combination of both. Wired connections may include electrical, optical, or telephone lines, or any combination thereof. Wireless connections may include one or any combination of Bluetooth, Wi-Fi, WiMax, WLAN, ZigBee, mobile networks (eg, 3G, 4G or 5G, etc.), etc.
  • communication port 240 may be a standardized port, such as RS232, RS485, etc. In some embodiments, communication port 240 may be a specially designed port. For example, communication port 240 may be designed in accordance with the Digital Imaging and Communications in Medicine (DICOM) protocol.
  • Figure 3 is a module schematic diagram of an exemplary puncture path planning device according to some embodiments of this specification.
  • the puncture path planning device 300 may include a data preprocessing module 310 , a path screening module 320 and a path recommendation module 330 .
  • the corresponding functions of the puncture path planning device 300 can be implemented by the processing device 130 .
  • the data preprocessing module 310 may be used to preprocess the target image. In some embodiments, the data preprocessing module 310 may be used to determine the target point based on the target image. For example, the data preprocessing module 310 can roughly segment the target structure in the target image to obtain a target structure mask, determine the positioning information of the target structure mask based on soft connected domain analysis, and accurately segment the target structure based on the positioning information of the target structure mask to determine the target point. In some embodiments, data preprocessing module 310 may be used to determine dangerous areas.
  • the data preprocessing module 310 can obtain the first segmentation result of the target image based on the first segmentation model; obtain the second segmentation result of the target image based on the second segmentation model; fuse the first segmentation result and the second segmentation result to obtain a fusion result; and determine the dangerous area based on the fusion result.
  • Path screening module 320 may be used to determine initial paths and/or candidate paths.
  • the path screening module 320 may determine candidate paths based on the target point and at least two constraints.
  • the constraint conditions may include: the distance between the path and the dangerous area is greater than a preset distance threshold; the path is located in an adjacent slice of the slice where the target area is located; needle entry points on the part of the body contour in contact with the bed board are excluded; the puncture depth of the path is less than a preset depth threshold; or the angle between the path and the perpendicular of the flat surface of a flat lesion is within a preset range.
  • the path recommendation module 330 may be used to determine the target path based on the candidate paths.
  • the path recommendation module 330 may filter the target path based on the shortest puncture depth D1 among the non-coplanar candidate paths, the shortest puncture depth D2 among the coplanar candidate paths with small-angle deflection from the direction perpendicular to the bed board, and the shortest puncture depth D3 among the coplanar candidate paths with non-small-angle deflection.
  • the path recommendation module 330 may filter the target path based on D1.
  • the path recommendation module 330 may filter the target path based on D2 and D3 of the coplanar candidate paths.
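  • The grouping of candidate paths into the depths D1, D2 and D3 described above can be sketched as follows (an illustrative Python sketch; the dictionary keys, boolean flags and function name are assumptions for illustration, not part of the disclosure):

```python
# Illustrative sketch (assumed data model): each candidate path carries a
# coplanar flag, a small-angle-deflection flag (relative to the direction
# perpendicular to the bed board) and a puncture depth.
def group_depths(candidates):
    """Return (D1, D2, D3): the shortest puncture depth among non-coplanar
    paths, coplanar small-deflection paths, and coplanar non-small-deflection
    paths; None when a group is empty."""
    d1 = min((c["depth"] for c in candidates if not c["coplanar"]), default=None)
    d2 = min((c["depth"] for c in candidates
              if c["coplanar"] and c["small_deflection"]), default=None)
    d3 = min((c["depth"] for c in candidates
              if c["coplanar"] and not c["small_deflection"]), default=None)
    return d1, d2, d3
```

How the module then chooses among D1, D2 and D3 would follow the recommendation policy described in the disclosure.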
  • the path recommendation module 330 may be used to recommend a target path.
  • the path recommendation module 330 can transmit the determined target path to the terminal device 140 to output it for selection by the doctor.
  • system and its modules shown in Figure 3 can be implemented in various ways.
  • the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
  • the data preprocessing module 310 may further include: an image acquisition unit for acquiring a target image; an image segmentation unit for organ segmentation; a vessel identification unit for identifying vessels and/or vessel types in the target image; and a target determination unit for determining the target based on the segmented image or the image after vessel identification.
  • the path screening module 320 may further include an initial path determination unit and a candidate path determination unit, respectively configured to determine the initial path based on the target point and the first constraint condition, and to determine the candidate path from the initial path based on the second constraint condition. Such modifications are all within the scope of this specification.
  • Figure 4 is a schematic flowchart of an exemplary puncture path planning method according to some embodiments of this specification.
  • the process 400 may be performed by the puncture path planning system 100 (eg, the processing device 130 in the puncture path planning system 100) or the puncture path planning device 300.
  • the process 400 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 3 executes the program or instructions, the process 400 may be implemented.
  • process 400 may include the following steps.
  • Step 410 Determine the target point based on the target image.
  • step 410 may be performed by the processing device 130 or the data preprocessing module 310.
  • the target image may refer to an image that can reflect the structure, composition, etc. of organs or tissues in the human body.
  • the target images may include medical images generated based on various different imaging mechanisms.
  • the target image may be a CT scan image, an MR scan image, an ultrasound scan image, an X-ray scan image, an MRI scan image, a PET scan image, an OCT scan image, a NIRS scan image, an FIR scan image, an X-ray-MRI scan image, PET-X-ray scan image, SPECT-MRI scan image, DSA-MRI scan image, PET-CT scan image or US scan image, etc.
  • the target image may include a two-dimensional image, a three-dimensional image, a four-dimensional image, etc.
  • the three-dimensional image of an organism can reflect the structure, density and other information of the internal tissues and organs of the organism.
  • the three-dimensional image may be an image obtained by converting a two-dimensional tomographic data sequence obtained by a medical imaging device (for example, the imaging device 110) into three-dimensional data to intuitively and three-dimensionally display the three-dimensional morphology, spatial information, etc. of an organism.
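  • The conversion of a two-dimensional tomographic sequence into three-dimensional data mentioned above can be sketched minimally as follows (assuming the slices are already ordered and share one in-plane shape; the function name is illustrative):

```python
import numpy as np

def slices_to_volume(slices):
    """Stack an ordered sequence of 2D tomographic arrays into a 3D volume
    along a new slice axis (slice index, row, column)."""
    return np.stack(slices, axis=0)
```

For real scan data, the slice spacing and orientation from the image header would also be needed to place the volume in patient coordinates.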
  • a target image of the target object may be obtained.
  • a target image of the target object may be acquired through imaging device 110 .
  • the imaging device 110 can scan a target object located in the detection area to obtain a target image, and transmit it to the puncture path planning device 300 or the processing device 130 .
  • the target image of the target object, etc. may be obtained from the processing device 130, the terminal device 140, or the storage device 150.
  • the processing device 130 can obtain the target image of the target object by reading from the storage device 150, a database, calling a data interface, etc.
  • the target image can also be obtained through any other feasible method.
  • the target image of the target object can be obtained from a cloud server and/or a medical system (such as a medical system center of a hospital, etc.) via the network 160, which is not particularly limited in the embodiments of this application.
  • the target point may reflect the end point of the puncture path.
  • the target point may be the volume center or center of gravity of a lesion area (eg, a diseased organ or tissue) or an area to be detected (eg, an organ or tissue to be detected).
  • the lesion area or the area to be detected is collectively referred to as the "target organ".
  • the target point can be determined based on the segmentation result by segmenting the target image (for example, performing organ or tissue segmentation).
  • Different tissues or organs have different grayscales on scan images (for example, CT scan images).
  • organs or tissues have their own shape features or position features, and organ or tissue segmentation can be achieved based on these features.
  • the difference in imaging combined with the characteristics of the lesion can achieve segmentation of the lesion area.
  • organ or tissue segmentation can be performed on the target image through methods such as deep learning models, threshold segmentation, and level sets. Taking thoracoabdominal puncture as an example, organ or tissue segmentation can be performed on target images of the thorax and abdomen to segment and determine skin, bones, liver, kidneys, heart, lungs, internal and external blood vessels in organs, spleen, pancreas, etc.
  • the target image can be roughly segmented to obtain the target structure mask, and the positioning information of the target structure mask can be determined. Based on the positioning information of the target structure mask, accurate segmentation can be performed to obtain segmentation results. For more information on obtaining segmentation results through rough segmentation and precise segmentation, please refer to Figures 5 to 15 and their related descriptions, which will not be described again here.
  • the segmented target image and/or the target image for determining the vessel type may be displayed on a terminal device (eg, terminal device 140) to be output to the user, so as to help the user understand the structural and/or lesion information of the organs and/or tissues of the target object.
  • Step 420 Determine candidate paths based on the target point and at least two constraints.
  • step 420 may be performed by processing device 130 or path filtering module 320.
  • the constraint conditions may include but are not limited to: the distance between the path and the dangerous area is greater than a preset distance threshold; the path is located in adjacent slices of the slice where the target area is located; needle insertion points on the body contour in contact with the bed board are excluded; the puncture depth of the path is less than a preset depth threshold; or the angle between the path and the perpendicular of the flat surface of a flat lesion is within a preset range, etc.
  • vessels and/or vessel types in the target image may be identified, and risk areas may be determined based on the vessels and/or vessel types.
  • the processing device 130 can respectively obtain the first segmentation result and the second segmentation result of the target image using the first segmentation model and the second segmentation model, fuse the first segmentation result and the second segmentation result, and obtain the fusion result. Further, the processing device 130 may perform skeletonization processing on the first segmentation result to obtain a first vascular skeleton set including at least one first vascular skeleton of a determined type; perform skeletonization processing on the fusion result to obtain a second vascular skeleton whose type is to be determined; and determine the vascular type of the second vascular skeleton based on the first vascular skeleton, thereby determining the dangerous area based on the vascular type.
  • candidate paths may be determined based on any two or more of the aforementioned constraints.
  • the candidate path may be determined based on any one or more of the distance between the path and the dangerous area being greater than a preset distance threshold and other constraints.
  • the type and/or number of constraints may be determined according to actual conditions. For example, the processing device 130 may filter paths that simultaneously satisfy multiple constraints mentioned above as candidate paths.
  • an initial path may be determined based on a first constraint, and a candidate path may be determined from the initial path based on a second constraint.
  • a candidate path may be determined from the initial path based on a second constraint.
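  • As an illustrative sketch of screening paths against two of the constraints above (clearance from the dangerous area and puncture depth), assuming each path is a line segment between an entry point and the target point and the dangerous area is represented by sampled points (the data representation and names are assumptions, not the disclosure's implementation):

```python
import math

def filter_candidates(paths, danger_points, dist_threshold, depth_threshold):
    """Keep paths whose minimum distance to every danger point exceeds
    dist_threshold and whose puncture depth is below depth_threshold."""
    def point_segment_dist(p, a, b):
        # distance from point p to the segment from a to b
        ab = [b[i] - a[i] for i in range(3)]
        ap = [p[i] - a[i] for i in range(3)]
        denom = sum(x * x for x in ab) or 1e-12
        t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
        closest = [a[i] + t * ab[i] for i in range(3)]
        return math.dist(p, closest)

    kept = []
    for path in paths:
        a, b = path["entry"], path["target"]
        depth = math.dist(a, b)                       # puncture depth
        clearance = min(point_segment_dist(p, a, b) for p in danger_points)
        if clearance > dist_threshold and depth < depth_threshold:
            kept.append(path)
    return kept
```

The remaining constraints (slice adjacency, bed-board contact, flat-lesion angle) would be additional predicates applied in the same loop.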
  • Step 430 Determine the target path based on the candidate paths.
  • step 430 may be performed by the processing device 130 or the path recommendation module 330.
  • candidate paths may be divided into coplanar paths and non-coplanar paths.
  • the coplanar path can refer to the path that is located in the same slice as the target area (for example, the same transverse plane in CT imaging) or within several adjacent slices.
  • a non-coplanar path refers to a path that is not located in the same slice as the target area or within several adjacent slices.
  • the target path may be determined based on coplanar and non-coplanar characteristics of the candidate paths. For more information on target path determination, please refer to Figure 24 and its related description, which will not be described again here.
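  • The division of candidate paths into coplanar and non-coplanar sets can be sketched as follows (an illustrative sketch; the slice-index representation and the adjacency window are assumptions):

```python
def classify_paths(paths, target_slice, adjacency=1):
    """Split candidate paths into coplanar and non-coplanar sets by the
    axial slice index of the needle entry point (illustrative criterion)."""
    coplanar, non_coplanar = [], []
    for p in paths:
        if abs(p["entry_slice"] - target_slice) <= adjacency:
            coplanar.append(p)
        else:
            non_coplanar.append(p)
    return coplanar, non_coplanar
```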
  • the target path may be recommended to the user.
  • the processing device 130 may send the target path to the terminal device 140 or the imaging device 110 to output to the doctor for reference.
  • the puncture operation may be performed based on the target path.
  • the processing device 130 may control the end-execution device 120 to perform the puncture operation according to the target path.
  • relevant parameters of the initial path, candidate path and/or target path (for example, puncture depth, puncture angle, danger zone, preset safety distance, preset depth threshold, third preset value, preset value, etc.) may be set or adjusted according to actual needs.
  • process 400 is only for example and illustration, and does not limit the scope of application of this specification.
  • various modifications and changes can be made to the process 400 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
  • Medical image (eg, target image) segmentation (eg, organ or tissue segmentation) can be used not only for puncture path planning, but also for scenarios such as medical research, clinical diagnosis, and image information processing.
  • a coarse-to-fine organ segmentation method can be used. The advantage of this method is that it can effectively improve the accuracy of segmentation, reduce the occupied hardware resources, and reduce the time consumed by segmentation.
  • the segmentation results of this method heavily depend on the accuracy of rough positioning. In clinical applications, there may be changes in organ shape, small size, lesions, etc., resulting in inaccurate rough positioning. Inaccurate coarse segmentation and positioning will also seriously affect the accuracy of fine segmentation, resulting in poor processing results for medical image segmentation.
  • the embodiments of this specification provide an image segmentation method.
  • the target structure area can be accurately retained while false positive areas are effectively eliminated, which not only improves the accuracy of target structure positioning in the coarse positioning stage, but also contributes to subsequent accurate segmentation, thereby improving segmentation efficiency and accuracy.
  • the image segmentation method will be described in detail below with reference to the accompanying drawings (for example, Figures 5-15).
  • Figure 5 is a module schematic diagram of an exemplary image segmentation device according to some embodiments of this specification.
  • the image segmentation device 500 may include an image acquisition module 510 , a coarse segmentation module 520 , a positioning information determination module 530 and a precise segmentation module 540 .
  • the corresponding functions of the image segmentation device 500 can be implemented by the processing device 130 or the puncture path planning device 300 (for example, the data preprocessing module 310).
  • the image acquisition module 510 can be used to acquire the target image.
  • the target image may include a two-dimensional image, a three-dimensional image, a four-dimensional image, etc.
  • the image acquisition module 510 may acquire a target image of the target object.
  • the coarse segmentation module 520 can be used to roughly segment the target structure in the target image to obtain a target structure mask. In some embodiments, the coarse segmentation module 520 may be used to perform coarse segmentation on at least one target structure in the target image to obtain at least one target structure mask.
  • the positioning information determination module 530 can be used to determine the positioning information of the target structure mask based on soft connected domain analysis. In some embodiments, the positioning information determination module 530 may be used to determine the number of connected domains in the target structure mask, and determine the positioning information of the target structure mask based on the number of connected domains. In some embodiments, the positioning information determination module 530 may be used to position the target structure mask based on the positioning coordinates of the preset structure.
  • the precise segmentation module 540 can be used to accurately segment the target structure based on the positioning information of the target structure mask.
  • the precise segmentation module 540 can be used to perform a preliminary precise segmentation of the target structure to obtain a preliminary precise segmentation result; determine, based on the preliminary precise segmentation result, whether the positioning information of the target structure mask is accurate; if so, use the preliminary precise segmentation result as the target segmentation result; otherwise, determine the target segmentation result of the target structure through the adaptive sliding window method.
  • Figure 6 is a schematic flowchart of an exemplary image segmentation method according to some embodiments of this specification.
  • the process 600 may be executed by the puncture path planning system 100 (eg, the processing device 130 in the puncture path planning system 100) or the image segmentation device 500.
  • the process 600 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 5 executes the program or instructions, the process 600 may be implemented.
  • process 600 may include the following steps.
  • Step 610 Perform rough segmentation on the target structure in the target image to obtain a target structure mask.
  • step 610 may be performed by the processing device 130 or the coarse segmentation module 520.
  • the target structure may refer to the target organ and/or organ tissue used for segmentation, for example, the target organ, blood vessels in the target organ, etc.
  • one or more target structures may be included in the target image.
  • target structures may include the heart, liver, spleen, kidneys, blood vessels, and/or any other possible organ or organ tissue.
  • the target structure mask can refer to the pixel-level classification label.
  • the target structure mask represents the classification of each pixel in the target image; for example, pixels can be classified as background, liver, spleen, kidney, etc. The aggregated area of a specific category is represented by the corresponding label value (for example, all pixels classified as liver are aggregated, and the aggregated area is represented by the label value corresponding to the liver), where the label value can be set according to the specific coarse segmentation task.
  • the target structure mask obtained by coarse segmentation may be a relatively coarse organ mask.
  • the target structure mask obtained by rough segmentation may also be called the first mask.
  • the target image may be preprocessed, and at least one target structure in the preprocessed target image may be roughly segmented to obtain a target structure mask.
  • preprocessing may include normalization processing and/or background removal processing, etc.
  • a threshold segmentation method, a region growing method, or a level set method may be used to perform coarse segmentation on at least one target structure in the target image.
  • the processing device 130 can classify each pixel in the target image according to the pixel value of the input target image by setting multiple different pixel threshold ranges, and segment the pixel points whose pixel values are within the same pixel threshold range into the same area to achieve coarse segmentation of the target image.
  • the processing device 130 can preset similarity discrimination conditions as needed based on known pixels or predetermined areas composed of pixels on the target image, and, based on the preset similarity discrimination conditions, compare the pixels with their surrounding pixels and merge similar pixels into the region, thereby achieving region growing.
  • the preset similarity discrimination conditions can be determined based on preset image features, such as grayscale, texture and other image features.
  • the processing device 130 can set the target contour of the target image as the zero level set of a high-dimensional function, evolve the function, extract the zero level set from the output to obtain the contour of the target, and then segment the pixels within the contour range into regions to achieve rough segmentation of the target image.
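  • The threshold-based coarse segmentation described above can be sketched as follows (illustrative; the intensity ranges and label values are assumptions and would be set per coarse segmentation task):

```python
import numpy as np

def threshold_coarse_segmentation(image, ranges):
    """Assign each pixel the label of the intensity range it falls in;
    0 is background. `ranges` maps label -> (low, high); overlapping
    ranges are resolved by later entries overwriting earlier ones."""
    mask = np.zeros(image.shape, dtype=np.uint8)
    for label, (low, high) in ranges.items():
        mask[(image >= low) & (image <= high)] = label
    return mask
```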
  • a trained deep learning model (eg, UNet) may be used to perform coarse segmentation of at least one target structure in the target image. For example, after the target image is input into the trained convolutional neural network, the encoder of the convolutional neural network extracts the features of the target image through convolution, and then the decoder of the convolutional neural network restores the features into pixel-level segmentation probabilities.
  • the segmentation probability map represents the probability that each pixel in the image belongs to a specific category. Finally, the segmentation probability map is output as a segmentation mask, thereby completing rough segmentation.
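  • The final step of turning the per-pixel segmentation probability map into a segmentation mask can be sketched as follows (a minimal sketch, assuming the decoder outputs a class-probability map of shape (num_classes, H, W)):

```python
import numpy as np

def probabilities_to_mask(prob_map):
    """Take the most probable class per pixel to produce the segmentation
    mask from a (num_classes, H, W) probability map."""
    return np.argmax(prob_map, axis=0).astype(np.uint8)
```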
  • Step 620 Determine the positioning information of the target structure mask based on soft connected domain analysis.
  • step 620 may be performed by the processing device 130 or the positioning information determination module 530.
  • A connected domain (i.e., a connected region) can refer to an image area composed of foreground pixels that have the same pixel value and are adjacent in position in the target image.
  • one or more connected domains may be included in the target structure mask.
  • the positioning information of the target structure mask (also referred to as the first positioning information) can be determined by performing soft connected domain analysis on the target structure mask.
  • Soft connected domain analysis can refer to the analysis and calculation of the number of connected domains and their corresponding areas within the target structure mask.
  • the number of connected domains in the target structure mask can be determined, and the positioning information of the target structure mask is determined based on the number of connected domains.
  • the location information of the multiple connected domains can be determined first, and then the positioning information of the target structure mask is obtained based on the location information of the multiple connected domains.
  • the retained connected domains may be determined based on the number of connected domains, and the positioning information of the target structure mask may be determined based on the location information of the retained connected domains.
  • the processing device 130 may determine that the connected domains that meet the set conditions are reserved connected domains.
  • the setting condition may be a limiting condition on the area of the connected domain.
  • all connected domains may be determined to be retained connected domains (for example, when the number of connected domains is 1), or the output retained connected domains may be empty (for example, when the number of connected domains is 0).
  • when the number of connected domains is greater than the first preset value, it can be determined whether all or part of the multiple connected domains (for example, the connected domains whose area ranks within the preset order n) are retained connected domains.
  • the ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains can be determined; whether the ratio is greater than the first threshold is then judged; if so, the maximum connected domain is determined to be the retained connected domain; otherwise, each connected domain in the target structure mask is determined to be a retained connected domain.
  • the maximum connected domain can refer to the connected domain with the largest area in the target structure mask.
  • the total area of connected domains can refer to the sum of the areas of all connected domains in the target structure mask. More details can be found in Figure 7 and its related description, which will not be described again here.
  • each connected domain in the target structure mask can be sorted in order of area from large to small; based on the sorting results, the top n connected domains (that is, within the preset order n) are determined as target connected domains; based on the second preset condition, the retained connected domains are determined from the target connected domains.
  • the processing device 130 can sort multiple connected domains with different areas according to the area from large to small, and the sorted connected domains are recorded as the first connected domain, the second connected domain, ..., and the kth connected domain.
  • the first connected domain is the connected domain with the largest area among multiple connected domains, so it is also called the maximum connected domain.
  • the processing device 130 can sequentially determine, based on the second preset condition and in order of area ranking, whether one or more of the first connected domain, the second connected domain, and the third connected domain are retained connected domains. That is, first determine whether the first connected domain is a retained connected domain, and then determine whether the second connected domain is a retained connected domain, until the (n-1)th determination is completed. More details can be found in Figure 8 and its related description, which will not be described again here.
  • the setting conditions for determining connected domains of different area ranks as retained connected domains may be different.
  • Step 630 Accurately segment the target structure based on the positioning information of the target structure mask.
  • step 630 may be performed by the processing device 130 or the precise segmentation module 540.
  • precise segmentation may include: performing a preliminary precise segmentation on the target structure, and determining whether the positioning information of the target structure mask is accurate based on the preliminary precise segmentation result. If so, the preliminary accurate segmentation result will be used as the target segmentation result; otherwise, the target segmentation result of the target structure will be determined through the adaptive sliding window method. More details can be found in Figure 11 and its related description, which will not be described again here.
  • process 600 is only for example and explanation, and does not limit the scope of application of this specification.
  • various modifications and changes can be made to the process 600 under the guidance of this description. However, such modifications and changes remain within the scope of this specification.
  • FIG. 7 is a schematic flowchart of an exemplary method of determining positioning information of a target structure mask according to some embodiments of this specification.
  • the process 700 may be executed by the puncture path planning system 100 (eg, the processing device 130 in the puncture path planning system 100) or the image segmentation device 500 (eg, the positioning information determination module 530).
  • the process 700 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 5 executes the program or instructions, the process 700 may be implemented.
  • process 700 may include the following steps.
  • Step 710 Determine the number of connected domains in the target structure mask.
  • multiple connected domains in the target structure mask may have different areas.
  • the number of connected domains in the target structure mask can be determined in any feasible manner, and this specification does not limit this.
  • Step 720 In response to the number of connected domains being greater than the first preset value and less than the second preset value, determine the ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains.
  • the first preset value may be 1.
  • when the number of connected domains is 0, it means that the corresponding mask is empty, that is, acquisition of the target structure mask failed, rough segmentation failed, or the segmentation object does not exist. For example, when segmenting the spleen in the abdominal cavity, there may be a case of splenectomy; at this time, the mask of the spleen is empty, the number of connected domains is 0, and the output retained connected domain is empty.
  • when the number of connected domains is 1, it means there is only one connected domain, and there are no false positives or segmentation disconnections. In this case, the connected domain can be retained, that is, the connected domain is determined to be a retained connected domain. It can be understood that when the number of connected domains is 0 or 1, there is no need to judge whether a connected domain is a retained connected domain based on the set conditions.
  • the positioning information of the target structure mask may be determined through the operations of steps 730 to 740 .
  • the second preset value may be 3.
  • the processing device 130 may determine the ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains.
  • the positioning information of the target structure mask can be determined through the operations in process 800. For more details, see steps 820 to 840, which will not be described again here.
  • Step 730 Determine whether the ratio of the area of the largest connected domain to the total area of the connected domain is greater than the first threshold.
  • the first threshold may range from 0.8 to 0.95.
  • the first threshold is within the value range of 0.8 to 0.95, which can ensure the expected accuracy of soft connected domain analysis.
  • the first threshold may range from 0.9 to 0.95.
  • the first threshold is in the range of 0.9 to 0.95, which can further improve the accuracy of soft connected domain analysis.
  • the first threshold may be set based on the category of the target structure (eg, thoracic target structure, abdominal target structure). In some embodiments, the first threshold can be reasonably set based on machine learning and/or big data, which is not further limited here.
  • if so, perform step 731: determine the largest connected domain as a retained connected domain. Otherwise, perform step 735: determine that each connected domain in the target structure mask is a retained connected domain.
  • the processing device 130 may sort the connected domains by area (S) to obtain connected domains A and B, where the area of connected domain A is greater than the area of connected domain B, that is, S(A) > S(B).
  • connected domain A can also be called the first connected domain or the largest connected domain; connected domain B can be called the second connected domain.
  • When the ratio of the area of connected domain A to the total area of connected domains A and B is greater than the first threshold, that is, S(A)/(S(A)+S(B)) > the first threshold, connected domain B can be determined to be a false positive area and only connected domain A is retained, that is, the largest connected domain A is determined to be the retained connected domain.
  • When the ratio of the area of connected domain A to the total area of connected domains A and B is less than or equal to the first threshold, both A and B can be determined to be part of the target structure mask and both are retained, that is, connected domains A and B are both determined to be retained connected domains.
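The two-domain retention rule above can be sketched in Python. This is an illustrative helper, not the patent's implementation; the function name and the 0.9 default for the first threshold are assumptions for the example.

```python
def soft_retain(areas, first_threshold=0.9):
    """Soft connected-domain analysis: given the areas of the connected
    domains in a target structure mask, decide which ones to retain.

    If the largest domain's share of the total area exceeds the first
    threshold, the remaining domains are treated as false positives and
    only the largest domain is kept; otherwise every domain is kept.
    Returns the indices of the retained domains (largest first)."""
    if len(areas) <= 1:
        # 0 or 1 connected domains: nothing to filter out
        return list(range(len(areas)))
    order = sorted(range(len(areas)), key=lambda i: areas[i], reverse=True)
    total = sum(areas)
    if areas[order[0]] / total > first_threshold:
        return [order[0]]   # keep only the largest connected domain
    return order            # keep all connected domains

# S(A)=950, S(B)=50: S(A)/(S(A)+S(B)) = 0.95 > 0.9, so B is a false positive
print(soft_retain([950, 50]))   # -> [0]
# S(A)=600, S(B)=400: ratio 0.6 <= 0.9, both are part of the mask
print(soft_retain([600, 400]))  # -> [0, 1]
```

The same decision extends to more than two domains by comparing the largest domain against the running total, as described for process 800 below.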
  • Step 740 Determine the positioning information of the target structure mask based on the preserved connected domain.
  • the positioning information of the target structure mask may include position information of a circumscribing rectangle of the target structure mask, for example, coordinate information of a border line of the circumscribing rectangle.
  • the bounding rectangle of the target structure mask may cover the location area of the target structure.
  • the bounding rectangle of the target structure mask may be displayed in the target image in the form of a bounding rectangular box.
  • a circumscribed rectangular frame of the target structure mask can be constructed based on the outermost edges of the connected regions of the target structure in each direction (for example, the outermost edges of the connected regions in the up, down, left, and right directions).
  • the circumscribed rectangle of the target structure mask may be a single rectangular frame.
  • when a single connected region exists, a larger circumscribed rectangle can be constructed directly based on the outermost edges of that connected region in each direction; such a large-area circumscribed rectangle is applicable to organs with one connected region.
  • the circumscribed rectangle of the target structure mask may also be a circumscribed rectangular frame composed of multiple rectangular frames.
  • when there are multiple connected regions, the multiple connected regions correspond to multiple rectangular frames, and a larger total circumscribed rectangular frame can be constructed based on the outermost edges of the multiple rectangular frames.
  • for example, the outermost edges of three rectangular frames corresponding to three connected regions are combined into one total circumscribed rectangular frame, and subsequent calculation can be performed on this single total circumscribed rectangular frame, thereby reducing the amount of calculation while ensuring the expected accuracy.
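One plausible way to extract such circumscribed rectangles (a NumPy sketch with illustrative function names, not the patent's code) is to take the outermost occupied row/column in each direction, and, for multiple regions, the union of the per-region boxes:

```python
import numpy as np

def mask_bbox(mask):
    """Circumscribed rectangle of a 2-D binary mask as
    (row_min, col_min, row_max, col_max), inclusive."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return int(rmin), int(cmin), int(rmax), int(cmax)

def union_bbox(boxes):
    """Total circumscribed rectangle enclosing several per-region boxes."""
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:7] = True                            # one connected region
print(mask_bbox(mask))                           # -> (2, 3, 4, 6)
print(union_bbox([(2, 3, 4, 6), (6, 1, 8, 2)]))  # -> (2, 1, 8, 6)
```

For the 3-D case described in the text, the same idea applies per axis, yielding a cuboid with six faces.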
  • when the positioning of the circumscribed rectangle of the target structure mask fails, the target structure mask can be positioned based on the positioning coordinates of a preset structure. It can be understood that when the coordinates of the circumscribed rectangle of the target structure mask do not exist, the positioning of the corresponding organ is deemed to have failed.
  • the preset structure may select a target structure with a relatively stable position (for example, an organ with a relatively stable position).
  • the probability of positioning failure when locating such target structures is low, so that the target structure mask can be accurately positioned.
  • the liver, stomach, spleen, and kidneys can be used as preset organs in the abdominal cavity; that is, the preset structures can include the liver, stomach, spleen, kidneys, lungs, or any other possible organ tissue.
  • the target structure mask can be repositioned using the positioning coordinates of the preset structure as the reference coordinates. For example, when the target structure that fails to be positioned is located in the abdominal cavity, the positioning coordinates of the liver, stomach, spleen, and kidneys can be used as coordinates for repositioning, and the target structure that fails to be positioned in the abdominal cavity is repositioned accordingly.
  • the target structure mask within the chest cavity may be positioned based on the positioning of the lungs. For example, when the target structure that fails to be positioned is located in the chest cavity, the positioning coordinates of the lungs can be used as the coordinates for repositioning, and the chest target structure that failed to be positioned is repositioned accordingly.
  • for example, the positioning coordinates of the top of the liver and the base of the kidneys can be used as the new coordinates in the cross-sectional direction (the upper and lower sides), the coordinates of the left edge of the spleen and the right edge of the liver can be used as the new coordinates in the coronal direction (the left and right sides), and the most anterior and most posterior ends of the coordinates of these four organs can be used as the new coordinates in the sagittal direction (the anterior and posterior sides); based on these, the target structures in the abdominal cavity that failed to be positioned are repositioned.
  • when the target structure that fails to be positioned is located in the chest cavity, the circumscribed rectangular frame formed by the lung positioning coordinates is expanded by a certain number of pixels, and the chest target structure that failed to be positioned is positioned again accordingly.
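Assuming each preset organ already has a circumscribed box, the fallback positioning could be sketched as the union of those boxes, optionally dilated by a margin, as in the chest case above. The function name and margin value are hypothetical.

```python
def fallback_bbox(preset_boxes, margin=0):
    """Reposition a target structure whose own localization failed:
    enclose the circumscribed boxes of the preset organs (e.g. liver,
    stomach, spleen, kidneys for the abdomen, or lungs for the chest)
    and expand the result by `margin` voxels on each side.
    Boxes are (row_min, col_min, row_max, col_max)."""
    rmin = min(b[0] for b in preset_boxes) - margin
    cmin = min(b[1] for b in preset_boxes) - margin
    rmax = max(b[2] for b in preset_boxes) + margin
    cmax = max(b[3] for b in preset_boxes) + margin
    return (rmin, cmin, rmax, cmax)

# chest case: expand the lung box by a few voxels
print(fallback_bbox([(10, 10, 60, 90)], margin=5))           # -> (5, 5, 65, 95)
# abdominal case: union of two preset-organ boxes, no margin
print(fallback_bbox([(30, 20, 70, 60), (50, 55, 80, 85)]))   # -> (30, 20, 80, 85)
```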
  • determining the positioning information of the target structure mask also includes the following operations: post-processing the target structure mask to reduce noise and optimize the image display effect.
  • post-processing may include the following image post-processing operations: edge smoothing and/or image denoising, etc.
  • edge smoothing processing may include smoothing processing or blurring processing to reduce noise or distortion of medical images.
  • smoothing processing or blurring processing may adopt the following methods: mean filtering, median filtering, Gaussian filtering, and bilateral filtering.
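As a minimal illustration of such smoothing, the following is a hand-rolled 3×3 mean (box) filter in NumPy; a real pipeline would more likely call a library routine for median, Gaussian, or bilateral filtering.

```python
import numpy as np

def mean_filter_3x3(img):
    """Smooth a 2-D image with a 3x3 mean (box) filter, edge-padded,
    so isolated noise spikes are averaged into their neighborhood."""
    p = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dr in (0, 1, 2):
        for dc in (0, 1, 2):
            out += p[dr:dr + h, dc:dc + w]
    return out / 9.0

noisy = np.array([[0., 0., 0.],
                  [0., 9., 0.],
                  [0., 0., 0.]])
print(mean_filter_3x3(noisy)[1, 1])  # -> 1.0 (the spike is averaged away)
```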
  • It should be noted that the above description of process 700 is only for example and explanation, and does not limit the scope of application of this specification.
  • Those skilled in the art can make various modifications and changes to process 700 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
  • FIG. 8 is a schematic flowchart of an exemplary method of determining positioning information of a target structure mask according to other embodiments of this specification.
  • the process 800 may be executed by the puncture path planning system 100 (eg, the processing device 130 in the puncture path planning system 100) or the image segmentation device 500 (eg, the positioning information determination module 530).
  • the process 800 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 5 executes the program or instructions, the process 800 may be implemented.
  • process 800 may include the following steps.
  • Step 810 Determine the number of connected domains in the target structure mask. More information can be found in step 710 and its description.
  • Step 820 In response to the number of connected domains being greater than or equal to the second preset value, sort each connected domain in the target structure mask in descending order of area.
  • the second preset value may be 3.
  • the processing device 130 may sort each connected domain in the target structure mask in order from large to small area.
  • Step 830 Based on the sorting results, determine the top n connected domains as the target connected domain.
  • the processing device 130 may determine that the top n (eg, 3) connected domains are the target connected domains.
  • the preset order n may be set based on the category of the target structure (eg, chest target structure, abdominal target structure). In some embodiments, the preset order n can be reasonably set based on machine learning and/or big data, and is not further limited here.
  • Step 840 Based on the second preset condition, determine the retained connected domain from the target connected domain.
  • the retained connected domain may include at least the largest connected domain in the target structure mask.
  • In order of area, it can be determined according to the second preset condition whether each connected domain in the target structure mask whose area rank is within the preset order n is a retained connected domain, and the retained connected domains are finally output.
  • the second preset condition may be a limiting condition related to the area of the connected domain.
  • the second preset condition may include the relationship between a threshold (for example, the first threshold) and the ratio of the area of a specific connected domain (for example, the largest connected domain, or the connected domains whose area rank is within a preset order m, where m is less than or equal to n) to the total area of the connected domains.
  • the condition that the largest connected domain in the preset order n needs to meet to be a retained connected domain may be the relationship between the first threshold and the ratio of the area of the largest connected domain to the total area of the connected domains; if the ratio is greater than the first threshold, the largest connected domain is determined to be the retained connected domain.
  • the condition that the second-ranked connected domain in the preset order n needs to meet to be a retained connected domain may be the relationship between the first threshold and the ratio of the sum of the areas of the first connected domain (i.e., the largest connected domain) and the second connected domain to the total area of the connected domains; if the ratio is greater than the first threshold, both the first and second connected domains are determined to be retained connected domains.
  • the condition that the third-ranked connected domain in the preset order n needs to meet to be a retained connected domain may be the relationship between the first threshold and the ratio of the sum of the areas of the first, second, and third connected domains (that is, the area of the specific connected domain) to the total area of the connected domains; if the ratio is greater than the first threshold, the first, second, and third connected domains are all determined to be retained connected domains.
  • the second preset condition may include a relationship between the area ratio between the first preset connected domain and the second preset connected domain and the fifth threshold.
  • the condition that the largest connected domain in the preset order n needs to meet to be a retained connected domain may be the relationship between the fifth threshold and the ratio of the area of the second connected domain (i.e., the first preset connected domain) to the area of the largest connected domain (i.e., the second preset connected domain); when the area ratio is less than the fifth threshold, the largest connected domain is determined to be a retained connected domain.
  • the condition that the second connected domain in the preset order n needs to meet to be a retained connected domain may be the relationship between the fifth threshold and the ratio of the area of the third connected domain (i.e., the area of the first preset connected domain) to the sum of the areas of the first connected domain and the second connected domain (i.e., the area of the second preset connected domain); when the ratio is less than the fifth threshold, the second connected domain is determined to be a retained connected domain, and at this time the largest connected domain and the second connected domain are both retained connected domains.
  • the condition that the third connected domain in the preset order n needs to meet to be a retained connected domain may be the relationship between the fifth threshold and the ratio of the area of the fourth-ranked connected domain (i.e., the area of the first preset connected domain) to the sum of the areas of the first, second, and third connected domains (i.e., the area of the second preset connected domain); when the ratio is less than the fifth threshold, the first, second, and third connected domains are all determined to be retained connected domains.
  • the fifth threshold may be in the range of 0.05 to 0.2. Within this value range, the expected accuracy of soft connected domain analysis can be guaranteed. In some embodiments, the fifth threshold may be 0.05. Under this setting, excellent soft connected domain analysis accuracy can be obtained. In some embodiments, the fifth threshold can be other reasonable values, and this specification does not limit this.
  • the processing device 130 may sort the connected domains by area (S) to obtain connected domains A, B, C, ..., P, where the area of connected domain A is greater than the area of connected domain B, the area of connected domain B is greater than the area of connected domain C, and so on, that is, S(A) > S(B) > S(C) > ... > S(P). Further, the processing device 130 may calculate the total area S(T) of the connected domains A, B, C, ..., P for use in the subsequent calculations.
  • the processing device 130 can select the connected domains within the preset order n (such as connected domains A, B, and C) in order of area, and sequentially determine whether each connected domain within the preset order n is a retained connected domain.
  • When the proportion of the area of connected domain A to the total area S(T) is greater than the first threshold M, that is, S(A)/S(T) > M, or the proportion of the area of connected domain B to the area of connected domain A is less than the fifth threshold N, that is, S(B)/S(A) < N, connected domain A is determined to be the organ mask part and retained (that is, connected domain A is a retained connected domain), and the remaining connected domains are determined to be false positive areas; otherwise, the calculation continues, that is, it is determined whether the second connected domain (connected domain B) is a retained connected domain.
  • FIG. 9 only shows the judgment of whether three connected domains are retained connected domains. It can be understood that the value of the preset order n in Figure 9 is set to 4; therefore, only the connected domains ranked 1, 2, and 3, that is, connected domain A, connected domain B, and connected domain C, need to be judged as to whether they are retained connected domains.
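The loop of Figure 9 can be sketched as follows. The function name is illustrative, and the defaults M = 0.9 and N = 0.05 are example values taken from the threshold ranges given above, not fixed by the patent.

```python
def retain_by_rank(areas_desc, n=3, M=0.9, N=0.05):
    """areas_desc: connected-domain areas sorted in descending order
    (S(A) >= S(B) >= ...). Walk the top-n domains in rank order and
    stop as soon as either (a) the cumulative area share of what is
    kept exceeds the first threshold M, or (b) the next domain is
    negligibly small relative to what is kept (ratio below the fifth
    threshold N). Everything kept so far is a retained connected
    domain; the rest are treated as false positives.
    Returns the number of retained connected domains."""
    total = sum(areas_desc)
    kept = 0
    for _ in range(min(n, len(areas_desc))):
        kept += 1
        cum = sum(areas_desc[:kept])                      # area kept so far
        nxt = areas_desc[kept] if kept < len(areas_desc) else 0.0
        if cum / total > M or nxt / cum < N:
            break                                         # rest = false positives
    return kept

# S(A)/S(T) = 0.95 > M: only connected domain A is retained
print(retain_by_rank([950, 30, 20]))     # -> 1
# neither condition triggers at A or B, so A, B, and C are all retained
print(retain_by_rank([500, 400, 100]))   # -> 3
```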
  • Step 850 Determine the positioning information of the target structure mask based on the preserved connected domain. See step 740 and its description for more information.
  • It should be noted that the above description of process 800 is only for example and illustration, and does not limit the scope of application of this specification.
  • Those skilled in the art can make various modifications and changes to process 800 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
  • Figure 10 is a comparative schematic diagram of exemplary coarse segmentation results according to some embodiments of this specification.
  • In Figure 10, the upper and lower figures to the left of the dotted line are, respectively, the cross-sectional target image and the stereoscopic target image of the coarse segmentation results obtained without soft connected domain analysis, and the upper and lower figures to the right of the dotted line are, respectively, the cross-sectional target image and the stereoscopic target image of the coarse segmentation results obtained with soft connected domain analysis. It can be seen that the false positive area enclosed by the box in the left image is removed.
  • With soft connected domain analysis, false positive areas are excluded more accurately and reliably, which directly contributes to the subsequent reasonable extraction of the bounding boxes of the target structure mask positioning information and improves segmentation efficiency.
  • Figure 11 is a schematic flowchart of an exemplary precise segmentation process according to some embodiments of this specification.
  • the process 1100 may be executed by the puncture path planning system 100 (eg, the processing device 130 in the puncture path planning system 100) or the image segmentation device 500 (eg, the precise segmentation module 540).
  • the process 1100 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 5 executes the program or instructions, the process 1100 may be implemented.
  • process 1100 may include the following steps.
  • Step 1110 Perform a preliminary precise segmentation on the target structure to obtain a preliminary precise segmentation result.
  • Preliminary precise segmentation can refer to precise segmentation based on the positioning information of the roughly segmented target structure mask.
  • a preliminary precise segmentation of the target structure can be performed based on the circumscribed rectangular frame positioned by rough segmentation to obtain a preliminary precise segmentation result.
  • a more accurate mask of the target structure can be generated through preliminary accurate segmentation, that is, the preliminary accurate segmentation result includes an accurately segmented target structure mask.
  • the target structure mask obtained through precise segmentation can also be called the second mask.
  • Step 1120 Determine whether the positioning information of the target structure mask is accurate.
  • In step 1120, it can be determined whether the positioning information of the target structure mask obtained by coarse segmentation is accurate, that is, whether the first positioning information determined based on soft connected domain analysis is accurate, thereby determining whether the coarse segmentation is accurate.
  • whether the positioning information of the roughly segmented target structure mask is accurate can be determined based on the positioning information of the initially accurately segmented target structure mask.
  • the second mask can be calculated to obtain the second positioning information (that is, the positioning information of the preliminary precise segmentation result), and the positioning information of the coarse segmentation (the first positioning information) can be compared with the positioning information of the precise segmentation (the second positioning information) to determine whether the first positioning information of the first mask (that is, the coarsely segmented target structure mask) is accurate.
  • the preliminary accurate segmentation result may include the second mask and/or positioning information of the second mask.
  • the circumscribed rectangular frame of the coarsely segmented target structure mask can be compared with the circumscribed rectangular frame of the precisely segmented target structure mask to determine the difference between the two.
  • the circumscribed rectangular frame of the coarsely segmented target structure mask can be compared with the circumscribed rectangular frame of the precisely segmented target structure mask in the six directions of three-dimensional space (that is, the entire circumscribed rectangular frame is a cuboid in three-dimensional space) to determine the difference between the two.
  • the processing device 130 may calculate the degree of coincidence between each side of the circumscribed rectangular box of the coarsely segmented target structure mask (the first mask) and each side of the circumscribed rectangular box of the precisely segmented target structure mask (the second mask), or may calculate the difference between the vertex coordinates of the two circumscribed rectangular boxes.
  • whether the result of the coarse segmentation target structure mask is accurate can be determined based on the difference between the positioning information of the coarse segmentation and the positioning information of the precise segmentation.
  • the positioning information may be a circumscribed frame (such as a circumscribed rectangular frame) of the target structure mask.
  • whether the circumscribed rectangle of the coarsely segmented target structure mask is accurate is determined based on the circumscribed rectangle of the coarsely segmented target structure mask and the circumscribed rectangle of the precisely segmented target structure mask.
  • the difference between the coarse segmentation positioning information and the precise segmentation positioning information may refer to the distance between the closest border lines in the coarse segmentation enclosing rectangular frame and the precise segmentation enclosing rectangular frame.
  • When the positioning information of the coarse segmentation differs significantly from the positioning information of the precise segmentation (that is, the distance between the closest border lines of the coarse segmentation circumscribed rectangular frame and the precise segmentation circumscribed rectangular frame is relatively large), the positioning information of the coarse segmentation is determined to be accurate.
  • When the difference is small (that is, the distance between the closest border lines of the coarse segmentation circumscribed rectangular frame and the precise segmentation circumscribed rectangular frame is small), the positioning information of the coarse segmentation is determined to be inaccurate.
  • the coarse segmentation circumscribed rectangle is obtained by expanding the border lines of the original coarse segmentation that are close to the target structure outward by a certain number of pixels (for example, 15-20 voxels).
  • whether the positioning information of coarse segmentation is accurate may be determined based on the relationship between the distance between the closest border lines in the roughly segmented circumscribed rectangular frame and the precisely segmented circumscribed rectangular frame and a preset threshold. For example, when the distance is less than a preset threshold, it is determined to be inaccurate, and when the distance is greater than the preset threshold, it is determined to be accurate. In some embodiments, in order to ensure the accuracy of judgment, the preset threshold value may be less than or equal to 5 voxels.
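The per-direction check could be sketched as follows. This is a hypothetical helper; the 5-voxel default follows the text, and boxes are written as 2-D (row_min, col_min, row_max, col_max) for brevity, though the patent's case is 3-D with six faces.

```python
def inaccurate_directions(coarse_box, fine_box, thresh=5):
    """Compare the coarse-segmentation circumscribed box (already
    expanded outward by some voxels) with the precise-segmentation box.
    A fine border line lying within `thresh` voxels of the corresponding
    coarse border suggests the mask may extend past the coarse box, so
    that direction is flagged as inaccurate and needs adjustment."""
    names = ('top', 'left', 'bottom', 'right')
    return [name for name, c, f in zip(names, coarse_box, fine_box)
            if abs(c - f) < thresh]

coarse = (0, 0, 100, 100)   # expanded coarse circumscribed box
fine = (20, 15, 80, 98)     # box of the preliminary precise mask
print(inaccurate_directions(coarse, fine))  # -> ['right']
```

An empty result means all directions are accurate, corresponding to the case where the preliminary precise segmentation result is taken directly as the target segmentation result.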
  • If the positioning information is accurate, step 1130 can be entered: use the preliminary precise segmentation result as the target segmentation result.
  • If the positioning information is inaccurate, step 1140 may be performed: determine the target segmentation result of the target structure through an adaptive sliding window method.
  • Figure 12 is a schematic diagram of positioning information determination of an exemplary target structure mask according to some embodiments of this specification.
  • Figures 12(a) and (b) show the target structure mask A obtained by rough segmentation, the surrounding rectangular frame B of the target structure mask A (that is, the positioning information of the target structure mask of the rough segmentation), and The circumscribed rectangular frame C after preliminary precise segmentation based on the roughly segmented circumscribed rectangular frame (that is, the positioning information of the accurately segmented target structure mask).
  • the figure uses a planar rectangular frame within one plane of the three-dimensional circumscribed rectangular frame as an example.
  • In Figure 12(a), the difference between the right border line of the precisely segmented circumscribed rectangular frame C and the corresponding border line of the coarsely segmented circumscribed rectangular frame B is small (the distance between them is small), so it can be judged that the direction corresponding to the right side of the coarse segmentation circumscribed rectangular frame B is inaccurate, and the right border line needs to be adjusted.
  • The upper, lower, and left border lines of circumscribed rectangular frame C differ significantly from the upper, lower, and left border lines of circumscribed rectangular frame B. From this, the directions corresponding to the upper, lower, and left sides of the coarsely segmented circumscribed rectangular frame B can be determined to be accurate. In this case, the positioning information of the coarse segmentation target structure mask is determined to be inaccurate.
  • the right border line can be adjusted through the adaptive sliding window method to determine the target segmentation result of the target structure. For more information, see the description in step 1140.
  • In Figure 12(b), the border lines of the four sides of the precisely segmented circumscribed rectangular frame C differ significantly from the corresponding border lines of the coarsely segmented circumscribed rectangular frame B, so it can be judged that the border lines on all four sides of circumscribed rectangular frame B are accurate, that is, the positioning information of the coarsely segmented target structure mask is accurate.
  • In this case, the preliminary precise segmentation result can be used as the target segmentation result.
  • Step 1130 Use the preliminary accurate segmentation result as the target segmentation result.
  • When the positioning information of the coarse segmentation is accurate, the result of the coarse segmentation is accurate, and the preliminary precise segmentation result obtained based on the positioning information of the coarse segmentation is also accurate. Therefore, the preliminary precise segmentation result can be output as the target segmentation result; that is, only one precise segmentation is performed.
  • Step 1140 Determine the target segmentation result of the target structure through the adaptive sliding window method.
  • the direction in which the positioning information has deviations can be determined as the target direction, and adaptive sliding window calculation is performed in the target direction according to the overlap rate parameter.
  • at least one direction in which the circumscribed rectangular frame is inaccurate can be determined as the target direction, for example, the direction corresponding to the right side of the circumscribed rectangular frame B in Figure 12(a).
  • the overlap rate parameter can refer to the ratio of the overlapping area between the initial circumscribed rectangular frame and the sliding circumscribed rectangular frame to the area of the initial circumscribed rectangular frame.
  • The larger the overlap rate parameter, the shorter the sliding step size of the sliding window operation.
  • The overlap rate parameter can be set smaller to make the sliding window calculation process more concise (that is, fewer sliding steps are required), or set larger to make the results of the sliding window calculation more accurate.
  • the sliding step size for sliding window operation may be calculated according to the current overlap rate parameter.
  • Figure 13 is a schematic diagram of an exemplary determination of the sliding direction according to some embodiments of this specification.
  • Figure 13 shows the sliding window B1 obtained by sliding the roughly divided circumscribed rectangular frame B, in which (a) is a schematic diagram before the sliding operation, and (b) is a schematic diagram after the sliding operation.
  • For example, with an overlap rate parameter of 60% and a side length a in the first direction, the right border line of circumscribed rectangular frame B can slide a step of a×(1−60%) along the first direction.
  • The lower border line of circumscribed rectangular frame B can likewise slide a corresponding step along the second direction. The corresponding sliding window operations are repeated on the right and lower border lines of circumscribed rectangular frame B until circumscribed rectangular frame B is completely accurate, as shown by sliding window B1 in Figure 13(b).
  • The pixel coordinates in the four directions corresponding to the four sides of the precisely segmented circumscribed rectangular frame C are compared one by one with the pixel coordinates in the four directions corresponding to the four border lines of the coarsely segmented circumscribed rectangular frame B. When the difference in pixel coordinates in one direction is less than the coordinate difference threshold (for example, 8 pt), it can be determined that the coarse segmentation circumscribed rectangular frame in Figure 12(a) is inaccurate in that direction.
  • In this example, the direction corresponding to the right side is inaccurate, and the directions corresponding to the top, bottom, and left sides are accurate.
  • The direction corresponding to the right side is therefore determined as the target direction.
  • B1 is the circumscribed rectangular frame (also called sliding window) obtained by sliding the roughly segmented circumscribed rectangular frame B.
  • The sliding window operation is repeated until the sliding window meets the expected accuracy.
  • The border line corresponding to each direction that does not meet the standard is moved in sequence.
  • the sliding step size of each side depends on the overlap rate of B1 and B.
  • the overlap rate may be the ratio of the current overlapping area of the coarsely segmented circumscribed rectangular frame B and the sliding window B1 to the total area.
  • for example, when the overlapping area accounts for 40% of the total area, the current overlap rate is 40%, and so on.
  • the sliding order of the border lines of the roughly divided circumscribed rectangular frame B may be from left to right, from top to bottom, or other feasible order, which is not further limited here.
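The step-size rule above (step = side length × (1 − overlap rate)) can be sketched as follows. The function names are illustrative, and a single 1-D border position is used for brevity instead of a full box.

```python
def sliding_step(side_length, overlap_rate):
    """Sliding step along one axis for the given overlap-rate parameter.
    A larger overlap rate yields a shorter step (more, finer windows)."""
    return side_length * (1.0 - overlap_rate)

def slide_border(border, side_length, overlap_rate, n_steps):
    """Successive positions of a border line slid along its target
    direction, e.g. the right border line of frame B in Figure 13."""
    step = sliding_step(side_length, overlap_rate)
    return [border + step * k for k in range(1, n_steps + 1)]

# side length a = 100, overlap rate 60%: step = 100 * (1 - 0.6)
print(sliding_step(100, 0.6))          # -> 40.0
# right border starts at coordinate 100 and slides twice
print(slide_border(100, 100, 0.6, 2))  # -> [140.0, 180.0]
```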
  • Figure 14 is a schematic diagram of accurate segmentation after an exemplary sliding window according to some embodiments of this specification.
  • After the sliding window operation, the accurate coordinate values of the circumscribed rectangular frame can be obtained; based on the coordinate values and the overlap rate parameter, precise segmentation is performed on each new sliding window, and the precise segmentation results are superimposed with the preliminary precise segmentation result to obtain the final precise segmentation result.
  • the sliding window operation can be performed on the original circumscribed rectangular frame B to obtain the sliding window B1 (the maximum range of the circumscribed rectangular frame after the sliding window operation).
  • Circumscribed rectangular frame B slides the corresponding step along the first direction to obtain sliding window B1-1, and the entire range of sliding window B1-1 is then precisely segmented to obtain the precise segmentation result of sliding window B1-1.
  • B can slide the corresponding step along the second direction to obtain the sliding window B1-2, and then accurately segment the entire range of the sliding window B1-2 to obtain an accurate segmentation of the sliding window B1-2. result.
  • Circumscribed rectangular frame B can also obtain sliding window B1-3 by sliding (for example, B first obtains sliding window B1-2 by sliding, as shown in Figure 14(c), and sliding window B1-2 is then slid to obtain sliding window B1-3); the entire range of sliding window B1-3 is then precisely segmented to obtain the precise segmentation result of sliding window B1-3.
  • the precise segmentation results of sliding window B1-1, sliding window B1-2 and sliding window B1-3 are superimposed with the preliminary precise segmentation results to obtain the final precise segmentation result.
  • Sliding window B1 is the final result obtained by performing successive sliding window operations on the original sliding window B, namely sliding windows B1-1, B1-2 and B1-3.
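  • The sliding-window stepping described above can be sketched as follows (a hypothetical illustration; the function name, the 40-voxel window size and the 40% overlap rate are assumptions for demonstration, not values fixed by the disclosure):

```python
def window_starts(total: int, window: int, overlap: float = 0.4) -> list[int]:
    """Start offsets along one axis so consecutive windows overlap by `overlap`."""
    step = max(1, int(window * (1 - overlap)))
    starts = list(range(0, max(total - window, 0) + 1, step))
    # Make sure the final window still reaches the end of the region.
    if starts[-1] + window < total:
        starts.append(total - window)
    return starts

# A 100-voxel axis covered by 40-voxel windows at 40% overlap:
print(window_starts(100, 40, 0.4))  # [0, 24, 48, 60]
```

The same start list would be generated independently for each sliding direction (first direction, second direction, etc.).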
  • there may be overlapping portions. For example, in Figure 14(d), there may be an intersection between sliding window B1-1 and sliding window B1-2.
  • the intersection may be repeatedly superimposed.
  • the following method can be used to deal with it: for a certain part of the target structure mask A, if the segmentation result of one sliding window is accurate for this part and the segmentation result of the other sliding window is inaccurate, the accurate segmentation result is used as the segmentation result of this part; if the segmentation results of both sliding windows are accurate, the segmentation result of the right sliding window is used as the segmentation result of this part; if neither sliding window's segmentation result is accurate, the segmentation result of the right sliding window is used as the segmentation result of this part, and accurate segmentation continues until the segmentation result is accurate.
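  • A minimal sketch of this overlap rule (the function name and the toy 1-D masks are illustrative assumptions; real masks are 3-D): where two sliding windows intersect, the accurate window's result wins, and the right window wins ties:

```python
import numpy as np

def resolve_overlap(mask_left, mask_right, left_ok, right_ok, overlap):
    """Merge two sliding-window masks; `overlap` marks their intersection."""
    out = (mask_left | mask_right).copy()
    if left_ok and not right_ok:
        out[overlap] = mask_left[overlap]   # only the left window is accurate
    else:
        out[overlap] = mask_right[overlap]  # both accurate (or neither): prefer right
    return out

left = np.array([1, 1, 1, 0], dtype=np.uint8)
right = np.array([0, 0, 1, 1], dtype=np.uint8)
ov = np.array([False, True, True, False])   # intersection of the two windows
merged = resolve_overlap(left, right, True, True, ov)
print(merged.tolist())  # [1, 0, 1, 1]
```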
  • obtaining accurate positioning information based on the adaptive sliding window may be a cyclic process; that is, the same operation as the preliminary precise segmentation may be performed twice or more. For example, after comparing the preliminary precise segmentation border line and the coarse segmentation border line, the updated coordinate values of the precisely segmented circumscribed rectangular frame can be obtained through the adaptive sliding window.
  • the precisely segmented circumscribed rectangular frame is expanded by a certain number of pixels and set as the circumscribed rectangular frame for a new round of rough segmentation (also called the target circumscribed rectangular frame); the new circumscribed rectangular frame (i.e., the target circumscribed rectangular frame) is then accurately segmented again to obtain a new precisely segmented circumscribed rectangular frame, and whether the target circumscribed rectangular frame is accurate is calculated. If it is accurate, the loop ends and the new accurately segmented circumscribed rectangular frame is output as the target segmentation result; otherwise, the loop continues.
  • a deep convolutional neural network model can be used to accurately segment at least one target structure obtained by rough segmentation.
  • the historical target images initially acquired before rough segmentation can be used as training data, and the historical accurate segmentation result data can be used as labels to train a deep convolutional neural network model.
  • historical target images and historical accurate segmentation result data can be obtained from the imaging device 110 , or from the processing device 130 , the terminal device 140 or the storage device 150 .
  • the result data of at least one target structure accurately segmented above can be output, that is, the target segmentation result.
  • post-processing operations may be performed on the target segmentation results before they are output.
  • post-processing operations may include edge smoothing and/or denoising of the images.
  • edge smoothing may include smoothing or blurring to reduce noise or distortion of the image.
  • smoothing processing or blurring processing may adopt the following methods: mean filtering, median filtering, Gaussian filtering, bilateral filtering, etc. or any combination thereof.
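  • The listed smoothing methods can be sketched with `scipy.ndimage` (the kernel sizes below are illustrative assumptions; bilateral filtering is not in `scipy.ndimage` and would require e.g. OpenCV or scikit-image):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64)).astype(np.float32)  # stand-in for a noisy slice

mean_filtered = ndimage.uniform_filter(img, size=3)     # mean filtering
median_filtered = ndimage.median_filter(img, size=3)    # median filtering
gauss_filtered = ndimage.gaussian_filter(img, sigma=1)  # Gaussian filtering

print(mean_filtered.shape, gauss_filtered.shape)  # (64, 64) (64, 64)
```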
  • Figure 15 is a comparative schematic diagram of exemplary segmentation results according to some embodiments of this specification.
  • the upper and lower parts on the left side of the dotted line are, respectively, the cross-sectional target image and the three-dimensional target image obtained using the rough segmentation results of traditional technology, and the upper and lower parts on the right side are, respectively, the cross-sectional target image and the three-dimensional target image obtained using the organ segmentation method provided by the embodiments of the present application.
  • process 1100 is only for example and explanation, and does not limit the scope of application of this specification.
  • various modifications and changes can be made to process 1100 under the guidance of this specification; however, such modifications and changes remain within the scope of this specification.
  • Some embodiments of this specification also provide an image segmentation device, including a processor, and the processor is configured to execute the image segmentation method described in any embodiment.
  • the image segmentation device further includes a display device, and the display device displays the results of the medical image segmentation method executed based on the processor.
  • the image segmentation method provided by the embodiments of this specification: (1) adopts the soft connected domain analysis method in the coarse segmentation stage to accurately retain the target structure area while effectively eliminating false positive areas, which improves the positioning accuracy of the target structure in the coarse positioning stage and directly contributes to the subsequent reasonable extraction of the bounding box positioning information of the target structure mask, thus improving segmentation efficiency; (2) for the unfavorable situation where the rough positioning is inaccurate but not invalid in the coarse segmentation stage, adaptive sliding window calculations and corresponding sliding window operations can complete the missing parts of the positioning area and can automatically plan and execute reasonable sliding window operations, reducing the dependence of the fine segmentation stage on rough positioning results and improving segmentation accuracy without significantly increasing segmentation time and computing resources; (3) when rough positioning fails, the target structure mask is accurately positioned based on the preset positioning coordinates of the target structure, which not only improves segmentation accuracy but also reduces segmentation time and the amount of segmentation calculation, further improving segmentation efficiency; (4) because the overall workflow of target structure segmentation fully takes into account the various unfavorable situations that reduce the accuracy of target structure segmentation, it is suitable for the effective implementation of different types of target structure segmentation tasks, with high segmentation accuracy and robustness.
  • Vessels exist in living bodies, such as blood vessels, the trachea, bile ducts, or ureters. There are often many types of blood vessels in living organisms, and the same type of blood vessel can be divided into multiple subtypes due to differences in structure and function. For example, blood vessels include at least two main types: arteries and veins. In some embodiments, the types of blood vessels in a living body may include subdivided types, for example, pulmonary veins, pulmonary arteries, hepatic veins, hepatic portal veins, hepatic arteries, etc.
  • the embodiments of this specification provide a vascular identification method.
  • a first segmentation model with lower richness but higher accuracy and a second segmentation model with higher richness but no classification are trained; a post-processing algorithm then uses the results of the high-richness model to grow vessels on the results of the low-richness model, fusing the two models and finally obtaining, accurately and effectively, multi-category vessel segmentation results with both high richness and high accuracy.
  • the specific operation of vessel identification will be described in detail below with reference to Figures 16-23.
  • Figure 16 is a block diagram of an exemplary vessel identification device according to some embodiments of the present specification.
  • the vessel identification device 1600 may include a first segmentation module 1610 , a processing module 1620 , a second segmentation module 1630 and a fusion module 1640 .
  • the corresponding functions of the vessel identification device 1600 can be implemented by the processing device 130 or the puncture path planning device 300 (for example, the data preprocessing module 310).
  • the first segmentation module 1610 may be used to obtain a first segmentation result of the target image based on the first segmentation model.
  • the processing module 1620 may be configured to perform skeletonization processing on the first segmentation result to obtain a first vascular skeleton set, where the first vascular skeleton set includes at least one first vascular skeleton of a determined type.
  • the second segmentation module 1630 may be used to obtain a second segmentation result of the target image based on the second segmentation model, where the second segmentation result includes at least one type of undetermined vessel.
  • the fusion module 1640 can be used to fuse the first segmentation result and the second segmentation result to obtain the fusion result. In some embodiments, the fusion module 1640 may also be used to determine the vessel type. Specifically, the fusion module 1640 can perform skeletonization processing on the fusion result to obtain the second vascular skeleton of the undetermined type of vessel; obtain the first vascular skeleton whose minimum spatial distance from the second vascular skeleton is less than the second threshold and use it as the reference vascular skeleton; determine the spatial distance between the second vascular skeleton and the reference vascular skeleton, and determine the two points with the smallest spatial distance as the closest point group; and determine the vessel type of the undetermined vessel based on the closest point group.
  • the vessel identification device 1600 may also include a calculation module, a determination module, and a training module (not shown in the figure).
  • the calculation module can be used to obtain the first vascular skeleton whose minimum spatial distance from the second vascular skeleton is less than the second threshold, and use it as a reference vascular skeleton; and determine the relationship between the second vascular skeleton and the reference vascular skeleton. The two points with the smallest spatial distance are determined as the closest point group.
  • the determining module may be used to determine a vessel type of the vessel to be typed based on the set of closest points.
  • the training module can be used to perform model training, such as training to obtain a machine learning model for determining the second threshold.
  • vessel identification device 1600 is for illustrative purposes only and is not intended to limit the scope of the present application.
  • improvements and changes in various forms and details can be made to the above methods and systems without departing from the principles of this application; however, such changes and modifications do not depart from the scope of the present application.
  • FIG. 17 is a schematic flowchart of an exemplary vessel identification method according to some embodiments of this specification.
  • process 1700 may be performed by puncture path planning system 100 (eg, processing device 130 in puncture path planning system 100) or vessel identification device 1600.
  • the process 1700 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 16 executes the program or instructions, the process 1700 may be implemented.
  • process 1700 may include the following steps.
  • Step 1710 Obtain the first segmentation result of the target image based on the first segmentation model.
  • step 1710 may be performed by the processing device 130 or the first segmentation module 1610.
  • the first segmentation result may include a segmented image of a blood vessel in a specific living body, that is, an image or image obtained after the first segmentation of the target image.
  • the type of at least one vessel in the first segmentation result has been determined.
  • the first segmentation model can more accurately segment the blood vessels in the living body and determine the types of some blood vessels. Using the first segmentation model, the precise and/or subdivided types of blood vessels in the biological body in the target image can be obtained, for example, pulmonary veins, pulmonary arteries, hepatic veins, hepatic portal veins, etc.
  • the first segmentation model may include a multi-class segmentation model, which can classify blood vessels more accurately.
  • the first segmentation model can classify all or part of the vessels in the target image.
  • the first segmentation model can segment and classify vessels within a set level range.
  • the first segmentation model can segment and classify some vessels within the set level range and outside the set level range.
  • the first segmentation model can segment vessels within a set level range.
  • the first segmentation model can segment and/or classify a three-dimensional image (that is, the target image is a three-dimensional image).
  • the type of vessel may include two or more types.
  • the types of blood vessels may include a first type and a second type.
  • the first type and the second type are blood vessel types that appear in the target image at the same time and have different categories.
  • the first type of blood vessel and the second type of blood vessel in the target image usually have similar or close features (eg, contour, gray value, etc.).
  • the first type and the second type may be veins and arteries respectively.
  • the first type and the second type are binary pairs such as (renal vein, ureter) and (celiac portal vein, celiac artery) respectively.
  • the types of blood vessels in the target image of the abdomen or liver region may include hepatic portal vein, hepatic vein, hepatic artery, etc.
  • the first segmentation model can be obtained through training.
  • the first segmentation model may be a machine learning model, and the machine learning model may include but is not limited to a neural network model, a support vector machine model, a k-nearest neighbor model, a decision tree model, or a combination thereof.
  • the neural network model may include but is not limited to one or a combination of CNN, LeNet, GoogLeNet, ImageNet, AlexNet, VGG, ResNet, etc.
  • the first segmentation model may include a CNN model.
  • the processing device 130 can perform model training by increasing the network receptive field, increasing the network depth, and other methods to improve the accuracy of the first segmentation model in classifying blood vessels within a set level range in the living body. For example, methods such as dilated convolution can be used to improve the receptive field of the network.
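  • As an illustration of why dilated convolutions enlarge the receptive field (a sketch assuming stride-1 layers, not the disclosed network architecture):

```python
def receptive_field(kernel: int, dilations: list[int]) -> int:
    """Receptive field of stacked stride-1 convolutions with the given dilations."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d  # each layer adds (kernel - 1) * dilation
    return rf

print(receptive_field(3, [1, 1, 1]))  # three plain 3x3 convs -> 7
print(receptive_field(3, [1, 2, 4]))  # same depth, dilated stack -> 15
```

With the same depth and parameter count, the dilated stack more than doubles the receptive field, which is why it helps classify blood vessels over a larger spatial context.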
  • the input of the first segmentation model is a target image (for example, a three-dimensional image of an organism), and the output is a first segmentation result.
  • the first segmentation result includes segmented images of blood vessels in a specific organism (for example, human blood vessels).
  • the first segmentation result may include a segmented image of the pulmonary artery and the pulmonary vein, or a segmented image of the hepatic artery and the hepatic portal vein, etc.
  • different types of blood vessels in the living body can be distinguished by separate coloring or different grayscale values.
  • the pixels (or voxels) of the artery in (a) are uniformly set to a darker grayscale, and the pixels (or voxels) of the vein in (b) are uniformly set to a lighter grayscale.
  • Step 1720 Skeletonize the first segmentation result to obtain a first vascular skeleton set.
  • step 1720 may be performed by processing device 130 or processing module 1620.
  • Skeletonization is the process of simplifying a vascular image into a center line of unit width (eg, unit pixel width, unit voxel width). Skeletonization processing can preserve the center line, endpoints, intersections, etc. of the original image, thus retaining the connectivity of the original image. Skeletonization can reduce redundant information and retain only the information useful for topology analysis, shape analysis, etc. Skeletonization enables objects to be represented by simpler data structures, simplifying data analysis and reducing the requirements for data storage and transmission.
  • the skeletonization processing method may include parallel fast thinning algorithm, K3M algorithm, etc.
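  • A toy illustration of what skeletonization produces (this midpoint-per-column reduction is a deliberately simplified stand-in; a real implementation would use the parallel fast thinning or K3M algorithms mentioned above):

```python
import numpy as np

def centerline(mask: np.ndarray) -> np.ndarray:
    """Reduce a binary vessel mask to a one-pixel-wide centerline,
    taking the midpoint of the vessel pixels in each column."""
    skel = np.zeros_like(mask)
    for col in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            skel[(rows[0] + rows[-1]) // 2, col] = True
    return skel

mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 2:18] = True  # a thick horizontal "vessel"
skel = centerline(mask)
print(int(mask.sum()), int(skel.sum()))  # 64 16
```

The skeleton keeps the vessel's extent and connectivity while discarding its thickness, which is what makes the later skeleton-to-skeleton distance comparisons cheap.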
  • the type of at least one vessel in the first segmentation result has been determined.
  • the first segmentation result is subjected to skeletonization processing, and the skeletons in the obtained first vascular skeleton set correspond to vessels whose type has been determined; that is, the first vascular skeleton set includes at least one first vascular skeleton of a determined type.
  • Step 1730 Obtain the second segmentation result of the target image based on the second segmentation model.
  • step 1730 may be performed by the processing device 130 or the second segmentation module 1630.
  • the second segmentation result may include segmented images of blood vessels in the living body, that is, segmented images or images obtained after performing the second segmentation on the target image.
  • the second segmentation result includes at least one vessel of undetermined type.
  • the type of the undetermined vessel means that the type of the vessel is uncertain; the undetermined vessel may be of any of the above types. For example, it may be temporarily unclear whether a vessel in the lungs is a vein or an artery, whether a vessel in the kidneys is a renal vein or a ureter, or whether a vessel in the liver is a hepatic vein, hepatic portal vein, or hepatic artery.
  • the first type, the second type, and the third type are triads such as (hepatic artery, hepatic vein, and hepatic portal vein) respectively.
  • at least one vessel in the second segmentation result is not included in the first segmentation result.
  • the vessels in the second segmentation result that are not included in the first segmentation result are vessels of undetermined type.
  • the second segmentation model can be a relatively rich segmentation model of blood vessels in the living body, so as to segment smaller blood vessels as much as possible.
  • images including deep branches and/or small blood vessels can be obtained.
  • the second segmentation model can segment images including blood vessels of levels 1-6 or even smaller.
  • the second segmentation model may include a single-class segmentation model that is capable of segmenting more vessels.
  • the second segmentation model can segment all or part of the vessels in the target image.
  • the second segmentation model can be obtained by training a machine learning model.
  • Machine learning models may include, but are not limited to, neural network models, support vector machine models, k-nearest neighbor models, decision tree models, or a combination of one or more.
  • the second segmentation model may include a CNN model.
  • the number of downsampling operations can be reduced to avoid the loss of detail caused by excessive downsampling, so that the second segmentation model can identify finer vessels.
  • the input of the second segmentation model is the target image
  • the output is the second segmentation result.
  • the edges of the blood vessels in the second segmentation result have been marked, and the blood vessels in the output image are uniformly colored.
  • the edges of the vessels have been marked, and the pixels (or voxels) of the vessels in the image are filled with the same gray value.
  • the type of all or part of the vessels in the segmented image output by the second segmentation model is uncertain.
  • deep branches and/or small vessels can be obtained using the second segmentation model.
  • the second segmentation model has higher richness.
  • the range of the first segmentation level of the first segmentation model is smaller than the range of the second segmentation level of the second segmentation model.
  • the second segmentation model can segment a wider range of blood vessels than the first segmentation model.
  • the range of the second segmentation level of the second segmentation model intersects with the range of the first segmentation level of the first segmentation model, but the second segmentation model can segment finer vessels than the first segmentation model.
  • the range of the first segmentation level of the first segmentation model may overlap with the range of the second segmentation level of the second segmentation model.
  • the second segmentation model is superior to the first segmentation model in richness and/or recognition when segmenting relatively fine vessels.
  • the first segmentation result includes vessels of levels 1-4
  • the second segmentation result includes vessels of levels 1-6 or even smaller
  • the 5-6 level or even smaller vessels in the second segmentation result may not be included in the first segmentation result.
  • the higher the level value, the more difficult it is to identify the corresponding blood vessels.
  • blood vessels in level 5 are thinner than blood vessels in level 4, making them more difficult to identify.
  • Step 1740 Fusion of the first segmentation result and the second segmentation result to obtain the fusion result.
  • step 1740 may be performed by processing device 130 or fusion module 1640.
  • the processing device 130 may fuse based on the information of the first segmentation result and the second segmentation result to obtain the fusion result.
  • the fusion result may be an image/image containing the vessel in the target image and the type of all or part of the vessel.
  • a union of the first segmentation result and the second segmentation result may be obtained, and a fusion result may be obtained based on the union and the first segmentation result.
  • the processing device 130 may calculate and process the union of the first segmentation result and the second segmentation result, then remove the first segmentation result set from the processed union, and use the resulting difference set as the fusion result.
  • the difference set may be a set of remaining vessels of undetermined type after removing the labeled vessels in the first segmentation result from the second segmentation result.
  • for example, if the first segmentation result labels the categories of blood vessels of levels 1-4, and the second segmentation result includes blood vessels of levels 1-6 or even smaller, then the fusion result can be the set composed of blood vessels of levels 5-6 or even smaller whose types are not yet clear.
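  • The union/difference fusion described above can be sketched as follows (toy 1-D boolean arrays stand in for 3-D vessel masks):

```python
import numpy as np

first = np.array([1, 1, 0, 0, 0], dtype=bool)   # accurate, labeled vessels
second = np.array([1, 1, 1, 1, 0], dtype=bool)  # richer, unlabeled vessels

union = first | second
undetermined = union & ~first    # difference set: vessels of undetermined type
print(undetermined.astype(int))  # [0 0 1 1 0]
```

The difference set contains exactly the extra (finer) vessels contributed by the second model, whose types are then assigned by the skeleton-based steps below.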
  • the processing device 130 may fuse the first segmentation result and the second segmentation result based on multiple fusion methods to obtain the fusion result.
  • the fusion method may include principal component transform fusion method, product transform fusion method, wavelet transform fusion method, Laplace transform fusion method, etc. or any combination thereof.
  • the second segmentation result contains more blood vessels than the first segmentation result, so merging it with the first segmentation result is equivalent to a blood vessel growth process. Because the first segmentation result is more accurate and the second segmentation result is richer, fusion makes it possible to obtain vessels with sufficient richness and accuracy, as well as category information for all or part of the vessels, thereby improving the accuracy and richness of the vessel segmentation results.
  • the type of the vessel to be typed may be determined based on the fusion results.
  • the type of the vessel to be determined can be determined based on connectivity relationships, spatial relationships, etc. See the descriptions in Figures 19 and 20 for more details.
  • Figure 18 is a schematic diagram of exemplary vessel identification results according to some embodiments of the present specification.
  • the type of vessel in the first segmentation result shown in (a) has been determined.
  • the black and gray colored vessels 1810 are arteries, and the dark gray colored vessels 1820 are veins;
  • the second segmentation result shown in (b) has vessels marked, but the specific vessel types are not distinguished, and it includes a large number of small vessels that are not included in the first segmentation result.
  • two or more types of vessels whose grayscale values are close and that are easily misclassified can be identified, so as to obtain identification results of the vessels in the living body that are both accurate and rich.
  • the embodiments of this specification can identify the hepatic portal vein, hepatic vein, hepatic artery, etc. at levels 5 to 6.
  • the target can be determined based on the fusion results. In some embodiments, the target may be determined based on the type of vessel in the fusion result.
  • process 1700 is only for example and illustration, and does not limit the scope of application of this specification.
  • various modifications and changes can be made to process 1700 under the guidance of this specification; however, such modifications and changes remain within the scope of this specification.
  • Figure 19 is a flow diagram of an exemplary vessel type determination according to some embodiments of the present specification.
  • process 1900 may be performed by puncture path planning system 100 (eg, processing device 130 in puncture path planning system 100) or vessel identification device 1600.
  • the process 1900 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 16 executes the program or instructions, the process 1900 may be implemented.
  • process 1900 may include the following steps.
  • Step 1910 Skeletonize the fusion result to obtain a second vessel skeleton of the vessel type to be determined.
  • step 1910 may be performed by processing device 130 or vessel identification device 1600.
  • the fusion result may be a set of vessels of an undetermined type.
  • the undetermined skeleton can be obtained, that is, the second vascular skeleton of the undetermined type of vessel.
  • Step 1920 Obtain the first vascular skeleton whose minimum spatial distance from the second vascular skeleton is less than the second threshold and use it as a reference vascular skeleton.
  • step 1920 may be performed by processing device 130 or vessel identification device 1600.
  • the vessel type of the undetermined vessel may be determined based on the connectivity relationship between the second vascular skeleton of the undetermined vessel and the first vascular skeletons in the first vascular skeleton set. Specifically, if there is a first vascular skeleton in the first vascular skeleton set (such as skeleton K2, whose type is determined) that is connected to the second vascular skeleton (such as the undetermined skeleton K1), then the type of the second vascular skeleton is the same as the type of that first vascular skeleton. From this, the vessel type of the second vascular skeleton can be determined.
  • for example, if the connected first vascular skeleton corresponds to a vein, the blood vessel corresponding to the skeleton to be determined is also a vein.
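  • A minimal sketch of this connectivity rule using `scipy.ndimage.label` (toy 1-D skeleton masks stand in for 3-D voxel skeletons; the variable names are illustrative):

```python
import numpy as np
from scipy import ndimage

typed = np.array([1, 1, 0, 0, 0, 0], dtype=bool)         # known vein skeleton
undetermined = np.array([0, 0, 1, 1, 0, 0], dtype=bool)  # type to be determined

labels, n = ndimage.label(typed | undetermined)
# Index 2 (undetermined skeleton) falls in the same connected component
# as index 0 (vein skeleton), so the undetermined vessel inherits the
# vein type.
is_vein = labels[2] == labels[0]
print(is_vein)  # True
```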
  • for each second vascular skeleton (for example, a certain undetermined vascular skeleton), the first vascular skeletons in the first vascular skeleton set whose minimum spatial distance to the second vascular skeleton is less than the second threshold can be obtained and used as reference vascular skeletons.
  • One or more reference vascular skeletons form a reference vascular skeleton set.
  • the vessels in the reference vessel skeleton set are the vessels most closely related to the vessel to be determined.
  • the second threshold can determine the range of the reference vascular skeleton, and its value affects the final recognition effect.
  • the second threshold, used as a comparison parameter for the spatial distance, may be expressed in different physical quantities.
  • the second threshold may be a physical quantity that specifically represents the length, such as 10 mm.
  • the spatial distance may be calculated based on conversion of voxel points in the image information. In this way, the actual distance value can be converted into the number of voxel points in the image, and the second threshold value is expressed by the number of voxel points.
  • for example, the second threshold value may be 5.
  • the actual distance value can be converted into the number of pixels, and the number of pixels is determined as the second threshold. For example, if the actual distance value is converted into 5 pixels, the second threshold may be 5.
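  • Converting a physical distance threshold into a voxel (or pixel) count can be sketched as follows (the 10 mm threshold and 2 mm voxel spacing are illustrative assumptions):

```python
def mm_to_voxels(distance_mm: float, spacing_mm: float) -> int:
    """Express a physical distance threshold as a voxel count
    given the image's voxel spacing along that axis."""
    return round(distance_mm / spacing_mm)

print(mm_to_voxels(10.0, 2.0))  # a 10 mm threshold becomes 5 voxels
```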
  • the second threshold may be obtained based on experience or needs. In some embodiments, the second threshold may be user-defined. In some embodiments, the second threshold may be obtained based on the part of the biological body corresponding to the target image. In some embodiments, the second threshold may vary based on the level of the type of vessel to be determined.
  • the second threshold may be obtained through machine learning methods. For example, by constructing a machine learning model, based on the training data of the parts of different organisms, the optimized second threshold corresponding to the parts of the organism is obtained through machine learning. In practical applications, when identifying the part, the corresponding second threshold obtained after optimization training is used.
  • Machine learning models may include, but are not limited to, neural network models, support vector machine models, k-nearest neighbor models, decision tree models, or a combination of one or more.
  • the second threshold can be obtained through machine learning based on medical images and type judgment results of corresponding parts of similar organisms. For example, medical images of corresponding parts of the same type of organisms can be used as samples, the type judgment results can be used as labels, and through training, the second threshold for that type of organism can be obtained.
  • machine training can take at least one of the organism's gender, age, region, and race as a parameter, and obtain through training a second threshold related to these parameters.
  • the second threshold may be 5 for women over 50 years old, and 6 for women under 50 years old.
  • Obtaining the second threshold through multiple methods can reduce manual operations and can be applied to a variety of scenarios to improve universality.
  • Step 1930 Determine the spatial distance between the second vascular skeleton and the reference vascular skeleton, and determine the two points with the smallest spatial distance as the closest point group.
  • step 1930 may be performed by processing device 130 or vessel identification device 1600.
• the closest point group may refer to a point group consisting of the two points with the smallest spatial distance between the second vascular skeleton (i.e., the undetermined skeleton) of the vessel whose type is to be determined and the reference vascular skeleton.
• In Figure 21, (a) shows the reconstructed local three-dimensional image, and (b) is the skeleton simulation diagram corresponding to (a).
  • the two vessels in Figure 21(a) are on the same plane in space (the same applies to vessels that are not on the same plane);
  • the solid line in (b) is the skeleton, and the dotted line is the shortest distance.
• the two points with the smallest spatial distance between the undetermined skeleton 2110 and the reference vascular skeleton 2120 can be determined as the closest point group.
  • the spatial distance between the second vascular skeleton and the reference vascular skeleton can be determined, and the two points with the smallest spatial distance are determined as the closest point group.
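The closest point group described above can be sketched as a brute-force search over the two skeletons' voxel coordinates. This is an illustrative sketch, not the specification's implementation; the function name and the (N, 3) point-array representation are assumptions.

```python
import numpy as np

def closest_point_group(skeleton_a, skeleton_b):
    """Return (point_a, point_b, distance): the pair of points with the
    smallest spatial distance between two skeletons, each given as an
    (N, 3) array of voxel coordinates."""
    a = np.asarray(skeleton_a, dtype=float)
    b = np.asarray(skeleton_b, dtype=float)
    # Pairwise Euclidean distances between every point of a and every point of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return a[i], b[j], d[i, j]
```

The minimum distance returned here is also what would be compared against the second threshold mentioned earlier.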
  • Step 1940 Determine the vessel type of the vessel whose type is to be determined based on the closest point group.
  • step 1940 may be performed by processing device 130 or vessel identification device 1600.
  • the vessel type of the vessel whose type is to be determined may be determined based on the position of the closest point group.
• the candidate vascular skeleton can be determined based on the closest point group, and the vessel type of the vessel whose type is to be determined can then be determined based on the candidate vascular skeleton. For example, a generalized distance between the second vascular skeleton and each vascular skeleton in the candidate vascular skeleton may be determined, and the vessel type of the second vascular skeleton determined based on the generalized distance.
• Determining the type of the undetermined vessel based on the closest point group is described in more detail in FIG. 20.
  • the type of the type-undetermined vessel may be determined based on other relationships between the second vessel skeleton of the type-undetermined vessel and the reference vessel skeleton set.
  • the vascular type of the second vascular skeleton may be determined based on the spatial relationship, topological relationship, etc. between the second vascular skeleton and the reference vascular skeleton in the reference vascular skeleton set.
  • the vessel type of the second vascular skeleton may be determined based on the distance and angle between the second vascular skeleton and the reference vascular skeleton of the vessel of the undetermined type.
• It should be noted that process 1900 is only for example and explanation, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to process 1900 under the guidance of this specification; such modifications and changes remain within the scope of this specification.
  • process 2000 may be performed by puncture path planning system 100 (eg, processing device 130 in puncture path planning system 100) or vessel identification device 1600.
  • the process 2000 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 16 executes the program or instructions, the process 2000 may be implemented.
• In step 2010, it is determined whether the reference vascular skeleton set contains only a single reference vascular skeleton. If so, step 2020 is executed; otherwise, step 2030 is executed.
  • Step 2020 Determine the vessel type of the second vessel skeleton based on the position of the closest point group.
• the processing device 130 may determine the vessel type of the second vascular skeleton based on the position of the closest point group between the second vascular skeleton of the vessel whose type is to be determined and the reference vascular skeleton.
  • the vessel type of the second vessel skeleton may be determined based on the positional relationship between the position of the closest point group and the end point of the skeleton.
  • the endpoint of a skeleton can be a point that has only one adjacent point on the skeleton.
• for example, whether a point of the closest point group (for example, point AAA) is near an endpoint can be judged by whether its distance to the endpoint is within a preset value n1.
• the preset value n1 is used as the comparison parameter for the spatial distance and can be expressed in different physical quantities.
  • the preset value n1 may be a physical quantity that specifically represents the length, such as 5 mm.
• the spatial distance may be calculated based on a conversion of voxel points in the image information. For example, if the actual distance value is converted into 5 voxel points, the preset value n1 can be 5. In some embodiments, if the three-dimensional image projection angles are consistent, the actual distance value can be converted into a number of pixels, and that number of pixels represents the preset value n1. For example, if the actual distance value is converted into 5 pixels, the preset value n1 is 5.
• the preset value n1 can be obtained based on experience or needs. In some embodiments, the preset value n1 can be customized by the user. In some embodiments, the preset value n1 may vary based on the level of the vessel whose type is to be determined. For example, the closer the vessel is to the terminal branches, the smaller the preset value n1; the closer the vessel is to the trunk, the larger the preset value n1. In some embodiments, the preset value n1 is related to the thickness of the vessel whose type is to be determined. For example, the thinner the blood vessel, the smaller the preset value n1; the thicker the blood vessel, the larger the preset value n1.
• the preset value n1 can be obtained through a machine learning method. For example, a machine learning model may be constructed and trained on data from different parts of organisms to obtain an optimized preset value n1 corresponding to each part. In practical applications, when a part is identified, the corresponding optimized preset value n1 is used.
  • Machine learning models may include, but are not limited to, neural network models, support vector machine models, k-nearest neighbor models, decision tree models, or a combination of one or more.
• the preset value n1 may be learned from medical images and type judgment results of corresponding parts of similar organisms. For example, medical images of the corresponding part of the same type of organism can be used as samples and the type judgment results as labels; through training, the preset value n1 for that type of organism is obtained.
• For example, the skeleton where point AAA is located is the undetermined skeleton 2110, and the skeleton where point CCC is located is the reference vascular skeleton 2120. If the distance between point AAA in the closest point group and the endpoint of skeleton 2110 is 0 pixels (within n1 pixels), and the distance between point CCC and the endpoint of skeleton 2120 is 0 pixels (within n1 pixels), the vessel of the undetermined skeleton 2110 and the vessel of the reference vascular skeleton 2120 are considered to be of the same type.
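The endpoint-proximity rule above (both members of the closest point group lying within n1 of an endpoint of their own skeleton) might be sketched as follows. The ordered-polyline representation of a skeleton and the function names are assumptions; a real skeleton graph would detect endpoints as points with exactly one adjacent point, as the text describes.

```python
import numpy as np

def near_endpoint(skeleton, point, n1):
    """True if `point` lies within n1 voxels of either endpoint of an
    ordered skeleton polyline (assumed ordered, so its endpoints are
    the first and last points)."""
    skeleton = np.asarray(skeleton, dtype=float)
    ends = skeleton[[0, -1]]
    return bool(np.min(np.linalg.norm(ends - point, axis=1)) <= n1)

def same_vessel_type(undetermined, reference, n1=5):
    """Judge the two skeletons to be the same vessel type when both
    members of the closest point group sit near an endpoint of their
    respective skeletons (the rule illustrated by Figure 21)."""
    u = np.asarray(undetermined, dtype=float)
    r = np.asarray(reference, dtype=float)
    d = np.linalg.norm(u[:, None, :] - r[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return near_endpoint(u, u[i], n1) and near_endpoint(r, r[j], n1)
```

In the Figure 21(h)-(i) case, where both closest points fall in the middle of their skeletons, this check returns False, matching the "not the same type" conclusion.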
• In Figure 21, (c) is the reconstructed local three-dimensional image from a bird's-eye view, (d) is the skeleton simulation diagram corresponding to the same perspective as (c), and (e) is the simulation diagram of the vascular skeleton from the side view corresponding to (c).
  • the two blood vessels in Figure 21(c) are on different planes in space (the same applies to blood vessels on the same plane), and the minimum spatial distance between the two blood vessels is less than the second threshold.
• In (d), the skeleton where point AAA' is located is the dark vascular skeleton 2140, and the skeleton where point CCC' is located is the light vascular skeleton 2130; point AAA' blocks point CCC', that is, the line connecting CCC' and AAA' is perpendicular to the paper.
• In (e), the dotted line is the distance from AAA' to CCC'.
• For the closest point group (AAA' and CCC'), if the distance between AAA' and the endpoint of skeleton 2140 is 0 pixels (within n1 pixels), and the distance between CCC' and the endpoint of skeleton 2130 is 0 pixels (within n1 pixels), the vessel corresponding to skeleton 2130 and the vessel corresponding to skeleton 2140 are considered to be of the same type.
• In Figure 21, (f) is the reconstructed local three-dimensional image from a bird's-eye view, (g) is the skeleton simulation diagram corresponding to (f) with the same viewing angle, (h) is the partial three-dimensional image from the side view of (f), and (i) is the skeleton simulation diagram corresponding to (h) with the same viewing angle.
• Assume the two vessels in Figure 21 (f) and (h) are on different planes in space (the same applies to vessels on the same plane). In (g), the skeleton where point AAA" is located is the dark vascular skeleton 2150, and the skeleton where point CCC" is located is the light-colored vascular skeleton 2160; point AAA" blocks point CCC", that is, the line connecting point CCC" and point AAA" is perpendicular to the paper surface. In (i), the dotted line is the distance from point AAA" to point CCC".
• For the closest point group (AAA" and CCC"), both AAA" and CCC" are in the middle of their respective skeletons rather than near the endpoints. In this case, the two vessels corresponding to skeleton 2150 and skeleton 2160 are considered not to be of the same type.
• Step 2030 Determine a candidate vascular skeleton based on the closest point group, and determine the vessel type of the second vascular skeleton based on the candidate vascular skeleton.
• When the reference vascular skeleton set includes more than one vascular skeleton, that is, it contains more than one reference vascular skeleton, the vessel type of the second vascular skeleton can be determined based on the spatial relationship between the reference vascular skeletons in the set and the second vascular skeleton of the vessel whose type is to be determined.
• candidate vascular skeletons can be determined from the reference vascular skeleton set based on the closest point group; that is, only the vascular skeletons suspected to be of the same type as the vessel whose type is to be determined are retained.
• Specifically, based on the closest point group, it can be determined whether each reference vascular skeleton is suspected to be the same type of vessel as the second vascular skeleton; if the second vascular skeleton is connected to a reference vascular skeleton, the two are suspected to be of the same category, and that reference vascular skeleton is determined as a candidate vascular skeleton.
• the vessel type of the candidate vascular skeleton (that is, the reference vascular skeleton suspected to be of the same category as the second vascular skeleton) can be determined as the vessel type of the corresponding vessel whose type is to be determined. If the candidate vascular skeleton contains multiple vascular skeletons, and these vascular skeletons are all of the same vessel type, then that vessel type can be determined as the vessel type of the corresponding vessel to be determined.
• In some embodiments, the generalized distance between the second vascular skeleton and the candidate vascular skeleton can be determined, and the vessel type of the vessel whose type is to be determined is then determined based on the generalized distance.
  • Generalized distance can refer to a physical quantity that reflects the proximity between skeletons (for example, distance proximity, direction proximity).
  • the generalized distance can be obtained based on the minimum spatial distance and the generalized angle.
• the generalized angle can refer to a physical quantity that reflects the directional proximity between skeletons, for example, the angles α and β in Figure 22(b).
• the generalized angle may be obtained based on the closest point group of the vessels. Specifically, a point in the closest point group can be used as the tangent point, a tangent line to the skeleton where the point is located can be drawn, and the angle between the tangent lines can be determined as the generalized angle.
• For example, the candidate vascular skeleton corresponding to the second vascular skeleton 2210 includes two items: the reference vascular skeleton 2220 and the reference vascular skeleton 2230. Then, for the closest point groups (AAA1 and CCC) and (AAA2 and CCC), point CCC is used as the tangent point to draw the tangent line of the second vascular skeleton 2210 where it is located, AAA1 is used as the tangent point to draw the tangent line of the reference vascular skeleton 2220 where it is located, and AAA2 is used as the tangent point to draw the tangent line of the reference vascular skeleton 2230 where it is located.
• the angle (for example, α, β) between the tangents corresponding to each group of nearest points is determined as the generalized angle.
• the bifurcation point can be used as the tangent point: tangent lines to each skeleton branch can be drawn, the midline of these tangent lines can be found, and the midline can be used as the tangent to the skeleton at the bifurcation point.
• the generalized angle can also be obtained based on other methods. For example, a fitted straight line can be made for each skeleton, and the angle between the fitted lines used as the generalized angle.
• Figure 22(a)-(b) shows the method of obtaining the generalized distance based on the spatial distance and the generalized angle, where (a) is the reconstructed local three-dimensional image, and (b) is the corresponding skeleton simulation diagram.
• Assume the three blood vessels in Figure 22(a) are on the same plane in space (the same applies to blood vessels on different planes), and the reference vascular skeleton 2220 and the reference vascular skeleton 2230 are suspected to be of the same type as the second vascular skeleton 2210 (i.e., the undetermined vascular skeleton); that is, the candidate vascular skeleton contains two items, and the closest point groups of the two reference vascular skeletons with the second vascular skeleton 2210 are (AAA1 and CCC) and (AAA2 and CCC) respectively.
• the processing device 130 may determine the type of the reference vascular skeleton with the smallest score as the vessel type of the second vascular skeleton 2210. For example, if score S1 is smaller, the vessel type of the second vascular skeleton 2210 is consistent with that of the reference vascular skeleton 2220.
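One possible reading of the generalized-distance scoring is a weighted combination of the minimum spatial distance and the tangent angle at the closest point group; the candidate with the smallest score wins. The weights, function names, and tangent-from-neighbours approximation below are illustrative assumptions, not values from the specification.

```python
import numpy as np

def tangent_at(skeleton, idx):
    """Approximate the tangent direction at skeleton[idx] from its
    neighbouring points on the ordered polyline."""
    s = np.asarray(skeleton, dtype=float)
    lo, hi = max(idx - 1, 0), min(idx + 1, len(s) - 1)
    t = s[hi] - s[lo]
    return t / np.linalg.norm(t)

def generalized_distance(undetermined, candidate, w_dist=1.0, w_angle=1.0):
    """Score = w_dist * (minimum spatial distance) + w_angle *
    (generalized angle, in radians, between tangents at the closest
    point group).  Smaller scores mean closer skeletons."""
    u = np.asarray(undetermined, dtype=float)
    c = np.asarray(candidate, dtype=float)
    d = np.linalg.norm(u[:, None, :] - c[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    cos = abs(float(np.dot(tangent_at(u, i), tangent_at(c, j))))
    angle = np.arccos(np.clip(cos, 0.0, 1.0))  # 0 when tangents are parallel
    return w_dist * d[i, j] + w_angle * angle
```

With this score, a candidate skeleton that is both nearer and more nearly parallel to the undetermined skeleton scores lower and would be selected as the matching type.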
  • Determining the type of blood vessels in an organism through connectivity relationships, closest point groups and generalized distances can improve recognition accuracy.
• It should be noted that process 2000 is only for example and illustration, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to process 2000 under the guidance of this description; however, such modifications and changes remain within the scope of this specification.
  • Figure 23 is a schematic diagram of exemplary model training according to some embodiments of the present specification.
  • process 2300 may be performed by puncture path planning system 100 (eg, processing device 130 in puncture path planning system 100) or vessel identification device 1600 (eg, training module).
  • the process 2300 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 16 executes the program or instructions, the process 2300 may be implemented.
  • an initial model 2310 can be trained based on a large number of identified training samples to update the parameters of the initial model to obtain a trained model 2320.
  • the initial model 2310 may include an initial first segmentation model and/or an initial second segmentation model, and accordingly, the trained model 2320 may include a first segmentation model and/or a second segmentation model.
  • the initial first segmentation model may be trained based on a large number of first training samples to update parameters of the initial first segmentation model to obtain the first segmentation model.
  • the first training sample may be input into the initial first segmentation model, and the parameters of the initial first segmentation model may be updated iteratively through training.
  • the first training sample may include historical target images used to train the first segmentation model.
  • the historical target image may include historical three-dimensional medical images.
  • the sample target image in the first training sample can be used as the input of the training model, and the vessel type of the vessel in the sample target image is used as a label.
  • the vascular types include at least the first type and the second type, and may also include the third type or even more.
  • vascular types include celiac portal vein and celiac artery.
  • vascular types include hepatic portal vein, hepatic vein, and hepatic artery.
• the first type of vessels in the sample target image can be marked with a first gray value, the second type of vessels with a second gray value, the third type of vessels with a third gray value, and so on. It is worth noting that the above labels only include the vessel type of the vessels in the sample target image, not the level of the vessels.
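The gray-value labeling scheme above might be produced as follows; the function name and the concrete gray values are placeholders for illustration, not values from the specification.

```python
import numpy as np

def build_type_label(shape, type_masks, gray_values):
    """Build a training label image by painting each vessel type's
    binary mask with its own gray value (e.g. first type=1, second
    type=2, third type=3 — placeholder values)."""
    label = np.zeros(shape, dtype=np.uint8)
    for mask, gray in zip(type_masks, gray_values):
        label[np.asarray(mask, dtype=bool)] = gray
    return label
```

Note that the label encodes only the vessel type, matching the text: the level of the vessel is not part of the annotation.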
  • the first training sample may only calibrate the types of vessels that meet the conditions.
  • the conditions may include a preset range of contrast of vessels in the image, a preset range of vessel levels, etc., or any combination thereof.
  • this condition can be set based on experience or needs. For example, different types of organisms, different parts, organs, tissues, etc. can correspond to different conditions.
  • this condition may be set by the user.
  • the condition may be that the level of the vessel is less than a set level.
  • the level of the vessel may refer to the relative relationship between the vessel and the main vessel. For example, the fewer branches from the main vessel to the vessel, the smaller the level of the vessel.
  • the thoracic aorta is a level 1 vessel
  • the main pulmonary arteries on both sides are level 2 vessels
  • the lobar arteries are level 3 vessels
  • the segmental pulmonary arteries are level 4 vessels
• the subsegmental pulmonary arteries are level 5 vessels
• the sub-subsegmental pulmonary arteries are level 6 vessels, etc.
  • the main hepatic portal vein is a first-level blood vessel
  • the left/right branch of the hepatic portal vein is a second-level blood vessel
  • the hepatic lobe portal vein is a third-level blood vessel
  • the hepatic segmental portal vein is a fourth-level blood vessel
  • the hepatic subsegmental portal vein is a fifth-level blood vessel.
  • the sub-subsegmental portal vein of the liver is a level 6 vessel.
  • the main hepatic vein is a first-level blood vessel
  • the left/right branches of the hepatic vein are second-level blood vessels
  • the hepatic lobar veins are third-level blood vessels
  • the hepatic segmental veins are level 4 blood vessels
  • the hepatic subsegmental veins are level 5 blood vessels.
• the hepatic sub-subsegmental veins are level 6 blood vessels.
  • the main hepatic artery is a primary blood vessel
  • the left/right branch of the hepatic artery is a secondary blood vessel
  • the hepatic lobar artery is a third-level blood vessel
  • the segmental hepatic artery is a fourth-level blood vessel.
• the level of vessels may reflect the richness of the image or detection results. For example, the greater the number of levels, the richer the result: detection results containing vessels with a maximum level of 6 are richer than detection results containing vessels with a maximum level of 4.
  • the set level may be a preset level of the vessel, for example, level 5.
• the set level can be used to distinguish the vessels that need to be marked (for example, blood vessels below level 5) from the vessels that do not need to be marked (for example, blood vessels at or above level 5).
  • the setting level can be set based on needs and/or experience. In some embodiments, the setting level may be set by the user.
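The set-level condition described above can be sketched as a simple filter over annotated vessels; the dictionary representation of a vessel and the function name are assumptions made for illustration.

```python
def vessels_to_label(vessels, set_level=5):
    """Keep only the vessels whose level is below the set level, i.e.
    the ones whose type must be annotated in the first training sample
    (level 5 is the example threshold used in the text)."""
    return [v for v in vessels if v["level"] < set_level]
```

A vessel list annotated this way would label, say, the main portal vein (level 1) and lobar branches (level 3) but skip subsegmental branches (level 5 and beyond).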
  • an initial second segmentation model may be trained based on a large number of second training samples to update parameters of the initial second segmentation model to obtain the second segmentation model.
  • the second training sample can be input into the initial second segmentation model, and the parameters of the initial second segmentation model are iteratively updated through training.
  • the second training sample may refer to a sample target image used for training the second segmentation model.
  • the sample target image may include historical three-dimensional image data.
  • the sample target image in the second training sample can be used as the input of the training model, and the blood vessels in the sample target image are used as labels, for example, the outline of the blood vessels in the sample target image is circled. It is worth noting that the above label only includes vessels (eg, blood vessels) and does not include the type of vessel (eg, hepatic portal vein, hepatic vein, hepatic artery, etc.).
• In some embodiments, the sample CT image data can be processed by adjusting the window width (the CT value range displayed on the CT image), the window level (the center value of the CT value range), etc., to increase the grayscale difference between the structures in the image and/or enhance the contrast of small vessels, so as to make the annotation results of the first training sample and/or the second training sample more accurate (for example, covering as many small blood vessels as possible, so that the second training sample covers more levels of blood vessels).
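The window width / window level adjustment mentioned above is the standard CT windowing transform: values inside the window are mapped linearly to the display range and values outside are clipped. A minimal sketch (the function name and 8-bit output range are assumptions):

```python
import numpy as np

def apply_window(ct, window_level, window_width):
    """Map CT values inside [level - width/2, level + width/2] linearly
    to [0, 255], clipping values outside the window."""
    lo = window_level - window_width / 2.0
    hi = window_level + window_width / 2.0
    ct = np.clip(np.asarray(ct, dtype=float), lo, hi)
    return ((ct - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Narrowing the window width around the vessel intensity range increases the grayscale contrast of small vessels, which is exactly the effect the text describes for improving annotation.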
  • the labels of the first training sample and/or the second training sample may be added manually or automatically, or may be added in other ways, which is not limited in this embodiment.
  • the first training sample only identifies the types of vessels that meet the conditions.
  • at least one in vivo vessel that does not meet the conditions is calibrated in the second training sample.
• the second training sample labels more vessels (with deeper branches and smaller diameters). For example, if the set condition is that the level of blood vessels in an organism is less than level 5, the first training sample may only calibrate the types of blood vessels of levels 1-4, while the second training sample can calibrate vessels of levels 1-6 or even smaller. Covering as many small vessels as possible, as well as vessels not covered by the first training sample, helps the second segmentation model learn the characteristics of small vessels and improves the richness of segmentation.
  • multiple first training samples and/or second training samples can be obtained by reading from a database, a storage device, or calling a data interface.
  • the sample target image of the first training sample can be input to the first segmentation model, and the prediction result of the vessel in the sample target image is obtained from the output of the first segmentation model; and/or the sample target of the second training sample can be input The image is input to the second segmentation model, and the prediction result of the vessel in the sample target image is obtained from the output of the second segmentation model.
  • the processing device may construct a label based on the prediction result and the first training sample (or the second training sample).
  • loss function can reflect the difference between the prediction result and the label.
  • the processing device may adjust parameters of the first segmentation model (or the second segmentation model) based on the loss function to reduce the difference between the prediction result and the label. For example, by continuously adjusting the parameters of the first segmentation model or the second segmentation model, the value of the loss function is reduced or minimized.
  • the first segmentation model and/or the second segmentation model can also be obtained according to other training methods, for example, setting a corresponding initial learning rate (eg, 0.1) and learning rate decay strategy for the training process. This application is not limited here.
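As a stand-in for the training loop described above (predict, compute a loss reflecting the difference between prediction and label, adjust parameters to reduce the loss), the sketch below trains a per-voxel logistic classifier with gradient descent. A real system would train the segmentation network instead, so the model, learning rate, and epoch count here are purely illustrative.

```python
import numpy as np

def train(x, y, lr=0.1, epochs=200):
    """Gradient-descent loop: predict, record the binary cross-entropy
    loss (the 'difference between prediction and label'), then adjust
    the parameters to reduce it."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1])
    b = 0.0
    losses = []
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))           # prediction
        losses.append(float(np.mean(-y * np.log(p + 1e-9)
                                    - (1 - y) * np.log(1 - p + 1e-9))))
        grad = p - y                                      # dLoss/dlogit
        w -= lr * x.T @ grad / len(y)                     # adjust parameters
        b -= lr * float(np.mean(grad))
    return w, b, losses
```

The recorded losses should decrease over the iterations, mirroring the text's goal of reducing or minimizing the value of the loss function.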
• It should be noted that process 2300 is only for example and explanation, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to process 2300 under the guidance of this specification; however, such modifications and changes remain within the scope of this specification.
  • Figure 24 is a schematic flowchart of an exemplary puncture path planning method according to some embodiments of this specification.
  • process 2400 may be performed by the puncture path planning system 100 (eg, the processing device 130 in the puncture path planning system 100) or the puncture path planning device 300.
  • the process 2400 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 3 executes the program or instructions, the process 2400 may be implemented.
  • process 2400 may include the following steps.
  • Step 2410 Determine the target point based on the target image.
  • step 2410 may be performed by the processing device 130 or the data preprocessing module 310.
  • the target point may be the volume center or center of gravity of the lesion area or the area to be detected.
• the volume center or center of gravity of the target organ can be determined in various ways. For example only, taking the puncture of a lesion area as an example, the processing device 130 can iteratively erode the periphery of the lesion area inward through the boundary erosion method to obtain a distance field, determine the voxel farthest from the boundary as the center of the lesion area, and determine that center as the target point.
• Specifically, the processing device 130 can: (1) obtain the minimum spacing value along the three axes X, Y, and Z in the original scale of the target image, and resample the image based on this scale to obtain the resampled image (for example, the image shown in Figure 25(a)); (2) use the boundary erosion method with recursive erosion, calculate the minimum distance from each eroded voxel to the boundary according to the number of erosions, and form a distance field mask corresponding to the lesion area (for example, the approximately elliptical light gray irregular area shown in Figure 25(b)); (3) calculate the maximum value of the distance field, calculate the average within the neighboring 5*5*5 cube of each voxel with the maximum distance field value, and determine the point with the largest average as the target point; when the number of voxels with the maximum distance field value is greater than 2, take the minimum of the sum of the distances between the current voxel and the other voxels with the maximum boundary distance as the objective function, and determine the voxel point corresponding to the solution of the objective function as the target point (for example, the black point shown in the center area of Figure 25(c)).
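The recursive boundary-erosion distance field of step (2) might be sketched in 2-D as follows (3-D is analogous with a 3-D structuring element). The 4-neighbourhood erosion and the function name are assumptions, and the 5*5*5 averaging / objective-function tie-breaking of step (3) is omitted for brevity.

```python
import numpy as np

def erosion_target_point(mask):
    """Recursively erode a 2-D lesion mask; each voxel's count of
    survived erosions is its minimum distance to the boundary.  The
    voxel with the largest count is taken as the target point."""
    mask = np.asarray(mask, dtype=bool)
    dist = np.zeros(mask.shape, dtype=int)
    cur = mask.copy()
    while cur.any():
        dist[cur] += 1
        pad = np.pad(cur, 1, constant_values=False)
        # 4-neighbourhood erosion: keep voxels whose neighbours are all set.
        cur = (pad[1:-1, 1:-1] & pad[:-2, 1:-1] & pad[2:, 1:-1]
               & pad[1:-1, :-2] & pad[1:-1, 2:])
    idx = np.unravel_index(np.argmax(dist), dist.shape)
    return idx, dist
```

On a square lesion mask this picks the geometric center, the voxel farthest from the boundary, as described.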
  • target point determination is only an example and is not a limitation of this specification.
• the target point can be determined in other reasonable and feasible ways (for example, directly determining the volume center of the target organ as the target point through an image recognition method, calculating the intersection point of the long axis and short axis of the target organ's volume and determining the intersection point as the target point, or determining the volume center as the target point through pixel statistics and other methods); this specification does not limit this.
  • Step 2420 Determine the initial path based on the target point and the first constraint.
  • step 2420 may be performed by processing device 130 or path filtering module 320.
• the first constraint may include at least one of the following: the path is located in an adjacent slice of the slice where the target area is located, needle entry points on the body contour that are in contact with the bed board are excluded, the puncture depth of the path is less than a preset depth threshold, or the angle between the path and the vertical line of the flat surface of a flat lesion is within a preset range, etc.
  • the first constraint may include that the path is located in adjacent slices of the slice where the target area is located, needle entry points on the body contour that are in contact with the bed board are excluded, and the puncture depth of the path is less than a preset depth threshold.
• the first constraint may include that the path is located in an adjacent slice of the slice where the target area is located, the needle insertion points on the body contour that are in contact with the bed board are excluded, the puncture depth of the path is less than the preset depth threshold, and the angle between the path and the vertical line of the flat surface of the flat lesion is within the preset range.
  • the first constraint may include that the path is located in an adjacent slice of the slice where the target area is located, or that needle entry points on the body contour that are in contact with the bed board are excluded, or that the puncture depth of the path is less than a preset depth threshold.
• the target area may refer to the area where the target organ is located.
  • the slice in which the target area is located may reflect the position of the target area in the target image (for example, in a CT scan image, the target area may be one or more slices in the scan image).
  • the adjacent slices of the slice where the target area is located may refer to the adjacent slices located within a certain range of the slice where the target area is located.
• By constraining the puncture path to be located in adjacent slices of the slice where the target area is located, it can be avoided that the target point and needle entry point of the puncture path span too many slices along the head-foot direction, which would result in scan images acquired during the puncture operation in which the positions of the "needle head" and "needle tail" cannot be observed at the same time, affecting the guidance and evaluation of the puncture operation by users (for example, doctors, nurses).
  • a hospital bed may refer to a platform (eg, medical bed 115) on which a target subject (eg, a patient) lies while a puncture procedure is performed.
  • the needle entry point location may be determined based on the target image/segmented image, and the needle entry point on the body contour in contact with the bed board may be excluded.
  • the processing device 130 can determine the position of the bed board based on the patient's lying posture in the target image (for example, image-based segmentation recognition or hardware system-based posture feedback positioning, etc.), and calculate the needle entry point position based on the position of the hospital bed board.
  • FIG. 26A can be simply understood as a side view.
  • the bed surface is perpendicular to the direction of the paper.
  • the processing device 130 can establish a coordinate system with the horizontal rightward direction of the paper as the positive X-axis direction and the vertical upward direction as the positive Y-axis direction, and on this basis calculate the needle entry point positions and the target point position (for example, the middle point (X1, Y1) of Figure 26A (a) or the middle point (X0, Y0) of Figure 26A (b)).
  • when the ordinate of a needle entry point is larger than the ordinate of the target point (for example, greater than Y1 or Y0), the corresponding needle entry point is determined as a positive needle entry point (that is, a needle entry point on the body contour that is not in contact with the bed board); otherwise, the corresponding needle entry point is determined as a reverse needle entry point (that is, a needle entry point on the body contour in contact with the bed board), which should be eliminated.
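The ordinate comparison above can be sketched as a small helper. This is a minimal illustration, not the patent's implementation; the function name and tuple layout are assumptions, and it uses the text's convention that the Y axis points vertically upward so the bed board lies below the patient.

```python
def classify_entry_points(entry_points, target_y):
    """Split candidate needle entry points into 'positive' entry points
    (on the free body contour, ordinate above the target) and 'reverse'
    entry points (on the contour touching the bed board, to be excluded).

    entry_points: iterable of (x, y) coordinates in the paper-plane
    coordinate system described in the text; target_y: ordinate of the
    target point (e.g. Y1 or Y0 in Figure 26A).
    """
    positive, reverse = [], []
    for (x, y) in entry_points:
        # ordinate above the target -> puncture does not pass the bed board
        (positive if y > target_y else reverse).append((x, y))
    return positive, reverse
```

Only the positive entry points would then be carried forward into initial-path planning.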
  • the puncture depth of the path may be the puncture distance from the needle entry point to the target point.
  • the initial path may be constrained to a penetration distance less than a preset depth threshold.
  • the preset depth threshold may be determined based on the length of the puncture needle (e.g., the length of a model of clinical instrument commonly used for puncture surgery). For example, the length of the longest puncture needle supported by the system (e.g., a 120 mm puncture needle) can be determined as the preset depth threshold, or the length of a medium puncture needle can be determined as the preset depth threshold, or the length of the shortest puncture needle can be determined as the preset depth threshold.
  • the preset depth threshold may be determined based on puncture information and/or patient information, or the like.
  • puncture information may include target organ information, purpose of puncture, etc.
  • patient information may include patient age, gender, etc.
  • the processing device 130 may determine a smaller value (e.g., the shortest distance between the skin layer and the target organ plus 3~5mm) as the preset depth threshold.
  • the processing device 130 may determine the puncture needle model (eg, puncture needle length, diameter) based on target organ information, puncture purpose, and other information, and determine the length of the puncture needle to be a preset depth threshold based on the puncture needle model.
  • the planning of the initial path may be constrained based on the distance between the needle entry point and the target point. For example only, 1 in FIG. 26B represents a path where the puncture depth L 1 is less than the preset depth threshold L max , and 2 represents a path where the puncture depth L 2 is greater than the preset depth threshold.
  • the processing device 130 may determine path 1 as the initial path.
  • By combining puncture needle length, puncture information, etc. to exclude paths with a puncture depth greater than the preset depth threshold, it can not only avoid the puncture needle being unable to reach the target due to puncture needle model restrictions, but also reduce the time the puncture needle stays in the human body and the distance it travels, thereby reducing the risk of complications caused by puncture.
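The depth constraint above amounts to a simple distance filter. A minimal sketch follows; the function name and the (entry, target) pair representation are illustrative assumptions, and the 120 mm threshold mirrors the needle-length example in the text.

```python
import math

def filter_by_depth(paths, depth_threshold):
    """Keep only candidate paths whose puncture depth (Euclidean distance
    from needle entry point to target point) is less than the preset depth
    threshold, e.g. the length of the longest supported puncture needle."""
    kept = []
    for entry, target in paths:
        depth = math.dist(entry, target)  # straight-line entry-to-target distance
        if depth < depth_threshold:
            kept.append((entry, target))
    return kept
```

In Figure 26B terms, path 1 (L1 < Lmax) would pass this filter while path 2 (L2 > Lmax) would be excluded.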
  • Flat lesions may refer to smaller lesions with flat features (eg, the lesion morphology shown in Figure 26C).
  • the lesion morphology can be determined through pixel statistics, principal component analysis, image recognition and other methods.
  • the processing device 130 can perform matrix decomposition according to the spatial distribution points of the lesion voxels in the target image or segmented image, and calculate the directions and eigenvalues (r0, r1, r2) of the three main axes X, Y, and Z; when 1 ≤ r0/r1 ≤ 2 and r1/r2 ≥ 3, the current lesion is determined to be a flat lesion.
  • the eigenvalues satisfy r0 ≥ r1 ≥ r2, and the size of an eigenvalue represents the contribution of the corresponding eigenvector to the whole matrix after orthogonalization (that is, a description of the object's extent along each axis, analogous to the (x, y, z) sizes of the object in the coordinate system).
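The flatness criterion can be expressed directly on the sorted principal-axis eigenvalues. Note the comparison operators in the source text are garbled, so the thresholds below (two similar large axes, third axis at least 3x smaller) are a reconstruction consistent with the surrounding description:

```python
def is_flat_lesion(r0, r1, r2):
    """Flat-lesion test on the principal-axis eigenvalues of the lesion
    voxel distribution, sorted so that r0 >= r1 >= r2 > 0: the two largest
    axes are of similar extent (1 <= r0/r1 <= 2) while the third axis is
    much smaller (r1/r2 >= 3)."""
    assert r0 >= r1 >= r2 > 0, "eigenvalues must be sorted descending"
    return 1.0 <= r0 / r1 <= 2.0 and r1 / r2 >= 3.0
```

A pancake-shaped distribution such as (10, 8, 2) satisfies the test, while an elongated one such as (10, 3, 2) does not.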
  • in some embodiments, when the lesion is in a flat shape, the puncture path can be constrained so that the angle between the path and the perpendicular line of the flat surface of the flat lesion is within a preset range.
  • the flat surface of the flat lesion can be determined through methods such as planar projection, image recognition, pixel statistics, threshold segmentation, etc.
  • the preset range can be any reasonable angular range, and the processing device 130 can determine the preset range based on parameters such as the area of the flat surface and the diameter of the puncture needle, which is not limited in this specification.
  • the preset range can be [0°, 10°], [0°, 15°], [0°, 20°], [0°, 40°], [5°, 15°], [3° , 20°], [5°, 35°], [10°, 30°], [25°, 50°], or [0°, 60°], etc.
  • in some embodiments, paths whose angle with the perpendicular line of the flat surface of the flat lesion is within a preset range can be screened based on the ratio of the number of point clouds in the path projection circle to the total number of point clouds in the flat lesion projection plane (that is, determining whether the cylinder formed by the puncture path contains most of the lesion area).
  • for example, the processing device 130 can: (1) obtain the needle insertion direction corresponding to the current path; (2) calculate the equation of the projection plane perpendicular to the path according to the needle insertion direction; (3) based on the projection plane equation, project the coordinates of the lesion area and the target point to obtain the corresponding lesion projection point cloud and target projection point; (4) with the target projection point as the center, draw a circle whose radius is the safe radius of the path (for example, the preset distance threshold between the path and the dangerous area), and calculate the ratio of the number of projection points inside the circle to the total number of lesion projection points.
  • when the ratio is greater than the preset ratio (for example, 0.6, 0.7, etc.), most of the lesion area lies inside the cylinder formed by puncturing along that direction, meaning the angle between the path and the perpendicular line of the flat surface of the flat lesion is not within the preset range (for example, path b in Figure 26C(b)), and such paths are excluded; when the ratio is less than or equal to the preset ratio, the angle between the path and the perpendicular line of the flat surface of the flat lesion is within the preset range (for example, path a in Figure 26C(b)), and such paths are retained.
  • in this way, the flat lesion can be punctured from the "big end" direction (that is, along the perpendicular line of the flat surface). The puncture path should be as perpendicular to the flat surface of the lesion as possible while meeting clinical needs, and a path with a shorter puncture depth and better effect should be determined, improving the feasibility of the puncture path and the convenience of puncture, thereby ensuring the reliability of the sampling results/lesion puncture results.
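Steps (1)-(4) of the projection screening above can be sketched as follows. This is an illustrative geometric implementation under assumptions: the function name is hypothetical, points are (x, y, z) tuples, and distance-to-axis is computed directly instead of via an explicit projection-plane equation (mathematically equivalent for this ratio).

```python
import math

def projection_ratio(entry, target, lesion_points, safe_radius):
    """Fraction of lesion voxels whose projection onto the plane
    perpendicular to the needle direction falls inside a circle of the
    path's safe radius around the projected target point.  A large ratio
    means the path runs edge-on through the flat lesion (to be excluded)."""
    # Unit needle-insertion direction (entry -> target).
    d = [t - e for e, t in zip(entry, target)]
    norm = math.sqrt(sum(c * c for c in d))
    d = [c / norm for c in d]
    inside = 0
    for p in lesion_points:
        v = [pc - tc for pc, tc in zip(p, target)]
        along = sum(vc * dc for vc, dc in zip(v, d))      # component along the path
        perp2 = sum(vc * vc for vc in v) - along * along  # squared distance to path axis
        if perp2 <= safe_radius ** 2:
            inside += 1
    return inside / len(lesion_points)
```

For a lesion lying flat in the XY plane, a path along Z (perpendicular to the flat surface) yields a small ratio and is kept, while a path along X (edge-on) yields a large ratio and is excluded.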
  • initial paths that satisfy the first constraint can be filtered in any reasonable order.
  • for example, the first initial paths located in adjacent slices of the slice where the target area is located can be determined first; then, paths in the first initial paths whose needle entry point lies on the part of the body contour in contact with the bed board can be eliminated to obtain the second initial paths; further, paths with a puncture depth less than the preset depth threshold are selected from the second initial paths as the final initial paths.
  • Step 2430 Determine candidate paths from the initial paths based on the second constraint condition.
  • step 2430 may be performed by processing device 130 or path filtering module 320.
  • the second constraint may include that the distance between the path and the dangerous area is greater than a preset distance threshold.
  • a risk area may be an area containing dangerous tissue (eg, blood vessels, bone, etc.).
  • in some embodiments, the internal tissues of the target organ may be graded based on the tissue segmentation results (for example, tissue segmentation is achieved by executing process 600) or vessel identification results (for example, vessel identification is implemented by executing process 1700), and the danger zone is determined based on the grading results and path planning conditions (e.g., constraints).
  • the processing device 130 may, based on the average diameter of each blood vessel segment, give priority to paths that do not pass through any blood vessel inside the target organ (i.e., determine all blood vessels as dangerous tissues).
  • for example, the processing device 130 can first segment the blood vessels in the target organ through a deep learning method or the image segmentation process 600 to obtain the blood vessel mask; then erode the boundary mask inward to calculate the blood vessel centerline, and determine the points where erosion cannot continue as nodes; grow the blood vessel mask between the nodes to obtain each segment of blood vessel branches; finally, compare the average diameter of each blood vessel segment with the vessel thickness discrimination threshold Dt (for example, 1mm, 2mm): a segment with average diameter less than the threshold Dt is determined to be a thin blood vessel, and a segment greater than the threshold Dt is determined to be a thick blood vessel. Thin and thick blood vessels are distinguished through different mark values, all blood vessel segments are refreshed accordingly, and the dangerous area is determined on this basis. For example, an area containing only thick blood vessels may be determined as a dangerous area, or an area containing both thin and thick blood vessels may be determined as a dangerous area.
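The final thresholding step can be sketched as a small labeling helper. A minimal illustration under assumptions: the function name, label values, and the dict of per-segment average diameters are hypothetical, and the default Dt of 2.0 mm follows the example values in the text.

```python
THIN, THICK = 1, 2  # illustrative mark values distinguishing vessel classes

def label_vessel_segments(segment_diameters, dt=2.0):
    """Classify each blood vessel segment by its average diameter against
    the discrimination threshold Dt: below Dt -> thin vessel (may later be
    marked penetrable), otherwise -> thick vessel (kept in the danger zone).

    segment_diameters: {segment_id: average diameter in mm}.
    """
    return {seg_id: (THIN if diameter < dt else THICK)
            for seg_id, diameter in segment_diameters.items()}
```

The resulting label map is what the adaptive adjustment step later rewrites when it reclassifies small vessels as penetrable tissue.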
  • the preset distance threshold may be the shortest distance from the edge of the dangerous tissue to the path.
  • in some embodiments, the preset distance threshold (e.g., 2mm, 3mm, 5mm, or 7mm, etc.) may be determined based on one or more parameters such as the distance between tissues, the tissue segmentation error, the registration error between the planned puncture and the actual puncture, and the execution error of the end-effector device (e.g., end-effector device 120).
  • By constraining the distance between the puncture path and the dangerous area to be greater than the preset distance threshold, it can be avoided that the puncture path passes close to dangerous tissues such as blood vessels, causing accidental injury to other tissues during the puncture process and secondary harm to the patient.
  • path planning conditions may be adaptively adjusted based on the first preset condition.
  • the path planning conditions may reflect the filtering conditions of the candidate path (for example, the range of the dangerous area and/or the preset safety distance value).
  • adaptively adjusting the path planning condition based on the first preset condition may include: adjusting the range of the dangerous area when the ratio of the number of candidate paths to the number of initial paths is less than a third threshold.
  • the third threshold may represent the change control coefficient of the dangerous organization (for example, 0.2, 0.3).
  • for example, assuming the number of initial paths is N1 and the number of candidate paths determined by this screening is N2, when N2/N1 < H1 (i.e., the third threshold), the scope of the dangerous area can be changed (for example, modify the marker value of blood vessels and set blood vessels with a diameter less than 1.5mm as penetrable tissue, removing them from the danger zone).
  • in some embodiments, candidate paths may be determined from the initial paths based on the adjusted dangerous area; when the ratio of the number of candidate paths obtained before adjustment to the number of candidate paths obtained after adjustment is less than a fourth threshold, the candidate paths obtained after adjustment are used as the final candidate paths; when the ratio of the number of candidate paths obtained before adjustment to the number of candidate paths obtained after adjustment is greater than the fourth threshold, the candidate paths obtained before adjustment are used as the final candidate paths.
  • for example, the initial paths whose distance from the dangerous area is greater than the preset distance threshold can be screened again based on the adjusted dangerous area, and the number of candidate paths after adjustment is determined to be N3. When N2/N3 ≤ H2 (i.e., the fourth threshold), the candidate paths corresponding to N3 can be determined as the final candidate paths; when N2/N3 > H2, it means that the candidate path result obtained by setting small blood vessels with a diameter less than 1.5mm as penetrable tissue differs little from the candidate path result obtained by setting all blood vessels as non-puncturable, and at this time the candidate paths corresponding to N2 are determined as the final candidate paths.
  • the fourth threshold can be any reasonable value (for example, 0.6, 0.8), which is not limited here.
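The N1/N2/N3 adjustment logic above can be sketched as a small control routine. This is an illustrative sketch, not the patent's implementation: the function and callback names are hypothetical, and the default H1/H2 values follow the example figures in the text (0.3 and 0.8).

```python
def adaptive_candidate_selection(n_initial, screen, relax_danger_zone,
                                 h1=0.3, h2=0.8):
    """Adaptive danger-zone adjustment sketch.

    n_initial:          number of initial paths (N1).
    screen():           returns candidate paths under the current danger
                        zone (their count is N2).
    relax_danger_zone(): shrinks the danger zone (e.g. marks vessels with
                        diameter < 1.5 mm as penetrable) and returns the
                        re-screened candidate paths (their count is N3).
    """
    before = screen()                                    # N2 candidates
    if n_initial and len(before) / n_initial < h1:       # N2/N1 < H1: too few
        after = relax_danger_zone()                      # N3 candidates
        if after and len(before) / len(after) <= h2:     # N2/N3 <= H2: relaxing helped
            return after
    return before
```

When the first screening already yields enough candidates (N2/N1 >= H1), the danger zone is left untouched and the original candidates are returned.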
  • In this way, the impact of dangerous tissues on puncture path planning can be fully considered, balancing safety risks against the diversity of recommended paths (for example, by switching small blood vessels between penetrable and non-penetrable), thereby reducing the occurrence of complications caused by puncture. For example, as shown in Figure 27, the puncture path avoids blood vessels and the anterior thoracic ribs.
  • adaptively adjusting the path planning condition based on the first preset condition may further include: resetting the puncture parameters when there is no candidate path that satisfies the path planning condition.
  • puncture parameters may include but are not limited to puncture needle length, diameter, etc.
  • an initial path may be determined based on the reset puncture parameters, and candidate paths may be determined based on the initial path.
  • for example, the processing device 130 may determine initial paths that satisfy the first constraint in the above step 2420 based on the length, diameter and other parameters of puncture needle No. 1 with the shortest puncture depth, and determine the initial paths whose distance to the dangerous area is greater than the preset distance threshold (that is, the initial paths that satisfy the second constraint) as candidate paths. When no candidate path satisfying the path planning conditions exists, the system adaptively replaces the puncture parameters with the length, diameter, etc. of puncture needle No. 2 corresponding to a longer puncture depth, and performs the initial path and candidate path determination process again (i.e., steps 2420 and 2430) until at least one candidate path that meets the path planning conditions is determined.
  • Step 2440 Determine the target path based on the candidate paths.
  • step 2440 may be performed by the processing device 130 or the path recommendation module 330.
  • the target path may be determined based on the coplanar and non-coplanar characteristics of the candidate path.
  • for example, the target path may be selected based on the shortest puncture depth D1 among the non-coplanar candidate paths, the shortest puncture depth D2 among the coplanar candidate paths with a small-angle deflection from the direction perpendicular to the bed board, and the shortest puncture depth D3 among the coplanar candidate paths with a non-small-angle deflection.
  • Small-angle deflection means that the angle between the vector N passing through the target point, perpendicular to the bed board and pointing from the human body to the bed board, and the direction vector T corresponding to the target point and the needle entry point is less than a preset threshold (for example, 2°, 3°, 5°, 10°, 15°, etc.); non-small-angle deflection means that this angle is greater than the preset threshold.
  • in some embodiments, the small-angle deflection range may be [0°, 15°], corresponding to coplanar paths close to the direction perpendicular to the bed board.
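The deflection classification reduces to an angle between the bed-normal N and the path direction T. A minimal sketch, with hypothetical function names; vectors are plain 3-tuples and the 15° default follows the range example above.

```python
import math

def deflection_angle(n_vec, t_vec):
    """Angle in degrees between the bed-normal vector N through the target
    point and the path direction vector T (target point to entry point)."""
    dot = sum(a * b for a, b in zip(n_vec, t_vec))
    norm_n = math.sqrt(sum(a * a for a in n_vec))
    norm_t = math.sqrt(sum(a * a for a in t_vec))
    cos_a = max(-1.0, min(1.0, dot / (norm_n * norm_t)))  # clamp rounding error
    return math.degrees(math.acos(cos_a))

def is_small_angle(n_vec, t_vec, threshold_deg=15.0):
    """True when the path counts as 'small-angle deflected'."""
    return deflection_angle(n_vec, t_vec) < threshold_deg
```

A path exactly along the bed normal has 0° deflection; one tilted 45° away falls into the non-small-angle class.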
  • the puncture path is most convenient to operate in the direction perpendicular to the bed board.
  • when the shortest puncture depth D2 or the shortest puncture depth D3 is the minimum of the three, if the absolute value of the difference between D2 and D3 is less than the third preset value, the coplanar candidate path corresponding to the small-angle deflection depth D2 can be determined as the target path; otherwise, the coplanar candidate path corresponding to the minimum of D2 and D3 is determined as the target path. When the shortest puncture depth D1 is the minimum, if the absolute value of the difference between min(D2, D3) and D1 is less than the third preset value, the coplanar candidate path corresponding to that minimum can be determined as the target path; otherwise, the non-coplanar candidate path corresponding to D1 is determined as the target path.
  • the third preset value may be determined based on one or more of user habits, puncture operation history data, patient information, and the like. For example, when the puncture operation is performed manually, the third preset value can be set to 20 mm based on the scanning segment of the imaging device 110 and the convenience of the doctor's image reading.
  • for example only, the processing device 130 may calculate the shortest puncture depth D1 among the non-coplanar candidate paths, the shortest puncture depth D2 among the coplanar candidate paths deflected at a small angle from the direction perpendicular to the bed board (for example, a deflection angle within [0°, 15°]), and the shortest puncture depth D3 among the coplanar candidate paths not deflected at a small angle.
  • the processing device 130 can compare D2 and D3: when D2 corresponding to small-angle deflection is the smallest, the candidate path corresponding to D2 is determined as the target path; when D3 corresponding to non-small-angle deflection is the smallest, if D2 − D3 < the third preset value (for example, 20mm), the coplanar candidate path corresponding to D2 with small-angle deflection, which is more convenient to operate, is determined as the target path.
  • the processing device 130 can also calculate the minimum value Dmin of D2 and D3: if Dmin − D1 < the third preset value (for example, 20mm), then, with convenience of image reading as the goal, the coplanar candidate path corresponding to Dmin is determined as the target path; if Dmin − D1 ≥ the third preset value, then, with safety as the goal, the non-coplanar candidate path corresponding to D1 with the shorter puncture depth is determined as the target path.
  • it should be noted that the preset value corresponding to the difference between D2 and D3 (i.e., D2 − D3) and the preset value corresponding to the difference between min(D2, D3) and D1 (i.e., Dmin − D1) can be the same or different values.
  • in some embodiments, the target path may be determined based on the shortest puncture depth D1 among the non-coplanar candidate paths (for example, the non-coplanar candidate path corresponding to D1 is determined as the target path).
  • when the candidate paths only include coplanar candidate paths, the target path can be selected based on the shortest puncture depth D2 among the paths with small-angle deflection from the direction perpendicular to the bed board and the shortest puncture depth D3 among the paths with non-small-angle deflection.
  • for example only, the processing device 130 can compare D2 and D3, and determine the candidate path corresponding to D2 as the target path when D2 corresponding to small-angle deflection is the smallest; when D3 corresponding to non-small-angle deflection is the smallest, if D2 − D3 < the third preset value (for example, 20mm), the coplanar candidate path corresponding to D2 with small-angle deflection, which is more convenient to operate, is determined as the target path; if D2 − D3 ≥ the third preset value, then, with the safety of puncture depth as the goal, the candidate path corresponding to the shorter depth D3 is determined as the target path.
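The D1/D2/D3 trade-off above can be condensed into one decision function. This is a simplified sketch under assumptions: all three depths are assumed to exist, the function and label names are hypothetical, and the 20 mm preset follows the example value in the text.

```python
def select_target_path(d1, d2, d3, preset=20.0):
    """Choose which candidate class supplies the target path.

    d1: shortest depth among non-coplanar candidates;
    d2: shortest depth among coplanar, small-angle-deflected candidates;
    d3: shortest depth among coplanar, non-small-angle candidates.
    Returns 'noncoplanar', 'coplanar_small', or 'coplanar_nonsmall'.
    """
    def coplanar_choice():
        # prefer the easier-to-operate small-angle path when it is shortest
        # or within the preset value of the non-small-angle depth
        return "coplanar_small" if (d2 <= d3 or d2 - d3 < preset) else "coplanar_nonsmall"

    d_min = min(d2, d3)
    if d1 < d_min and d_min - d1 >= preset:
        return "noncoplanar"   # non-coplanar path clearly shorter: safety first
    return coplanar_choice()   # otherwise convenience of reading/operation wins
```

For instance, depths (10, 40, 60) favor the much shorter non-coplanar path, while (30, 40, 60) keep the small-angle coplanar path because the depth gap is under the preset value.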
  • it should be noted that the above description of process 2400 is only for example and explanation, and does not limit the scope of application of this specification. For those skilled in the art, various modifications and changes can be made to process 2400 under the guidance of this specification; however, such modifications and changes remain within the scope of this specification.
  • Figure 28 is a schematic diagram of an exemplary puncture path planning method according to other embodiments of this specification.
  • process 2800 may be performed by puncture path planning system 100 (eg, processing device 130) or puncture path planning device 200.
  • the process 2800 may be stored in a storage device (eg, the storage device 150, a storage unit of the system) in the form of a program or instructions, and when the processor or the module shown in FIG. 3 executes the program or instructions, the process 2800 may be implemented.
  • in some embodiments, the target image may be segmented (for example, through the segmentation method of process 600) to determine the segmented image; vessel recognition may be performed on the segmentation result (for example, through the vessel identification method of process 1700); the target point is determined based on the segmentation result, and the target path is then determined based on the target point and the constraint conditions. Specifically:
  • Step 2810 Segment the target image.
  • the processing device 130 can segment the target image using deep learning models, threshold segmentation, and other methods to obtain preliminary segmentation results.
  • for example, the processing device 130 can perform rough segmentation on the target structure in the target image to obtain the target structure mask; determine the positioning information of the target structure mask based on soft connected domain analysis; and accurately segment the target structure based on the positioning information of the target structure mask to obtain preliminary segmentation results. For more information on segmentation results obtained through coarse segmentation and precise segmentation, see the descriptions in Figures 6-16.
  • Step 2820 Perform vessel recognition on the target image.
  • vessel recognition can be performed based on the preliminary segmentation results to obtain the target segmentation results of the target image.
  • the target segmentation results may include different levels of vessels and/or types of vessels.
  • for example, the processing device 130 may obtain the first segmentation result of the target image based on the first segmentation model; perform skeletonization processing on the first segmentation result to obtain the first vascular skeleton set; obtain the second segmentation result of the target image based on the second segmentation model; and fuse the first segmentation result and the second segmentation result to obtain the fusion result.
  • the processing device 130 can perform skeletonization processing on the fusion result to obtain the second vascular skeleton of a vessel of undetermined type, and take the first vascular skeleton whose minimum spatial distance from the second vascular skeleton is less than the second threshold as the reference vascular skeleton; determine the spatial distances between the second vascular skeleton and the reference vascular skeleton, and determine the two points with the smallest spatial distance as the closest point group; and determine the vessel type of the undetermined vessel based on the closest point group, thereby obtaining the target segmentation result.
  • for more information on the first segmentation model and the second segmentation model, please refer to the descriptions in Figures 17 to 23.
  • in some embodiments, the processing device 130 can further classify the tissues inside the target organ to determine dangerous tissues. For example, the processing device 130 can determine the center point of each blood vessel through boundary erosion based on the blood vessel mask inside the target organ obtained by segmentation, calculate the minimum distance from the center point to the blood vessel boundary as the blood vessel radius at that point, and then, based on the preset blood vessel discrimination threshold Dt, set the blood vessels smaller than the threshold Dt as thin blood vessels and the blood vessels larger than the threshold Dt as thick blood vessels, distinguishing them with different label values.
  • Step 2830 Determine the target point based on the target segmentation result.
  • the processing device 130 can determine the target area according to the target segmentation result, determine the volume center or center of gravity of the target area through boundary erosion or other methods, and determine it as the target point. See the description in Figure 24 for more details.
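The center-of-gravity option above can be sketched directly: the target point is the mean coordinate of the segmented lesion voxels. A minimal illustration with a hypothetical function name; the boundary-erosion alternative mentioned in the text is not implemented here.

```python
def lesion_target_point(lesion_voxels):
    """Target point as the center of gravity (mean coordinate) of the
    segmented lesion voxels, each given as an (x, y, z) tuple."""
    n = len(lesion_voxels)
    return tuple(sum(v[i] for v in lesion_voxels) / n for i in range(3))
```

For markedly non-convex lesions the centroid may fall outside the lesion mask, which is one reason the text also offers erosion-based centering as an alternative.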
  • Step 2840 Determine the initial path based on the target point and the first constraint.
  • for example, in step 2841, the processing device 130 may determine, according to the target point, the paths located in adjacent slices of the slice where the target area is located as the first initial paths; in step 2843, the processing device 130 may, based on the puncture parameters (for example, the currently set puncture needle length), determine the paths in the first initial paths with a puncture depth less than the preset depth threshold as the second initial paths; in step 2845, the processing device 130 may exclude the second initial paths corresponding to needle entry points on the part of the body contour in contact with the bed board to obtain the third initial paths.
  • for a flat lesion, the processing device 130 may further perform step 2847 to screen the paths in the third initial paths whose angle with the perpendicular line of the flat surface of the flat lesion is within the preset range, and determine them as the final initial paths.
  • it should be noted that the order of steps 2841 to 2847 in Figure 28 is only an example. In some embodiments, at least one of steps 2841 to 2847 can be executed in any reasonable order (for example, after step 2841, step 2845 may be performed first and then step 2843); this specification does not limit this.
  • Step 2850 Determine candidate paths from the initial paths.
  • processing device 130 may determine candidate paths from the initial paths based on the second constraint. In some embodiments, during the process of determining the candidate path, the processing device 130 may adaptively adjust the path planning conditions based on the first preset condition. For example only, the processing device 130 may determine a path whose distance from the dangerous area is greater than a preset distance threshold from the initial paths, and adjust the range of the dangerous area when a ratio of the number of candidate paths to the number of initial paths is less than a third threshold.
  • when the ratio of the number of candidate paths obtained before adjustment to the number of candidate paths obtained after adjustment is less than the fourth threshold, the candidate paths obtained after adjustment are used as the final candidate paths; when the ratio is greater than the fourth threshold, the candidate paths obtained before adjustment are used as the final candidate paths.
  • when there is no candidate path that satisfies the path planning conditions, the processing device 130 can reset the puncture parameters (for example, when the preset depth threshold determined based on the length of a certain puncture needle cannot yield an effective path plan, increase the length of the puncture needle, that is, increase the preset depth threshold), and perform steps 2840 to 2850 again according to the reset puncture parameters until a candidate path that meets the path planning conditions is determined; when such a candidate path exists, step 2860 is performed.
  • processing device 130 may determine a target path based on the candidate paths.
  • for example, the processing device 130 may calculate the shortest puncture depth D1 among the non-coplanar candidate paths, the shortest puncture depth D2 among the coplanar candidate paths with small-angle deflection from the direction perpendicular to the bed board, and the shortest puncture depth D3 among the paths with non-small-angle deflection, and determine the target path based on D1, D2 and D3.
  • the processing device 130 may recommend a target path to the user, and/or control the end-execution device 120 to perform puncture based on user feedback (eg, a user-selected target path or a replanned puncture path).
  • step 2810 and step 2820 may be performed simultaneously.
  • step 2830 may be performed first, and then step 2820 may be performed, that is, the target point is first determined based on the segmentation result obtained in step 2810, and then vessel identification is performed to determine the dangerous area.
  • The puncture path planning methods and/or systems described in this specification may have the following beneficial effects: (1) based on the clinical requirements of puncture biopsy, at least two constraints are used to calculate a safe and feasible optimal puncture path, effectively shortening planning time, improving puncture accuracy, and reducing the occurrence of complications; (2) determining the initial paths whose distance from the dangerous area is greater than the preset distance threshold as candidate paths can effectively control the risk of puncture operations; (3) adaptively adjusting the path planning process fully considers both safety and the diversity of path planning, improving the accuracy and efficiency of path planning; (4) determining the final target path by comprehensively considering operational convenience and safety ensures the accuracy and safety of path planning; (5) using a soft connected domain analysis method in the coarse segmentation stage accurately retains the target structure area while effectively eliminating false positive areas, which not only improves the accuracy of target structure positioning in the rough positioning stage but also contributes to subsequent accurate segmentation; (6) using the segmentation results of the second segmentation model with high richness to perform vessel growth on the segmentation results of the first segmentation model improves the completeness of the vessel identification results.
  • the possible beneficial effects may be any one or a combination of the above, or any other possible beneficial effects.
  • numbers are used to describe the quantities of components and properties. It should be understood that such numbers used to describe the embodiments are, in some examples, modified by the modifiers "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" means that the stated number is allowed to vary by ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired features of the individual embodiment. In some embodiments, numerical parameters should account for the specified number of significant digits and use a general digit-preservation method. Although the numerical ranges and parameters used to define the breadth of ranges in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as accurately as is feasible.


Abstract

A system (100) and method for puncture path planning. The system (100) includes: at least one storage medium including a set of instructions; and one or more processors (210) in communication with the at least one storage medium. When executing the instructions, the one or more processors (210) are configured to: determine a target point based on a target image (410); determine candidate paths based on the target point and at least two constraints (420), wherein during the determination of the candidate paths, path planning conditions are adaptively adjusted based on a first preset condition; and determine a target path based on the candidate paths (430).

Description

System and method for puncture path planning
Priority statement
This application claims priority to Chinese Application No. 202210342911.7 filed on April 2, 2022, Chinese Application No. 202210577448.4 filed on May 25, 2022, and Chinese Application No. 202210764219.3 filed on June 30, 2022, the entire contents of which are incorporated herein by reference.
Technical field
This specification relates to the field of medical technology, and in particular to a puncture path planning method and system.
Background
Needle biopsy is a procedure in which, under the guidance of medical imaging equipment, a needle is inserted into a target organ (e.g., a diseased organ or an organ to be examined) to aspirate a small amount of tissue for pathological examination and diagnosis. As a primary means of pathological diagnosis, it is widely used in clinical practice. Planning of the puncture path is critical in needle biopsy: it requires not only selecting an appropriate needle length, skin entry point, and entry angle, but also keeping a certain safe distance from sensitive tissues (e.g., vessels, bones) inside and/or around the target organ to avoid puncture-induced complications.
Summary
One embodiment of this specification provides a system for puncture path planning, including: at least one storage medium including a set of instructions; and one or more processors in communication with the at least one storage medium. When executing the instructions, the one or more processors are configured to: determine a target point based on a target image; determine candidate paths based on the target point and at least two constraints, wherein during the determination of the candidate paths, path planning conditions are adaptively adjusted based on a first preset condition; and determine a target path based on the candidate paths.
In some embodiments, determining the target point based on the target image includes: performing coarse segmentation on a target structure in the target image to obtain a target structure mask; determining localization information of the target structure mask based on soft connected-domain analysis; performing accurate segmentation on the target structure based on the localization information of the target structure mask; and determining the target point based on the segmentation result.
In some embodiments, determining the localization information of the target structure mask based on soft connected-domain analysis includes: determining the number of connected domains in the target structure mask; and determining the localization information of the target structure mask based on the number of connected domains.
In some embodiments, the localization information of the target structure mask includes position information of a bounding rectangle of the target structure mask; and/or determining the localization information of the target structure mask includes: localizing the target structure mask based on localization coordinates of a preset structure.
In some embodiments, performing accurate segmentation on the target structure based on the localization information of the target structure mask includes: performing preliminary accurate segmentation on the target structure to obtain a preliminary accurate segmentation result; judging, based on the preliminary accurate segmentation result, whether the localization information of the target structure mask is accurate; if so, taking the preliminary accurate segmentation result as the target segmentation result; otherwise, determining the target segmentation result of the target structure through an adaptive sliding-window approach.
In some embodiments, the one or more processors are further configured to: obtain a first segmentation result of the target image based on a first segmentation model; skeletonize the first segmentation result to obtain a first vessel skeleton set, where the first vessel skeleton set includes at least one first vessel skeleton of a determined type; obtain a second segmentation result of the target image based on a second segmentation model, the second segmentation result including at least one vessel of undetermined type; fuse the first segmentation result and the second segmentation result to obtain a fusion result; and determine a danger area based on the fusion result.
In some embodiments, at least one vessel in the second segmentation result is not included in the first segmentation result; and determining the danger area based on the fusion result includes: skeletonizing the fusion result to obtain a second vessel skeleton of the vessel of undetermined type; obtaining a first vessel skeleton whose minimum spatial distance to the second vessel skeleton is less than a second threshold as a reference vessel skeleton; determining spatial distances between the second vessel skeleton and the reference vessel skeleton, and determining the two points with the smallest spatial distance as a nearest point group; determining the vessel type of the vessel of undetermined type based on the nearest point group; and determining the danger area based on the vessel types of the fusion result.
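The nearest point group in the embodiment above is simply the closest pair of points between the undetermined-type skeleton and a reference skeleton. A minimal brute-force sketch follows; the sample coordinates are illustrative assumptions, not data from this specification:

```python
import math

def nearest_point_group(skeleton_b, skeleton_ref):
    """Return the pair of 3-D points (one from each skeleton) with the
    smallest spatial distance, together with that distance."""
    best_pair, best_dist = None, math.inf
    for p in skeleton_b:
        for q in skeleton_ref:
            d = math.dist(p, q)  # Euclidean distance
            if d < best_dist:
                best_pair, best_dist = (p, q), d
    return best_pair, best_dist

# hypothetical skeleton points (voxel coordinates)
undetermined = [(0, 0, 0), (1, 0, 0)]
reference = [(4, 0, 0), (1, 1, 0)]
pair, dist = nearest_point_group(undetermined, reference)
```

In practice, a reference skeleton would first be screened by requiring its minimum distance to the second skeleton to be below the second threshold, as described above, before the type is assigned from the nearest point group.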
In some embodiments, the constraints include: the distance between the path and the danger area is greater than a preset distance threshold; the path lies in slices adjacent to the slice containing the target region; entry points on the body contour in contact with the bed board are excluded; the puncture depth of the path is less than a preset depth threshold; or the angle between the path and a line perpendicular to the flat face of a flat lesion is within a preset range.
In some embodiments, determining candidate paths based on the target point and at least two constraints includes: determining initial paths based on the target point and a first constraint; and determining candidate paths from the initial paths based on a second constraint; where the first constraint includes at least one of: the path lies in slices adjacent to the slice containing the target region, entry points on the body contour in contact with the bed board are excluded, the puncture depth of the path is less than a preset depth threshold, or the angle between the path and a line perpendicular to the flat face of a flat lesion is within a preset range; and the second constraint includes that the distance between the path and the danger area is greater than a preset distance threshold.
In some embodiments, adaptively adjusting the path planning conditions based on the first preset condition includes: when no candidate path satisfies the path planning conditions, resetting the puncture parameters; the puncture parameters include at least the length and/or diameter of the puncture needle.
In some embodiments, the candidate paths are divided into coplanar candidate paths and non-coplanar candidate paths, and determining the target path based on the candidate paths includes: if the candidate paths include both coplanar and non-coplanar candidate paths, screening the target path based on the shortest puncture depth D1 among the non-coplanar candidate paths, the shortest puncture depth D2 among coplanar candidate paths deflected by a small angle from the direction perpendicular to the bed board, and the shortest puncture depth D3 among coplanar candidate paths with non-small-angle deflection; if the candidate paths include only non-coplanar candidate paths, screening the target path based on D1; and if the candidate paths include only coplanar candidate paths, screening the target path based on D2 and D3 of the coplanar candidate paths.
One embodiment of this specification provides a system for medical image segmentation, including: at least one storage medium including a set of instructions; and one or more processors in communication with the at least one storage medium. When executing the instructions, the one or more processors are configured to: obtain a target image; perform coarse segmentation on a target structure in the target image to obtain a target structure mask; determine localization information of the target structure mask based on soft connected-domain analysis; and perform accurate segmentation on the target structure based on the localization information of the target structure mask to determine a segmentation result.
One embodiment of this specification provides a system for in-vivo vessel identification, including: at least one storage medium including a set of instructions; and one or more processors in communication with the at least one storage medium. When executing the instructions, the one or more processors are configured to: obtain a target image of an organism; obtain a first segmentation result of the target image based on a first segmentation model; skeletonize the first segmentation result to obtain a first vessel skeleton set, where the first vessel skeleton set includes at least one first vessel skeleton of a determined type; obtain a second segmentation result of the target image based on a second segmentation model, the second segmentation result including at least one vessel of undetermined type; and fuse the first segmentation result and the second segmentation result to obtain a fusion result.
Brief description of the drawings
This specification is further illustrated by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, the same reference numerals denote the same structures, where:
FIG. 1 is a schematic diagram of an application scenario of an exemplary puncture path planning system according to some embodiments of this specification;
FIG. 2 is a schematic diagram of the hardware and/or software of an exemplary computing device according to some embodiments of this specification;
FIG. 3 is a block diagram of an exemplary puncture path planning apparatus according to some embodiments of this specification;
FIG. 4 is a flowchart of an exemplary puncture path planning method according to some embodiments of this specification;
FIG. 5 is a block diagram of an exemplary image segmentation apparatus according to some embodiments of this specification;
FIG. 6 is a flowchart of an exemplary image segmentation method according to some embodiments of this specification;
FIG. 7 is a flowchart of exemplary determination of localization information of a target structure mask according to some embodiments of this specification;
FIG. 8 is a flowchart of exemplary determination of localization information of a target structure mask according to other embodiments of this specification;
FIG. 9 is a schematic diagram of exemplary determination of localization information of a target structure mask according to some embodiments of this specification;
FIG. 10 is a comparative schematic diagram of exemplary coarse segmentation results according to some embodiments of this specification;
FIG. 11 is a flowchart of an exemplary accurate segmentation process according to some embodiments of this specification;
FIG. 12 is a schematic diagram of exemplary judgment of localization information of a target structure mask according to some embodiments of this specification;
FIG. 13 is a schematic diagram of exemplary determination of the sliding direction according to some embodiments of this specification;
FIG. 14 is a schematic diagram of exemplary accurate segmentation after window sliding according to some embodiments of this specification;
FIG. 15 is a comparative schematic diagram of exemplary segmentation results according to some embodiments of this specification;
FIG. 16 is a block diagram of an exemplary vessel identification apparatus according to some embodiments of this specification;
FIG. 17 is a flowchart of an exemplary vessel identification method according to some embodiments of this specification;
FIG. 18 is a schematic diagram of exemplary vessel identification results according to some embodiments of this specification;
FIG. 19 is a flowchart of exemplary vessel type determination according to some embodiments of this specification;
FIG. 20 is a flowchart of exemplary vessel type determination according to other embodiments of this specification;
FIG. 21 is a schematic diagram of exemplary vessel type determination according to some embodiments of this specification;
FIG. 22 is a schematic diagram of exemplary vessel type determination according to other embodiments of this specification;
FIG. 23 is a schematic diagram of exemplary model training according to some embodiments of this specification;
FIG. 24 is a flowchart of an exemplary puncture path planning method according to some embodiments of this specification;
FIG. 25 is a schematic diagram of exemplary target point determination according to some embodiments of this specification;
FIGS. 26A-26C are schematic diagrams of exemplary initial path determination according to some embodiments of this specification;
FIG. 27 is a schematic diagram of exemplary candidate paths according to some embodiments of this specification; and
FIG. 28 is a schematic diagram of an exemplary puncture path planning method according to other embodiments of this specification.
Detailed description
To more clearly illustrate the technical solutions of the embodiments of this specification, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of this specification; those of ordinary skill in the art can, without creative effort, apply this specification to other similar scenarios based on these drawings. Unless obvious from the context or otherwise stated, the same reference numerals in the figures denote the same structures or operations.
It should be understood that "system", "apparatus", and/or "module" as used herein are ways of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these words may be replaced by other expressions that serve the same purpose.
As used in this specification and the claims, unless the context clearly indicates otherwise, the words "a", "an", "one", and/or "the" do not specifically denote the singular and may include the plural. "Multiple" may mean "two or more". In general, the terms "include" and "comprise" only indicate the inclusion of explicitly identified steps and elements; these steps and elements do not constitute an exclusive list, and a method or device may also include other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by a system according to the embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed in exact order. Instead, the steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to these processes, or one or more operations may be removed from them.
The in-vivo vessel identification method provided in the embodiments of this specification can be applied to determining vessel types in animals. For convenience of description, the specific embodiments of this application are mainly illustrated using the determination of blood vessel types in the human body as an example. However, those of ordinary skill in the art can, without creative effort, apply this specification to other similar scenarios, such as determining the types of other vessels of the human body, or the blood vessels or other vessels of other animals (e.g., dogs, cats).
In traditional puncture methods, medical staff generally select a suitable puncture path based on their own experience, which places high demands on the staff and yields low puncture efficiency. The embodiments of this specification provide a puncture path planning method that automatically performs organ segmentation on the target image, locates the optimal puncture target point, and adaptively selects the optimal puncture instrument and puncture path based on the target point and at least two constraints, making the puncture path more intelligent and better aligned with clinical needs, thereby improving the accuracy and efficiency of needle biopsy.
FIG. 1 is a schematic diagram of an application scenario of an exemplary puncture path planning system according to some embodiments of this specification.
As shown in FIG. 1, the puncture path planning system 100 may include an imaging device 110, an end-effector device 120, a processing device 130, a terminal device 140, a storage device 150, and a network 160. In some embodiments, the processing device 130 may be part of the imaging device 110 and/or the end-effector device 120.
The connections between the components of the puncture path planning system 100 may be variable. As shown in FIG. 1, in some embodiments, the imaging device 110 may be connected to the processing device 130 through the network 160. As another example, the imaging device 110 may be directly connected to the processing device 130, as indicated by the dashed bidirectional arrow connecting the imaging device 110 and the processing device 130. As yet another example, the storage device 150 may be connected to the processing device 130 directly or through the network 160. As an example, the terminal device 140 may be connected to the processing device 130 directly (as indicated by the dashed arrow connecting the terminal device 140 and the processing device 130) or through the network 160.
The imaging device 110 may scan a target object (scan object) within a detection or scanning area to obtain scan data (e.g., a target image) of the target object. For example, the imaging device 110 may scan the target object using high-energy rays (e.g., X-rays, γ-rays) to collect scan data related to the target object, such as a three-dimensional image. The target object may be biological or non-biological. Merely by way of example, the target object may include a patient, a man-made object (e.g., a phantom), etc. As another example, the target object may include a specific part, organ, and/or tissue of a patient (e.g., head, ear and nose, oral cavity, neck, chest, abdomen, liver-gallbladder-pancreas-spleen, kidney, spine, heart, or tumor tissue).
In some embodiments, the imaging device 110 may include a single-modality scanner and/or a multi-modality scanner. The single-modality scanner may include, for example, an X-ray scanner, a computed tomography (CT) scanner, a magnetic resonance imaging (MRI) scanner, a positron emission tomography (PET) scanner, an optical coherence tomography (OCT) scanner, an ultrasound (US) scanner, an intravascular ultrasound (IVUS) scanner, a near-infrared spectroscopy (NIRS) scanner, a far-infrared (FIR) scanner, a digital radiography (DR) scanner (e.g., mobile digital radiography), a digital subtraction angiography (DSA) scanner, a dynamic spatial reconstruction (DSR) scanner, etc. The multi-modality scanner may include, for example, an X-ray-MRI scanner, a PET-X-ray scanner, a SPECT-MRI scanner, a PET-CT scanner, a DSA-MRI scanner, etc. The above descriptions of imaging devices are for illustration purposes only and are not intended to limit the scope of this specification.
In some embodiments, the imaging device 110 may include a medical bed 115. The medical bed 115 may be used to position the target object so that the target object can be scanned to obtain the target image. In some embodiments, the medical bed 115 may include an automatic medical bed and/or a hand-pushed medical bed. In some embodiments, the medical bed 115 may be independent of the imaging device 110.
In some embodiments, the imaging device 110 may include a display device. The display device may be used to display scan data of the target object (e.g., the target image, segmented images, puncture paths). In some embodiments, the imaging device 110 may further include a gantry, a detector, a table, and a radiation source (not shown). The gantry may support the detector and the radiation source. The target object may be placed on the table for scanning. The radiation source may emit radiation toward the target object. The detector may detect the radiation (e.g., X-rays) emitted from the radiation source. In some embodiments, the detector may include one or more detector units. The detector units may include scintillation detectors (e.g., cesium iodide detectors), gas detectors, etc. The detector units may include single-row detectors and/or multi-row detectors.
The end-effector device 120 may be a robot that performs end operations (e.g., ablation, puncture, radioactive seed implantation). In some embodiments, the processing device 130 may guide the end-effector device 120 to perform corresponding operations (e.g., puncture operations) through remote operation control. In some embodiments, the end-effector device 120 may include a robotic arm end, functional components (e.g., a puncture needle), and a robot host. In some embodiments, the robotic arm end may be used to carry the functional components; the robot host may be the robotic arm body, used to drive the robotic arm end to move so as to adjust the posture (e.g., angle, position) of the functional components.
In some embodiments, the processing device 130 may be connected to the robotic arm body or the robotic arm end through a communication device (e.g., the network 160), and may be used to control the robotic arm end to drive the functional components (e.g., a puncture needle) to perform synchronized operations. For example, the processing device 130 may drive the puncture needle to perform a puncture operation by controlling the robotic arm end to rotate, translate, advance forward, etc.
In some embodiments, the end-effector device 120 may further include a master-hand control device. The master-hand control device may be electrically connected to the robot host or the robotic arm end through a communication device (e.g., the network 160), and may be used to control the robotic arm end to drive the functional components (e.g., a puncture needle) to perform puncture operations.
The processing device 130 may process data and/or information obtained from the imaging device 110, the end-effector device 120, the terminal device 140, the storage device 150, or other components of the puncture path planning system 100. For example, the processing device 130 may obtain target images (e.g., tomographic images, PET scan images, MR scan images) from the imaging device 110, analyze and process them (e.g., perform coarse and accurate segmentation of target structures, and/or perform vessel identification and vessel type identification) to determine the target point, and determine the target path based on the target point. In some embodiments, the processing device 130 may be local or remote. For example, the processing device 130 may access information and/or data from the imaging device 110, the end-effector device 120, the terminal device 140, and/or the storage device 150 through the network 160.
In some embodiments, the processing device 130 and the imaging device 110 may be integrated as one. In some embodiments, the processing device 130 and the imaging device 110 may be connected directly or indirectly and act jointly to implement the methods and/or functions described in this specification.
In some embodiments, the processing device 130 and the end-effector device 120 may be integrated as one. In some embodiments, the processing device 130 and the end-effector device 120 may be connected directly or indirectly and act jointly to implement the methods and/or functions described in this specification. In some embodiments, the imaging device 110, the end-effector device 120, and the processing device 130 may be integrated as one. In some embodiments, the imaging device 110, the end-effector device 120, and the processing device 130 may be connected directly or indirectly and act jointly to implement the methods and/or functions described in this specification.
In some embodiments, the processing device 130 may include an input device and/or an output device, through which interaction with the user (e.g., displaying the target image, segmented images, the target path) can be realized. In some embodiments, the input device and/or output device may include a display screen, a keyboard, a mouse, a microphone, etc., or any combination thereof.
The terminal device 140 may be connected to and/or communicate with the imaging device 110, the end-effector device 120, the processing device 130, and/or the storage device 150. For example, the terminal device 140 may obtain the target image after organ or tissue segmentation from the processing device 130 and display it so that the user can understand the patient's information. As another example, the terminal device 140 may obtain and display the vessel-identified image from the processing device 130. In some embodiments, the terminal device 140 may include a mobile device 141, a tablet computer 142, a laptop computer 143, etc., or any combination thereof. In some embodiments, the terminal device 140 (or all or part of its functions) may be integrated into the processing device 130.
The storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the imaging device 110, the end-effector device 120, and/or the processing device 130 (e.g., target images, segmented images, initial paths, candidate paths, target paths, puncture parameters). In some embodiments, the storage device 150 may store computer instructions for implementing the puncture path planning method.
In some embodiments, the storage device 150 may include one or more storage components, each of which may be an independent device or part of another device. In some embodiments, the storage device 150 may include random access memory (RAM), read-only memory (ROM), mass storage, removable storage, volatile read-write memory, etc., or any combination thereof. By way of example, mass storage may include magnetic disks, optical disks, solid-state disks, etc. RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), and zero-capacitor RAM (Z-RAM). ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (PEROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), and digital versatile disc ROM, etc. In some embodiments, the storage device 150 may be implemented on a cloud platform.
The network 160 may include any suitable network capable of facilitating the exchange of information and/or data. In some embodiments, at least one component of the puncture path planning system 100 (e.g., the imaging device 110, the end-effector device 120, the processing device 130, the terminal device 140, the storage device 150) may exchange information and/or data with at least one other component of the system 100 through the network 160. For example, the processing device 130 may obtain the target image from the imaging device 110 through the network 160.
It should be noted that the puncture path planning system 100 is provided for illustration purposes only and is not intended to limit the scope of this specification. Those of ordinary skill in the art can make various modifications or changes based on the description of this specification. For example, the puncture path planning system 100 may implement similar or different functions on other devices. However, such changes and modifications do not depart from the scope of this specification.
FIG. 2 is a schematic diagram of the hardware and/or software of an exemplary computing device according to some embodiments of this specification.
As shown in FIG. 2, the computing device 200 may include a processor 210, a memory 220, an input/output interface 230, and a communication port 240.
The processor 210 may execute computing instructions (program code) and perform the functions of the puncture path planning system 100 described in this application. The computing instructions may include programs, objects, components, data structures, procedures, modules, and functions (the functions refer to the specific functions described in this application). For example, the processor 210 may process images and/or data obtained from any component of the puncture path planning system 100. For example, the processor 210 may perform coarse segmentation on the target structure in the target image obtained from the imaging device 110 to obtain a target structure mask; determine localization information of the target structure mask based on soft connected-domain analysis; and perform accurate segmentation on the target structure based on the localization information of the target structure mask to obtain the segmentation result of the target image for puncture path planning. As another example, the processor 210 may obtain a target image of an organism from the imaging device 110; obtain a first segmentation result of the target image based on a first segmentation model; obtain a second segmentation result of the target image based on a second segmentation model; and fuse the first and second segmentation results to obtain a fusion result. In some embodiments, the processor 210 may include a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device, and any circuit or processor capable of performing one or more functions, or any combination thereof. For illustration only, the computing device 200 in FIG. 2 depicts only one processor, but it should be noted that the computing device 200 in this application may also include multiple processors.
The memory 220 may store data/information obtained from any other component of the puncture path planning system 100. In some embodiments, the memory 220 may include mass storage, removable storage, volatile read-write memory, ROM, etc., or any combination thereof.
The input/output interface 230 may be used to input or output signals, data, or information. In some embodiments, the input/output interface 230 may allow the user to interact with the puncture path planning system 100. In some embodiments, the input/output interface 230 may include an input device and an output device. The communication port 240 may be connected to a network for data communication. The connection may be a wired connection, a wireless connection, or a combination of both. The wired connection may include cables, optical cables, telephone lines, etc., or any combination thereof. The wireless connection may include one or more of Bluetooth, Wi-Fi, WiMax, WLAN, ZigBee, mobile networks (e.g., 3G, 4G, or 5G), etc. In some embodiments, the communication port 240 may be a standardized port, such as RS232 or RS485. In some embodiments, the communication port 240 may be a specially designed port. For example, the communication port 240 may be designed according to the Digital Imaging and Communications in Medicine (DICOM) protocol.
FIG. 3 is a block diagram of an exemplary puncture path planning apparatus according to some embodiments of this specification.
As shown in FIG. 3, in some embodiments, the puncture path planning apparatus 300 may include a data preprocessing module 310, a path screening module 320, and a path recommendation module 330. In some embodiments, the functions corresponding to the puncture path planning apparatus 300 may be implemented by the processing device 130.
The data preprocessing module 310 may be used to preprocess the target image. In some embodiments, the data preprocessing module 310 may be used to determine the target point based on the target image. For example, the data preprocessing module 310 may perform coarse segmentation on the target structure in the target image to obtain a target structure mask; determine localization information of the target structure mask based on soft connected-domain analysis; and perform accurate segmentation on the target structure based on the localization information of the target structure mask to determine the target point. In some embodiments, the data preprocessing module 310 may be used to determine the danger area. For example, the data preprocessing module 310 may obtain a first segmentation result of the target image based on a first segmentation model; obtain a second segmentation result of the target image based on a second segmentation model; fuse the first and second segmentation results to obtain a fusion result; and determine the danger area based on the fusion result.
The path screening module 320 may be used to determine initial paths and/or candidate paths. In some embodiments, the path screening module 320 may determine candidate paths based on the target point and at least two constraints. In some embodiments, the constraints may include: the distance between the path and the danger area is greater than a preset distance threshold; the path lies in slices adjacent to the slice containing the target region; entry points on the body contour in contact with the bed board are excluded; the puncture depth of the path is less than a preset depth threshold; or the angle between the path and a line perpendicular to the flat face of a flat lesion is within a preset range.
The path recommendation module 330 may be used to determine the target path based on the candidate paths. In some embodiments, when the candidate paths include both coplanar and non-coplanar candidate paths, the path recommendation module 330 may screen the target path based on the shortest puncture depth D1 among the non-coplanar candidate paths, the shortest puncture depth D2 among coplanar candidate paths deflected by a small angle from the direction perpendicular to the bed board, and the shortest puncture depth D3 among the non-small-angle-deflected paths. In some embodiments, when the candidate paths include only non-coplanar candidate paths, the path recommendation module 330 may screen the target path based on D1. In some embodiments, when the candidate paths include only coplanar candidate paths, the path recommendation module 330 may screen the target path based on D2 and D3 of the coplanar candidate paths.
In some embodiments, the path recommendation module 330 may be used to recommend the target path. For example, the path recommendation module 330 may transmit the determined target path to the terminal device 140 for output to the doctor for selection.
More details about the data preprocessing module 310, the path screening module 320, and the path recommendation module 330 can be found elsewhere in this specification, e.g., FIG. 4 to FIG. 28 and their related descriptions.
It should be understood that the system and its modules shown in FIG. 3 may be implemented in various ways. For example, in some embodiments, the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
It should be noted that the above description of the puncture path planning apparatus 300 and its modules is only for convenience of description and illustration, and does not limit this specification to the scope of the cited embodiments. It can be understood that, after understanding the principle of the system, those skilled in the art may arbitrarily combine the modules or form subsystems connected to other modules without departing from this principle. For example, the data preprocessing module 310 may further include: an image acquisition unit for obtaining the target image; an image segmentation unit for organ segmentation; a vessel identification unit for identifying vessels and/or vessel types in the target image; and a target point determination unit for determining the target point based on the segmented image or the vessel-identified image. As another example, the path screening module 320 may further include an initial path determination unit and a candidate path determination unit, respectively used to determine initial paths based on the target point and the first constraint, and to determine candidate paths from the initial paths based on the second constraint. Such variations are all within the protection scope of this specification.
FIG. 4 is a flowchart of an exemplary puncture path planning method according to some embodiments of this specification. In some embodiments, the process 400 may be executed by the puncture path planning system 100 (e.g., the processing device 130 in the system 100) or the puncture path planning apparatus 300. For example, the process 400 may be stored in a storage device (e.g., the storage device 150, a storage unit of the system) in the form of a program or instructions, and the process 400 can be implemented when the processor or the modules shown in FIG. 3 execute the program or instructions. As shown in FIG. 4, in some embodiments, the process 400 may include the following steps.
Step 410: determine the target point based on the target image. In some embodiments, step 410 may be performed by the processing device 130 or the data preprocessing module 310.
The target image may refer to an image that can reflect the structure, composition, etc. of organs or tissues in the human body. In some embodiments, the target image may include medical images generated based on various imaging mechanisms. For example, the target image may be a CT scan image, an MR scan image, an ultrasound scan image, an X-ray scan image, an MRI scan image, a PET scan image, an OCT scan image, a NIRS scan image, an FIR scan image, an X-ray-MRI scan image, a PET-X-ray scan image, a SPECT-MRI scan image, a DSA-MRI scan image, a PET-CT scan image, a US scan image, etc. In some embodiments, the target image may include a two-dimensional, three-dimensional, or four-dimensional image. A three-dimensional image of an organism can reflect information such as the structure and density of its internal tissues and organs. In some embodiments, the three-dimensional image may be an image obtained by converting a two-dimensional tomographic data sequence acquired by a medical imaging device (e.g., the imaging device 110) into three-dimensional data, so as to intuitively and stereoscopically display the three-dimensional form and spatial information of the organism.
In some embodiments, a target image of the target object may be obtained. In some embodiments, the target image of the target object may be obtained through the imaging device 110. For example, before the puncture, the imaging device 110 may scan the target object within the detection area to obtain the target image and transmit it to the puncture path planning apparatus 300 or the processing device 130. In some embodiments, the target image of the target object may be obtained from the processing device 130, the terminal device 140, or the storage device 150. In some embodiments, the processing device 130 may obtain the target image of the target object by reading from the storage device 150 or a database, calling a data interface, etc. In some embodiments, the target image may also be obtained in any other feasible way; for example, the target image of the target object may be obtained from a cloud server and/or a medical system (e.g., a hospital's medical system center) via the network 160, which is not particularly limited in the embodiments of this application.
In some embodiments, the target point may reflect the puncture endpoint of the puncture path. In some embodiments, the target point may be the volume center or centroid of a lesion area (e.g., a diseased organ or tissue) or an area to be examined (e.g., an organ or tissue to be examined). For convenience of description, the lesion area or the area to be examined is collectively referred to as the "target organ".
In some embodiments, the target image may be segmented (e.g., organ or tissue segmentation) so that the target point can be determined based on the segmentation result. Different tissues or organs appear with different gray levels in scan images (e.g., CT scan images); in addition, organs or tissues have their own shape or position characteristics, based on which organ or tissue segmentation can be achieved. For example, because the tissue in a lesion area has become diseased, its appearance in the target image differs from other areas (e.g., diseased tissue generally appears as a low-density area in plain CT images and as edge enhancement in contrast-enhanced CT images); the difference in appearance combined with lesion characteristics enables segmentation of the lesion area.
In some embodiments, organ or tissue segmentation of the target image may be performed by methods such as deep learning, threshold-based segmentation, region growing, or level sets. Taking a chest-abdomen puncture as an example, organ or tissue segmentation may be performed on the target image of the chest and abdomen to segment and identify the skin, bones, liver, kidneys, heart, lungs, intra- and extra-organ blood vessels, spleen, pancreas, etc. In some embodiments, coarse segmentation may be performed on the target image to obtain a target structure mask, the localization information of the target structure mask may be determined, and accurate segmentation may be performed based on the localization information of the target structure mask to obtain the segmentation result. More details on obtaining segmentation results through coarse and accurate segmentation can be found in FIG. 5 to FIG. 15 and their related descriptions, which are not repeated here.
In some embodiments, the segmented target image and/or the target image with determined vessel types may be displayed on a terminal device (e.g., the terminal device 140) for output to the user, so that the user can understand the structure of the organs and/or tissues and/or lesion information of the target object.
Step 420: determine candidate paths based on the target point and at least two constraints. In some embodiments, step 420 may be performed by the processing device 130 or the path screening module 320.
In some embodiments, the constraints may include, but are not limited to: the distance between the path and the danger area is greater than a preset distance threshold; the path lies in slices adjacent to the slice containing the target region; entry points on the body contour in contact with the bed board are excluded; the puncture depth of the path is less than a preset depth threshold; or the angle between the path and a line perpendicular to the flat face of a flat lesion is within a preset range.
In some embodiments, the vessels and/or vessel types in the target image may be identified, and the danger area may be determined based on the vessels and/or vessel types. In some embodiments, the processing device 130 may use the first segmentation model and the second segmentation model to obtain the first and second segmentation results of the target image respectively, and fuse the first and second segmentation results to obtain a fusion result. Further, the processing device 130 may skeletonize the first segmentation result to obtain a first vessel skeleton set including at least one first vessel skeleton of a determined type; skeletonize the fusion result to obtain a second vessel skeleton of a vessel of undetermined type; determine the vessel type of the second vessel skeleton based on the first vessel skeleton; and thereby determine the danger area based on the vessel types. More details on vessel type determination can be found in FIG. 16 to FIG. 23 and their related descriptions, which are not repeated here.
In some embodiments, candidate paths may be determined based on any two or more of the aforementioned constraints. In some embodiments, candidate paths may be determined based on the constraint that the distance between the path and the danger area is greater than the preset distance threshold, together with any one or more of the other constraints. In some embodiments, the type and/or number of constraints may be determined according to the actual situation. For example, the processing device 130 may screen paths that simultaneously satisfy multiple of the above constraints as candidate paths.
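As a concrete illustration of screening paths that satisfy several constraints at once, here is a minimal sketch; the path representation, field names, and threshold values are assumptions made for illustration, not values prescribed by this specification:

```python
from dataclasses import dataclass

@dataclass
class Path:
    depth_mm: float        # puncture depth along the path
    danger_dist_mm: float  # minimum distance from the path to the danger area
    touches_bed: bool      # entry point lies on the contour in contact with the bed board

def screen_candidates(paths, dist_threshold_mm=5.0, depth_threshold_mm=120.0):
    """Keep only paths that satisfy every constraint simultaneously."""
    return [
        p for p in paths
        if p.danger_dist_mm > dist_threshold_mm
        and p.depth_mm < depth_threshold_mm
        and not p.touches_bed
    ]

paths = [
    Path(depth_mm=80, danger_dist_mm=7.2, touches_bed=False),   # satisfies all constraints
    Path(depth_mm=80, danger_dist_mm=3.1, touches_bed=False),   # too close to the danger area
    Path(depth_mm=150, danger_dist_mm=9.0, touches_bed=False),  # exceeds the depth threshold
    Path(depth_mm=60, danger_dist_mm=9.0, touches_bed=True),    # entry point on the bed side
]
candidates = screen_candidates(paths)
```

Additional constraints (e.g., the angle condition for flat lesions) would simply add further conjuncts to the filter.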
In some embodiments, initial paths may be determined based on the first constraint, and candidate paths may be further determined from the initial paths based on the second constraint. More details on candidate path determination can be found in FIG. 24 and its related description, which are not repeated here.
Step 430: determine the target path based on the candidate paths. In some embodiments, step 430 may be performed by the processing device 130 or the path recommendation module 330.
In some embodiments, the candidate paths may be divided into coplanar paths and non-coplanar paths. A coplanar path may refer to a path located in the same slice as the target region (e.g., the same transverse plane in CT imaging) or within a few adjacent slices; a non-coplanar path refers to a path that is not in the same slice as the target region or within a few adjacent slices. In some embodiments, the target path may be determined based on the coplanar and non-coplanar characteristics of the candidate paths. More details on target path determination can be found in FIG. 24 and its related description, which are not repeated here.
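The coplanar/non-coplanar split can be sketched by comparing slice indices; treating "a few adjacent slices" as a margin of one slice is an illustrative assumption:

```python
def split_by_coplanarity(path_slices, target_slice, adjacency=1):
    """Partition paths (given by the slice index of their entry plane) into
    coplanar paths (within `adjacency` slices of the target) and the rest."""
    coplanar, non_coplanar = [], []
    for s in path_slices:
        (coplanar if abs(s - target_slice) <= adjacency else non_coplanar).append(s)
    return coplanar, non_coplanar

# target region lies in slice 42; paths enter at these slices
coplanar, non_coplanar = split_by_coplanarity([40, 41, 42, 43, 50], target_slice=42)
```

The two resulting groups would then be ranked by their shortest puncture depths (D1 for non-coplanar paths, D2/D3 for coplanar ones) as described above.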
In some embodiments, after the target path is determined, it may be recommended to the user. For example, the processing device 130 may send the target path to the terminal device 140 or the imaging device 110 for output to the doctor for reference. In some embodiments, a puncture operation may be performed based on the target path. For example, the processing device 130 may control the end-effector device 120 to perform the puncture operation according to the target path. In some embodiments, relevant parameters of the initial paths, candidate paths, and/or target path (e.g., puncture depth, puncture angle, danger area, preset safe distance, preset depth threshold, third preset value, preset range, whether fine blood vessels are crossed) may be recorded for user reference and/or subsequent target path determination.
It should be noted that the above description of the process 400 is only for example and illustration, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to the process 400 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
Medical image (e.g., target image) segmentation (e.g., organ or tissue segmentation) can be used not only for puncture path planning but also in scenarios such as medical research, clinical diagnosis, and image information processing. In some embodiments, a coarse-to-fine organ segmentation method may be used; its advantage is that it can effectively improve segmentation accuracy, reduce the hardware resources occupied, and shorten the time consumed by segmentation. However, the segmentation result of this method depends heavily on the accuracy of the coarse localization, and in clinical applications, organs may be highly variable in shape, small in size, or diseased, causing inaccurate coarse localization. Inaccurate coarse localization in turn seriously affects the accuracy of fine segmentation, resulting in poor medical image segmentation.
The embodiments of this specification provide an image segmentation method that adopts a soft connected-domain analysis approach in the coarse segmentation stage, which accurately retains the target structure region while effectively excluding false positive regions. This not only improves the accuracy of target structure localization in the coarse localization stage but also facilitates subsequent accurate segmentation, thereby improving segmentation efficiency and accuracy. The image segmentation method is described in detail below with reference to the accompanying drawings (e.g., FIG. 5 to FIG. 15).
FIG. 5 is a block diagram of an exemplary image segmentation apparatus according to some embodiments of this specification.
As shown in FIG. 5, in some embodiments, the image segmentation apparatus 500 may include an image acquisition module 510, a coarse segmentation module 520, a localization information determination module 530, and an accurate segmentation module 540. In some embodiments, the functions corresponding to the image segmentation apparatus 500 may be implemented by the processing device 130 or the puncture path planning apparatus 300 (e.g., the data preprocessing module 310).
The image acquisition module 510 may be used to obtain the target image. In some embodiments, the target image may include a two-dimensional, three-dimensional, or four-dimensional image. In some embodiments, the image acquisition module 510 may obtain the target image of the target object.
The coarse segmentation module 520 may be used to perform coarse segmentation on the target structure in the target image to obtain a target structure mask. In some embodiments, the coarse segmentation module 520 may be used to perform coarse segmentation on at least one target structure in the target image to obtain at least one target structure mask.
The localization information determination module 530 may be used to determine the localization information of the target structure mask based on soft connected-domain analysis. In some embodiments, the localization information determination module 530 may be used to determine the number of connected domains in the target structure mask and determine the localization information of the target structure mask based on the number of connected domains. In some embodiments, the localization information determination module 530 may be used to localize the target structure mask based on localization coordinates of a preset structure.
The accurate segmentation module 540 may be used to perform accurate segmentation on the target structure based on the localization information of the target structure mask. In some embodiments, the accurate segmentation module 540 may be used to perform preliminary accurate segmentation on the target structure to obtain a preliminary accurate segmentation result; judge, based on the preliminary accurate segmentation result, whether the localization information of the target structure mask is accurate; if so, take the preliminary accurate segmentation result as the target segmentation result; otherwise, determine the target segmentation result of the target structure through an adaptive sliding-window approach.
It should be noted that for more technical details on how the coarse segmentation module 520, the localization information determination module 530, and the accurate segmentation module 540 execute the corresponding processes or functions to achieve organ segmentation, refer to the image segmentation method described in any of the embodiments shown in FIG. 6 to FIG. 15, which is not repeated here.
The above description of the image segmentation apparatus 500 is for illustration purposes only and is not intended to limit the scope of this application. Those of ordinary skill in the art can make various improvements and changes in form and detail to the application of the above method and system without departing from the principles of this application. However, such changes and modifications do not depart from the scope of this application.
FIG. 6 is a flowchart of an exemplary image segmentation method according to some embodiments of this specification. In some embodiments, the process 600 may be executed by the puncture path planning system 100 (e.g., the processing device 130 in the system 100) or the image segmentation apparatus 500. For example, the process 600 may be stored in a storage device (e.g., the storage device 150, a storage unit of the system) in the form of a program or instructions, and the process 600 can be implemented when the processor or the modules shown in FIG. 5 execute the program or instructions. As shown in FIG. 6, in some embodiments, the process 600 may include the following steps.
Step 610: perform coarse segmentation on the target structure in the target image to obtain a target structure mask. In some embodiments, step 610 may be performed by the processing device 130 or the coarse segmentation module 520.
The target structure may refer to the target organ and/or organ tissue to be segmented, e.g., the target organ, blood vessels in the target organ, etc. In some embodiments, the target image may include one or more target structures. In some embodiments, the target structure may include the heart, liver, spleen, kidney, blood vessels, and/or any other possible organ or organ tissue.
The target structure mask may refer to pixel-level classification labels. Taking a target image of the abdominal cavity as an example, the target structure mask represents the classification of each pixel in the target image, e.g., into background, liver, spleen, kidney, etc., where the aggregated region of a particular class is represented by the corresponding label value (e.g., all pixels classified as liver are aggregated, and the aggregated region is represented by the label value corresponding to the liver); the label values can be set according to the specific coarse segmentation task. In some embodiments, the target structure mask obtained by coarse segmentation may be a relatively rough organ mask. The target structure mask obtained by coarse segmentation may also be called the first mask.
In some embodiments, the target image may be preprocessed, and coarse segmentation may be performed on at least one target structure in the preprocessed target image to obtain the target structure mask. For example, the preprocessing may include normalization and/or background removal.
In some embodiments, a threshold-based segmentation method, a region growing method, or a level-set method may be used to perform coarse segmentation on at least one target structure in the target image. For example, the processing device 130 may set multiple different pixel threshold ranges and classify each pixel in the input target image according to its pixel value, segmenting pixels whose values fall within the same threshold range into the same region, thereby achieving coarse segmentation of the target image. As another example, the processing device 130 may, based on known pixels in the target image or a predetermined region composed of pixels, preset a similarity criterion as required, compare pixels with their surrounding pixels or compare the predetermined region with its surrounding regions based on the preset similarity criterion, and merge pixels or regions with high similarity, stopping when the above process can no longer be repeated, thereby completing the coarse segmentation of the target image. The preset similarity criterion may be determined based on preset image features, e.g., gray level, texture. As yet another example, the processing device 130 may set the target contour of the target image as the zero level set of a high-dimensional function, differentiate the function, extract the zero level set from the output to obtain the target contour, and then segment the pixel region within the contour, thereby achieving coarse segmentation of the target image.
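The threshold-based coarse segmentation described above can be sketched as follows; the label values and intensity ranges are illustrative assumptions (not calibrated clinical thresholds):

```python
def threshold_segment(pixels, ranges):
    """Assign each pixel value the label of the first intensity range it falls in;
    pixels outside every range are labeled 0 (background)."""
    labels = []
    for v in pixels:
        label = 0
        for lab, (lo, hi) in ranges.items():
            if lo <= v <= hi:
                label = lab
                break
        labels.append(label)
    return labels

# hypothetical intensity ranges for two tissue classes
ranges = {1: (40, 60), 2: (150, 300)}
labels = threshold_segment([55, 10, 200, 45], ranges)
```

A real implementation would operate on a 2-D/3-D array and be followed by the connected-domain analysis described below, but the per-pixel classification rule is the same.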
In some embodiments, a trained deep learning model (e.g., UNet) may be used to perform coarse segmentation on at least one target structure in the target image. For example, after the target image is input into a trained convolutional neural network, the encoder of the network extracts features of the target image through convolution; the decoder then restores the features into a pixel-level segmentation probability map, where the map indicates the probability that each pixel belongs to a particular class; finally, the segmentation probability map is output as a segmentation mask, completing the coarse segmentation.
Step 620: determine the localization information of the target structure mask based on soft connected-domain analysis. In some embodiments, step 620 may be performed by the processing device 130 or the localization information determination module 530.
A connected domain (i.e., connected region) may refer to an image region in the target image composed of adjacent foreground pixels with the same pixel value. In some embodiments, the target structure mask may include one or more connected domains.
In some embodiments, the localization information of the target structure mask (also called the first localization information) may be determined by performing soft connected-domain analysis on the target structure mask. Soft connected-domain analysis may refer to analyzing and calculating the number of connected domains in the target structure mask and their corresponding areas.
In some embodiments, the number of connected domains in the target structure mask may be determined, and the localization information of the target structure mask may be determined based on the number of connected domains. In some embodiments, when the target image includes multiple connected domains, the position information of the multiple connected domains may be judged first, and the localization information of the target structure mask may then be obtained based on the position information of the multiple connected domains. In some embodiments, retained connected domains may be determined based on the number of connected domains, and the localization information of the target structure mask may be determined based on the position information of the retained connected domains.
In some embodiments, when the number of connected domains is greater than the first preset value, the processing device 130 may determine the connected domains that meet a set condition as retained connected domains. In some embodiments, the set condition may be a constraint on the area of the connected domains. In some embodiments, when the number of connected domains is less than or equal to the first preset value, all connected domains may be determined as retained connected domains (e.g., when the number of connected domains is 1), or the output retained connected domain is empty (e.g., when the number of connected domains is 0).
In some embodiments, when the number of connected domains is greater than the first preset value, whether all or some of the multiple connected domains (e.g., those whose area rank is within the preset rank n) are retained connected domains may be judged.
In some embodiments, when the number of connected domains is greater than the first preset value and less than the second preset value, the ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains may be determined, and whether this ratio is greater than the first threshold may be judged; if so, the largest connected domain is determined as the retained connected domain; otherwise, every connected domain in the target structure mask is determined as a retained connected domain. The largest connected domain may refer to the connected domain with the largest area in the target structure mask; the total area of the connected domains may refer to the sum of the areas of all connected domains in the target structure mask. For more details, see FIG. 7 and its related description, which are not repeated here.
In some embodiments, when the number of connected domains is greater than or equal to the second preset value, the connected domains in the target structure mask may be sorted in descending order of area; based on the sorting result, the top n (i.e., preset rank n) connected domains are determined as target connected domains; and the retained connected domains are determined from the target connected domains based on the second preset condition. For example, the processing device 130 may sort multiple connected domains with different areas in descending order of area; the sorted connected domains are denoted the first connected domain, the second connected domain, ..., the k-th connected domain. The first connected domain is the one with the largest area among the multiple connected domains and is therefore also called the largest connected domain. When the preset rank n is 3, i.e., the target connected domains are the first, second, and third connected domains, the processing device 130 may judge, in order of area rank and based on the second preset condition, whether one or more of the first, second, and third connected domains are retained connected domains; that is, first judge whether the first connected domain is a retained connected domain, then whether the second is, until the (n-1)-th judgment is completed. For more details, see FIG. 8 and its related description, which are not repeated here.
It can be understood that when the number of connected domains falls within different ranges or meets different threshold conditions (e.g., the first preset value, the second preset value), the set conditions for judging connected domains of different area ranks as retained connected domains may differ; see the related descriptions of FIG. 7 and FIG. 8 for details.
Step 630: perform accurate segmentation on the target structure based on the localization information of the target structure mask. In some embodiments, step 630 may be performed by the processing device 130 or the accurate segmentation module 540.
In some embodiments, the accurate segmentation may include: performing preliminary accurate segmentation on the target structure, and judging, based on the preliminary accurate segmentation result, whether the localization information of the target structure mask is accurate; if so, taking the preliminary accurate segmentation result as the target segmentation result; otherwise, determining the target segmentation result of the target structure through an adaptive sliding-window approach. For more details, see FIG. 11 and its related description, which are not repeated here.
It should be noted that the above description of the process 600 is only for example and illustration, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to the process 600 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
FIG. 7 is a flowchart of exemplary determination of localization information of a target structure mask according to some embodiments of this specification. In some embodiments, the process 700 may be executed by the puncture path planning system 100 (e.g., the processing device 130 in the system 100) or the image segmentation apparatus 500 (e.g., the localization information determination module 530). For example, the process 700 may be stored in a storage device (e.g., the storage device 150, a storage unit of the system) in the form of a program or instructions, and the process 700 can be implemented when the processor or the modules shown in FIG. 5 execute the program or instructions. As shown in FIG. 7, in some embodiments, the process 700 may include the following steps.
Step 710: determine the number of connected domains in the target structure mask.
In some embodiments, the multiple connected domains in the target structure mask may have different areas. In some embodiments, the number of connected domains in the target structure mask may be determined in any feasible way, which is not limited in this specification.
Step 720: in response to the number of connected domains being greater than the first preset value and less than the second preset value, determine the ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains.
In some embodiments, the first preset value may be 1.
In some embodiments, as shown in FIG. 9, when the number of connected domains is 0, the corresponding mask is empty, i.e., acquisition of the mask of the target structure has failed, the coarse segmentation has failed, or the segmentation object does not exist. For example, when segmenting the spleen in the abdominal cavity, the spleen may have been removed; in this case, the spleen mask is empty and the number of connected domains is 0, and the output retained connected domain is empty. When the number of connected domains is 1, there is only this one connected domain, with no false positives or segmentation discontinuities; in this case, the connected domain can be retained, i.e., determined as the retained connected domain. It can be understood that when the number of connected domains is 0 or 1, there is no need to judge whether a connected domain is a retained connected domain according to the set condition.
In some embodiments, when the number of connected domains is greater than the first preset value and less than the second preset value, the localization information of the target structure mask may be determined through the operations of steps 730-740. In some embodiments, the second preset value may be 3. For example, when the number of connected domains of the target structure mask is greater than 1 and less than 3 (e.g., the number of connected domains is 2), the processing device 130 may determine the ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains.
When the number of connected domains is greater than or equal to the second preset value, the localization information of the target structure mask may be determined through the operations in the process 800; for more details, see steps 820-840, which are not repeated here.
Step 730: judge whether the ratio of the area of the largest connected domain to the total area of the connected domains is greater than the first threshold.
In some embodiments, the first threshold may take a value in the range 0.8-0.95. A first threshold within 0.8-0.95 can ensure that the soft connected-domain analysis achieves the expected accuracy. In some embodiments, the first threshold may take a value in the range 0.9-0.95, which can further improve the accuracy of the soft connected-domain analysis. In some embodiments, the first threshold may be set based on the category of the target structure (e.g., chest target structure, abdominal target structure). In some embodiments, the first threshold may be reasonably set based on machine learning and/or big data, which is not further limited here.
If the ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains is greater than the first threshold, step 731 is executed: determine the largest connected domain as the retained connected domain. Otherwise, step 735 is executed: determine every connected domain in the target structure mask as a retained connected domain.
Merely by way of example, as shown in FIG. 9, when the number of connected domains of the target structure mask is greater than 1 and less than 3, i.e., 2, the processing device 130 may obtain connected domains A and B by area (S), where the area of A is greater than that of B, i.e., S(A) > S(B). As described above, connected domain A may also be called the first connected domain or the largest connected domain, and connected domain B the second connected domain. Through calculation on the connected domains, when the ratio of the area of A to the total area of A and B is greater than the first threshold, i.e., S(A)/S(A+B) > first threshold, B may be judged as a false positive region and only A is retained, i.e., the largest connected domain A is determined as the retained connected domain. When the ratio of the area of A to the total area of A and B is less than or equal to the first threshold, both A and B may be judged as parts of the target structure mask and retained, i.e., both A and B are determined as retained connected domains.
Step 740: determine the localization information of the target structure mask based on the retained connected domains.
In some embodiments, the localization information of the target structure mask may include position information of the bounding rectangle of the target structure mask, e.g., coordinate information of the border lines of the bounding rectangle. In some embodiments, the bounding rectangle of the target structure mask may cover the localization region of the target structure. In some embodiments, the bounding rectangle of the target structure mask may be displayed in the target image in the form of a bounding box. In some embodiments, the bounding box of the target structure mask may be constructed based on the bottom edges of the connected regions of the target structure in each direction (e.g., the bottom edges in the up, down, left, and right directions of the connected regions).
In some embodiments, the bounding rectangle of the target structure mask may include a single bounding box consisting of only one rectangle. For example, when only one connected region exists in the target structure (e.g., an organ), such as a blood vessel or an organ in the abdominal cavity, a larger-area bounding rectangle can be constructed from the bottom edges of that connected region in each direction. In some embodiments, the above large-area bounding rectangle may be applied to organs with a single connected region.
In some embodiments, the bounding rectangle of the target structure mask may include one bounding box assembled from multiple rectangles. For example, when an organ has multiple connected regions, the multiple connected regions correspond to multiple rectangles, and a larger-area bounding box can be constructed from the bottom edges of the multiple rectangles. When the bounding box of the target structure mask is formed by combining multiple small rectangles, e.g., the bottom edges of three rectangles corresponding to three connected regions form one overall bounding box, the calculation can be handled as one overall bounding box, thereby reducing the amount of calculation while ensuring the expected accuracy.
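A minimal sketch of building one overall bounding rectangle, either from a single connected region or by merging the per-region boxes of several regions (2-D for brevity; the boxes described above are 3-D, and the sample points are made up):

```python
def bounding_box(points):
    """Axis-aligned bounding rectangle of a set of (row, col) mask points,
    as (row_min, col_min, row_max, col_max)."""
    rows = [r for r, _ in points]
    cols = [c for _, c in points]
    return (min(rows), min(cols), max(rows), max(cols))

def merge_boxes(boxes):
    """Combine per-connected-region boxes into one overall bounding rectangle."""
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

box_a = bounding_box([(2, 3), (4, 7)])
box_b = bounding_box([(10, 1), (12, 2)])
overall = merge_boxes([box_a, box_b])
```

Handling several region boxes as one merged box is what allows the subsequent computation to proceed as if there were a single bounding rectangle, as noted above.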
In some embodiments, when localization of the bounding rectangle of the target structure mask fails, the target structure mask may be localized based on the localization coordinates of a preset structure. It can be understood that when the coordinates of the bounding rectangle of the target structure mask do not exist, the corresponding organ is judged to have failed localization.
In some embodiments, a target structure with relatively stable localization (e.g., an organ whose localization is relatively stable) may be selected as the preset structure. The probability of localization failure when localizing such target structures is low, which enables accurate localization of the target structure mask. Illustratively, since the probability of localization failure of the liver, stomach, spleen, and kidneys in the abdominal region is low, and the probability of localization failure of the lungs in the thoracic region is low (i.e., the localization of these organs is relatively stable), the liver, stomach, spleen, and kidneys may serve as preset organs for the abdominal region; that is, the preset structure may include the liver, stomach, spleen, kidneys, lungs, or any other possible organ tissue.
In some embodiments, the target structure mask may be re-localized using the localization coordinates of the preset structure as reference coordinates. For example, when the target structure that failed localization lies in the abdominal region, the localization coordinates of the liver, stomach, spleen, and kidneys may be used as the coordinates for re-localization, and the target structure that failed localization in the abdominal region is re-localized accordingly. In some embodiments, the target structure mask in the thoracic region may be localized based on the localization coordinates of the lungs. For example, when the target structure that failed localization lies in the thoracic region, the localization coordinates of the lungs may be used as the coordinates for re-localization, and the target structure that failed localization in the thorax is re-localized accordingly.
Merely by way of example, when the target structure that failed localization lies in the abdominal region, the localization coordinates of the liver top, kidney bottom, spleen left, and liver right may be used as the re-localization coordinates in the transverse direction (upper and lower sides) and the coronal direction (left and right sides), and the foremost and rearmost ends of these four organ coordinates may be taken as the coordinates of the new localization in the sagittal direction (front and rear sides); the target structure that failed localization in the abdomen is re-localized accordingly. Merely by way of example, when the target structure that failed localization lies in the thoracic region, the bounding box formed by the lung localization coordinates is expanded by a certain number of pixels, and the target structure that failed localization in the thorax is re-localized accordingly.
By accurately localizing the target structure mask based on the localization coordinates of the preset structure to determine the localization information of the target structure, segmentation accuracy and efficiency can be improved while the amount of segmentation computation is reduced, saving memory resources.
In some embodiments, determining the localization information of the target structure mask further includes the following operation: post-processing the target structure mask to reduce noise and optimize the image display effect. For example, the post-processing may include the following image post-processing operations: edge smoothing of the image and/or image denoising. In some embodiments, edge smoothing may include smoothing or blurring to reduce the noise or distortion of the medical image. In some embodiments, smoothing or blurring may adopt the following approaches: mean filtering, median filtering, Gaussian filtering, and bilateral filtering.
It should be noted that the above description of the process 700 is only for example and illustration, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to the process 700 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
FIG. 8 is a flowchart of exemplary determination of localization information of a target structure mask according to other embodiments of this specification. In some embodiments, the process 800 may be executed by the puncture path planning system 100 (e.g., the processing device 130 in the system 100) or the image segmentation apparatus 500 (e.g., the localization information determination module 530). For example, the process 800 may be stored in a storage device (e.g., the storage device 150, a storage unit of the system) in the form of a program or instructions, and the process 800 can be implemented when the processor or the modules shown in FIG. 5 execute the program or instructions. As shown in FIG. 8, in some embodiments, the process 800 may include the following steps.
Step 810: determine the number of connected domains in the target structure mask. For more details, see step 710 and its description.
Step 820: in response to the number of connected domains being greater than or equal to the second preset value, sort the connected domains in the target structure mask in descending order of area.
As described above, the second preset value may be 3; when the number of connected domains is greater than or equal to 3, the processing device 130 may sort the connected domains in the target structure mask in descending order of area.
Step 830: based on the sorting result, determine the top n connected domains as target connected domains.
In some embodiments, based on the above sorting result, the processing device 130 may determine the top n (e.g., 3) connected domains as the target connected domains. In some embodiments, the preset rank n may be set based on the category of the target structure (e.g., chest target structure, abdominal target structure). In some embodiments, the preset rank n may be reasonably set based on machine learning and/or big data, which is not further limited here.
Step 840: determine the retained connected domains from the target connected domains based on the second preset condition.
In some embodiments, the retained connected domains may include at least the largest connected domain in the target structure mask. In some embodiments, each connected domain whose area rank is within the preset rank n (or each connected domain in the target structure mask) may be judged one by one, in order of area rank, according to the second preset condition, to determine whether it is a retained connected domain, and finally the retained connected domains are output.
The second preset condition may be a constraint related to the areas of the connected domains.
In some embodiments, the second preset condition may include the relationship between a threshold (e.g., the first threshold) and the ratio of the area of a specific connected domain (e.g., the largest connected domain, or the connected domains whose area rank is within a preset rank m, where m is less than or equal to n) to the total area of the connected domains. For example, the condition for the largest connected domain within the preset rank n to be a retained connected domain may be the relationship between the first threshold and the ratio of the area of the largest connected domain to the total area of the connected domains: if the ratio is greater than the first threshold, the largest connected domain is determined as the retained connected domain. As another example, the condition for the second connected domain (the second-ranked connected domain) within the preset rank n to be a retained connected domain may be the relationship between the first threshold and the ratio of the sum of the areas of the first connected domain (i.e., the largest connected domain) and the second connected domain (i.e., the area of the specific connected domain) to the total area of the connected domains: when the ratio is greater than the first threshold, both the first and second connected domains are determined as retained connected domains. As yet another example, the condition for the third connected domain (the third-ranked connected domain) within the preset rank n to be a retained connected domain may be the relationship between the first threshold and the ratio of the sum of the areas of the first, second, and third connected domains (i.e., the area of the specific connected domain) to the total area of the connected domains: if the ratio is greater than the first threshold, the first, second, and third connected domains are all determined as retained connected domains.
In some embodiments, the second preset condition may include the relationship between the fifth threshold and the area ratio between a first preset connected domain and a second preset connected domain. For example, the condition for the largest connected domain within the preset rank n to be a retained connected domain may be the relationship between the fifth threshold and the ratio of the area of the second connected domain (i.e., the first preset connected domain) to the area of the largest connected domain (i.e., the second preset connected domain): when this area ratio is less than the fifth threshold, the largest connected domain is determined as the retained connected domain. As another example, the condition for the second connected domain within the preset rank n to be a retained connected domain may be the relationship between the fifth threshold and the proportion of the area of the third connected domain (i.e., the area of the first preset connected domain) to the sum of the areas of the first and second connected domains (i.e., the area of the second preset connected domain): when it is less than the fifth threshold, the second connected domain is determined as a retained connected domain; in this case both the largest and second connected domains are retained connected domains. As yet another example, the condition for the third connected domain within the preset rank n to be a retained connected domain may be the relationship between the fifth threshold and the proportion of the area of the fourth-ranked connected domain (i.e., the area of the first preset connected domain) to the sum of the areas of the first, second, and third connected domains (i.e., the area of the second preset connected domain): when it is less than the fifth threshold, the first, second, and third connected domains are all determined as retained connected domains.
In some embodiments, the fifth threshold may be in the range 0.05 to 0.2; within this range, the soft connected-domain analysis can achieve the expected accuracy. In some embodiments, the fifth threshold may be 0.05; with this setting, particularly good soft connected-domain analysis accuracy can be achieved. In some embodiments, the fifth threshold may be any other reasonable value, which is not limited in this specification.
Merely by way of example, as shown in FIG. 9, when the number of connected domains in the target structure mask is greater than or equal to 3, the processing device 130 may obtain connected domains A, B, C, ..., P by area (S), where the area of A is greater than that of B, the area of B is greater than that of C, and so on, i.e., S(A) > S(B) > S(C) > ... > S(P). Further, the processing device 130 may calculate the total area S(T) of connected domains A, B, C, ..., P to perform calculations on the connected domains. Specifically, the processing device 130 may select the connected domains within the preset rank n by area rank (e.g., connected domains A, B, C) and judge one by one whether each connected domain within the preset rank n is a retained connected domain. When the proportion of the area of A to the total area S(T) is greater than the first threshold M, i.e., S(A)/S(T) > M, or the proportion of the area of B to the area of A is less than the fifth threshold N, i.e., S(B)/S(A) < N, connected domain A is judged to be part of the organ mask and retained (i.e., A is the retained connected domain), while the remaining connected domains are judged to be false positive regions; otherwise, the calculation continues, i.e., the second connected domain (connected domain B) is judged next. When the proportion of the areas of A and B to the total area S(T) is greater than the first threshold M, i.e., S(A+B)/S(T) > M, or the proportion of the area of C to the areas of A and B is less than the fifth threshold N, i.e., S(C)/S(A+B) < N, connected domains A and B are judged to be parts of the target structure mask and retained (i.e., A and B are retained connected domains), and the rest are judged to be false positive regions; otherwise, the calculation continues, i.e., the third connected domain (connected domain C) is judged next. When the proportion of the areas of A, B, and C to the total area S(T) is greater than the first threshold M, i.e., S(A+B+C)/S(T) > M, or the proportion of the area of D (the fourth connected domain) to the areas of A, B, and C is less than the fifth threshold N, i.e., S(D)/S(A+B+C) < N, connected domains A, B, and C are all judged to be parts of the target structure mask and retained (i.e., A, B, and C are all retained connected domains). Following the above judgment method, whether the connected domains A, B, C, D, ..., P in the target structure mask, or some of the connected domains whose area rank is within the preset rank n, are retained connected domains can be judged in turn.
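The sequential retention rule above can be sketched as follows. The defaults for the first threshold M, the fifth threshold N, and the preset rank n are picked from the ranges mentioned in this specification (M in 0.8-0.95, N in 0.05-0.2, n = 4 as in FIG. 9) and are illustrative, not normative:

```python
def soft_cca(areas, M=0.9, N=0.05, n=4):
    """Soft connected-domain analysis on a list of connected-domain areas.
    Returns the indices (into `areas`) of the retained connected domains,
    largest first; everything else is treated as a false positive."""
    order = sorted(range(len(areas)), key=lambda i: areas[i], reverse=True)
    if len(areas) <= 1:
        return order  # 0 or 1 connected domain: nothing to discard
    total = sum(areas)
    kept_area = 0.0
    retained = []
    for rank, idx in enumerate(order[: n - 1]):  # judge the n-1 top-ranked domains
        kept_area += areas[idx]
        retained.append(idx)
        next_area = areas[order[rank + 1]] if rank + 1 < len(order) else 0.0
        # stop when the kept domains dominate (ratio > M) or the next
        # domain is negligible relative to the kept ones (ratio < N)
        if kept_area / total > M or next_area / kept_area < N:
            break
    return retained

retained_1 = soft_cca([1000, 30, 20])   # A dominates -> only A kept
retained_2 = soft_cca([500, 480, 20])   # A and B together dominate -> both kept
```

This mirrors the S(A)/S(T) > M and S(B)/S(A) < N cascade described for connected domains A, B, C in the example above.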
It should be noted that FIG. 9 only shows the judgment of whether three connected domains are retained connected domains. It can also be understood that the value of the preset rank n in FIG. 9 is set to 4; therefore, only the connected domains ranked 1, 2, and 3, i.e., connected domains A, B, and C, need to be judged as retained connected domains or not.
Step 850: determine the localization information of the target structure mask based on the retained connected domains. For more details, see step 740 and its description.
It should be noted that the above description of the process 800 is only for example and illustration, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to the process 800 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
FIG. 10 is a comparative schematic diagram of exemplary coarse segmentation results according to some embodiments of this specification. As shown in FIG. 10, the upper and lower images to the left of the dashed line are the transverse-plane and stereoscopic target images of a coarse segmentation result without soft connected-domain analysis, and those to the right are the transverse-plane and stereoscopic target images of a coarse segmentation result with soft connected-domain analysis. The comparison shows that the coarse segmentation result based on soft connected-domain analysis removes the false positive regions framed by boxes in the left images; compared with previous connected-domain analysis methods, it excludes false positive regions with higher accuracy and reliability, directly facilitates the subsequent reasonable extraction of the bounding box for the target structure mask localization information, and improves segmentation efficiency.
FIG. 11 is a flowchart of an exemplary accurate segmentation process according to some embodiments of this specification. In some embodiments, the process 1100 may be executed by the puncture path planning system 100 (e.g., the processing device 130 in the system 100) or the image segmentation apparatus 500 (e.g., the accurate segmentation module 540). For example, the process 1100 may be stored in a storage device (e.g., the storage device 150, a storage unit of the system) in the form of a program or instructions, and the process 1100 can be implemented when the processor or the modules shown in FIG. 5 execute the program or instructions. As shown in FIG. 11, in some embodiments, the process 1100 may include the following steps.
Step 1110: perform preliminary accurate segmentation on the target structure to obtain a preliminary accurate segmentation result.
Preliminary accurate segmentation may refer to accurate segmentation performed according to the localization information of the coarsely segmented target structure mask.
In some embodiments, preliminary accurate segmentation may be performed on the target structure according to the bounding box localized by coarse segmentation, obtaining a preliminary accurate segmentation result. Preliminary accurate segmentation can generate a more accurate mask of the target structure; that is, the preliminary accurate segmentation result includes the accurately segmented target structure mask. The target structure mask obtained by accurate segmentation may also be called the second mask.
Step 1120: judge whether the localization information of the target structure mask is accurate.
Through step 1120, it can be judged whether the localization information of the target structure mask obtained by coarse segmentation, i.e., the first localization information determined based on soft connected-domain analysis, is accurate, and thereby whether the coarse segmentation is accurate.
In some embodiments, whether the localization information of the coarsely segmented target structure mask is accurate may be judged according to the localization information of the preliminarily accurately segmented target structure mask. In some embodiments, the second mask may be computed to obtain second localization information (i.e., the localization information of the preliminary accurate segmentation result), and the coarse segmentation localization information (first localization information) may be compared with the accurate segmentation localization information (second localization information) to judge whether the first localization information of the first mask (i.e., the coarsely segmented target structure mask) is accurate. In some embodiments, the preliminary accurate segmentation result may include the second mask and/or the localization information of the second mask.
In some embodiments, the bounding box of the coarsely segmented target structure mask may be compared with the bounding box of the accurately segmented target structure mask to determine the size of the difference between the two. In some embodiments, this comparison may be performed in the 6 directions of three-dimensional space (i.e., the bounding box as a whole is a cuboid in three-dimensional space). Merely by way of example, the processing device 130 may calculate the degree of coincidence between each edge of the bounding box of the coarsely segmented target structure mask (first mask) and each edge of the bounding box of the accurately segmented target structure mask (second mask), or calculate the difference between the vertex coordinates of the bounding box of the coarsely segmented target structure mask and those of the accurately segmented target structure mask.
In some embodiments, whether the coarse segmentation result of the target structure mask is accurate may be judged according to the size of the difference between the coarse segmentation localization information and the accurate segmentation localization information. In some embodiments, the localization information may be the bounding rectangle (e.g., bounding box) of the target structure mask, and whether the bounding rectangle of the coarsely segmented target structure mask is accurate is judged from the bounding rectangles of the coarsely and accurately segmented target structure masks. In this case, the size of the difference between the coarse and accurate segmentation localization information may refer to the distance between the closest border lines of the coarse segmentation bounding box and the accurate segmentation bounding box. In some embodiments, when the difference between the two is large (i.e., the distance between the closest border lines of the coarse and accurate segmentation bounding boxes is large), the coarse segmentation localization information is judged to be accurate; when the difference is small (i.e., that distance is small), the coarse segmentation localization information is judged to be inaccurate. It should be noted that the coarse segmentation bounding box is obtained by expanding by a number of pixels (e.g., 15-20 voxels) the border lines of the original coarse segmentation that adhere to the target structure. In some embodiments, whether the coarse segmentation localization information is accurate may be determined based on the relationship between a preset threshold and the distance between the closest border lines of the coarse and accurate segmentation bounding boxes; for example, the localization is determined to be inaccurate when the distance is less than the preset threshold and accurate when the distance is greater than the preset threshold. In some embodiments, to ensure judgment accuracy, the preset threshold may take a value less than or equal to 5 voxels.
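The per-direction comparison can be sketched as follows; the box layout and the 5-voxel tolerance follow the description above, while the sample coordinates are made up for illustration:

```python
def inaccurate_directions(coarse_box, fine_box, tol=5):
    """Both boxes are (xmin, ymin, zmin, xmax, ymax, zmax); `coarse_box` is the
    already-expanded coarse segmentation box. A direction is flagged as
    inaccurate when the fine edge lies within `tol` voxels of the coarse edge,
    i.e., the structure likely extends beyond the coarse window there."""
    names = ("xmin", "ymin", "zmin", "xmax", "ymax", "zmax")
    return [name for name, c, f in zip(names, coarse_box, fine_box) if abs(f - c) < tol]

# the fine box nearly touches the expanded coarse box on the xmax side only
flags = inaccurate_directions((0, 0, 0, 100, 100, 100), (20, 30, 25, 97, 80, 70))
```

Each flagged direction becomes a target direction for the sliding-window adjustment described next.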
When the localization information of the coarsely segmented target structure mask is judged to be accurate, step 1130 may be entered: take the preliminary accurate segmentation result as the target segmentation result. When the localization information of the coarsely segmented target structure mask is judged to be inaccurate, step 1140 may be executed: determine the target segmentation result of the target structure through an adaptive sliding-window approach.
FIG. 12 is a schematic diagram of exemplary judgment of localization information of a target structure mask according to some embodiments of this specification. FIGS. 12(a) and (b) show the target structure mask A obtained by coarse segmentation, the bounding box B of mask A (i.e., the localization information of the coarsely segmented target structure mask), and the bounding box C after preliminary accurate segmentation performed according to the coarse segmentation bounding box (i.e., the localization information of the accurately segmented target structure mask). For convenience, a planar rectangle in one plane of the three-dimensional bounding box is used for illustration; it can be understood that the three-dimensional bounding box also has 5 other planar rectangles, i.e., there are border lines in 6 directions in the actual calculation of the three-dimensional bounding box, and only the 4 border lines of one plane are described here.
Merely by way of example, as shown in FIG. 12(a), the difference (distance) between the right border line of the accurate segmentation bounding box C and the corresponding border line of the coarse segmentation bounding box B is small, from which it can be judged that the direction corresponding to the right side of the coarse segmentation bounding box B is inaccurate, and the right border line needs to be adjusted. However, the differences between the top, bottom, and left border lines of bounding box C and the top, bottom, and left border lines of bounding box B, respectively, are large, from which it can be judged that the directions corresponding to the top, bottom, and left sides of the coarse segmentation bounding box B are accurate. In this case, the localization information of the coarsely segmented target structure mask is determined to be inaccurate, and the right border line may be adjusted through the adaptive sliding-window approach to determine the target segmentation result of the target structure; for more details, see the description in step 1140.
Merely by way of example, as shown in FIG. 12(b), the differences between all 4 border lines of the accurate segmentation bounding box C and the corresponding border lines of the coarse segmentation bounding box B are large, so all 4 border lines of bounding box B can be judged to be accurate, i.e., the localization information of the coarsely segmented target structure mask is accurate. In this case, the preliminary accurate segmentation result may be taken as the target segmentation result.
It should be noted that target structure mask A has 6 directions in total; FIG. 12 only illustrates 4 border lines, while in practice the 12 border lines in the 6 directions of target structure mask A are judged.
Step 1130: take the preliminary accurate segmentation result as the target segmentation result.
Accurate coarse segmentation localization information means the coarse segmentation result is accurate, and therefore the preliminary accurate segmentation result obtained based on the coarse segmentation localization information is also accurate; the preliminary accurate segmentation result can thus be output as the target segmentation result, i.e., a single accurate segmentation is performed.
Step 1140: determine the target segmentation result of the target structure through an adaptive sliding-window approach.
When the coarse segmentation localization information is inaccurate, the coarse segmentation result is inaccurate, and the target structure obtained by accurately segmenting it is very likely inaccurate; corresponding adaptive sliding-window calculation may be performed on it to obtain accurate localization information so that accurate segmentation can continue.
In some embodiments, the direction in which the localization information is biased may be determined as the target direction, and adaptive sliding-window calculation is performed in the target direction according to the overlap-rate parameter. In some embodiments, at least one direction in which the bounding box is inaccurate may be determined as the target direction, e.g., the direction corresponding to the right side of bounding box B in FIG. 12(a). After the coarse segmentation bounding box is determined to be inaccurate, the coarse segmentation bounding box may be slid in the target direction according to the input preset overlap-rate parameter, i.e., a sliding-window operation is performed, and this sliding-window operation is repeated until all bounding boxes are completely accurate.
The overlap-rate parameter may refer to the proportion of the overlapping area between the initial bounding box and the bounding box after sliding to the area of the initial bounding box; when the overlap-rate parameter is high, the sliding step of the sliding-window operation is short. For example, the overlap-rate parameter may be set small so that the sliding-window calculation process is more concise (i.e., the sliding-window operation has fewer steps), or set large so that the sliding-window calculation result is more accurate. In some embodiments, the sliding step of the sliding-window operation may be calculated according to the current overlap-rate parameter.
FIG. 13 is a schematic diagram of exemplary determination of the sliding direction according to some embodiments of this specification. FIG. 13 shows the sliding window B1 obtained by sliding the coarse segmentation bounding box B, where (a) is the schematic before the sliding operation and (b) is the schematic after the sliding operation.
Following the judgment method of FIG. 12(a) above, the directions corresponding to the right and bottom border lines of the coarse segmentation bounding box B in FIG. 13 are inaccurate. For convenience of description, the direction corresponding to the right border line of B is denoted the first direction, which is perpendicular to the right border line of B, and the direction corresponding to the bottom border line is denoted the second direction, which is perpendicular to the bottom border line of B. Merely by way of example, as shown in FIG. 13, assuming the length of bounding box B is a, when the overlap-rate parameter is 60%, the corresponding step can be determined as a*(1-60%); as described above, the right border line of B can slide along the first direction by a*(1-60%). Similarly, the bottom border line of B can slide along the second direction by the corresponding step. The right and bottom border lines of B each repeat the corresponding sliding-window operation until bounding box B is completely accurate, as shown by sliding window B1 in FIG. 13(b). With reference to FIG. 12(a) and FIG. 13, when the coarse segmentation bounding box (i.e., the localization information of the target structure mask) is determined to be inaccurate, the coordinate values of the border lines in the 6 directions of the accurate segmentation bounding box are compared one by one with those of the coarse segmentation bounding box; when the difference is less than the coordinate-difference threshold (e.g., a coordinate-difference threshold of 5 pt), that border line of the bounding box can be judged to be an inaccurate direction, where the coordinate-difference threshold can be set according to the actual situation and is not limited here.
As another example, as shown in FIG. 12(a), the pixel coordinates in the 4 directions corresponding to the 4 edges of the accurate segmentation bounding box C are compared one by one with the pixel coordinates in the 4 directions corresponding to the 4 border lines of the coarse segmentation bounding box B; when the difference of the pixel coordinates in one direction is less than a coordinate-difference threshold of 8 pt, that direction of the coarse segmentation bounding box in FIG. 12(a) can be judged to be inaccurate. For example, if the top difference is 20 pt, the bottom 30 pt, the right 1 pt, and the left 50 pt, then the direction corresponding to the right side is inaccurate while the directions corresponding to the top, bottom, and left sides are accurate, and the direction corresponding to the right side is determined as the target direction.
As yet another example, with reference to FIGS. 13(a) and (b), B1 is the bounding box (also called the sliding window) obtained by sliding the coarse segmentation bounding box B. It can be understood that the sliding window is a coarse segmentation bounding box that meets the expected accuracy standard; the border lines of B (e.g., the right and bottom border lines) need to be slid along the corresponding directions (e.g., the first and second directions) by the corresponding steps to the position of sliding window B1. The directions corresponding to the non-conforming border lines are moved in turn, e.g., first the right border line of B is slid, then the bottom border line of B is slid to the designated position of the sliding window, while the directions corresponding to the left and top sides of B meet the standard and need not be slid. It can be understood that the sliding step of each edge depends on the overlap rate between B1 and B, which may be the proportion of the current overlapping area of the coarse segmentation bounding box B and the sliding window B1 to the total area, e.g., a current overlap rate of 40%. It should be noted that the sliding order of the border lines of the coarse segmentation bounding box B may be left to right and top to bottom, or any other feasible order, which is not further limited here.
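The step length used above follows directly from the overlap-rate parameter (step = edge length × (1 − overlap)); a one-line sketch with the 60% example from FIG. 13:

```python
def slide_step(edge_length, overlap_rate):
    """Sliding-window step along one axis: L * (1 - overlap_rate)."""
    return edge_length * (1.0 - overlap_rate)

step = slide_step(100.0, 0.60)  # box edge a = 100, 60% overlap
```

A higher overlap rate gives a shorter step (more, finer slides); a lower overlap rate gives a longer step (fewer, coarser slides), matching the trade-off described above.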
FIG. 14 is a schematic diagram of exemplary accurate segmentation after window sliding according to some embodiments of this specification.
In some embodiments, based on the original coarse segmentation bounding box (also called the original sliding window), after an accurate coarse segmentation bounding box is obtained through adaptive window sliding, the coordinate values of the accurate bounding box can be obtained, and based on the coordinate values and the overlap-rate parameter, accurate segmentation is performed on the new sliding windows; the accurate segmentation results are superimposed with the preliminary accurate segmentation result to obtain the final accurate segmentation result. Specifically, referring to FIG. 14(a), a sliding-window operation may be performed on the original bounding box B to obtain sliding window B1 (the maximum-extent bounding box after the sliding-window operation); B slides along the first direction by the corresponding step to obtain sliding window B1-1, and accurate segmentation is then performed over the full extent of B1-1 to obtain the accurate segmentation result of B1-1. Further, referring to FIG. 14(b), B can slide along the second direction by the corresponding step to obtain sliding window B1-2, and accurate segmentation is performed over the full extent of B1-2 to obtain its result. Still further, referring to FIG. 14(c), sliding B can yield sliding window B1-3 (e.g., B may be slid as shown in FIG. 14(c) to obtain B1-2, and then B1-2 slid to obtain B1-3), and accurate segmentation is performed over the full extent of B1-3 to obtain its result. The accurate segmentation results of sliding windows B1-1, B1-2, and B1-3 are superimposed with the preliminary accurate segmentation result to obtain the final accurate segmentation result. It should be noted that sliding windows B1-1, B1-2, and B1-3 have the same size as B; sliding window B1 is the final result of the consecutive sliding-window operations performed on the original window B, i.e., of B1-1, B1-2, and B1-3. In some embodiments, when the accurate segmentation results of B1-1, B1-2, and B1-3 are superimposed with the preliminary accurate segmentation result, there may be repeatedly superimposed parts. For example, in FIG. 14(d), there may be an intersection between sliding windows B1-1 and B1-2, and this intersection may be superimposed repeatedly when the segmentation results are superimposed. In this case, the following method can be used: for a certain part of target structure mask A, if the segmentation result of one sliding window for that part is accurate and that of the other sliding window is not, the accurate one is taken as the segmentation result of that part; if the segmentation results of both sliding windows are accurate, the segmentation result of the right sliding window is taken as the segmentation result of that part; if neither is accurate, the segmentation result of the right sliding window is taken as the segmentation result of that part, and accurate segmentation continues until the segmentation result is accurate.
In some embodiments, when the localization information of the coarsely segmented target structure mask is judged to be inaccurate, obtaining accurate localization information based on the adaptive sliding window may be a cyclic process, i.e., the same operation as preliminary accurate segmentation is performed two or more times. Illustratively, after comparing the preliminary accurate segmentation border lines with the coarse segmentation border lines, the updated accurate segmentation bounding box coordinates can be obtained through the adaptive sliding window; this accurate segmentation bounding box, expanded by a certain number of pixels, is set as the coarse segmentation bounding box (also called the target bounding box) for a new round of the cycle; accurate segmentation is then performed again on the new bounding box (i.e., the target bounding box) to obtain a new accurate segmentation bounding box, and whether the target bounding box is accurate is calculated. If accurate, the cycle ends and the new accurate segmentation bounding box is output as the target segmentation result; otherwise, the cycle continues.
In some embodiments, a deep convolutional neural network model may be used to accurately segment at least one target structure obtained by coarse segmentation. For example, historical target images initially acquired before coarse segmentation may be used as training data, together with historical accurate segmentation result data, to train the deep convolutional neural network model. In some embodiments, the historical target images and historical accurate segmentation result data may be obtained from the imaging device 110, or from the processing device 130, the terminal device 140, or the storage device 150.
In some embodiments, the result data of the accurate segmentation of the at least one target structure, i.e., the target segmentation result, may be output. In some embodiments, to further reduce noise and optimize the image display effect, post-processing operations may be performed on the target segmentation result before it is output. Illustratively, the post-processing operations may include edge smoothing and/or denoising of the image. In some embodiments, edge smoothing may include smoothing or blurring to reduce image noise or distortion. In some embodiments, smoothing or blurring may adopt mean filtering, median filtering, Gaussian filtering, bilateral filtering, etc., or any combination thereof.
FIG. 15 is a comparative schematic diagram of exemplary segmentation results according to some embodiments of this specification.
As shown in FIG. 15, the upper and lower images to the left of the dashed line are the transverse-plane and stereoscopic target images of a coarse segmentation result using conventional techniques, and those to the right are the transverse-plane and stereoscopic target images using the organ segmentation method provided by the embodiments of this application. The comparison shows that, compared with the target structure segmentation result shown in the left images, the segmentation result shown in the right images obtains a more complete target structure, reduces the risk of missing parts of the segmented target structure, improves segmentation precision, and ultimately improves overall segmentation efficiency.
It should be noted that the above description of the process 1100 is only for example and illustration, and does not limit the scope of application of this specification. Those skilled in the art can make various modifications and changes to the process 1100 under the guidance of this specification. However, such modifications and changes remain within the scope of this specification.
Some embodiments of this specification also provide an image segmentation apparatus including a processor, the processor being configured to execute the image segmentation method described in any of the embodiments. In some embodiments, the image segmentation apparatus further includes a display device that displays the results of the medical image segmentation method executed by the processor; for details, see the related descriptions of FIG. 5 to FIG. 15, which are not repeated here.
The image segmentation method provided by the embodiments of this specification: (1) by adopting the soft connected-domain analysis method in the coarse segmentation stage, accurately retains the target structure region while effectively excluding false positive regions, which first improves the accuracy of target structure localization in the coarse localization stage and directly facilitates the subsequent reasonable extraction of the bounding box of the target structure mask localization information, thereby improving segmentation efficiency; (2) for the unfavorable situation where coarse localization is inaccurate but has not failed entirely, adaptive sliding-window calculation and the corresponding sliding-window operations can complete the missing parts of the localization region and automatically plan and execute reasonable sliding-window operations, reducing the dependence of the fine segmentation stage on the coarse localization result and improving segmentation accuracy without significantly increasing segmentation time or computing resources; (3) when coarse localization fails, accurately localizing the target structure mask based on the preset localization coordinates of target structures not only improves segmentation accuracy but also reduces segmentation time and the amount of segmentation computation, further improving segmentation efficiency; (4) since the overall workflow of target structure segmentation fully considers the various unfavorable situations that reduce target structure segmentation accuracy, it is suitable for the effective implementation of different target structure segmentation tasks, with high target structure segmentation accuracy and robustness.
动物体内一般都有各种脉管,例如,血管、气管、胆管或输尿管等。生物体内往往有多种脉管。同一种脉管因为结构和功能的不同,又可以分为多种类型。例如,血管就至少包括了动脉和静脉两种主要类型。在一些实施例中,生物体内脉管的类型可以包括脉管的细分类型,例如,肺静脉、肺动脉、肝静脉、肝门静脉、肝动脉等。
本说明书实施例中提供一种脉管识别方法,首先训练出一个丰富度较低却准确的第一分割模型和一个丰富度较高未分类的第二分割模型,然后采用后处理算法,利用高丰富度模型的结果在低丰富度模型结果上进行脉管生长,对两个模型进行融合,最后准确有效地得到高丰富度、高准确度的多类别脉管分割结果。以下将结合图16-图23对脉管识别的具体操作进行详细说明。
图16是根据本说明书一些实施例所示的示例性脉管识别装置的模块示意图。
如图16所示,在一些实施例中,脉管识别装置1600可以包括第一分割模块1610、处理模块1620、第二分割模块1630和融合模块1640。在一些实施例中,脉管识别装置1600对应的功能可以由处理设备130或穿刺路径规划装置300(例如,数据预处理模块310)执行实现。
第一分割模块1610可以用于基于第一分割模型,获取目标图像的第一分割结果。
处理模块1620可以用于对第一分割结果进行骨架化处理,获取第一脉管骨架集,其中,第一脉管骨架集包括至少一条类型已确定的第一脉管骨架。
第二分割模块1630可以用于基于第二分割模型,获取目标图像的第二分割结果,第二分割结果中包括至少一条类型待定脉管。
融合模块1640可以用于融合第一分割结果和第二分割结果,获取融合结果。在一些实施例中,融合模块1640还可以用于确定脉管类型。具体地:融合模块1640可以对融合结果进行骨架化处理,获取类型待定脉管的第二脉管骨架;获取与第二脉管骨架的最小空间距离小于第二阈值的第一脉管骨架,将其作为参考脉管骨架;确定第二脉管骨架与参考脉管骨架之间的空间距离,将空间距离最小的两个点确定为最近点组;基于最近点组确定类型待定脉管的脉管类型。
在一些实施例中,脉管识别装置1600还可以包括计算模块、确定模块以及训练模块(图中未示出)。其中,计算模块可以用于获取与第二脉管骨架的最小空间距离小于第二阈值的第一脉管骨架,将其作为参考脉管骨架;以及确定第二脉管骨架与参考脉管骨架之间的空间距离,将空间距离最小的两个点确定为最近点组。确定模块可以用于基于最近点组确定类型待定脉管的脉管类型。训练模块可以用于进行模型训练,如训练获得用于确定第二阈值的机器学习模型。
更多关于脉管识别装置1600中各个模块的内容可以参见图17-图23及其相关描述,此处不再赘述。
关于脉管识别装置1600的以上描述仅用于说明目的,而无意限制本申请的范围。对于本领域普通技术人员来说,在不背离本申请原则的前提下,可以对上述方法及***的应用进行各种形式和细节的改进和改变。然而,这些变化和修改不会背离本申请的范围。
图17是根据本说明书一些实施例所示的示例性脉管识别方法的流程示意图。在一些实施例中,流程1700可以由穿刺路径规划***100(例如,穿刺路径规划***100中的处理设备130)或脉管识别装置1600执行。例如,流程1700可以以程序或指令的形式存储在存储设备(例如,存储设备150、***的存储单元)中,当处理器或图16所示的模块执行程序或指令时,可以实现流程1700。如图17所示,在一些实施例中,流程1700可以包括以下步骤。
步骤1710,基于第一分割模型,获取目标图像的第一分割结果。在一些实施例中,步骤1710可以由处理设备130或第一分割模块1610执行。
第一分割结果可以包括特定生物体内脉管的分割影像,即对目标图像进行第一分割后得到的影像或图像。在一些实施例中,第一分割结果中至少一条脉管的类型已确定。
第一分割模型可以比较准确地分割生物体内脉管并判断部分脉管的类型。利用第一分割模型,可以得到目标图像中生物体内脉管的精确和/或细分类型,例如,肺静脉、肺动脉、肝静脉、肝门静脉等。在一些实施例中,第一分割模型可以包括多类别分割模型,其能够较为准确的对脉管进行分类。第一分割模型可以对目标图像中的全部或部分脉管进行分类。在一些实施例中,第一分割模型可以对设定级别范围内的脉管进行分割分类。在一些实施例中,第一分割模型可以对设定级别范围内,以及设定级别范围外的部分脉管进行分割分类。在一些实施例中,第一分割模型可以对一个设定级别范围内的脉管进行分割。在一些实施例中,第一分割模型可以对三维影像(即目标图像为三维图像)进行分割和/或分类。
脉管的类型可以包括两种或以上的类型。例如,脉管的类型可以包括第一类型和第二类型,第一类型和第二类型是同时出现在目标图像中,且类别不同的脉管类型。目标图像中第一类型的脉管和第二类型的脉管通常具有相近或相似的特征(例如,轮廓、灰度值等)。例如,第一类型和第二类型可以分别是静脉和动脉。又例如,在CT影像下第一类型和第二类型分别是(肾静脉,输尿管)、(腹腔门静脉,腹腔动脉)等二元组。又如,腹部或肝区域的目标图像中脉管的类型可能包括肝门静脉、肝静脉、肝动脉等。
在一些实施例中,第一分割模型可以通过训练获得。第一分割模型可以是机器学习模型,机器学习模型可以包括但不限于神经网络模型、支持向量机模型、k近邻模型、决策树模型等一种或多种的组合。其中,神经网络模型可以包括但不限于CNN、LeNet、GoogLeNeT、ImageNet、AlexNet、VGG、ResNet等一种或多种的组合。
在一些实施例中,第一分割模型可以包括CNN模型。处理设备130可以通过提高网络感受野、提高网络深度等方法进行模型训练,以提高第一分割模型对生物体内设定级别范围内脉管的分类的准确度。例如,提高网络感受野可以采用空洞卷积等方法。更多第一分割模型的训练的内容可以参见本说明书图23中描述。
在一些实施例中,第一分割模型的输入是目标图像(例如,生物体的三维影像),输出是第一分割结果。其中,第一分割结果包括特定生物体内脉管(例如人体血管)的分割影像。例如,第一分割结果可以包括肺动脉和肺静脉的分割影像或者包括肝动脉和肝门静脉的分割影像等。第一分割结果中不同类型的生物体内脉管可以通过分别着色或不同的灰度值等方式区分。示例性地,如图18(a)和(b)中所示,(a)中的动脉的像素(或体素)统一置成较深灰度,(b)中的静脉的像素(或体素)统一置成较浅灰度。
步骤1720,对第一分割结果进行骨架化处理,获取第一脉管骨架集。在一些实施例中,步骤1720可以由处理设备130或处理模块1620执行。
骨架化处理是将脉管图像或影像简化为单位宽度(例如,单位像素宽度、单位体素宽度)的中心线的过程。骨架化处理可以保留原图像或影像的中心线、线条的端点、交叉点等,从而保留了原图像的连通性。骨架化处理可以减少冗余信息,仅保留有用信息来进行拓扑分析、形状分析等。骨架化处理能使对象被更简单的数据结构表示,以简化数据分析、减少数据存储和对传输设备的要求。
在一些实施例中,骨架化处理的方法可以包括并行快速细化算法、K3M算法等。
在一些实施例中,第一分割结果中至少一条脉管的类型已确定。相应地,对第一分割结果进行骨架化处理,获得的第一脉管骨架集中的骨架与类型已确定的脉管相对应,即第一脉管骨架集中包括至少一条类型已确定的第一脉管骨架。通过对第一分割结果进行骨架化处理,可以方便后续计算,提高识别方法的效率。
步骤1730,基于第二分割模型,获取目标图像的第二分割结果。在一些实施例中,步骤1730可以由处理设备130或第二分割模块1630执行。
第二分割结果可以包括生物体内脉管的分割影像,即对目标图像进行第二分割后得到的分割影像或图像。在一些实施例中,第二分割结果中包括至少一条类型待定脉管。类型待定脉管即脉管的类型是不确定的,类型待定脉管的类型可能是上述的任意一种类型。例如,肺内血管暂时无法确定是静脉或动脉血管,肾内暂时无法确定是肾静脉或输尿管的脉管,肝脏内暂时不确定是肝静脉、肝门静脉或肝动脉的脉管等。在这种情况下,可以对更多类型进行分类,而不仅仅局限于上述第一类型和第二类型,还可以有第三类型甚至更多。例如,在MR影像下第一类型,第二类型和第三类型分别是(肝动脉,肝静脉,肝门静脉)等三元组。在一些实施例中,第二分割结果中的至少一条脉管未包括在第一分割结果中。在一些实施例中,第二分割结果中未包括在第一分割结果中的脉管为类型待定脉管。
第二分割模型可以是比较丰富地分割生物体内脉管的模型,以尽可能地对更细小的脉管进行分割。利用第二分割模型,可以得到包括深分支和/或细小的脉管的影像,例如,第二分割模型可以分割包括1-6级甚至更细小的脉管的影像、包括1-6级甚至更细小的血管的影像等。在一些实施例中,第二分割模型可以包括单类别分割模型,其能够分割更多的脉管。第二分割模型可以对目标图像中的全部或部分脉管进行分割。
在一些实施例中,第二分割模型可以通过对机器学习模型训练获得。机器学习模型可以包括但不限于神经网络模型、支持向量机模型、k近邻模型、决策树模型等一种或多种的组合。
在一些实施例中,第二分割模型可以包括CNN模型。构建第二分割模型时,可以减少下采样次数,以避免降采样过多而导致细节丢失,使得第二分割模型能够识别出更细节的脉管。更多第二分割模型的训练的内容参见本说明书图23中描述。
在一些实施例中,第二分割模型的输入是目标图像,输出是第二分割结果。例如,第二分割结果中的脉管的边缘已标出,并且输出的影像中的脉管统一着色。示例性地,如图18(b)中所示的分割影像,脉管的边缘已标出,并且该影像中的脉管的像素(或体素)以同一灰度值填充。在一些实施例中,第二分割模型输出的分割影像的全部或部分脉管的类型是不确定的。
利用第二分割模型,可以得到深分支和/或细小的脉管。和第一分割模型相比,第二分割模型具有更高的丰富度。在一些实施例中,第一分割模型的第一分割级别的范围小于第二分割模型的第二分割级别的范围。第二分割模型可以比第一分割模型对更大范围的血管进行分割。在一些实施例中,第二分割模型的第二分割级别的范围和第一分割模型的第一分割级别的范围存在交集,但第二分割模型可以比第一分割模型对更精细的脉管进行分割。在一些实施例中,第一分割模型的第一分割级别的范围可以与第二分割模型的第二分割级别的范围重叠。但第二分割模型在对比较精细的脉管进行分割时,丰富度和/或辨识度优于第一分割模型。例如,第一分割结果包括1-4级脉管,而第二分割结果包括1-6级甚至更细小级别的脉管,第二分割结果中的5-6级甚至更细小的脉管可能未包括在第一分割结果中。其中,级别值越高,对应的脉管越难识别,如5级的血管比4级的血管细,从而更难识别。
步骤1740,融合第一分割结果和第二分割结果,获取融合结果。在一些实施例中,步骤1740可以由处理设备130或融合模块1640执行。
在一些实施例中,处理设备130可以基于第一分割结果与第二分割结果的信息进行融合,获得融合结果。融合结果可以是包含目标图像中的脉管以及全部或部分脉管的类型的影像/图像。
在一些实施例中,可以获取第一分割结果和第二分割结果的并集,基于该并集和第一分割结果获得融合结果。示例性地,处理设备130可以计算第一分割结果和第二分割结果的并集并进行处理,再从处理后的并集中剔除第一分割结果集,将得到的差集作为融合结果。在一些实施例中,该差集可以为从第二分割结果中去掉第一分割结果中已标注的脉管后,剩余的类型待定脉管构成的集合。例如,第一分割结果中标注了1-4级血管的类别,第二分割结果包括1-6级甚至更细小的血管,融合结果可以为5-6级甚至更细小的类型尚不明确的血管构成的集合。
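上述"取并集后剔除第一分割结果、得到类型待定脉管差集"的融合思路,可以用如下Python草图示意(以体素坐标集合近似表示分割结果,数据结构为本文为说明而假设,并非本申请的实际实现):

```python
def fuse_segmentations(first_seg, second_seg):
    """融合两个分割结果的一种示意性做法。
    first_seg:  {体素坐标: 脉管类型} —— 类型已确定的第一分割结果;
    second_seg: 体素坐标集合 —— 未分类的第二分割结果。
    返回 (两结果的并集, 并集中剔除第一分割结果后的类型待定脉管差集)。"""
    union = set(first_seg) | set(second_seg)
    pending = union - set(first_seg)   # 剩余的类型待定脉管体素
    return union, pending
```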
在一些实施例中,处理设备130可以基于多种融合方法融合第一分割结果和第二分割结果,以获取融合结果。示例性地,融合方法可以包括主分量变换融合法、乘积变换融合法、小波变换融合法、拉普拉斯变换融合法等或其任意组合。
第二分割结果比第一分割结果包含的脉管多,与第一分割结果融合后,相当于一个血管生长的过程。因为第一分割结果准确度较高,第二分割结果丰富度较高,通过融合,可以获得既有一定丰富度,又有足够准确度的脉管以及全部或部分脉管的类别信息,从而可以提高脉管分割结果的准确性和丰富性。
在一些实施例中,可以基于融合结果确定类型待定脉管的类型。例如,可以基于连通关系、空间关系等确定类型待定脉管的类型。更多内容参见图19和图20中描述。
图18是根据本说明书一些实施例所示的示例性脉管识别结果的示意图。如图18(a)-(f)所示,(a)中示出的第一分割结果中脉管的类型已确定,具体的,黑灰色着色的脉管1810为动脉,深灰色着色的脉管1820为静脉;(b)中示出的第二分割结果中标出了脉管,但未区分具体的脉管类型,且大量细小的上述脉管未包括在第一分割结果中。通过融合图18(a)中第一分割结果和(b)中第二分割结果可以识别出更多细小脉管的类型。如图18(d)及其局部放大图(c)所示,融合结果中除原静脉和原动脉外,新增了动脉(浅灰色的脉管)。又例如图18(f)及其局部放大图(e)中所示,融合结果中除原静脉和原动脉外,新增了静脉(浅色的脉管)。
通过融合高准确度的第一分割模型和高丰富度的第二分割模型的输出结果,经过对融合结果处理,可以识别灰度值接近易错分的两种或多种脉管,获得准确度与丰富度兼得的生物体内脉管识别结果,例如,本说明书实施例可识别到5~6级的肝门静脉、肝静脉、肝动脉等。
在一些实施例中,可以基于融合结果确定靶点。在一些实施例中,可以基于融合结果中脉管的类型确定靶点。
应当注意的是,上述有关流程1700的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程1700进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。
图19是根据本说明书一些实施例所示的示例性脉管类型确定的流程示意图。在一些实施例中,流程1900可以由穿刺路径规划***100(例如,穿刺路径规划***100中的处理设备130)或脉管识别装置1600执行。例如,流程1900可以以程序或指令的形式存储在存储设备(例如,存储设备150、***的存储单元)中,当处理器或图16所示的模块执行程序或指令时,可以实现流程1900。如图19所示,在一些实施例中,流程1900可以包括以下步骤。
步骤1910,对融合结果进行骨架化处理,获取类型待定脉管的第二脉管骨架。在一些实施例中,步骤1910可以由处理设备130或脉管识别装置1600执行。
在一些实施例中,融合结果可以为由类型待定脉管构成的集合。通过对融合结果进行骨架化处理,可以得到待定骨架,即类型待定脉管的第二脉管骨架。骨架化处理的更多内容参见图17中描述,此处不再赘述。
步骤1920,获取与第二脉管骨架的最小空间距离小于第二阈值的第一脉管骨架,将其作为参考脉管骨架。在一些实施例中,步骤1920可以由处理设备130或脉管识别装置1600执行。
在一些实施例中,可以基于类型待定脉管的第二脉管骨架与第一脉管骨架集内第一脉管骨架的连通关系,确定类型待定脉管的脉管类型。具体地,若第一脉管骨架集内存在与第二脉管骨架(如待定骨架K1)相连通的第一脉管骨架(如类型已定骨架K2),则类型待定脉管的第二脉管骨架的类型与该第一脉管骨架的类型相同。由此即可确定第二脉管骨架的脉管类型。例如,如果第一脉管骨架集内某段静脉骨架与待定骨架(即第二脉管骨架)中某段骨架有连通,则该段待定骨架所对应的血管也是静脉。
在一些实施例中,对每个第二脉管骨架(例如,某段类型待定脉管骨架),可以获取在第一脉管骨架集中与该第二脉管骨架的最小空间距离小于第二阈值的第一脉管骨架,将其作为参考脉管骨架。一个或多个参考脉管骨架组成一个参考脉管骨架集。该参考脉管骨架集中的脉管是与待定脉管最密切相关的脉管。
第二阈值可以确定参考脉管骨架的范围,其取值影响最终识别效果。在一些实施例中,基于空间距离计算方法的不同,第二阈值作为空间距离的比较参数,可以是不同的物理量。例如,当以实际空间距离作为距离测算的基础时,第二阈值可以是具体代表长度的物理量,例如10mm。在一些实施例中,空间距离的计算可以基于图像信息中的体素点经过换算之后进行。通过该方式可以将实际距离值折算成图像中体素点的数量,以体素点的数量表示第二阈值,例如,实际距离值折算成5个体素点,则第二阈值为5。在一些实施例中,当三维影像投影角度一致时,可以将实际距离值折算成像素点的数量,将像素点的数量确定为第二阈值。例如,实际距离值折算成5个像素点,则第二阈值可以为5。
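上述"将实际距离折算为体素点数量"的第二阈值换算,可以用如下Python草图示意(函数名为本文假设,仅为示意):

```python
def mm_to_voxels(distance_mm, spacing_mm):
    """将实际空间距离(mm)按体素间距(mm/体素)折算为体素点数量,
    折算结果可作为以体素数表示的第二阈值。"""
    return round(distance_mm / spacing_mm)
```

例如,当体素间距为2mm时,10mm的实际距离折算为5个体素点,第二阈值即取5。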
在一些实施例中,第二阈值可以根据经验或需求获得。在一些实施例中,第二阈值可以由用户自定义。在一些实施例中,第二阈值可以基于目标图像对应的生物体的部位获得。在一些实施例中,第二阈值可以基于类型待定脉管的级别的不同而不同。
在一些实施例中,第二阈值可以通过机器学习方法获得。例如,通过构建机器学习模型,针对不同生物体的部位的训练数据,通过机器学习的方式,获得该生物体的部位对应的优化后的第二阈值。在实际应用中,在对该部位进行识别时,使用与其相对应的经过优化训练之后获得的第二阈值。机器学习模型可以包括但不限于神经网络模型、支持向量机模型、k近邻模型、决策树模型等一种或多种的组合。
在一些实施例中,第二阈值的机器学习方法可以基于同类生物体对应的部位的医学图像和类型判断结果获得。例如,可以以同类生物体对应的部位的医学影像为样本,类型判断结果为标签,通过训练,获得该类生物体的第二阈值。
在一些实施例中,机器训练可以针对生物体的性别、年龄、地域、种族中的至少一项作为参量,通过训练获得与性别、年龄、地域、种族等参量相关的第二阈值。例如,对于50岁以上女性第二阈值可以为5、对于50岁以下女性第二阈值为6。
通过多种方法得到第二阈值,可以减少人工操作,并使其能应用于多种场景,提高普适度。
步骤1930,确定第二脉管骨架与参考脉管骨架之间的空间距离,将空间距离最小的两个点确定为最近点组。在一些实施例中,步骤1930可以由处理设备130或脉管识别装置1600执行。
最近点组可以指类型待定脉管的第二脉管骨架(即待定骨架)与参考脉管骨架的空间距离最小的两个点所构成的点组。例如图21(a)和(b)中所示,(a)示出了重建后的局部三维影像,(b)为(a)对应的骨架模拟图。其中,图21(a)中两根脉管在空间中处于同一平面上(同样适用于不在同一平面上的脉管);(b)中实线为骨架,虚线为最短距离。若待定骨架2110与参考脉管骨架2120的最小空间距离小于第二阈值,可以将空间距离最小的两个点(AAA和CCC)确定为待定骨架2110与参考脉管骨架2120之间的最近点组。
在一些实施例中,对于每条参考脉管骨架,可以确定第二脉管骨架与该参考脉管骨架之间的空间距离,将空间距离最小的两个点确定为最近点组。
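步骤1930中确定最近点组的过程,可以用如下暴力枚举的Python草图示意(骨架以三维点列表近似表示,仅为示意性实现;实际实现通常会采用空间索引等手段加速):

```python
import math

def nearest_point_pair(skeleton_a, skeleton_b):
    """在两条骨架(三维点列表)之间寻找空间距离最小的两个点,
    返回 (骨架a上的点, 骨架b上的点, 最小距离),即最近点组及其距离。"""
    best = None
    for pa in skeleton_a:
        for pb in skeleton_b:
            d = math.dist(pa, pb)          # 欧氏距离
            if best is None or d < best[2]:
                best = (pa, pb, d)
    return best
```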
步骤1940,基于最近点组确定类型待定脉管的脉管类型。在一些实施例中,步骤1940可以由处理设备130或脉管识别装置1600执行。
在一些实施例中,当参考脉管骨架集仅包括一条参考脉管骨架时,可以基于最近点组的位置,确定类型待定脉管的脉管类型。
在一些实施例中,当参考脉管骨架包括一条以上参考脉管骨架时,即参考脉管骨架集中包含多条脉管骨架,可以基于最近点组确定候选脉管骨架,基于候选脉管骨架确定类型待定脉管的脉管类型。例如,可以确定第二脉管骨架与候选脉管骨架中脉管骨架的广义距离,基于广义距离确定第二脉管骨架的脉管类型。
基于最近点组确定类型待定脉管的类型的更多内容参见图20中描述。
在一些实施例中,可以基于类型待定脉管的第二脉管骨架与参考脉管骨架集的其他关系判断类型待定脉管的类型。例如,可以基于第二脉管骨架与参考脉管骨架集内参考脉管骨架的空间关系、拓扑关系等等,确定第二脉管骨架的脉管类型。在一些实施例中,可以基于类型待定脉管的第二脉管骨架与参考脉管骨架的距离和角度,确定第二脉管骨架的脉管类型。
应当注意的是,上述有关流程1900的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程1900进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。
图20是根据本说明书另一些实施例所示的示例性脉管类型确定的流程示意图。在一些实施例中,流程2000可以由穿刺路径规划***100(例如,穿刺路径规划***100中的处理设备130)或脉管识别装置1600执行。例如,流程2000可以以程序或指令的形式存储在存储设备(例如,存储设备150、***的存储单元)中,当处理器或图16所示的模块执行程序或指令时,可以实现流程2000。
如图20所示,基于参考脉管骨架集内脉管骨架的数目,可以采用不同的方式确定第二脉管骨架的脉管类型。在步骤2010,判断参考脉管骨架集中是否包含一条参考脉管骨架,若是,执行步骤2020;否则,执行步骤2030。
步骤2020,基于最近点组的位置确定第二脉管骨架的脉管类型。
在一些实施例中,当参考脉管骨架集仅包括一条脉管骨架时,即只有一条参考脉管骨架,处理设备130可以基于类型待定脉管的第二脉管骨架与该参考脉管骨架的最近点组的位置确定第二脉管骨架的脉管类型。
在一些实施例中,可以基于最近点组的位置与骨架的端点的位置关系,确定第二脉管骨架的脉管类型。骨架的端点可以指在所在骨架上只有一个相邻点的点。在一些实施例中,如果最近点组中存在与其所在骨架的任一端点的最近距离小于预设值n1的一点(例如,点AAA),则认为该第二脉管骨架与该参考脉管骨架为同一类型的脉管。基于空间距离计算方法的不同,预设值n1作为空间距离的比较参数,可以是不同的物理量。例如,当以实际空间距离作为距离测算的基础时,预设值n1可以是具体代表长度的物理量,例如5mm。在一些实施例中,空间距离的计算可以基于图像信息中的体素点经过换算之后进行。例如,实际距离值折算成5个体素点,则预设值n1可以为5。在一些实施例中,若三维影像投影角度一致,可以将实际距离值折算成像素点的数量,以像素点的数量表示预设值n1。例如,实际距离值折算成5个像素点,则预设值n1为5。
在一些实施例中,预设值n1可以根据经验或需求获得。在一些实施例中,预设值n1可以由用户自定义。在一些实施例中,预设值n1可以基于类型待定脉管的级别的不同而不同。例如,脉管分级越到末端细支,预设值n1越小;脉管分级越到主干,预设值n1越大。在一些实施例中,预设值n1与类型待定脉管的粗细有关。例如,脉管越细,预设值n1越小;脉管越粗,预设值n1越大。
在一些实施例中,预设值n1可以通过机器学习方法获得。例如,通过构建机器学习模型,针对不同生物体的部位的训练数据,通过机器学习的方式,获得该生物体的部位对应的优化后的预设值n1。在实际应用中,在对该部位进行识别时,使用与其相对应的经过优化训练之后获得的预设值n1。机器学习模型可以包括但不限于神经网络模型、支持向量机模型、k近邻模型、决策树模型等一种或多种的组合。在一些实施例中,预设值n1的机器学习方法可以基于同类生物体对应的部位的医学影像和类型判断结果获得。例如,可以以同类生物体对应的部位的医学影像为样本,类型判断结果为标签,通过训练,获得该类生物体的预设值n1。
仅作为示例,如图21(a)和(b)中所示,最近点组(AAA和CCC)中,AAA所在骨架为待定骨架2110,CCC所在骨架为参考脉管骨架2120,若最近点组中的AAA与所在骨架2110的端点距离为0像素,在n1个像素内,并且点CCC与所在骨架2120的端点距离为0像素,在n1个像素内,则认为待定骨架2110的脉管与参考脉管骨架2120的脉管为同一类型。
作为又一示例,如图21(c)-(e)中所示,(c)是重建后的俯视角度的局部三维影像,(d)为图(c)对应的视角一致的骨架模拟图,(e)为(c)对应的侧视角度的脉管骨架模拟图。其中,图21(c)中两根脉管在空间中处于不同平面上(同样适用于在相同平面上的脉管),并且这两根脉管最小空间距离小于第二阈值。如图21(d)中所示,最近点组(AAA'和CCC')中,AAA'所在骨架为深色的脉管骨架2140,CCC'所在骨架为浅色的脉管骨架2130,AAA'遮挡了CCC',即CCC'与AAA'连线垂直于纸面。如图21(e)中所示,虚线为AAA'到CCC'的距离,最近点组(AAA'和CCC')中,若AAA'与其所在骨架2140的端点距离为0像素,在n1个像素内,并且CCC'与其所在骨架2130的端点距离为0像素,在n1个像素内,则认为骨架2130对应的脉管与骨架2140对应的脉管为同一类型。
又一示例,如图21(f)-(i)中所示,(f)是重建后的俯视角度的局部三维影像,(g)为(f)对应的视角一致的骨架模拟图,(h)是(f)的侧视角度的局部三维影像,(i)为(h)对应的视角一致的骨架模拟图。其中,图21(h)和(f)中两根脉管在空间中处于不同平面上(同样适用于在相同平面上的脉管);(g)中,AAA”所在骨架为深色的脉管骨架2150,CCC”所在骨架为浅色脉管骨架2160,点AAA”遮挡了点CCC”,即点CCC”与点AAA”连线垂直于纸面;(i)中,虚线为点AAA”到点CCC”的距离。最近点组(AAA”和CCC”)中,AAA”和CCC”都在各自骨架的中间位置,并非端点附近,此时认为骨架2150和骨架2160对应的两根脉管不是同一类型。
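上述基于端点位置的同类型判据可以用如下Python草图示意(此处按图21各示例的理解,要求最近点组中的两点均靠近各自所在骨架的端点;该组合方式为本文为说明而取的一种假设,不同实施例可能采用不同的判据):

```python
import math

def is_same_type(pair, endpoints_a, endpoints_b, n1):
    """pair 为最近点组 (骨架a上的点, 骨架b上的点);
    endpoints_a / endpoints_b 为两条骨架各自的端点列表。
    若两点到各自骨架任一端点的最近距离均不超过预设值 n1,
    则认为两条骨架对应的脉管为同一类型。"""
    pa, pb = pair
    near_a = min(math.dist(pa, e) for e in endpoints_a) <= n1
    near_b = min(math.dist(pb, e) for e in endpoints_b) <= n1
    return near_a and near_b
```

与图21(f)-(i)的示例一致,当最近点组的两点都位于骨架中间位置时,上述判据返回"非同一类型"。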
步骤2030,基于最近点组确定候选脉管骨架,基于候选脉管骨架确定第二脉管骨架的脉管类型。
当参考脉管骨架集中包括不止一条脉管骨架时,即包含一条以上参考脉管骨架,可以基于参考脉管骨架集中的参考脉管骨架与类型待定脉管的第二脉管骨架的空间关系,确定第二脉管骨架的脉管类型。
在一些实施例中,当参考脉管骨架集中包括不止一条脉管骨架时,可以基于最近点组,从参考脉管骨架集中确定候选脉管骨架,即仅保留与类型待定脉管骨架是疑似同类别的参考脉管骨架。结合步骤2020中判别方法,在一些实施例中,可以基于最近点组,通过判断各条参考脉管骨架是否与第二脉管骨架是同类型的脉管,确定候选脉管骨架。例如,如果参考脉管骨架与第二脉管骨架的最近点组中存在一点MMM,MMM与其所在骨架的任一端点的最近距离小于预设值n1,则认为第二脉管骨架与该参考脉管骨架为疑似同类别,将该参考脉管骨架确定为候选脉管骨架。
如果候选脉管骨架仅包含一条脉管骨架,则可以将该候选脉管骨架(即与第二脉管骨架疑似同类别的参考脉管骨架)的脉管类型确定为相应类型待定脉管的脉管类型。如果候选脉管骨架包含多条脉管骨架,而这些脉管骨架又都是同一脉管类型,则可以将这些参考脉管骨架的脉管类型确定为相应类型待定脉管的脉管类型。如果候选脉管骨架包含多条脉管骨架,而这些脉管骨架中至少两条不属于同一脉管类型,则可以确定第二脉管骨架与候选脉管骨架的广义距离;再基于广义距离,确定类型待定脉管的脉管类型。
广义距离可以指反映骨架间的接近程度(例如,距离接近程度、方向接近程度)的物理量。在一些实施例中,广义距离可以基于最小空间距离和广义夹角获得。广义夹角可以指反映骨架间的方向接近程度的物理量。例如,图22(b)中的角α和β。
在一些实施例中,广义夹角可以基于脉管最近点组的广义角度获得。具体地,可以最近点组中的点为切点,做该点所在骨架的切线,将切线之间的夹角确定为广义夹角。例如图22(b)所示,若与第二脉管骨架2210对应的候选脉管骨架包含两条:参考脉管骨架2220和参考脉管骨架2230,则对于最近点组(AAA1和CCC)和(AAA2和CCC),分别以点CCC为切点做其所在第二脉管骨架2210的切线,以AAA1为切点做其所在的参考脉管骨架2220的切线,以AAA2为切点做其所在参考脉管骨架2230的切线。将每组最近点组对应的切线之间的夹角(例如,α、β)确定为广义夹角。
在一些实施例中,若最近点组中的点在骨架的分叉点上,可以以该分叉点为切点,分别做出各个骨架分支的切线,求出各个切线的中线,将该中线作为该骨架在分叉点的切线。
在一些实施例中,广义夹角可以基于其他方式获得。例如,可以做出各条骨架的拟合直线,将各条拟合直线的夹角作为广义夹角。
仅作为示例,图22(a)-(b)示出了基于空间距离和广义夹角获得距离的方法,其中,(a)是重建后的局部三维影像,(b)为(a)对应的骨架模拟图。为方便说明,图22(a)中3条脉管在空间中处于同一平面上(同样适用于在不同平面上的脉管),与第二脉管骨架2210(即待定脉管骨架)疑似同类别的参考脉管骨架存在两条,参考脉管骨架2220和参考脉管骨架2230,即候选脉管骨架包含两条,两条参考脉管骨架与第二脉管骨架2210的最近点组分别是(AAA1和CCC)和(AAA2和CCC)。若距离权重为f1,角度权重为f2(例如f1=0.4,f2=0.6),则参考脉管骨架2220的得分为S1=f1×distance(AAA1,CCC)+f2×β,参考脉管骨架2230的得分为S2=f1×distance(AAA2,CCC)+f2×α。处理设备130可以将得分最小者的参考脉管骨架的类型确定为第二脉管骨架2210的脉管类型,例如若S1更小,则第二脉管骨架2210的脉管类型与参考脉管骨架2220一致。
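上述广义距离得分及取最小得分确定脉管类型的计算,可以用如下Python草图示意(权重f1、f2沿用上文示例取值,函数名与数据结构均为本文为说明而假设):

```python
def generalized_score(distance, angle, f1=0.4, f2=0.6):
    """广义距离得分:S = f1 × 最近点组空间距离 + f2 × 广义夹角。"""
    return f1 * distance + f2 * angle

def pick_type(candidates, f1=0.4, f2=0.6):
    """candidates: [(候选骨架的脉管类型, 最近点组距离, 广义夹角), ...]
    将得分最小的候选脉管骨架的类型确定为类型待定脉管的类型。"""
    return min(candidates, key=lambda c: generalized_score(c[1], c[2], f1, f2))[0]
```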
通过连通关系、最近点组和广义距离判断生物体内脉管的类型,可以提高识别准确度。
应当注意的是,上述有关流程2000的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程2000进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。
图23是根据本说明书一些实施例所示的示例性模型训练的示意图。在一些实施例中,流程2300可以由穿刺路径规划***100(例如,穿刺路径规划***100中的处理设备130)或脉管识别装置1600(例如,训练模块)执行。例如,流程2300可以以程序或指令的形式存储在存储设备(例如,存储设备150、***的存储单元)中,当处理器或图16所示的模块执行程序或指令时,可以实现流程2300。
如图23所示,在一些实施例中,可以基于大量带有标识的训练样本训练初始模型2310以更新初始模型的参数来得到训练后的模型2320。初始模型2310可以包括初始第一分割模型和/或初始第二分割模型,相应地,训练后的模型2320可以包括第一分割模型和/或第二分割模型。
在一些实施例中,可以基于大量第一训练样本训练初始第一分割模型以更新初始第一分割模型的参数来得到第一分割模型。在一些实施例中,可以将第一训练样本输入初始第一分割模型,通过训练迭代更新初始第一分割模型的参数。
第一训练样本可以包括用于对第一分割模型进行训练的历史目标图像。该历史目标图像可以包括历史的三维医学影像。其中,第一训练样本中样本目标图像可以作为训练模型的输入,样本目标图像中脉管的脉管类型作为标签。脉管类型至少包括第一类型和第二类型,还可以有第三类型甚至更多。例如,脉管类型包括腹腔门静脉、腹腔动脉。又例如,脉管类型包括肝门静脉、肝静脉和肝动脉。在一些实施例中,可以将样本目标图像中第一类型的脉管以第一灰度值标记、第二类型的脉管以第二灰度值标记、第三类型的脉管以第三灰度值标记等。值得注意的是,上述标签仅包括样本目标图像中脉管的脉管类型,不包括脉管的级别。
在一些实施例中,第一训练样本可以只标定符合条件的脉管的类型。例如,条件可以包括影像中脉管的对比度的预设范围、脉管级别的预设范围等或其任意组合。在一些实施例中,该条件可以根据经验或需求设定。例如,不同类型的生物体、不同的部位、器官、组织等可以对应不同的条件。在一些实施例中,该条件可以由用户设定。在一些实施例中,该条件可以为脉管的级别小于设定级别。
脉管的级别可以指脉管与主干脉管的相对关系,例如,从主干脉管到该脉管经过的分支越少,则该脉管的级别数越小。对于胸部动脉来说,胸主动脉为1级脉管,两侧的肺动脉主干为2级脉管,肺叶动脉为3级脉管,肺段动脉为4级脉管,肺亚段动脉为5级脉管,肺亚亚段动脉为6级脉管等。对于肝门静脉来说,肝门静脉主干为一级血管,肝门静脉左/右支为二级血管,肝叶门静脉为三级血管,肝段门静脉为4级血管,肝亚段门静脉为5级血管,肝亚亚段门静脉为6级血管。对于肝静脉来说,肝静脉主干为一级血管,肝静脉左/右支为二级血管,肝叶静脉为三级血管,肝段静脉为4级血管,肝亚段静脉为5级血管,肝亚亚段静脉为6级血管。对于肝动脉来说,肝动脉主干为一级血管,肝动脉左/右支为二级血管,肝叶动脉为三级血管,肝段动脉为4级血管。
在一些实施例中,脉管的级别可以反映影像或检测结果的丰富度。例如,级别数越大,丰富度越好。示例性地,包含最大级数为6级的脉管的检测结果比包含最大级数为4级的脉管的检测结果更丰富。
设定级别可以是预先设定的脉管的级别,例如,5级。设定级别可以用来指导需要标注的脉管(例如,小于5级的血管),以及不需要标注的脉管(例如,大于或等于5级的血管)。设定级别可以根据需求和/或经验设定。在一些实施例中,设定级别可以由用户设定。
仅标注级别小于设定级别的脉管,有利于第一分割模型专注于主干脉管的分割分类,提高分割的准确度。
在一些实施例中,可以基于大量第二训练样本训练初始第二分割模型以更新初始第二分割模型的参数来得到第二分割模型。在一些实施例中,可以将第二训练样本输入初始第二分割模型,通过训练迭代更新初始第二分割模型的参数。
第二训练样本可以指用于对第二分割模型进行训练的样本目标图像。该样本目标图像可以包括历史三维影像数据。在一些实施例中,第二训练样本中样本目标图像可以作为训练模型的输入,样本目标图像中的脉管作为标签,例如,圈出样本目标图像中的脉管的轮廓。值得注意的是,上述标签仅包括脉管(例如,血管),不包括脉管的类型(例如,肝门静脉、肝静脉、肝动脉等)。
在一些实施例中,例如,样本目标图像是CT影像数据的实施例中,可以对样本CT影像数据做调整窗宽(CT图像上显示的CT值范围)窗位(CT值的中心值)等处理,使影像中各结构之间的灰度差别增加和/或加强细小脉管的对比度,以使第一训练样本和/或第二训练样本的标注结果更准确(例如,尽可能多地覆盖细小的脉管,使第二训练样本覆盖更多级别的脉管)。第一训练样本和/或第二训练样本的标签可以通过人工添加或自动添加的方式添加,也可以通过其他方式添加,本实施例对此不作限定。
如前所述,在一些实施例中,第一训练样本只标定符合条件的脉管的类型。在一些实施例中,第二训练样本中至少标定了一条不符合条件的生物体内脉管。换言之,相对于第一训练样本,第二训练样本标注了更多(分叉更深、更细小)的脉管。例如,若设定的条件为生物体内脉管的级别小于5级,第一训练样本只标定1-4级脉管的类型,第二训练样本中则可以标定1-6级甚至更细小的脉管。尽可能多地覆盖细小的脉管,以及覆盖第一训练样本未覆盖的脉管,有利于第二分割模型学习到细小脉管的特征,提高分割的丰富度。
在一些实施例中,可以通过从数据库、存储设备读取或调用数据接口的方式获取得到多个第一训练样本和/或第二训练样本,包括其对应的标签。
在一些实施例中,可以将第一训练样本的样本目标图像输入至第一分割模型,由第一分割模型输出得到样本目标图像中脉管的预测结果;和/或将第二训练样本的样本目标图像输入至第二分割模型,由第二分割模型输出得到样本目标图像中脉管的预测结果。
在一些实施例中,处理设备可以基于预测结果与第一训练样本(或第二训练样本)的标签构建损失函数。损失函数可以反映预测结果与标签之间的差异大小。处理设备可以基于损失函数对第一分割模型(或第二分割模型)的参数进行调整,以减小预测结果与标签之间的差异。例如,通过不断调整第一分割模型或第二分割模型的参数,使得损失函数值减小或最小化。
在一些实施例中,还可以根据其他训练方法得到第一分割模型和/或第二分割模型,例如,为训练过程设置相应的初始学习率(例如,0.1)、学习率衰减策略。本申请在此不做限制。
应当注意的是,上述有关流程2300的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程2300进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。
图24是根据本说明书一些实施例所示的示例性穿刺路径规划方法的流程示意图。在一些实施例中,流程2400可以由穿刺路径规划***100(例如,穿刺路径规划***100中的处理设备130)或穿刺路径规划装置300执行。例如,流程2400可以以程序或指令的形式存储在存储设备(例如,存储设备150、***的存储单元)中,当处理器或图3所示的模块执行程序或指令时,可以实现流程2400。如图24所示,在一些实施例中,流程2400可以包括以下步骤。
步骤2410,基于目标图像确定靶点。在一些实施例中,步骤2410可以由处理设备130或数据预处理模块310执行。
结合上文,靶点可以是病灶区域或待检测区域的体积中心或重心。在一些实施例中,完成器官或组织分割(如执行流程600)后,可以通过多种方式确定靶器官的体积中心或重心。仅作为示例,以病灶区域穿刺为例,处理设备130可以通过边界腐蚀的方法将病灶区域的外周向内不断腐蚀得到距离场,确定离边界最远的体素为病灶区域的中心,将该中心确定为靶点。具体地,处理设备130可以,(1)获取目标图像的原始尺度中X、Y、Z三个空间的距离最小值,基于该尺度进行图像重采样,获得重采样图像(例如,图25(a)中所示图像);(2)采用边界腐蚀的方法,递归腐蚀,根据腐蚀次数计算被腐蚀体素到边界的最小距离,形成病灶区域对应的距离场掩码(mask)(例如,图25(b)中所示的近似椭圆形的浅灰色不规则区域);(3)计算距离场的最大值,当距离场最大值的体素数量为2时,对该体素的邻近5*5*5立方体内求均值,将均值最大的点确定为靶点;当距离场最大值的体素数量大于2时,确定求解当前体素与其它最大边界距离所在的体素点的距离之和的最小值为目标函数,将求解目标函数获取的值对应的体素点确定为靶点(例如,图25(c)中心区域所示的黑点)。
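上述"边界腐蚀得到距离场、取距离边界最远的体素为靶点"的思路,可以用二维掩码上的Python草图示意(逐层腐蚀等价于自边界向内的BFS距离场;此处省略了距离场最大值对应多个体素时的进一步处理,仅为示意):

```python
from collections import deque

def target_point(mask):
    """mask: {(x, y), ...} 病灶体素坐标集合。
    以边界腐蚀方式构建距离场,返回距离场取最大值的体素(病灶中心)。"""
    # 第一层:与背景4邻接的边界体素,腐蚀次数(距离)记为1
    dist = {}
    q = deque()
    for p in mask:
        x, y = p
        if any((x + dx, y + dy) not in mask
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
            dist[p] = 1
            q.append(p)
    # 递归(逐层)向内腐蚀,记录每个体素被腐蚀时的层数
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (x + dx, y + dy)
            if n in mask and n not in dist:
                dist[n] = dist[(x, y)] + 1
                q.append(n)
    return max(dist, key=dist.get)   # 离边界最远的体素
```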
可以理解,上述关于靶点确定的描述仅作为示例,并非对本说明书的限制,在一些实施例中,可以通过其他合理可行的方式确定靶点(例如,直接通过图像识别法确定靶器官的体积中心为靶点,或通过计算靶器官的体积长轴和短轴交点确定该交点为靶点,或通过像素统计等方法确定体积中心为靶点),本说明书对此不作限制。
步骤2420,基于靶点和第一约束条件,确定初始路径。在一些实施例中,步骤2420可以由处理设备130或路径筛选模块320执行。
在一些实施例中,第一约束条件可以包括以下中至少一个:路径位于靶区所在片层的邻近片层、排除与病床床板接触的体廓上的入针点、路径的穿刺深度小于预设深度阈值、或路径与扁平病灶的扁平面垂直线的夹角在预设范围内等。例如,第一约束条件可以包括路径位于靶区所在片层的邻近片层、排除与病床床板接触的体廓上的入针点、路径的穿刺深度小于预设深度阈值。又如,第一约束条件可以包括路径位于靶区所在片层的邻近片层、排除与病床床板接触的体廓上的入针点、路径的穿刺深度小于预设深度阈值和路径与扁平病灶的扁平面垂直线的夹角在预设范围内。再如,第一约束条件可以包括路径位于靶区所在片层的邻近片层,或排除与病床床板接触的体廓上的入针点,或路径的穿刺深度小于预设深度阈值。
靶区可以指靶器官所在区域。在一些实施例中,靶区所在片层可以反映靶区在目标图像中的位置(例如,在CT扫描图像中,靶区可以为扫描图像的其中一个或多个片层)。靶区所在片层的邻近片层可以指位于靶区所在片层一定范围内的邻近片层。
通过将穿刺路径约束为位于靶区所在片层的邻近片层,可以避免穿刺路径的靶点和入针点沿头足方向跨越的片层过大,导致在穿刺操作过程中获取的扫描图像中无法同时观察到“针头”和“针尾”位置,给用户(例如,医生、护士)对穿刺操作的引导评估带来影响。
病床可以指在执行穿刺操作时目标对象(例如,患者)躺卧的平台(例如,医疗床115)。在一些实施例中,可以基于目标图像/分割影像确定入针点位置,并排除与病床床板接触的体廓上的入针点。例如,处理设备130可以根据目标图像中患者的躺卧姿势确定病床床板所在位置(例如,基于图像的分割识别或基于硬件***的位姿反馈定位等方法),根据病床床板位置计算入针点位置。仅作为示例,可以将图26A简单理解为侧视图,床板面为垂直于纸面方向,假设患者平躺或趴卧在病床上,处理设备130可以以纸面的水平向右方向为X轴正向,竖直向上方向为Y轴正向建立坐标系,基于此计算入针点位置以及靶点位置(例如,图26A(a)中点(X1,Y1)或图26A(b)中点(X0,Y0)),当入针点的纵坐标大于靶点纵坐标时(例如,大于Y1或Y0时),确定相应入针点为正向入针点(即与病床床板不接触的体廓上的入针点),否则,确定相应入针点为反向入针点(即与病床床板接触的体廓上的入针点),将其排除。
通过排除与病床床板接触的体廓上的入针点,可以避免所规划的路径因从床板侧入针而不合实际无法执行,提高穿刺路径规划的效率和准确性。
路径的穿刺深度可以是从入针点到靶点的穿刺距离。在一些实施例中,可以将初始路径约束为穿刺距离小于预设深度阈值。在一些实施例中,可以基于穿刺针的长度(例如,穿刺手术的临床常用器械的型号长度)确定预设深度阈值。例如,可以将***支持的最长穿刺针(例如,120mm穿刺针)的长度确定为预设深度阈值,或将中等穿刺针的长度确定为预设深度阈值,或将最短穿刺针的长度确定为预设深度阈值。在一些实施例中,可以基于穿刺信息和/或患者信息等确定预设深度阈值。例如,穿刺信息可以包括靶器官信息、穿刺目的等;患者信息可以包括患者年龄、性别等。仅作为示例,当靶器官包含较为危险的组织(例如,血管、骨骼等)、穿刺目的是病灶检测或患者年龄较大时,处理设备130可以确定较小的数值(例如,在皮肤层与靶器官的最短距离基础上加3~5mm)为预设深度阈值。又如,处理设备130可以根据靶器官信息、穿刺目的等信息确定穿刺针型号(例如,穿刺针长度、直径),根据穿刺针型号确定该穿刺针的长度为预设深度阈值。在一些实施例中,可以基于入针点与靶点之间的距离,约束初始路径的规划。仅作为示例,图26B中1表示穿刺深度L1小于预设深度阈值Lmax的路径,2表示穿刺深度L2大于预设深度阈值的路径,处理设备130可以将路径1确定为初始路径。
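上述入针点方向筛选与穿刺深度约束,可以用如下二维坐标系下的Python草图示意(坐标系设定沿用图26A的描述:Y轴竖直向上、患者平躺于床板上;函数名为本文假设,仅为示意):

```python
import math

def filter_initial_paths(entry_points, target, depth_max):
    """初始路径筛选草图:
    排除床板侧的反向入针点(入针点纵坐标不大于靶点纵坐标),
    并仅保留穿刺深度(入针点到靶点距离)小于预设深度阈值的路径。
    返回 [(入针点, 穿刺深度), ...]。"""
    paths = []
    for p in entry_points:
        if p[1] <= target[1]:        # 反向入针点(与床板接触一侧),排除
            continue
        depth = math.dist(p, target)
        if depth < depth_max:        # 穿刺深度约束
            paths.append((p, depth))
    return paths
```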
通过结合穿刺针长度、穿刺信息等排除穿刺深度大于预设深度阈值的路径,不仅可以避免因穿刺针型号限制导致穿刺针不可达到靶点,还可以减少穿刺针在人体内停留时间和经过的距离,进而降低穿刺导致的并发症发生的风险。
扁平病灶可以指体积较小、带扁平特征的病灶(例如,图26C中所示的病灶形态)。在一些实施例中,可以通过像素统计、主成分分析、图像识别等方法确定病灶形态。仅作为示例,处理设备130可以根据目标图像或分割影像中病灶体素的空间分布点进行矩阵分解,计算三个主轴X、Y、Z的方向和特征值(r0、r1、r2),当1≤r0/r1≤2且r1/r2≥3时,确定当前病灶为扁平病灶。其中,特征值r0≥r1≥r2,特征值的大小表示矩阵正交化后对应特征向量对整个矩阵的贡献程度(即坐标系中表示物体大小的(x,y,z)值对物体大小的描述)。
在一些实施例中,当病灶为扁平形态时,可以将穿刺路径约束为与扁平病灶的扁平面垂直线的夹角在预设范围内。在一些实施例中,可以通过平面投影、图像识别、像素统计、阈值分割等方法确定扁平病灶的扁平面。在一些实施例中,预设范围可以为任意合理的角度范围,处理设备130可以基于扁平面的面积、穿刺针直径等参数确定预设范围,本说明书对此不作限制。例如,预设范围可以是[0°,10°]、[0°,15°]、[0°,20°]、[0°,40°]、[5°,15°]、[3°,20°]、[5°,35°]、[10°,30°]、[25°,50°]、或[0°,60°]等。
在一些实施例中,可以基于路径投影面内的点云的数量与扁平病灶投影面内的点云的数量的比值(即判断穿刺路径形成的圆柱体内是否包含大部分靶器官的体积),筛选与扁平病灶的扁平面垂直线的夹角在预设范围内的路径。在一些实施例中,处理设备130可以,(1)获取当前路径对应的入针方向;(2)根据入针方向计算垂直于路径的投影平面方程;(3)基于投影平面方程,将病灶区域对应的坐标和靶点坐标进行投影,获取相应的病灶投影点云和靶点投影点;(4)以靶点投影点为圆心,路径的安全半径(例如,路径与危险区域的预设距离阈值)为半径绘制圆形,计算圆形内投影点云的数量占病灶投影点云总数的比值,当比值大于预设比值(例如,0.6、0.7等),则表示沿该方向穿刺病灶区域的大部分均在穿刺路径上,该路径与扁平病灶的扁平面垂直线的夹角不在预设范围内(例如,图26C(b)中路径b),将其排除;当比值小于等于预设比值时,则表示该路径与扁平病灶的扁平面垂直线的夹角在预设范围内(例如,图26C(b)中路径a)。
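上述步骤(1)-(4)中"向垂直于入针方向的平面投影并统计安全半径内占比"的计算,可以用如下Python草图示意(以三维点列表近似病灶体素,函数名为本文假设,仅为示意性实现):

```python
import math

def projection_in_circle_ratio(lesion_points, target, direction, radius):
    """将病灶体素投影到过靶点且垂直于入针方向的平面上,
    统计落在以靶点投影为圆心、安全半径内的投影点占比。"""
    norm = math.sqrt(sum(c * c for c in direction))
    d = tuple(c / norm for c in direction)       # 入针方向单位向量

    def project(p):
        # 去除沿入针方向的分量,得到投影平面内相对靶点的分量
        t = sum((pc - tc) * dc for pc, tc, dc in zip(p, target, d))
        return tuple(pc - tc - t * dc for pc, tc, dc in zip(p, target, d))

    inside = sum(1 for p in lesion_points
                 if math.sqrt(sum(c * c for c in project(p))) <= radius)
    return inside / len(lesion_points)
```

按该草图,沿扁平病灶长轴方向入针时占比趋高,垂直于扁平面入针时占比趋低。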
通过将穿刺路径约束为与扁平病灶的扁平面垂直线的夹角在预设范围内,可以使得扁平病灶的穿刺路径是从“大端”方向(即扁平面的垂直线方向)进行穿刺,在穿刺路径尽可能垂直病灶的扁平面的同时满足临床需求,针对性地确定穿刺深度更短、效果更优的路径,提高穿刺路径的可行性和穿刺便捷性,从而保证取样结果/病灶穿刺结果的可靠性。
在一些实施例中,可以按照任意合理的顺序筛选满足第一约束条件的初始路径。例如,可以确定位于靶区所在片层的邻近片层的第一初始路径,然后排除第一初始路径中与病床床板接触的体廓上的入针点的路径,获得第二初始路径;进一步从第二初始路径中筛选出穿刺深度小于预设深度阈值的路径为最终确定的初始路径。又如,可以先排除与病床床板接触的体廓上的入针点,确定第一初始路径,然后从第一初始路径中确定位于靶区所在片层的邻近片层的路径为最终的初始路径。
步骤2430,基于第二约束条件,从初始路径中确定候选路径。在一些实施例中,步骤2430可以由处理设备130或路径筛选模块320执行。
在一些实施例中,第二约束条件可以包括路径与危险区域的距离大于预设距离阈值。
危险区域可以是包含危险组织(例如,血管、骨骼等)的区域。在一些实施例中,可以根据组织分割结果(例如,通过执行流程600实现组织分割)或脉管识别结果(例如,通过执行流程1700实现脉管识别),对靶器官内部组织进行分级,基于分级结果和路径规划条件(例如,约束条件)确定危险区域。例如,处理设备130可以按血管段的平均直径,优先考虑不经过靶器官内部的所有血管(即将所有血管确定为危险组织),若此种情况下无法获取有效路径或获取的有效路径较少则弱化细血管的影响,将靶器官内部的细血管设置为可穿组织(即将粗血管确定为危险组织),进行路径规划。具体地,处理设备130可以先通过深度学习分割等方式获取靶器官内部的各血管段,将血管段的平均直径与血管粗细分辨的阈值Dt(例如,1mm、2mm)进行比较,若小于该阈值Dt则确定为细血管,大于阈值Dt则确定为粗血管,并通过不同的标记值区分细血管和粗血管,刷新所有血管段,基于此确定危险区域。例如,可以将仅包含粗血管的区域确定为危险区域,或将包含细血管以及粗血管的区域确定为危险区域。
预设距离阈值可以是危险组织的边缘到路径的最短距离。在一些实施例中,可以基于组织之间的距离、组织分割误差、计划穿刺与实际穿刺的配准误差、末端执行设备(例如,末端执行设备120)的执行误差等中的一种或多种参数确定预设距离阈值(例如,2mm、3mm、5mm、或7mm等)。
通过将穿刺路径约束为与危险区域的距离大于预设距离阈值,可以避免穿刺路径与血管等危险组织距离较近导致穿刺过程中误伤其他组织,给患者造成二次伤害。
在一些实施例中,在确定候选路径过程中,可以基于第一预设条件自适应调节路径规划条件(例如,第二约束条件)。路径规划条件可以反映候选路径的筛选条件(例如,危险区域的范围和/或预设安全距离值)。在一些实施例中,基于第一预设条件自适应调节路径规划条件可以包括:当候选路径的数量与初始路径的数量的比值小于第三阈值时,调节危险区域的范围。其中,第三阈值可以表示危险组织的变更控制系数(例如,0.2、0.3)。例如,若初始路径的数量是N1,初始的路径规划条件中将所有血管均设置为危险组织,基于此筛选确定的候选路径的数量是N2,当N2/N1≤H1(即第三阈值)时,表示大部分初始路径在安全范围内与危险组织相交,此时可以变更危险区域的范围(例如,修改血管的标记值,将直径小于1.5mm的血管设置为可穿组织,从危险区域中剔除)。
在一些实施例中,可以基于调节后的危险区域,从初始路径中确定候选路径;当调节前获得的候选路径的数量与调节后获得的候选路径的数量的比值小于第四阈值时,将调节后获得的候选路径作为最终的候选路径;当调节前获得的候选路径的数量与调节后获得的候选路径的数量的比值大于第四阈值时,将调节前获得的候选路径作为最终的候选路径。例如,可以根据将直径小于1.5mm的血管设置为可穿组织时(即不包含在危险区域中)确定的危险区域,再次筛选与危险区域的距离大于预设距离阈值的初始路径,并确定调节后的候选路径的数量N3,当N2/N3<H2(即第四阈值)时,表示直径小于1.5mm的血管对穿刺路径的规划造成了影响,此时可以将N3对应的候选路径确定为最终的候选路径;当N2/N3>H2时,表示将直径小于1.5mm的细血管设置为可穿组织得到的候选路径的结果与将所有血管均设置为不可穿刺得到的候选路径结果之间相差较小,此时则将N2对应的候选路径确定为最终的候选路径。
在一些实施例中,第四阈值可以为任意合理的数值(例如,0.6、0.8),在此不作限制。
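上述基于第三阈值H1与第四阈值H2的自适应调节逻辑,可以用如下Python草图示意(N1、N2、N3的含义沿用上文示例:初始路径数、调节前候选路径数、调节后候选路径数;函数名为本文假设,仅为示意):

```python
def choose_candidates(n1_initial, cands_before, cands_after, h1, h2):
    """自适应调节草图。
    cands_before:将全部血管视为危险组织时筛得的候选路径(数量N2);
    cands_after :缩小危险区域(细血管置为可穿组织)后筛得的候选路径(数量N3)。
    当 N2/N1 < H1 时触发调节;调节后按 N2/N3 与 H2 的比较决定取哪组结果。"""
    n2 = len(cands_before)
    if n1_initial and n2 / n1_initial >= h1:
        return cands_before            # 未触发调节,保留原候选路径
    n3 = len(cands_after)
    if n3 and n2 / n3 < h2:
        return cands_after             # 细血管影响显著,采用调节后的结果
    return cands_before                # 两组结果相差较小,仍取调节前的结果
```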
通过在确定候选路径过程中,自适应调节路径规划条件(例如,危险区域的范围),能够充分考虑危险组织(例如,粗细血管)对穿刺路径规划的影响,帮助在安全风险与推荐路径多样性间起到平衡作用(例如,将细血管置为可穿和不可穿),减少穿刺导致的并发症的发生。例如,如图27中所示,穿刺路径避开了血管和胸前肋骨。
在一些实施例中,基于第一预设条件自适应调节路径规划条件还可以包括:当不存在满足路径规划条件的候选路径时,重新设定穿刺参数。例如,穿刺参数可以包括但不限于穿刺针长度、直径等。在一些实施例中,可以基于重新设定的穿刺参数,确定初始路径,并根据初始路径确定候选路径。仅作为示例,处理设备130可以基于穿刺深度最短的穿刺针1号的长度、直径等参数确定满足上述步骤2420中第一约束条件的初始路径,并筛选与危险区域的距离大于预设距离阈值的初始路径(即满足第二约束条件的初始路径)将其确定为候选路径,当不存在满足路径规划条件的候选路径时,***自适应将穿刺参数更换为穿刺深度更长的穿刺针2号对应的长度、直径等再次执行初始路径和候选路径确定过程(即步骤2420和步骤2430),直至确定至少一个满足路径规划条件的候选路径。
步骤2440,基于候选路径,确定目标路径。在一些实施例中,步骤2440可以由处理设备130或路径推荐模块330执行。
结合上文,在一些实施例中,可以基于候选路径的共面和非共面特性,确定目标路径。
在一些实施例中,当确定的候选路径中同时包含共面候选路径和非共面候选路径时,可以基于非共面候选路径中的最短穿刺深度D1、共面候选路径中垂直于病床床板方向小角度偏转的路径中的最短穿刺深度D2及非小角度偏转的路径中的最短穿刺深度D3筛选目标路径。小角度偏转是指经过靶点垂直于床板并由人体指向床板的方向的向量N与靶点和入针点对应的方向向量T之间的夹角小于预设阈值(例如,2°、3°、5°、10°、15°等),非小角度偏转是指经过靶点垂直于床板并由人体指向床板的方向的向量N与靶点和入针点对应的方向向量T之间的夹角大于所述预设阈值。在一些实施例中,小角度的偏转范围可以在[0°,15°]范围内,例如,垂直于病床床板方向的共面路径。穿刺路径对应的偏转角度越小,操作越便利,尤其地,穿刺路径垂直病床床板方向操作最为便利。具体地,当最短穿刺深度D2或最短穿刺深度D3最小时,若最短穿刺深度D2与最短穿刺深度D3的差值的绝对值小于第三预设值,则可以确定最短穿刺深度D2对应的小角度偏转的共面候选路径为目标路径,否则,确定最短穿刺深度D2与最短穿刺深度D3中最小值对应的共面候选路径为目标路径;当最短穿刺深度D1最小时,若最短穿刺深度D2与最短穿刺深度D3中的最小值与最短穿刺深度D1的差值的绝对值小于第三预设值,则可以确定最小值对应的共面候选路径为目标路径,否则,确定最短穿刺深度D1对应的非共面候选路径为目标路径。在一些实施例中,第三预设值可以根据用户习惯、穿刺操作历史数据、患者信息等中的一种或多种确定。例如,当穿刺操作为手动执行时,可以基于医生阅片便利性,将第三预设值设定为成像设备110的扫描片段的范围值20mm。
仅作为示例,当确定的候选路径中同时包含共面候选路径和非共面候选路径时,处理设备130可以计算非共面候选路径中的最短穿刺深度D1、共面候选路径中垂直病床床板方向小角度偏转(例如,偏转角度在[0°,15°]范围内)的最短穿刺深度D2以及共面候选路径中垂直病床床板方向非小角度偏转的路径中的最短穿刺深度D3。进一步地,当D1、D2、D3中的最小值对应共面候选路径(即最短穿刺深度D2或最短穿刺深度D3最小)时,处理设备130可以比较D2和D3的大小,当小角度偏转对应的D2最小时,确定D2对应的候选路径为目标路径;当非小角度偏转对应的D3最小时,若D2-D3<第三预设值(例如,20mm),则确定操作更为便利的小角度偏转的D2对应的共面候选路径为目标路径,若D2-D3≥第三预设值,则以穿刺深度安全性为目标,确定穿刺深度更短的D3对应的候选路径为目标路径。当D1、D2、D3中的最小值对应非共面候选路径(即最短穿刺深度D1最小)时,处理设备130可以计算D2和D3中的最小值Dmin,若Dmin-D1<第三预设值(例如,20mm),则以阅片便利性为目标,确定Dmin对应的共面候选路径为目标路径;若Dmin-D1≥第三预设值,则以安全性为目标,确定穿刺深度更短的D1对应的非共面候选路径为目标路径。在一些实施例中,最短穿刺深度D2与最短穿刺深度D3的差值(即D2-D3)对应的预设值,和最短穿刺深度D2与最短穿刺深度D3中的最小值与最短穿刺深度D1的差值(即Dmin-D1)对应的预设值,可以为相同或不同的数值。
在一些实施例中,当候选路径中仅包含非共面候选路径时,可以基于非共面候选路径中的最短穿刺深度D1筛选目标路径(例如,确定D1对应的非共面候选路径为目标路径)。在一些实施例中,当候选路径仅包含共面候选路径时,可以基于共面候选路径中垂直于病床床板方向小角度偏转的路径中的最短穿刺深度D2及非小角度偏转的路径中的最短穿刺深度D3筛选目标路径。例如,处理设备130可以比较D2和D3的大小,当小角度偏转对应的D2最小时,确定D2对应的候选路径为目标路径;当非小角度偏转对应的D3最小时,若D2-D3<第三预设值(例如,20mm),则确定操作更为便利的小角度偏转的D2对应的共面候选路径为目标路径,若D2-D3≥第三预设值,则以穿刺深度安全性为目标,确定穿刺深度更短的D3对应的候选路径为目标路径。
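上述基于D1、D2、D3与第三预设值的目标路径筛选逻辑,可以用如下Python草图示意(返回值表示目标路径取自哪一组最短穿刺深度,None表示无候选;函数名与返回约定为本文假设,仅为示意):

```python
def select_target_path(d1=None, d2=None, d3=None, preset=20.0):
    """d1: 非共面候选路径的最短穿刺深度;
    d2: 共面候选路径中垂直床板方向小角度偏转的最短穿刺深度;
    d3: 共面候选路径中非小角度偏转的最短穿刺深度;
    preset: 第三预设值(示例取20mm)。"""
    if d1 is None and d2 is None and d3 is None:
        return None
    if d2 is None and d3 is None:
        return "D1"                       # 仅包含非共面候选路径
    coplanar_min = min(v for v in (d2, d3) if v is not None)
    if d1 is None or coplanar_min <= d1:  # 共面深度最小(或仅有共面候选)
        if d2 is not None and d3 is not None and abs(d2 - d3) < preset:
            return "D2"                   # 深度相近,优先操作更便利的小角度偏转
        return "D2" if coplanar_min == d2 else "D3"
    # 非共面深度最小
    if coplanar_min - d1 < preset:        # 深度差小,以阅片便利性为先取共面
        return "D2" if coplanar_min == d2 else "D3"
    return "D1"                           # 深度差大,以安全性为先取非共面
```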
应当注意的是,上述有关流程2400的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程2400进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。
图28是根据本说明书另一些实施例所示的示例性穿刺路径规划方法的示意图。在一些实施例中,流程2800可以由穿刺路径规划***100(例如,处理设备130)或穿刺路径规划装置300执行。例如,流程2800可以以程序或指令的形式存储在存储设备(例如,存储设备150、***的存储单元)中,当处理器或图3所示的模块执行程序或指令时,可以实现流程2800。
仅作为示例,如图28所示,处理设备130从成像设备110或存储设备150获取目标对象的目标图像后,可以对目标图像进行分割(例如,通过流程600的分割方法),并确定分割图像中脉管类型(例如,通过流程1700的脉管识别方法),基于分割结果确定靶点,然后基于靶点和约束条件确定目 标路径。具体地:
步骤2810,对目标图像进行分割。
在一些实施例中,处理设备130可以利用深度学习模型、阈值分割等方式对目标图像进行分割,得到初步分割结果。在一些实施例中,处理设备130可以对目标图像中的目标结构进行粗分割,得到目标结构掩膜;基于软连通域分析,确定目标结构掩膜的定位信息;基于目标结构掩膜的定位信息,对目标结构进行精准分割,得到初步分割结果。更多通过粗分割和精准分割得到分割结果的内容参见图6-图16中描述。
步骤2820,对目标图像进行脉管识别。
在一些实施例中,可以基于初步分割结果进行脉管识别,得到目标图像的目标分割结果。在一些实施例中,目标分割结果可以包括不同级别的脉管和/或脉管的类型。
在一些实施例中,处理设备130可以基于第一分割模型,获取目标图像的第一分割结果;对第一分割结果进行骨架化处理,获取第一脉管骨架集;基于第二分割模型,获取目标图像的第二分割结果;融合第一分割结果和第二分割结果,获取融合结果。在一些实施例中,处理设备130可以对融合结果进行骨架化处理,获取类型待定脉管的第二脉管骨架;获取与第二脉管骨架的最小空间距离小于第二阈值的第一脉管骨架,将其作为参考脉管骨架;确定第二脉管骨架与参考脉管骨架之间的空间距离,将空间距离最小的两个点确定为最近点组;基于最近点组确定类型待定脉管的脉管类型,从而得到目标分割结果。更多通过第一分割模型和第二分割模型获得脉管类型的内容可以参见图17-图23中描述。
在一些实施例中,基于目标分割结果,处理设备130可以进一步对靶器官内部的组织进行分级,确定危险组织。例如,处理设备130可以根据分割获取的靶器官内部的血管mask,通过边界腐蚀方式确定每个血管的中心点,计算中心点到血管边界的最小距离,作为该点的血管半径,然后基于预设血管分辨阈值Dt,将小于阈值Dt的血管置为细血管,大于阈值Dt的血管置为粗血管,并以不同的标记值区分。
步骤2830,基于目标分割结果确定靶点。
在一些实施例中,处理设备130可以根据目标分割结果确定靶区,通过边界腐蚀等方法确定靶区的体积中心或重心点,并将其确定为靶点。更多内容参见图24中描述。
步骤2840,根据靶点和第一约束条件确定初始路径。
仅作为示例,在步骤2841中,处理设备130可以根据靶点,确定位于靶区所在片层的邻近片层的路径为第一初始路径;在步骤2843中,处理设备130可以基于穿刺参数(例如,当前设定的穿刺针长度),确定第一初始路径中穿刺深度小于预设深度阈值的路径为第二初始路径;在步骤2845中,处理设备130可以排除与病床床板接触的体廓上的入针点对应的第二初始路径,获取第三初始路径。在一些实施例中,当为扁平病灶时,处理设备130可以进一步执行步骤2847,筛选第三初始路径中与扁平病灶的扁平面垂直线的夹角在预设范围内的路径,并将其确定为最终的初始路径。
可以理解,图28中关于步骤2841-步骤2847的执行顺序仅作为示例,在一些实施例中,可以按照任意合理的顺序执行步骤2841-步骤2847中的至少一个(例如,在步骤2841后,可以先执行步骤2845再执行步骤2843),本说明书对此不做限制。
步骤2850,从初始路径中确定候选路径。
在一些实施例中,处理设备130可以基于第二约束条件从初始路径中确定候选路径。在一些实施例中,在确定候选路径过程中,处理设备130可以基于第一预设条件,自适应调节路径规划条件。仅作为示例,处理设备130可以从初始路径中确定与危险区域的距离大于预设距离阈值的路径,并当候选路径的数量与初始路径的数量的比值小于第三阈值时,调节危险区域的范围,基于调节后的危险区域,再次从初始路径中确定多个候选路径;当调节前获得的候选路径的数量与调节后获得的候选路径的数量的比值小于第四阈值时,将调节后获得的候选路径作为最终的候选路径;当调节前获得的候选路径的数量与调节后获得的候选路径的数量的比值大于第四阈值时,将调节前获得的候选路径作为最终的候选路径。
在一些实施例中,当执行完步骤2850后不存在满足路径规划条件的候选路径时,处理设备130可以重新设定穿刺参数(例如,当基于某穿刺针的长度确定的预设深度阈值无法有效规划路径的时候,增加穿刺针的长度,即增大预设深度阈值),并根据该穿刺参数再次执行步骤2840-步骤2850,直至确定满足路径规划条件的候选路径;当存在时,则执行步骤2860。
在步骤2860中,处理设备130可以基于候选路径确定目标路径。在一些实施例中,处理设备130可以计算非共面候选路径中的最短穿刺深度D1、共面候选路径中垂直病床床板方向小角度偏转的最短穿刺深度D2以及非小角度偏转路径中的最短穿刺深度D3,基于最短穿刺深度D1、最短穿刺深度D2以及最短穿刺深度D3确定目标路径。更多内容可以参见图24及其相关描述,此处不再赘述。
在一些实施例中,处理设备130可以向用户推荐目标路径,和/或根据用户反馈(例如,用户选择的目标路径或重新规划的穿刺路径)控制末端执行设备120执行穿刺。
应当注意的是,上述有关流程2800的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程2800进行各种修正和改变。例如,步骤2810和步骤2820可以同时执行。又如,可以先执行步骤2830,再执行步骤2820,即先基于步骤2810获得的分割结果确定靶点,然后再进行脉管识别,以确定危险区域。然而,这些修正和改变仍在本说明书的范围之内。
本说明书一些实施例中,通过穿刺路径规划方法和/或***,(1)基于穿刺活检的临床要求,使用至少两个约束条件,计算安全可行的最佳穿刺路径,有效缩短规划时间,提高穿刺准确性,减少并发症的发生;(2)确定与危险区域的距离大于预设距离阈值的初始路径为候选路径,能有效把控穿刺操作的风险;(3)自适应调节路径规划过程,充分考虑安全性以及路径规划多样性,提高路径规划准确率和效率;(4)综合考虑操作便利性和安全性确定最终的目标路径,保证路径规划的准确性和安全性;(5)通过在粗分割阶段采用软连通域分析方法,准确保留目标结构区域的同时,可以有效排除假阳性区域,不仅提高了粗定位阶段对目标结构定位的准确率,而且有助于后续精准分割;(6)通过利用高丰富度的第二分割模型的分割结果在低丰富度但准确度高的第一分割模型的分割结果上进行脉管生长,对两个模型进行融合,能够准确有效的得到高丰富度高准确度的多类别脉管分割结果。
需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获得的有益效果。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的***组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的服务器或移动设备上安装所描述的***。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外,对本说明书权利要求最广范围有限制的文件(当前或之后附加于本说明书中的)也除外。需要说明的是,如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方,以本说明书的描述、定义和/或术语的使用为准。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (29)

  1. 一种用于穿刺路径规划的***,包括:
    至少一个存储介质,包括一组指令;以及
    与所述至少一个存储介质通信的一个或以上处理器,其中,当执行所述指令时,所述一个或以上处理器用于:
    基于目标图像确定靶点;
    基于所述靶点和至少两个约束条件,确定候选路径,其中,在确定所述候选路径过程中,基于第一预设条件,自适应调节路径规划条件;
    基于所述候选路径,确定目标路径。
  2. 根据权利要求1所述的***,所述基于目标图像确定靶点包括:
    对所述目标图像中的目标结构进行粗分割,得到目标结构掩膜;
    基于软连通域分析,确定所述目标结构掩膜的定位信息;
    基于所述目标结构掩膜的定位信息,对所述目标结构进行精准分割;
    基于分割结果,确定所述靶点。
  3. 根据权利要求2所述的***,所述基于软连通域分析,确定所述目标结构掩膜的定位信息包括:
    确定所述目标结构掩膜中的连通域的数量;
    基于所述连通域的数量,确定所述目标结构掩膜的定位信息。
  4. 根据权利要求3所述的***,所述基于所述连通域数量,确定所述目标结构掩膜的定位信息包括:
    响应于所述连通域的数量大于第一预设值且小于第二预设值,确定所述目标结构掩膜中最大连通域的面积与连通域总面积的比值;
    判断所述比值是否大于第一阈值;
    若是,则确定所述最大连通域为保留连通域;否则,确定所述目标结构掩膜中每个连通域均为所述保留连通域;
    基于所述保留连通域,确定所述目标结构掩膜的定位信息。
  5. 根据权利要求3所述的***,所述基于所述连通域数量,确定所述目标结构掩膜的定位信息包括:
    响应于所述连通域的数量大于或等于第二预设值,按照面积从大到小的顺序对所述目标结构掩膜中每个连通域进行排序;
    基于排序结果,确定排名前n的连通域为目标连通域;
    基于第二预设条件,从所述目标连通域中确定保留连通域,所述保留连通域至少包括所述目标结构掩膜中的最大连通域;
    基于所述保留连通域,确定所述目标结构掩膜的定位信息。
  6. 根据权利要求2所述的***,
    所述目标结构掩膜的定位信息包括所述目标结构掩膜的外接矩形的位置信息;和/或
    所述确定所述目标结构掩膜的定位信息包括:基于预设结构的定位坐标,对所述目标结构掩膜进行定位。
  7. 根据权利要求2所述的***,所述基于所述目标结构掩膜的定位信息,对所述目标结构进行精准分割包括:
    对所述目标结构进行初步精准分割,得到初步精准分割结果;
    基于所述初步精准分割结果,判断所述目标结构掩膜的定位信息是否准确;
    若是,将所述初步精准分割结果作为目标分割结果;否则,通过自适应滑窗方式确定所述目标结构的目标分割结果。
  8. 根据权利要求7所述的***,所述通过自适应滑窗方式确定所述目标结构的目标分割结果包括:
    确定目标方向,所述目标方向是所述定位信息存在偏差的方向;
    根据重叠率参数,在所述目标方向上进行自适应滑窗计算,以确定所述目标结构的目标分割结果。
  9. 根据权利要求1所述的***,所述一个或以上处理器还用于:
    基于第一分割模型,获取所述目标图像的第一分割结果;
    对所述第一分割结果进行骨架化处理,获取第一脉管骨架集,其中,所述第一脉管骨架集包括至少一条类型已确定的第一脉管骨架;
    基于第二分割模型,获取所述目标图像的第二分割结果,所述第二分割结果中包括至少一条类型待定脉管;
    融合所述第一分割结果和所述第二分割结果,获取融合结果;
    基于所述融合结果,确定危险区域。
  10. 根据权利要求9所述的***,
    所述第二分割结果中的至少一条脉管未包括在所述第一分割结果中;
    所述基于所述融合结果,确定危险区域包括:
    对所述融合结果进行骨架化处理,获取所述类型待定脉管的第二脉管骨架;
    获取与所述第二脉管骨架的最小空间距离小于第二阈值的第一脉管骨架,将其作为参考脉管骨架;
    确定所述第二脉管骨架与所述参考脉管骨架之间的空间距离,将所述空间距离最小的两个点确定为最近点组;
    基于所述最近点组确定所述类型待定脉管的脉管类型;
    基于所述融合结果的脉管类型,确定所述危险区域。
  11. 根据权利要求10所述的***,所述基于所述最近点组确定所述类型待定脉管的脉管类型包括:
    当所述参考脉管骨架仅包括一条参考脉管骨架时,
    基于所述最近点组的位置确定所述第二脉管骨架的脉管类型;
    当所述参考脉管骨架包括一条以上参考脉管骨架时,
    基于所述最近点组确定候选脉管骨架,基于所述候选脉管骨架确定所述第二脉管骨架的脉管类型。
  12. 根据权利要求10所述的***,所述第二阈值通过以下中的一项或多项获得:
    所述第二阈值至少基于所述目标图像对应的生物体的部位获得;
    所述第二阈值通过机器学习方法获得,所述机器学习方法基于同类生物体对应的部位的医学图像和类型判断结果获得。
  13. 根据权利要求1所述的***,所述约束条件包括:
    路径与危险区域的距离大于预设距离阈值,
    路径位于靶区所在片层的邻近片层,
    排除与病床床板接触的体廓上的入针点,
    路径的穿刺深度小于预设深度阈值,或
    路径与扁平病灶的扁平面垂直线的夹角在预设范围内。
  14. 根据权利要求1所述的***,所述基于所述靶点和至少两个约束条件,确定候选路径,包括:
    基于所述靶点和第一约束条件,确定初始路径;
    基于第二约束条件,从所述初始路径中确定候选路径;
    其中,所述第一约束条件包括以下中至少一个:路径位于靶区所在片层的邻近片层,排除与病床床板接触的体廓上的入针点,路径的穿刺深度小于预设深度阈值,或路径与扁平病灶的扁平面垂直线的夹角在预设范围内;所述第二约束条件包括路径与危险区域的距离大于预设距离阈值。
  15. 根据权利要求14所述的***,
    所述基于第一预设条件,自适应调节路径规划条件包括:
    当所述候选路径的数量与所述初始路径的数量的比值小于第三阈值时,调节所述危险区域的范围;
    所述从所述初始路径中确定候选路径进一步包括:
    基于调节后的危险区域,从所述初始路径中确定候选路径;
    当调节前获得的候选路径的数量与调节后获得的候选路径的数量的比值小于第四阈值时,将调节后获得的候选路径作为最终的候选路径;
    当调节前获得的候选路径的数量与调节后获得的候选路径的数量的比值大于所述第四阈值时,将调节前获得的候选路径作为最终的候选路径。
  16. 根据权利要求1所述的***,所述基于第一预设条件,自适应调节路径规划条件,包括:
    当不存在满足所述路径规划条件的候选路径时,重新设定穿刺参数;所述穿刺参数至少包括穿刺针的长度和/或直径。
  17. 根据权利要求1所述的***,
    所述候选路径分为共面候选路径和非共面候选路径;
    所述基于所述候选路径,确定目标路径包括:
    如果所述候选路径同时包含共面候选路径和非共面候选路径,则基于所述非共面候选路径中的最短穿刺深度D1、所述共面候选路径中垂直于病床床板方向小角度偏转的路径中的最短穿刺深度D2及非小角度偏转的路径中的最短穿刺深度D3筛选目标路径;
    如果所述候选路径仅包含非共面候选路径,则基于所述D1筛选目标路径;
    如果所述候选路径仅包含共面候选路径,则基于所述共面候选路径的所述D2及所述D3筛选目标路径。
  18. 根据权利要求17所述的***,所述基于所述非共面候选路径中的最短穿刺深度D1、所述共面候选路径中垂直于病床床板方向小角度偏转的路径中的最短穿刺深度D2及非小角度偏转的路径中的最短穿刺深度D3筛选目标路径,包括:
    当所述最短穿刺深度D2或所述最短穿刺深度D3最小时,若所述最短穿刺深度D2与所述最短穿刺深度D3的差值的绝对值小于第三预设值,则确定所述最短穿刺深度D2对应的小角度偏转的共面候选路径为所述目标路径,否则,确定所述最短穿刺深度D2与所述最短穿刺深度D3中最小值对应的共面候选路径为所述目标路径;
    当所述最短穿刺深度D1最小时,若所述最短穿刺深度D2与所述最短穿刺深度D3中的最小值与所述最短穿刺深度D1的差值的绝对值小于所述第三预设值,则确定所述最小值对应的共面候选路径为所述目标路径,否则,确定所述最短穿刺深度D1对应的非共面候选路径为所述目标路径。
  19. 一种用于医学图像分割的***,包括:
    至少一个存储介质,包括一组指令;以及
    与所述至少一个存储介质通信的一个或以上处理器,其中,当执行所述指令时,所述一个或以上处理器用于:
    获取目标图像;
    对所述目标图像中的目标结构进行粗分割,得到目标结构掩膜;
    基于软连通域分析,确定所述目标结构掩膜的定位信息;
    基于所述目标结构掩膜的定位信息,对所述目标结构进行精准分割,以确定分割结果。
  20. 根据权利要求19所述的***,所述基于软连通域分析,确定所述目标结构掩膜的定位信息包括:
    确定所述目标结构掩膜中的连通域的数量;
    基于所述连通域的数量,确定所述目标结构掩膜的定位信息。
  21. The system of claim 20, wherein the determining the localization information of the target structure mask based on the number of connected domains includes:
    in response to the number of connected domains being greater than a first preset value and less than a second preset value, determining a ratio of the area of the largest connected domain in the target structure mask to the total area of the connected domains;
    determining whether the ratio is greater than a first threshold;
    if so, determining the largest connected domain as the retained connected domain; otherwise, determining each connected domain in the target structure mask as a retained connected domain; and
    determining the localization information of the target structure mask based on the retained connected domain(s).
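The area-ratio rule of claim 21 can be illustrated with a minimal 2D sketch. The BFS labeling routine and the default threshold value are placeholders for whatever component-labeling method and first threshold an implementation actually uses:

```python
import numpy as np

def label_components(mask):
    """4-connected component labeling of a 2D boolean mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # already assigned to a component
        count += 1
        labels[seed] = count
        stack = [seed]
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = count
                    stack.append((nr, nc))
    return labels, count

def retained_components(mask, ratio_thresh=0.9):
    """'Soft' connected-domain retention: keep only the largest domain
    when its area dominates the mask, otherwise keep every domain."""
    labels, n = label_components(mask)
    if n == 0:
        return labels > 0
    areas = np.bincount(labels.ravel())[1:]  # per-domain pixel counts
    if areas.max() / areas.sum() > ratio_thresh:
        return labels == (areas.argmax() + 1)  # largest domain only
    return labels > 0                          # keep all domains
```

The retained mask would then be localized (e.g. via its bounding rectangle, as in claim 23) to drive the fine segmentation stage.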
  22. The system of claim 20, wherein the determining the localization information of the target structure mask based on the number of connected domains includes:
    in response to the number of connected domains being greater than or equal to the second preset value, sorting the connected domains in the target structure mask in descending order of area;
    determining, based on the sorting result, the top n connected domains as target connected domains;
    determining retained connected domains from the target connected domains based on a second preset condition, the retained connected domains including at least the largest connected domain in the target structure mask; and
    determining the localization information of the target structure mask based on the retained connected domains.
  23. The system of claim 19, wherein
    the localization information of the target structure mask includes position information of a bounding rectangle of the target structure mask; and/or
    the determining localization information of the target structure mask includes: localizing the target structure mask based on localization coordinates of a preset structure.
  24. The system of claim 19, wherein the performing fine segmentation on the target structure based on the localization information of the target structure mask includes:
    performing preliminary fine segmentation on the target structure to obtain a preliminary fine segmentation result;
    determining, based on the preliminary fine segmentation result, whether the localization information of the target structure mask is accurate;
    if so, taking the preliminary fine segmentation result as the target segmentation result; otherwise, determining the target segmentation result of the target structure through an adaptive sliding-window approach.
  25. The system of claim 24, wherein the determining the target segmentation result of the target structure through an adaptive sliding-window approach includes:
    determining a target direction, the target direction being a direction in which the localization information deviates; and
    performing adaptive sliding-window computation along the target direction according to an overlap-ratio parameter to determine the target segmentation result of the target structure.
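One building block of such a sliding-window pass is computing window offsets along the deviating axis from the overlap-ratio parameter. A minimal sketch, assuming fixed-size windows and a final window clamped to the end of the extent (the function name and defaults are illustrative, not from the application):

```python
def sliding_windows(length, win, overlap):
    """Window start offsets along one axis: windows of size `win` with
    the given overlap ratio, covering [0, length) completely."""
    step = max(1, int(win * (1.0 - overlap)))  # stride between windows
    starts = list(range(0, max(length - win, 0) + 1, step))
    if starts[-1] + win < length:
        # Clamp a final window so the tail of the extent is covered too.
        starts.append(length - win)
    return starts
```

Each window `[s, s + win)` would be segmented independently and the per-window results merged, e.g. by averaging predictions in the overlapping regions.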
  26. A system for identifying vessels in an organism, comprising:
    at least one storage medium including a set of instructions; and
    one or more processors in communication with the at least one storage medium, wherein when executing the set of instructions, the one or more processors are configured to:
    obtain a target image of an organism;
    obtain a first segmentation result of the target image based on a first segmentation model;
    perform skeletonization on the first segmentation result to obtain a first vessel skeleton set, the first vessel skeleton set including at least one first vessel skeleton whose type has been determined;
    obtain a second segmentation result of the target image based on a second segmentation model, the second segmentation result including at least one vessel of undetermined type; and
    fuse the first segmentation result and the second segmentation result to obtain a fusion result.
  27. The system of claim 26, wherein
    at least one vessel in the second segmentation result is not included in the first segmentation result; and
    the one or more processors are further configured to:
    perform skeletonization on the fusion result to obtain a second vessel skeleton of the vessel of undetermined type;
    obtain, as a reference vessel skeleton, a first vessel skeleton whose minimum spatial distance to the second vessel skeleton is less than a second threshold;
    determine the spatial distances between the second vessel skeleton and the reference vessel skeleton, and determine the two points with the smallest spatial distance as the nearest point pair; and
    determine the vessel type of the vessel of undetermined type based on the nearest point pair.
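The nearest-point-pair step of claim 27 reduces to finding the closest pair of points between two skeleton point sets. A brute-force sketch (sufficient for typical skeleton sizes; a k-d tree would be the usual optimization for large skeletons), with skeletons modeled as (N, 3) arrays of voxel coordinates:

```python
import numpy as np

def nearest_point_pair(skel_a, skel_b):
    """Return the pair of points (one per skeleton) with the smallest
    spatial distance, plus that distance."""
    a = np.asarray(skel_a, float)
    b = np.asarray(skel_b, float)
    # Pairwise Euclidean distances between all skeleton points (broadcast).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    i, j = np.unravel_index(d.argmin(), d.shape)
    return a[i], b[j], d[i, j]
```

The minimum distance decides whether a typed first skeleton qualifies as a reference skeleton (distance below the second threshold), and the pair's positions then inform the type assigned to the undetermined vessel.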
  28. The system of claim 27, wherein the determining the vessel type of the vessel of undetermined type based on the nearest point pair includes:
    when the reference vessel skeleton includes only one reference vessel skeleton,
    determining the vessel type of the second vessel skeleton based on the positions of the nearest point pair; and
    when the reference vessel skeleton includes more than one reference vessel skeleton,
    determining a candidate vessel skeleton based on the nearest point pairs, and determining the vessel type of the second vessel skeleton based on the candidate vessel skeleton.
  29. The system of claim 27, wherein the second threshold is obtained through one or more of the following:
    the second threshold is obtained based at least on the body part of the organism to which the target image corresponds;
    the second threshold is obtained through a machine learning method, the machine learning method being based on medical images of the corresponding body part of organisms of the same kind and on type determination results.
PCT/CN2023/085618 2022-04-02 2023-03-31 System and method for puncture path planning WO2023186133A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202210342911.7 2022-04-02
CN202210342911.7A CN116919584A (zh) 2022-04-02 2022-04-02 Puncture path planning method, system, apparatus, and storage medium
CN202210577448.4A CN117173077A (zh) 2022-05-25 2022-05-25 Method and system for identifying vessels in an organism
CN202210577448.4 2022-05-25
CN202210764219.3 2022-06-30
CN202210764219.3A CN117392144A (zh) 2022-06-30 2022-06-30 Medical image segmentation method, system, apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2023186133A1 true WO2023186133A1 (zh) 2023-10-05

Family

ID=88199507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085618 WO2023186133A1 (zh) 2022-04-02 2023-03-31 一种用于穿刺路径规划的***及方法

Country Status (1)

Country Link
WO (1) WO2023186133A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117084790A (zh) * 2023-10-19 2023-11-21 苏州恒瑞宏远医疗科技有限公司 Puncture orientation control method, apparatus, computer device, and storage medium
CN117243694A (zh) * 2023-10-19 2023-12-19 河北港口集团有限公司秦皇岛中西医结合医院 CT image-based puncture route planning method
CN117437309A (zh) * 2023-12-21 2024-01-23 梁山公用水务有限公司 AI-based digital management system for water conservancy and water affairs
CN118000908A (zh) * 2024-04-09 2024-05-10 北京天智航医疗科技股份有限公司 Total knee arthroplasty planning method, apparatus, device, and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100177950A1 (en) * 2008-07-25 2010-07-15 Aureon Laboratories, Inc. Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition
US20130080134A1 (en) * 2008-07-25 2013-03-28 Fundação D. Anna Sommer Champalimaud e Dr. Carlos Montez Champalimaud Systems and methods for predicting favorable-risk disease for patients enrolled in active surveillance
CN106997594A (zh) * 2016-01-26 2017-08-01 上海联影医疗科技有限公司 Method and apparatus for locating ocular tissue
CN108682015A (zh) * 2018-05-28 2018-10-19 科大讯飞股份有限公司 Lesion segmentation method, apparatus, device, and storage medium for biological images
CN109919912A (zh) * 2019-01-28 2019-06-21 平安科技(深圳)有限公司 Quality evaluation method and apparatus for medical images
CN110013306A (zh) * 2019-03-22 2019-07-16 北京工业大学 Puncture path planning method for CT-guided thermal ablation of liver tumors
CN110537960A (zh) * 2018-05-29 2019-12-06 上海联影医疗科技有限公司 Puncture path determination method, storage device, and robot-assisted surgery system
CN111127466A (zh) * 2020-03-31 2020-05-08 上海联影智能医疗科技有限公司 Medical image detection method, apparatus, device, and storage medium
CN111768400A (zh) * 2020-07-07 2020-10-13 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, and storage medium
US20200342600A1 (en) * 2018-01-08 2020-10-29 Progenics Pharmaceuticals, Inc. Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
CN111932554A (zh) * 2020-07-31 2020-11-13 青岛海信医疗设备股份有限公司 Pulmonary blood vessel segmentation method, device, and storage medium
WO2021169128A1 (zh) * 2020-02-29 2021-09-02 平安科技(深圳)有限公司 Method, apparatus, device, and storage medium for identifying and quantifying fundus retinal blood vessels
CN113516623A (zh) * 2021-04-23 2021-10-19 武汉联影智融医疗科技有限公司 Puncture path verification method, apparatus, computer device, and readable storage medium
CN113679470A (zh) * 2021-08-19 2021-11-23 江苏集萃苏科思科技有限公司 Computer-aided puncture path planning method, apparatus, and storage medium for cranial puncture surgery



Similar Documents

Publication Publication Date Title
WO2023186133A1 (zh) System and method for puncture path planning
CN107545584B (zh) Method, apparatus, and system for locating a region of interest in medical images
CN109074639B (zh) Image registration system and method in a medical imaging system
CN112508965B (zh) Automatic contour delineation system for normal organs in medical images
WO2018001099A1 (zh) Blood vessel extraction method and system
EP2810598B1 (en) Surgical support device, surgical support method and surgical support program
US8358819B2 (en) System and methods for image segmentation in N-dimensional space
CN109215032A (zh) Image segmentation method and system
US20160117814A1 (en) Method for distinguishing pulmonary artery and pulmonary vein, and method for quantifying blood vessels using same
CN112037200A (zh) Automatic anatomical feature recognition and model reconstruction method for medical images
US9730609B2 (en) Method and system for aortic valve calcification evaluation
CN108109170B (zh) Medical image scanning method and medical imaging device
US20220301224A1 (en) Systems and methods for image segmentation
CN111275762B (zh) System and method for patient positioning
WO2022164374A1 (en) Automated measurement of morphometric and geometric parameters of large vessels in computed tomography pulmonary angiography
Zhou et al. Detection and semiquantitative analysis of cardiomegaly, pneumothorax, and pleural effusion on chest radiographs
KR101625955B1 (ko) Method for distinguishing arteries and veins of an organ
Tahoces et al. Deep learning method for aortic root detection
CN104915989B (zh) CT image-based three-dimensional blood vessel segmentation method
JP5364009B2 (ja) Image generation apparatus, image generation method, and program therefor
Dhalia Sweetlin et al. Patient-Specific Model Based Segmentation of Lung Computed Tomographic Images.
WO2022223042A1 (zh) Surgical path processing system, method, apparatus, device, and storage medium
Cheng et al. Automatic centerline detection of small three-dimensional vessel structures
CN113177945A (zh) System and method for linking segmentation maps to volume data
Mughal et al. Early lung cancer detection by classifying chest CT images: a survey

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23778497

Country of ref document: EP

Kind code of ref document: A1