CN113920114A - Image processing method, image processing apparatus, computer device, storage medium, and program product - Google Patents


Info

Publication number
CN113920114A
CN113920114A (application CN202111513480.8A)
Authority
CN
China
Prior art keywords
image
lesion
image sequence
dimensional
focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111513480.8A
Other languages
Chinese (zh)
Other versions
CN113920114B (en)
Inventor
崔亚轩
潘伟凡
霍志敏
范伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Taimei Xingcheng Pharmaceutical Technology Co Ltd
Original Assignee
Hangzhou Taimei Xingcheng Pharmaceutical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Taimei Xingcheng Pharmaceutical Technology Co Ltd
Priority to CN202111513480.8A
Publication of CN113920114A
Application granted
Publication of CN113920114B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Embodiments of the present specification provide an image processing method, an image processing apparatus, a computer device, a storage medium, and a computer program product. The method includes receiving an image sequence of a target body part, determining a reference image corresponding to the image sequence based on prior knowledge, and cropping the image sequence according to the position information and size information corresponding to the region range mark in the reference image to obtain a lesion image sequence. In this way, the lesion image is accurately cropped from the image sequence with the reference image as a reference, the influence of noise in the background image is reduced, and the lesion is cropped completely from the image sequence.

Description

Image processing method, image processing apparatus, computer device, storage medium, and program product
Technical Field
Embodiments of the present description relate to the field of medical image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer device, a storage medium, and a computer program product.
Background
Lung cancer is the malignant tumor with the highest morbidity and mortality and the greatest threat to human health, and one of the main reasons for its high mortality is that malignant lung nodules are very difficult to detect.
During the clinical diagnosis and subsequent treatment of lung cancer, medical images of the same patient are acquired at different periods by a medical imaging device. A doctor reads the medical images from each period and may segment them with the aid of a deep learning network to obtain the size of the tumor. The change profile of the tumor over time is monitored based on its size, and whether the treatment is effective is judged accordingly.
Disclosure of Invention
In view of the above, embodiments of the present disclosure aim to provide an image processing method, an image processing apparatus, a computer device, a storage medium, and a computer program product, so as to address the problem in conventional technology that lesion images are not extracted from medical images with high accuracy.
An embodiment of the present specification provides an image processing method, which includes the following steps: receiving an image sequence of a target body part; acquiring a reference image corresponding to the image sequence, wherein the reference image has a two-dimensional lesion region of the target body part, the two-dimensional lesion region in the reference image corresponds to a region range mark, and the region range mark corresponds to position information and size information of the two-dimensional lesion region in the reference image; and cropping the image sequence according to the position information and the size information to obtain a lesion image sequence.
An embodiment of the present specification provides an image processing apparatus including: an image sequence receiving module for receiving an image sequence of a target body part; a reference image obtaining module for obtaining a reference image corresponding to the image sequence, wherein the reference image has a two-dimensional lesion region of the target body part, the two-dimensional lesion region in the reference image corresponds to a region range mark, and the region range mark corresponds to position information and size information of the two-dimensional lesion region in the reference image; and an image sequence cropping module for cropping the image sequence according to the position information and the size information to obtain a lesion image sequence.
An embodiment of the present specification provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the method steps of the above embodiments when executing the computer program.
An embodiment of the present specification provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method steps in the above embodiments.
An embodiment of the present specification provides a computer program product, which includes instructions that, when executed by a processor of a computer device, enable the computer device to perform the method steps in the above embodiments.
In the embodiments of the present specification, an image sequence of a target body part is received, a reference image corresponding to the image sequence is determined based on prior knowledge, and the image sequence is then cropped according to the position information and size information corresponding to the region range mark in the reference image to obtain a lesion image sequence. The lesion image is thus accurately cropped from the image sequence with the reference image as a reference, the influence of noise in the background image is reduced, and the lesion is cropped completely from the image sequence.
Drawings
Fig. 1a is a diagram illustrating an application environment of an image processing method in a scene example according to an embodiment.
Fig. 1b is a diagram illustrating an application environment of an image processing method according to an embodiment.
Fig. 2a is a schematic flow chart of an image processing method according to an embodiment.
Fig. 2b is a schematic flow chart of an image processing method according to an embodiment.
Fig. 3 is a block diagram of an image processing apparatus according to an embodiment.
Fig. 4 is an internal structural diagram of a computer device according to an embodiment.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present specification belong to the protection scope of the present specification.
In the following, part of the present specification refers to the term "subject", which denotes a person participating in a clinical trial of a new drug or a new treatment regimen and who may also be called a "volunteer". A "subject" may be a healthy person or a patient, depending on the needs of the clinical trial study — for example, a clinical research trial conducted with patients to investigate the therapeutic effect, side effects, and so on of a new drug or treatment regimen. The procedure differs for different types of clinical trials. After a subject joins a clinical trial study, the subject needs to communicate with a doctor (or nurse, social worker, or other investigator) periodically or as the trial requires, so that the subject's health can be monitored. A visit may be understood as the process in which a subject comes to the trial site while taking a new drug or receiving a new treatment regimen. At each visit, the subject receives medical examinations or laboratory tests, is questioned by the doctor, and receives further guidance from the doctor.
Please refer to fig. 1a. In a specific scenario example, a clinical trial site prepares a clinical trial of a new lung cancer drug X and, through subject recruitment and screening, identifies lung cancer patient A, who may be enrolled in the clinical trial study. Patient A needs to receive a medical image examination (e.g., CT (Computed Tomography) or MRI (Magnetic Resonance Imaging)) before taking new drug X. This time, patient A receives a medical image examination of the lungs, which generates an image sequence recorded as a baseline image sequence. It will be appreciated that the baseline image sequence includes several baseline images. Each baseline image in the baseline image sequence is obtained from a medical examination of the target body part before the treatment plan is administered. It is noted that the treatment regimen may be the administration of new drug X, and the target body part may be a lung.
The doctor reads the baseline image sequence, selects the baseline image with the largest two-dimensional lesion region from it, and applies a cross mark or a straight-line mark. The straight line produced by the straight-line mark, or the major axis produced by the cross mark, may be recorded as a lesion line segment. The baseline image with the largest two-dimensional lesion region can be used as the reference image; the length of the lesion line segment can be used as the size information of the two-dimensional lesion region, and the spatial coordinates of the midpoint of the lesion line segment can be used as its position information. Specifically, the film-reading computer Y receives the baseline image sequence and acquires the reference image from it. The two-dimensional lesion region in the reference image corresponds to a region range mark (such as a cross mark or a straight-line mark); the region range mark corresponds to position information (such as the spatial coordinates of the midpoint of the lesion line segment) and size information (such as the length of the lesion line segment) of the two-dimensional lesion region in the reference image. Further, the baseline image sequence is cropped according to the position information and the size information to obtain a lesion image sequence. Each lesion image is input in turn into a two-dimensional segmentation network (such as a U-Net network), which performs image segmentation on each lesion image to obtain the target image corresponding to the baseline image sequence. The target image may be a binary image, and the binary image represents the boundary contour of the lesion in the baseline image sequence.
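The last step of this scenario — representing the lesion's boundary contour via the binary target image — can be illustrated with a minimal pure-Python sketch. This is illustrative only, not the patent's implementation; the choice of 4-connectivity for deciding which foreground pixels count as boundary is an assumption.

```python
def boundary_pixels(mask):
    """Return the set of (row, col) foreground pixels that touch the
    background, i.e. the lesion's boundary contour in a binary mask."""
    rows, cols = len(mask), len(mask[0])
    boundary = set()
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            # A foreground pixel is on the boundary if any 4-neighbour
            # is background or lies outside the image.
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    boundary.add((r, c))
                    break
    return boundary

# A 5x5 mask with a 3x3 lesion: the 8 outer lesion pixels are boundary,
# the centre pixel (2, 2) is interior.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(len(boundary_pixels(mask)))  # 8
```

A production system would typically extract the contour from the segmentation network's output mask with an image library rather than a hand-rolled scan, but the definition of "boundary" is the same.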
In this embodiment, on the one hand, cropping the baseline image sequence with the reference image marked by the doctor reduces the influence of interference noise such as pulmonary blood vessels on segmentation accuracy and lowers the probability of false positives in the segmentation result. On the other hand, the doctor only needs to mark the reference image, and image segmentation of the baseline image sequence can then be performed accurately based on it, so that the lesion in the baseline image sequence is measured accurately and the time spent on lesion measurement is reduced.
After taking new drug X for a period of time, patient A arrives at the clinical trial site according to the visit plan and, as required by the doctor, receives another medical image examination of the lungs, which generates an image sequence recorded as a visit image sequence. It should be noted that after patient A takes new drug X, patient A returns to the clinical trial site for several visits according to the visit plan, and the image sequence generated by a medical image examination of patient A's lungs at any visit can be recorded as a visit image sequence. Specifically, the film-reading computer receives the visit image sequence and acquires the reference image from the baseline image sequence. Further, the visit image sequence is cropped according to the position information and the size information to obtain a lesion image sequence. Each lesion image is input in turn into a two-dimensional segmentation network (such as a U-Net network), which performs image segmentation on each lesion image to obtain the target image corresponding to each lesion image. The target image may be a binary image, and the binary image represents the boundary contour of the lesion in the visit image sequence. In this embodiment, on the one hand, cropping the visit image sequence with the reference image marked by the doctor reduces the influence of interference noise such as pulmonary blood vessels on segmentation accuracy and lowers the probability of false positives in the segmentation result of the visit image sequence. On the other hand, the doctor only needs to mark the reference image, and accurate image cropping can then be performed on subsequent visit image sequences, so that the lesion is measured accurately and the time spent on lesion measurement during visits is reduced.
In some embodiments, the boundary contour of the lesion in the visit image sequence is compared with the boundary contour of the lesion in the baseline image sequence to determine the size change of the lesion, thereby obtaining the curative effect of new drug X on the lung cancer or the progression of the lung cancer.
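The size-change comparison described here can be reduced to the percentage change of the lesion's longest diameter between baseline and visit. The helper below is a hypothetical sketch for illustration; the patent does not prescribe this particular formula.

```python
def diameter_change(baseline_mm, visit_mm):
    """Percent change of the lesion's longest diameter relative to baseline.

    Negative values indicate shrinkage (possible treatment response),
    positive values indicate growth (possible progression).
    """
    return 100.0 * (visit_mm - baseline_mm) / baseline_mm

# A lesion measured at 20 mm at baseline and 13 mm at a visit
# has shrunk by 35%.
print(round(diameter_change(20.0, 13.0), 1))  # -35.0
```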
Referring to fig. 1b, an embodiment of the present disclosure provides an image processing system, to which the image processing method provided in the present disclosure can be applied. The image processing system may include a hardware environment formed by a medical imaging device 110, a computer device 120, and a server 130. The medical imaging device 110 is connected to the server 130, and the server 130 communicates with the computer device 120 via a network. The medical imaging device 110 examines and images the target body part to obtain an image sequence, and the networked medical imaging device 110 transmits the image sequence of the target body part to the server 130. The computer device 120 requests the image sequence from the server 130. The server 130 sends the image sequence including the target body part to the computer device 120; the computer device 120 receives the image sequence including the target body part; acquires a reference image corresponding to the image sequence, the reference image having a two-dimensional lesion region of the target body part, wherein the two-dimensional lesion region in the reference image corresponds to a region range mark, and the region range mark corresponds to the position information and size information of the two-dimensional lesion region in the reference image; and crops the image sequence according to the position information and the size information to obtain a lesion image sequence. The medical imaging device 110 may be, but is not limited to, at least one of an ultrasound medical device, a CT examination device, and an MRI examination device. The computer device 120 may be, but is not limited to, a personal computer, laptop, smartphone, tablet, or portable wearable device. The server 130 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
With the development of scientific technology, new computing devices, such as quantum computing servers, may be developed, and may also be applied to the embodiments of the present specification.
Referring to fig. 2a, an embodiment of the present disclosure provides an image processing method. The image processing method includes the following steps.
S210, receiving an image sequence comprising the target body part.
The target body part may be a body organ such as a lung, a liver, or an eye, or a body part such as the face, the limbs, or the abdomen. The image sequence may be several medical images, arranged in temporal order or by spatial position, resulting from medical image detection of the target body part. The image sequence may be a baseline image sequence or a visit image sequence. Specifically, in some embodiments, the image sequence is received by a server connected to the medical imaging device: the medical imaging device detects the target body part, generates the image sequence, and transmits it to the server. It should be noted that in some embodiments, the server may be a server providing image processing functionality connected to a film-reading computer, which a physician may use to review an image sequence. In other embodiments, the target body part is detected by the medical imaging device, and the image sequence is generated and stored locally on the medical imaging device. The computer device sends a film-reading request to the medical imaging device, and upon receiving the request, the medical imaging device sends the image sequence to the computer device, so that the computer device acquires the image sequence. It should be noted that in some embodiments, the computer device may be the film-reading computer used by the physician to review the image sequence.
S220, acquiring a reference image corresponding to the image sequence.
The reference image serves as the reference when the image sequence is cropped. Taking the reference image as a reference, the frame position of the lesion region on each medical image of the image sequence can be accurately marked, and the area enclosed by each frame is larger than the area of the two-dimensional lesion region. The reference image may be the baseline image whose designated two-dimensional lesion region has the largest area in the baseline image sequence. It is noted that during the course of the treatment regimen the condition may be brought under control or may progress further, so the lesion may become larger or smaller. Therefore, to define the range of the lesion more accurately as the clinical trial protocol proceeds, the reference image may also be the visit image whose designated two-dimensional lesion region has the largest area in the visit image sequence.
In some embodiments, the reference image has a two-dimensional lesion region of the target body part, and the two-dimensional lesion region in the reference image corresponds to a region range mark. The two-dimensional lesion region is the planar region occupied by the lesion in the reference image. In some implementations, the region range mark may be a rectangular box labeled by a deep learning model. In some embodiments, the region range mark may be a lesion line segment marked in the reference image by a physician based on prior knowledge. Further, in some embodiments, the region range mark may also be a rectangular box generated in the reference image based on the lesion line segment marked by the physician.
The region range mark corresponds to the position information and size information of the two-dimensional lesion region in the reference image. In some embodiments, if the region range mark uses a lesion line segment, the position information of the two-dimensional lesion region may be the position information of the lesion line segment, such as the coordinates of its two end points or the spatial coordinates of its midpoint, and the size information of the two-dimensional lesion region may be the length of the lesion line segment. In some embodiments, if the region range mark uses a rectangular frame, the position information and size information of the two-dimensional lesion region may be represented as (x, y, w, h), where (x, y) represents the coordinate position of the rectangular frame in the reference image, w represents the width of the rectangular frame, and h represents its height.
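As a sketch of the two representations described above, the snippet below derives the position information (midpoint) and size information (length) from a lesion line segment and converts them into an equivalent (x, y, w, h) rectangular frame. The `margin` enlargement factor is an illustrative assumption, not a value from the specification.

```python
import math

def segment_to_region(p1, p2, margin=1.5):
    """Derive the region-range mark from a lesion line segment.

    p1, p2: (x, y) endpoints of the line segment marked by the physician.
    Returns the position information (segment midpoint), the size
    information (segment length), and an axis-aligned (x, y, w, h)
    crop box centred on the midpoint.  The margin factor enlarging the
    box beyond the lesion is assumed for illustration.
    """
    (x1, y1), (x2, y2) = p1, p2
    mid = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    length = math.hypot(x2 - x1, y2 - y1)
    side = length * margin
    box = (mid[0] - side / 2.0, mid[1] - side / 2.0, side, side)
    return mid, length, box

# A segment from (30, 40) to (60, 80): midpoint (45, 60), length 50.
mid, length, box = segment_to_region((30, 40), (60, 80))
print(mid, length)  # (45.0, 60.0) 50.0
```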
Specifically, each medical image in the image sequence includes background noise, such as blood vessels. To reduce the false positive rate caused by blood vessels, the image sequence is cropped; and to improve the accuracy of cropping, a reference image for the image sequence is set in advance and the image sequence is cropped with the reference image as a reference. In some embodiments, a reference image satisfying a preset condition is acquired from the baseline image sequence; this reference image can be used to crop a visit image sequence as well as the baseline image sequence. In some embodiments, the preset condition may set a threshold for the area of the two-dimensional lesion region in the reference image. In other embodiments, as the clinical trial proceeds, the lesion becomes larger and the reference image designated in the baseline image sequence is no longer suitable for cropping the image sequence. In that case, a reference image satisfying the preset condition is acquired from the visit image sequence, and this reference image can be used to crop the visit image sequence.
S230, cropping the image sequence according to the position information and the size information to obtain a lesion image sequence.
Specifically, the lesion image sequence is cropped accurately from the image sequence in order to reduce the false positive rate caused by background noise in the image sequence. In some embodiments, the image sequence includes a plurality of medical images; each medical image is pixel-cropped based on the position information and the size information to obtain a corresponding lesion image, and the plurality of lesion images form the lesion image sequence.
In some implementations, a rectangular frame for cropping the image sequence is generated based on the position information and the size information, and voxel cropping is performed on the image sequence based on the rectangular frame to obtain the corresponding lesion image sequence.
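A minimal sketch of step S230, assuming images are plain nested lists: every medical image in the sequence is cropped with the same (x, y, w, h) rectangular frame, clamped to the image bounds. Function and variable names are illustrative, not taken from the patent.

```python
def crop_sequence(images, box):
    """Crop every medical image in the sequence with the same
    (x, y, w, h) rectangular frame, clamping the frame to the image
    bounds, and return the resulting lesion image sequence."""
    x, y, w, h = box
    cropped = []
    for img in images:
        rows, cols = len(img), len(img[0])
        # Clamp so the frame never leaves the image.
        x0, y0 = max(0, int(x)), max(0, int(y))
        x1, y1 = min(cols, int(x + w)), min(rows, int(y + h))
        cropped.append([row[x0:x1] for row in img[y0:y1]])
    return cropped

# Two 4x6 "slices"; a 3x2 frame at (x=1, y=1) keeps rows 1-2, cols 1-3.
seq = [[[c + 10 * r for c in range(6)] for r in range(4)] for _ in range(2)]
lesion_seq = crop_sequence(seq, (1, 1, 3, 2))
print(len(lesion_seq), len(lesion_seq[0]), len(lesion_seq[0][0]))  # 2 2 3
```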
According to the image processing method above, an image sequence of the target body part is received, a reference image corresponding to the image sequence is determined based on prior knowledge, and the image sequence is then cropped according to the position information and size information corresponding to the region range mark in the reference image to obtain a lesion image sequence.
In some embodiments, the image sequence is a visit image sequence or a baseline image sequence, and the reference image is designated in advance from among the baseline images in the baseline image sequence. The visit images in the visit image sequence are obtained by a medical examination of the target body part at a first time point; the baseline images in the baseline image sequence are obtained by a medical examination of the target body part at a second time point, the second time point being earlier than the first time point.
Specifically, in some embodiments, the image sequence is a visit image sequence. The reference image may be designated in advance from among the baseline images in the baseline image sequence. The computer device receives the visit image sequence of the target body part and acquires the reference image corresponding to the visit image sequence. The reference image has a two-dimensional lesion region of the target body part, the two-dimensional lesion region corresponds to a region range mark, and the region range mark corresponds to the position information and size information of the two-dimensional lesion region in the reference image; the visit image sequence is therefore cropped according to this position information and size information to obtain a lesion image sequence. In this embodiment, cropping the visit image sequence with the reference image from the baseline image sequence reduces the measurement time of the lesion, and the cropped lesion image sequence completely covers the range of the lesion, thereby reducing the influence of blood vessels in the background and lowering the false positive rate of the detection result.
In some embodiments, the image sequence is a baseline image sequence. The reference image may be designated in advance from among the baseline images in the baseline image sequence. The computer device receives the baseline image sequence of the target body part and acquires the reference image corresponding to the baseline image sequence. The reference image has a two-dimensional lesion region of the target body part, the two-dimensional lesion region corresponds to a region range mark, and the region range mark corresponds to the position information and size information of the two-dimensional lesion region in the reference image; the baseline image sequence is cropped according to this position information and size information to obtain a lesion image sequence.
In some embodiments, a baseline image may be understood as a special visit image whose time point is earlier than that of the visit images. When lesion development is judged, the baseline image can be used as the reference or benchmark image for the visit images. The time point of the baseline image may be before the treatment plan is administered. In some embodiments, each baseline image in the baseline image sequence is obtained from a medical examination of the target body part before the treatment plan is administered, and the visit images in the visit image sequence are obtained from medical examinations of the target body part after the treatment plan is administered.
In some embodiments, referring to fig. 2b, in a case where the image sequence is a visit image sequence, before cropping the image sequence according to the position information and the size information to obtain a lesion image sequence, the method may further include:
and S310, receiving the visiting image sequence and the deformation field between the baseline image sequences.
In medical image registration, a spatial transformation (or a series of them) is determined for one medical image (or image sequence) so that it coincides spatially with corresponding points on another medical image (or image sequence). Spatial coincidence means that the same anatomical point on the human body has the same spatial position in both medical images. The result of registration is that at least some of the lesion points (or points of diagnostic interest) on the two medical images can be matched to one another. The action field that transforms the visit image sequence into the space in which the baseline image sequence is located is called the deformation field. Specifically, the baseline image sequence and the visit image sequence are obtained and registered, for example with medical registration software such as ANTs (Advanced Normalization Tools), to obtain the deformation field between the visit image sequence and the baseline image sequence.
S320, converting the position information of the two-dimensional lesion region by using the deformation field to obtain converted position information.
Specifically, there may be some displacement or deformation between the baseline image sequence and the visit image sequence due to examination posture, lung breathing, and the like, so the region range mark corresponding to the two-dimensional lesion region in the reference image may also vary somewhat across the visit images. The region range mark in the baseline image sequence therefore needs to be matched into the visit image sequence through registration: the position information of the two-dimensional lesion region is converted with the deformation field to obtain the converted position information. In some embodiments, the position information of the two-dimensional lesion region may be the midpoint of the lesion line segment, with coordinates (x, y, z); under the action of the deformation field, the midpoint (x, y, z) is matched to the registered coordinates (x', y', z') in the visit image sequence.
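A toy sketch of step S320, assuming the deformation field is stored as a dense per-voxel displacement grid: the midpoint is mapped by adding the displacement sampled at the nearest voxel, p' = p + u(p). Real registration tools such as ANTs interpolate the field and have their own direction conventions; nearest-voxel sampling and the field layout here are simplifying assumptions.

```python
def warp_point(point, displacement_field):
    """Map a point between image spaces with a dense displacement field.

    point: (x, y, z) coordinates, e.g. the lesion line segment's midpoint.
    displacement_field[z][y][x] holds a (dx, dy, dz) displacement vector.
    Returns p' = p + u(p), sampling u at the nearest voxel.
    """
    x, y, z = point
    ux, uy, uz = displacement_field[round(z)][round(y)][round(x)]
    return (x + ux, y + uy, z + uz)

# A tiny 2x2x2 field with a uniform shift of (+1.0, -0.5, 0.0).
field = [[[(1.0, -0.5, 0.0) for _ in range(2)]
          for _ in range(2)] for _ in range(2)]
print(warp_point((0, 1, 0), field))  # (1.0, 0.5, 0.0)
```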
Accordingly, cropping the image sequence according to the position information and the size information to obtain the lesion image sequence may include the following step.
S330: perform voxel cropping on the visit image sequence based on the converted position information and the size information to obtain a lesion image sequence corresponding to the visit image sequence.
Specifically, a cubic or cuboid bounding box for cropping the image sequence is generated based on the converted position information and the size information, and voxel cropping is performed on the image sequence based on that box to obtain the corresponding lesion image sequence. In some embodiments, the side length of the cubic box is determined by the size information of the two-dimensional lesion region in the reference image, and the position of the cubic box is determined by the registered position information. In some embodiments, in order to crop out the complete lesion, the size information of the two-dimensional lesion region is enlarged appropriately; the enlargement is bounded, based on prior knowledge, so that the enlarged region does not take in surrounding blood vessels, and voxel cropping is then performed on the image sequence with the resulting cuboid box.
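The voxel-cropping step might look like the following sketch, assuming the visit image sequence is held as a NumPy volume indexed (z, y, x) and the registered midpoint and side length are already expressed in voxel units; the function name and the clamping-at-borders behaviour are illustrative, not from the source.

```python
import numpy as np

def crop_lesion_volume(volume, center_zyx, side_voxels):
    """Cut a cubic sub-volume of edge `side_voxels` centred on
    `center_zyx` out of `volume`, clamping the box at the borders."""
    half = side_voxels // 2
    slices = []
    for c, dim in zip(center_zyx, volume.shape):
        lo = max(0, c - half)
        hi = min(dim, c - half + side_voxels)
        slices.append(slice(lo, hi))
    return volume[tuple(slices)]

# Example: a 10x10x10 volume, lesion centred at (5, 5, 5), 4-voxel box.
vol = np.arange(1000).reshape(10, 10, 10)
patch = crop_lesion_volume(vol, (5, 5, 5), 4)  # shape (4, 4, 4)
```

A cuboid box simply uses a different side length per axis; the clamping matters near the volume boundary, where the box would otherwise index outside the scan.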
In this embodiment, the position information is matched through registration between the visit image sequence and the baseline image sequence, which improves the accuracy of the cropping box position, reduces the influence of uncertain factors such as respiration on cropping accuracy, and allows the lesion image sequence to be cropped accurately and completely from the visit image sequence.
In some embodiments, the size information of the two-dimensional lesion region includes the length of the lesion line segment, and the position information of the two-dimensional lesion region includes the spatial coordinates of the midpoint of the lesion line segment. The lesion line segment is used to mark the two-dimensional lesion region in the reference image.
Specifically, the lesion line segment may be a line segment generated when a doctor applies a straight-line mark to a lesion in the reference image, or it may be the major axis resulting from a cross mark applied by the doctor to the lesion in the reference image. A cubic box is generated from the lesion line segment: its side length is the length of the lesion line segment, and its center is the midpoint of the lesion line segment. Generating the cubic box from the length and midpoint of the lesion line segment ensures that the lesion image sequence is delimited accurately and completely from the image sequence.
In some embodiments, the region range mark is represented by a lesion line segment obtained from a straight-line mark or a cross mark on the two-dimensional lesion region in the reference image. The method may further comprise the following steps.
S410: acquire the number of pixels occupied by the lesion line segment in the reference image.
S420: determine the length of the lesion line segment based on the number of pixels and the pixel spacing.
S430: take the length of the lesion line segment as the size information of the two-dimensional lesion region.
Specifically, in some embodiments, the computer device displays the reference image; the doctor views it and issues a marking operation, such as a straight-line marking operation or a cross marking operation, on the two-dimensional lesion region. In some embodiments, the two-dimensional lesion region in the reference image may instead be identified by a deep learning network and marked according to the identification result to obtain the lesion line segment.
In some embodiments, in response to a straight-line marking operation performed along the longest diameter of the two-dimensional lesion region, a straight-line mark is applied to the region and the resulting lesion line segment is displayed in the two-dimensional lesion region of the reference image. In some embodiments, in response to a cross marking operation, the two-dimensional lesion region is cross-marked, its major and minor axes are displayed in the reference image, and the major axis is taken as the lesion line segment.
In some embodiments, the reference image is composed of rows and columns of pixels, and the number of pixels occupied by the lesion line segment is denoted N. The computer stores a DICOM file locally and reads the pixel spacing PS (PixelSpacing) from it; the pixel spacing PS is the distance between two adjacent pixels in the reference image. Multiplying the number of pixels N by the pixel spacing PS gives the length of the lesion line segment, which is taken as the size information of the two-dimensional lesion region.
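The length computation in this paragraph is simple arithmetic and can be written as follows (the function name is illustrative):

```python
def lesion_length_mm(n_pixels, pixel_spacing_mm):
    """Physical length of a lesion line segment: the number of pixels
    it spans (N) multiplied by the DICOM PixelSpacing (mm per pixel)."""
    return n_pixels * pixel_spacing_mm

# A segment spanning 42 pixels at 0.7 mm spacing is about 29.4 mm long.
length = lesion_length_mm(42, 0.7)
```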
In this embodiment, the size information of the two-dimensional lesion region in the reference image is determined from the number of pixels occupied by the lesion line segment; a cropping box for the image sequence can then be obtained from this size information together with the position information of the two-dimensional lesion region, which reduces the background noise from irrelevant regions and lowers the false positive rate.
In some embodiments, cropping the image sequence according to the position information and the size information may include: generating a three-dimensional frame surrounding the three-dimensional lesion region based on the position information and the size information; and performing voxel cropping on the image sequence according to the three-dimensional frame.
The three-dimensional lesion region is the spatial region occupied by the lesion in the image sequence. Specifically, in order to reduce the false positive rate caused by blood vessels and other structures in the medical images of the sequence, a three-dimensional frame for cropping the image sequence is generated based on the position information and the size information of the two-dimensional lesion region in the reference image, such that the three-dimensional lesion region is enclosed within it, and voxel cropping is performed on the image sequence according to the boundary it defines. In this embodiment, cropping the image sequence by the three-dimensional frame removes distracting structures such as blood vessels from the medical images, raises the proportion of the lesion in the cropped lesion images as much as possible, and reduces the false positive rate.
In some embodiments, the position information includes the spatial coordinates of the midpoint of the lesion line segment, and the size information includes the length of the lesion line segment, where the lesion line segment marks the two-dimensional lesion region in the reference image. Accordingly, generating the three-dimensional frame enclosing the three-dimensional lesion region based on the position information and the size information may include:
S510: generate a target circle with the midpoint of the lesion line segment as the center and the length of the lesion line segment as the diameter.
S520: receive the plane coordinate information of the circumscribed quadrilateral of the target circle.
S530: determine the extension position information of the circumscribed quadrilateral in the extension direction according to the spatial coordinates of the midpoint and the length of the lesion line segment.
S540: obtain the three-dimensional frame based on the plane coordinate information and the extension position information.
The extension direction intersects the plane in which the circumscribed quadrilateral lies. Specifically, the lesion line segment of the two-dimensional lesion region in the reference image is obtained through a straight-line marking or cross marking operation. To cover the two-dimensional lesion region more fully, a target circle is generated with the midpoint of the lesion line segment as its center and the length of the segment as its diameter. The target circle may still not completely cover the two-dimensional lesion region on the corresponding medical image, so the coverage is enlarged further by generating the circumscribed quadrilateral of the target circle. The plane coordinates of the circumscribed quadrilateral are obtained from the spatial coordinates of the midpoint of the lesion line segment, which fixes the boundary delimiting the two-dimensional lesion region in each medical image.
After the boundary of the two-dimensional lesion region in each medical image is determined, each medical image can be understood as lying in the XY plane, and the delimited boundary must also be extended along a direction Z intersecting the XY plane (for example, a direction close to vertical). Taking the spatial coordinates of the midpoint as the starting point and extending L/2 in the positive direction and L/2 in the negative direction across the XY plane yields a three-dimensional frame whose side length equals the length L of the lesion line segment.
In some embodiments, the computer stores a DICOM file locally and reads the inter-slice distance SBS (SpacingBetweenSlices) from it. Dividing the length of the lesion line segment by the inter-slice distance SBS gives the number of slices of medical images to include; in some embodiments, the resulting quotient needs to be rounded to obtain an integer number of slices.
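Steps S510 to S540 together with the slice-count computation can be sketched as follows, assuming millimetre coordinates and that the circumscribed quadrilateral is the axis-aligned square of side L around the midpoint; all names are illustrative.

```python
def lesion_bounding_box(midpoint, length_mm, spacing_between_slices_mm):
    """Build the three-dimensional frame described above: the square
    circumscribing the target circle (side = segment length L) in the
    XY plane, extended L/2 above and below the midpoint along Z.

    Returns (x_bounds, y_bounds, z_bounds, n_slices).
    """
    x, y, z = midpoint
    half = length_mm / 2.0
    # Circumscribed square of a circle of diameter `length_mm`.
    x_bounds = (x - half, x + half)
    y_bounds = (y - half, y + half)
    # Extend the square L/2 along +Z and -Z from the midpoint.
    z_bounds = (z - half, z + half)
    # Number of image slices spanned, rounded to an integer.
    n_slices = round(length_mm / spacing_between_slices_mm)
    return x_bounds, y_bounds, z_bounds, n_slices

# Example: midpoint (10, 20, 30) mm, segment length 12 mm, SBS 5 mm.
xb, yb, zb, n = lesion_bounding_box((10.0, 20.0, 30.0), 12.0, 5.0)
```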
In this embodiment, a three-dimensional frame is generated based on the length of the lesion line segment, and voxel cropping is performed on the image sequence according to it; this removes distracting structures such as blood vessels from the medical images, raises the proportion of the lesion in the cropped lesion images as much as possible, and reduces the false positive rate.
In some embodiments, the method may further comprise: segmenting each lesion image in the lesion image sequence to obtain a target image corresponding to each lesion image; and measuring the two-dimensional lesion region in the target image to obtain at least one of the size information of the two-dimensional lesion region in the target image and the volume information of the three-dimensional lesion region in the lesion image sequence.
Specifically, a two-dimensional deep learning network may be used to segment single images in the lesion image sequence, or a three-dimensional deep learning network may be used to segment the whole sequence. In some embodiments, the lesion image sequence includes a plurality of lesion images, and image segmentation is performed on each in turn to obtain a corresponding target image. The target image may be a binary image; the contour boundary of the two-dimensional lesion region is determined in the target image and measured to obtain the size information of the two-dimensional lesion region. The size information of the two-dimensional lesion region in each target image is combined with the inter-slice distance SBS (SpacingBetweenSlices) to compute the sub-volume corresponding to that target image, and the sub-volumes of all the target images are summed to obtain the volume of the three-dimensional lesion region in the lesion image sequence.
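The per-slice area and summed-volume computation can be sketched as follows, assuming the target images are binary masks given as nested lists of 0/1 and isotropic in-plane pixel spacing; names are illustrative.

```python
def lesion_volume_mm3(masks, pixel_spacing_mm, spacing_between_slices_mm):
    """Approximate lesion volume from per-slice binary masks: each
    slice contributes (lesion pixel count) * PS^2 * SBS, and the
    sub-volumes are summed over the sequence."""
    pixel_area = pixel_spacing_mm ** 2
    volume = 0.0
    for mask in masks:  # one binary mask (nested lists of 0/1) per slice
        n_pixels = sum(sum(row) for row in mask)
        volume += n_pixels * pixel_area * spacing_between_slices_mm
    return volume

# Example: two slices, each with 4 lesion pixels, PS = 1 mm, SBS = 2 mm.
masks = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
vol = lesion_volume_mm3(masks, 1.0, 2.0)  # → 16.0
```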
In some embodiments, segmenting each lesion image in the lesion image sequence to obtain a corresponding target image may include: sequentially inputting each lesion image into a two-dimensional segmentation network; and sequentially segmenting each lesion image through the network to obtain the target image corresponding to each lesion image.
Applied to full images, a two-dimensional deep learning network tends to produce a large number of false positive marks in its segmentation results, which is unhelpful for assisting a doctor in measuring a tumor; a three-dimensional deep learning network, in turn, segments much more slowly and its model is difficult to build. In this embodiment, therefore, the image sequence is first cropped based on the position information and the size information corresponding to the region range mark in the reference image to obtain a lesion image sequence. The sequence comprises a plurality of lesion images, each of which is input in turn to a two-dimensional segmentation network and segmented to obtain its target image. This improves the data processing efficiency and the accuracy of lesion identification.
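The slice-by-slice segmentation loop can be sketched as follows; the `segment_slice` callable stands in for the trained two-dimensional segmentation network, whose architecture the source does not specify, so a simple intensity threshold is used here as a placeholder.

```python
def segment_sequence(lesion_images, segment_slice):
    """Feed each cropped lesion image to a 2-D segmentation step in
    turn and collect the resulting target (binary) images.

    `segment_slice` may be any callable mapping one image to one
    binary mask; in practice it would be the network's forward pass.
    """
    return [segment_slice(image) for image in lesion_images]

# Placeholder "network": threshold at a fixed intensity of 100.
threshold = lambda img: [[1 if v > 100 else 0 for v in row] for row in img]
masks = segment_sequence([[[50, 150], [200, 10]]], threshold)
# → [[[0, 1], [1, 0]]]
```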
An embodiment of this specification provides an image processing method, which may comprise the following steps.
S602: receive a visit image sequence of the target body part.
The visit images in the visit image sequence are obtained by medical examination of the target body part after the treatment plan has been implemented.
S604: acquire a reference image corresponding to the visit image sequence.
The reference image is pre-specified from among the baseline images of a baseline image sequence, each baseline image of which is obtained from a medical examination of the target body part before the treatment plan is implemented. The reference image contains a two-dimensional lesion region of the target body part; the two-dimensional lesion region corresponds to a region range mark, and the region range mark corresponds to the position information and the size information of the two-dimensional lesion region in the reference image.
Specifically, the size information of the two-dimensional lesion region includes the length of a lesion line segment, and the position information includes the spatial coordinates of the midpoint of the lesion line segment, where the lesion line segment marks the two-dimensional lesion region in the reference image.
In some embodiments, the two-dimensional lesion region in the reference image is marked with a straight line or a cross to obtain the lesion line segment; the number of pixels occupied by the lesion line segment in the reference image is acquired; the length of the lesion line segment is determined from the number of pixels and the pixel spacing; and that length is taken as the size information of the two-dimensional lesion region.
S606: receive the deformation field between the visit image sequence and the baseline image sequence.
S608: convert the position information of the two-dimensional lesion region using the deformation field to obtain converted position information.
S610: generate a target circle with the midpoint of the lesion line segment as the center and the length of the lesion line segment as the diameter.
S612: receive the plane coordinate information of the circumscribed quadrilateral of the target circle.
S614: determine the extension position information of the circumscribed quadrilateral in the extension direction according to the spatial coordinates of the midpoint and the length of the lesion line segment.
The extension direction intersects the plane in which the circumscribed quadrilateral lies.
S616: obtain the three-dimensional frame based on the plane coordinate information and the extension position information.
S618: perform voxel cropping on the visit image sequence according to the three-dimensional frame to obtain a lesion image sequence.
S620: segment each lesion image in the lesion image sequence to obtain a target image corresponding to each lesion image.
Specifically, each lesion image is input to a two-dimensional segmentation network in turn and segmented to obtain its corresponding target image.
S622: measure the two-dimensional lesion region in the target image to obtain at least one of the size information of the two-dimensional lesion region in the target image and the volume information of the three-dimensional lesion region in the lesion image sequence.
It should be understood that although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which need not be completed at the same time but may be performed at different times, and need not be performed sequentially but may be performed in turn or in alternation with other steps or with sub-steps or stages of other steps.
Referring to fig. 3, an embodiment of the present disclosure provides an image processing apparatus 300. The image processing apparatus 300 includes an image sequence receiving module 310, a reference image acquiring module 320, and an image sequence cropping module 330.
An image sequence receiving module 310 for receiving a sequence of images of a target body part;
a reference image obtaining module 320, configured to obtain a reference image corresponding to the image sequence; wherein the reference image has a two-dimensional lesion area of the target body part, wherein the two-dimensional lesion area in the reference image corresponds to an area range mark; wherein the region range mark corresponds to position information and size information of a two-dimensional lesion region in the reference image;
and the image sequence cropping module 330 is configured to crop the image sequence according to the position information and the size information to obtain a lesion image sequence.
For specific limitations of the image processing apparatus, reference may be made to the limitations of the image processing method above, which are not repeated here. Each module in the image processing apparatus may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in hardware in, or independent of, a processor of the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In some embodiments, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication may be realized through Wi-Fi, an operator network, NFC (Near Field Communication), or other technologies. The computer program, when executed by the processor, implements an image processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of part of the structure relevant to the present disclosure and does not limit the computer device to which the present disclosure applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange components differently.
In some embodiments, a computer device is provided, comprising a memory having a computer program stored therein and a processor that, when executing the computer program, performs the method steps of the above embodiments.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the method steps in the above-described embodiments.
In some embodiments, a computer program product is also provided, which comprises instructions that are executable by a processor of a computer device to implement the method steps in the above-described embodiments.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these features are described; nevertheless, as long as a combination of features contains no contradiction, it should be considered within the scope of this specification.
The above description is only for the purpose of illustrating the preferred embodiments of the present disclosure and is not to be construed as limiting the present disclosure, and any modifications, equivalents and the like that are within the spirit and principle of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (13)

1. An image processing method, characterized in that the method comprises:
receiving a sequence of images including a target body part;
acquiring a reference image corresponding to the image sequence; wherein the reference image has a two-dimensional lesion area of the target body part, wherein the two-dimensional lesion area in the reference image corresponds to an area range mark; wherein the region range mark corresponds to position information and size information of a two-dimensional lesion region in the reference image; wherein, the two-dimensional focus area is a plane area occupied by the focus in the reference image;
and cropping the image sequence according to the position information and the size information to obtain a lesion image sequence.
2. The method of claim 1, wherein the image sequence is a visit image sequence or a baseline image sequence; the reference image is pre-specified from among the baseline images in the baseline image sequence; wherein the visit images in the visit image sequence are obtained by medical examination of the target body part at a first time point; each baseline image in the baseline image sequence is obtained by medical examination of the target body part at a second time point; wherein the second time point is earlier than the first time point.
3. The method according to claim 2, wherein, in a case that the image sequence is a visit image sequence, before the cropping the image sequence according to the position information and the size information to obtain a lesion image sequence, the method further comprises:
receiving a deformation field between the visiting image sequence and the baseline image sequence;
converting the position information of the two-dimensional focus area by using the deformation field to obtain converted position information;
the cropping the image sequence according to the position information and the size information to obtain a lesion image sequence comprises:
and performing voxel cropping on the visit image sequence based on the converted position information and the size information to obtain a lesion image sequence corresponding to the visit image sequence.
4. The method of claim 1, wherein the size information of the two-dimensional lesion area includes a length of a lesion line segment; the position information of the two-dimensional lesion area comprises space coordinate information of a midpoint of the lesion line segment; wherein the lesion line segment is used to mark a two-dimensional lesion region in the reference image.
5. The method of claim 1, wherein the region range marking is represented by a lesion line segment based on a straight line marking or a cross marking of a two-dimensional lesion region in the reference image; the method further comprises the following steps:
acquiring the number of pixels occupied by the lesion line segment in the reference image;
determining a length of the lesion line segment based on the number of pixels and a pixel spacing;
and taking the length of the lesion line segment as the size information of the two-dimensional lesion area.
6. The method of claim 1, wherein the cropping the image sequence according to the position information and the size information comprises:
generating a three-dimensional frame surrounding a three-dimensional lesion region based on the position information and the size information; wherein the three-dimensional lesion region is the spatial region occupied by the lesion in the image sequence;
and performing voxel cropping on the image sequence according to the three-dimensional frame.
7. The method of claim 6, wherein the position information includes spatial coordinate information of a midpoint of the lesion line segment; the size information includes a length of the lesion line segment; wherein the lesion line segment is used for marking a two-dimensional lesion region in the reference image; accordingly, the generating a three-dimensional frame surrounding a three-dimensional lesion region based on the position information and the size information comprises:
generating a target circle with the midpoint of the lesion line segment as the center and the length of the lesion line segment as the diameter;
receiving plane coordinate information of the circumscribed quadrilateral of the target circle;
determining extension position information of the circumscribed quadrilateral in the extension direction according to the spatial coordinate information of the midpoint and the length of the lesion line segment; wherein the extension direction intersects the plane in which the circumscribed quadrilateral lies;
and obtaining the three-dimensional frame based on the plane coordinate information and the extension position information.
8. The method of claim 1, further comprising:
segmenting each lesion image in the lesion image sequence to obtain a target image corresponding to each lesion image;
and measuring the two-dimensional lesion region in the target image to obtain at least one of size information of the two-dimensional lesion region in the target image and volume information of the three-dimensional lesion region in the lesion image sequence.
9. The method of claim 8, wherein the segmenting each lesion image in the lesion image sequence to obtain a target image corresponding to each lesion image comprises:
sequentially inputting each lesion image into a two-dimensional segmentation network;
and sequentially segmenting each lesion image through the two-dimensional segmentation network to obtain the target image corresponding to each lesion image.
10. An image processing apparatus, characterized in that the apparatus comprises:
an image sequence receiving module for receiving an image sequence of a target body part;
a reference image obtaining module, configured to obtain a reference image corresponding to the image sequence; wherein the reference image has a two-dimensional lesion area of the target body part, wherein the two-dimensional lesion area in the reference image corresponds to an area range mark; wherein the region range mark corresponds to position information and size information of a two-dimensional lesion region in the reference image;
and the image sequence cropping module is used for cropping the image sequence according to the position information and the size information to obtain a lesion image sequence.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 9.
13. A computer program product comprising instructions, characterized in that said instructions, when executed by a processor of a computer device, enable said computer device to perform the steps of the method according to any one of claims 1 to 9.
CN202111513480.8A 2021-12-13 2021-12-13 Image processing method, image processing apparatus, computer device, storage medium, and program product Active CN113920114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111513480.8A CN113920114B (en) 2021-12-13 2021-12-13 Image processing method, image processing apparatus, computer device, storage medium, and program product


Publications (2)

Publication Number Publication Date
CN113920114A true CN113920114A (en) 2022-01-11
CN113920114B CN113920114B (en) 2022-04-22

Family

ID=79248426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111513480.8A Active CN113920114B (en) 2021-12-13 2021-12-13 Image processing method, image processing apparatus, computer device, storage medium, and program product

Country Status (1)

Country Link
CN (1) CN113920114B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546174A (en) * 2022-10-20 2022-12-30 数坤(北京)网络科技股份有限公司 Image processing method, image processing device, computing equipment and storage medium
CN117635613A (en) * 2024-01-25 2024-03-01 武汉大学人民医院(湖北省人民医院) Fundus focus monitoring device and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130230228A1 (en) * 2012-03-01 2013-09-05 Empire Technology Development Llc Integrated Image Registration and Motion Estimation for Medical Imaging Applications
CN110853082A (en) * 2019-10-21 2020-02-28 科大讯飞股份有限公司 Medical image registration method and device, electronic equipment and computer storage medium
CN113034427A (en) * 2019-12-25 2021-06-25 合肥欣奕华智能机器有限公司 Image recognition method and image recognition device
CN113538471A (en) * 2021-06-30 2021-10-22 上海联影医疗科技股份有限公司 Method and device for dividing patch, computer equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG, Dinghua: "Cone-Beam CT Technology and Its Applications", 30 December 2010 *
KUANG, Yulin: "Machine-Learning-Based Thyroid Tumor Recognition and Image Segmentation", China Master's Theses Full-text Database (Medicine and Health Sciences) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546174A (en) * 2022-10-20 2022-12-30 数坤(北京)网络科技股份有限公司 Image processing method, image processing device, computing equipment and storage medium
CN115546174B (en) * 2022-10-20 2023-09-08 数坤(北京)网络科技股份有限公司 Image processing method, device, computing equipment and storage medium
CN117635613A (en) * 2024-01-25 2024-03-01 武汉大学人民医院(湖北省人民医院) Fundus focus monitoring device and method
CN117635613B (en) * 2024-01-25 2024-04-16 武汉大学人民医院(湖北省人民医院) Fundus focus monitoring device and method

Also Published As

Publication number Publication date
CN113920114B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN113920114B (en) Image processing method, image processing apparatus, computer device, storage medium, and program product
TWI750583B (en) Medical image dividing method, device, and system, and image dividing method
CN114092475B (en) Focal length determining method, image labeling method, device and computer equipment
CN111542827B (en) System and method for positioning patient
US20070118100A1 (en) System and method for improved ablation of tumors
CN111261265B (en) Medical imaging system based on virtual intelligent medical platform
US10713802B2 (en) Ultrasonic image processing system and method and device thereof, ultrasonic diagnostic device
CN108697402A (en) The gyrobearing of deep brain stimulation electrode is determined in 3-D view
CN111080584A (en) Quality control method for medical image, computer device and readable storage medium
KR20160095426A (en) Device and method for providing of medical information
CN115089303A (en) Robot positioning method and system
CN111192268B (en) Medical image segmentation model construction method and CBCT image bone segmentation method
Widodo et al. Lung diseases detection caused by smoking using support vector machine
CN111166332A (en) Radiotherapy target region delineation method based on magnetic resonance spectrum and magnetic resonance image
CN114943714A (en) Medical image processing system, medical image processing apparatus, electronic device, and storage medium
CN116091466A (en) Image analysis method, computer device, and storage medium
WO2022012541A1 (en) Image scanning method and system for medical device
US8009886B2 (en) System and method for image registration
US20230316550A1 (en) Image processing device, method, and program
CN110349151B (en) Target identification method and device
CN112641466A (en) Ultrasonic artificial intelligence auxiliary diagnosis method and device
JP2005253755A (en) Tumor region setting method and tumor region setting system
CN113436236B (en) Image processing method and system
JPWO2020090445A1 (en) Area modifiers, methods and programs
WO2022054541A1 (en) Image processing device, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant