CN115861347A - Method and device for extracting focus region, electronic equipment and readable storage medium - Google Patents

Method and device for extracting focus region, electronic equipment and readable storage medium

Info

Publication number
CN115861347A
CN115861347A (application CN202211464801.4A)
Authority
CN
China
Prior art keywords
focus
contour
region
lesion
initial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211464801.4A
Other languages
Chinese (zh)
Inventor
刘俞辰
涂世鹏
陈涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Innermedical Co ltd
Original Assignee
Innermedical Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innermedical Co ltd filed Critical Innermedical Co ltd
Priority to CN202211464801.4A priority Critical patent/CN115861347A/en
Publication of CN115861347A publication Critical patent/CN115861347A/en
Pending legal-status Critical Current

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of lesion detection, and discloses a method and device for extracting a lesion region, an electronic device, and a readable storage medium. The method comprises the following steps: acquiring image data to be detected, the image data containing a real lesion region; analyzing the image data to determine a region of interest corresponding to the real lesion region; extracting feature information from the region of interest and generating an initial lesion contour from the feature information; and adjusting the initial lesion contour to obtain a target contour that fully coincides with the real lesion region. The technical scheme of the invention extracts the initial lesion contour automatically, so that manual delineation on every slice is no longer required, which reduces the time cost of determining the lesion region and improves extraction efficiency. Because the center point of the target contour is the center point of the lesion, the lesion region is extracted accurately, which facilitates determination of the subsequent surgical-navigation end point and improves the accuracy of the navigation path.

Description

Method and device for extracting focus area, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of lesion detection, and in particular to a method and device for extracting a lesion region, an electronic device, and a readable storage medium.
Background
With the development of medical technology, natural orifice endoscopic surgery is receiving increasing attention because, compared with traditional minimally invasive surgery, it leaves no wound on the body surface and allows shorter patient recovery times. However, obtaining lesion tissue through the bronchial tract for pathological examination requires planning a navigation path to the lesion site, so the accuracy of the lesion position is critical to the success of the operation.
At present, a regular sphere is usually used in place of the lesion, with the sphere center serving as the end point of surgical navigation; however, lesions are often irregular in shape, which makes the lesion center point difficult to obtain. Some methods use image-processing techniques to extract lesions automatically from an image sequence, but because lesion tissue is complex, the extraction error is large and the result cannot be used directly for surgical navigation. A doctor may instead choose manual delineation, but one set of medical image data contains hundreds of slice images: the doctor must repeatedly compare adjacent slices, imagine the three-dimensional outline of the lesion from experience, and consult a large number of historical diagnosis records before the lesion can even be identified on the slice images, after which every slice through the lesion must be delineated. This workload is enormous and time-consuming.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for extracting a lesion region, an electronic device, and a readable storage medium, so as to solve the problems of inaccurate lesion-region extraction and high time cost.
According to a first aspect, an embodiment of the present invention provides a method for extracting a lesion region, including: acquiring image data to be detected, the image data containing a real lesion region; analyzing the image data to determine a region of interest corresponding to the real lesion region; extracting feature information from the region of interest and generating an initial lesion contour from the feature information; and adjusting the initial lesion contour to obtain a target contour that fully coincides with the real lesion region.
In the method provided by this embodiment, the initial lesion contour is extracted by analyzing the image data to be detected, and is then adjusted until it fully coincides with the real lesion region, yielding the target contour of the lesion region. The initial lesion contour is thus extracted automatically from the image data, with no need for manual delineation on every slice, which reduces the time cost of determining the lesion region and improves extraction efficiency. Moreover, because the target contour fully coincides with the real lesion region, it reflects the true lesion shape, and its center point is the center point of the lesion; the lesion region is therefore extracted accurately, which facilitates determination of the subsequent surgical-navigation end point and improves the accuracy of the navigation path.
With reference to the first aspect, in a first implementation of the first aspect, analyzing the image data to be detected to determine the region of interest corresponding to the real lesion region includes: extracting the two-dimensional image sequence from the image data and converting it into a three-dimensional data structure; extracting the data center point of the three-dimensional data structure; constructing a sagittal plane, a coronal plane, and a transverse plane of the three-dimensional data structure based on the data center point, and generating visualization images of the three planes; and performing lesion detection on the sagittal, coronal, and transverse planes to determine a region of interest that encloses the real lesion region.
In this implementation, the two-dimensional image sequence is extracted from the image data to be detected and converted into a three-dimensional data structure, so that the region of interest can be extracted from the three-dimensional data structure, which improves the efficiency of lesion-region extraction.
With reference to the first implementation of the first aspect, in a second implementation, performing lesion detection on the sagittal, coronal, and transverse planes to determine a region of interest that encloses the real lesion region includes: in response to a switching operation among the sagittal, coronal, and transverse planes, determining the target plane corresponding to the switching operation; when a lesion is present in the target plane, generating, in response to an enclosing operation on the lesion, a geometric body that encloses the lesion; and determining the closed region of the geometric body as the region of interest.
With reference to the second implementation, in a third implementation, before determining the closed region of the geometric body as the region of interest, the method further includes: detecting whether the geometric body encloses the lesion in each of the sagittal, coronal, and transverse planes; and, when it does not enclose the lesion in all three planes, controlling the geometric body, based on an adjustment operation, so that it encloses the lesion in all of the sagittal, coronal, and transverse planes.
By supporting plane switching and the lesion-enclosing operation, this implementation can accurately determine a region of interest that encloses the lesion, from which the lesion region is then extracted.
With reference to the first aspect, in a fourth implementation, extracting feature information from the region of interest and generating an initial lesion contour from the feature information includes: determining a plurality of lesion feature position points in the region of interest based on a preset rule; and generating the initial lesion contour from the plurality of lesion feature position points.
With reference to the fourth implementation, in a fifth implementation, when the preset rule is a gray-threshold rule, determining the lesion feature position points includes: selecting any lesion position point in the region of interest that satisfies the gray-threshold rule as a starting point and diffusing from it to obtain a plurality of diffusion points; traversing the diffusion points in turn and judging whether any target diffusion point among them satisfies the gray-threshold rule; when such a target diffusion point exists, diffusing again with it as the new starting point, until no further target diffusion point satisfies the rule; and determining all target diffusion points that satisfy the rule as lesion feature position points. The gray-threshold rule checks whether the gray value of a point exceeds a preset threshold; if it does, the point is determined to be a lesion feature position point.
By extracting lesion feature position points from the region of interest and generating the initial lesion contour from them, this implementation greatly improves the efficiency of lesion-region extraction.
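The diffusion procedure in the fifth implementation is, in essence, seeded region growing. Below is a minimal sketch, assuming a NumPy volume and 6-connected diffusion; the function and parameter names are illustrative, not taken from the patent:

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, threshold):
    """Grow a region from `seed`, keeping 6-connected voxels whose
    gray value exceeds `threshold` (the gray-threshold rule)."""
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] <= threshold:
        return mask                      # the seed itself fails the rule
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:       # diffuse to the 6 neighbours
            p = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(p, volume.shape)) \
               and not mask[p] and volume[p] > threshold:
                mask[p] = True           # a target diffusion point
                queue.append(p)
    return mask
```

The `True` voxels of the returned mask correspond to the lesion feature position points from which the initial contour would be generated.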
With reference to the first aspect, in a sixth implementation, adjusting the initial lesion contour to obtain a target contour that fully coincides with the real lesion region includes: fusing the initial lesion contour with the image data to be detected, and determining the degree of coincidence between the initial contour and the real lesion region; in response to an editing operation on the initial contour, generated based on that degree of coincidence, adjusting the initial contour until it fully coincides with the real lesion region; and determining the adjusted, fully coinciding contour as the target contour.
Because editing of the initial lesion contour is supported, when the initial contour does not fully coincide with the real lesion region the doctor can adjust it flexibly, so that the resulting target contour fully coincides with, and therefore represents the shape of, the real lesion region.
With reference to the sixth implementation, in a seventh implementation, fusing the initial lesion contour with the image data to be detected includes: extracting, based on the initial lesion contour, the lesion cross-sections it produces in the sagittal, coronal, and transverse planes; and fusing the initial contour with the image data according to the correspondence between these lesion cross-sections and the planes of the image data, to generate a fused image that displays both the initial lesion contour and the real lesion region.
Fusing the initial lesion contour with the image data to be detected makes it easy to compare the initial contour against the real lesion region.
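The patent does not specify a blending method, but one common way to fuse a contour with a slice for display, shown here purely as an illustrative sketch, is to alpha-blend the contour mask over the grayscale slice (the `alpha` and `highlight` values are arbitrary choices):

```python
import numpy as np

def fuse_slice(slice_gray, contour_mask, alpha=0.5, highlight=255.0):
    """Overlay an initial-contour mask on a grayscale slice so that both
    the contour and the underlying real lesion region stay visible."""
    fused = slice_gray.astype(float).copy()
    # blend the highlight color into the pixels covered by the contour
    fused[contour_mask] = (1 - alpha) * fused[contour_mask] + alpha * highlight
    return fused
```

Pixels outside the contour are untouched, so the doctor can judge where the contour and the real lesion region disagree.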
With reference to the sixth implementation, in an eighth implementation, the method further includes: converting the data format of the initial lesion contour into the format of the image data to be detected; parsing target-site data from the image data; and rendering the target-site data together with the format-converted initial contour, to generate a visualization of the initial lesion contour and the target site on the display interface.
Rendering the initial lesion contour and the target-site data into a visualization displayed on the interface lets the doctor determine the approximate position of the lesion region by observation, instead of relying on subjective imagination, which would compromise accurate extraction of the lesion region.
According to a second aspect, an embodiment of the present invention provides an apparatus for extracting a lesion region, including: an acquisition module for acquiring image data to be detected, the image data containing a real lesion region; an analysis module for analyzing the image data and determining the region of interest corresponding to the real lesion region; an extraction module for extracting feature information from the region of interest and generating an initial lesion contour from it; and an adjustment module for adjusting the initial lesion contour to obtain a target contour that fully coincides with the real lesion region.
According to a third aspect, an embodiment of the present invention provides an electronic device, including a display communicatively connected to a host and an interactive interface for connecting interactive devices to the host. The host comprises a memory and a processor communicatively connected to each other; the memory stores computer instructions, and by executing them the processor performs the method for extracting a lesion region of the first aspect or any of its implementations.
According to a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions that cause a computer to execute the method for extracting a lesion region of the first aspect or any of its implementations.
It should be noted that, for the corresponding benefits of the apparatus, the electronic device, and the computer-readable storage medium provided in the embodiments of the present invention, refer to the description of the corresponding content in the method for extracting a lesion region; they are not repeated here.
Drawings
To illustrate the embodiments of the present invention or the prior-art solutions more clearly, the drawings needed for their description are briefly introduced below. The drawings described below show some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for extracting a lesion region according to an embodiment of the present invention;
Fig. 2 is another flowchart of a method for extracting a lesion region according to an embodiment of the present invention;
Fig. 3 is a schematic representation of a sagittal plane, a coronal plane, and a transverse plane according to an embodiment of the present invention;
Fig. 4 is another flowchart of a method for extracting a lesion region according to an embodiment of the present invention;
Fig. 5 is a three-dimensional rendering of lesion data and bronchial mask data according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a display interface according to an embodiment of the invention;
Fig. 7 is another schematic diagram of a display interface according to an embodiment of the invention;
Fig. 8 is a diagram illustrating editing of an initial lesion contour according to an embodiment of the present invention;
Fig. 9 is a structural block diagram of an apparatus for extracting a lesion region according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, because one set of CT scan data for a patient typically produces hundreds of slice images, it is difficult for doctors to extract lesion information from the medical image data accurately. In the traditional approach, the doctor must repeatedly compare adjacent slices, mentally imagine the three-dimensional outline of the lesion, and consult a large number of historical diagnosis records before the lesion can be identified on each CT slice of the patient. After the lesion is identified, a regular sphere is usually used in its place, with the sphere center serving as the end point of surgical navigation; however, lesions are often irregular in shape, so the lesion center point is difficult to obtain.
To address this, after a lesion is identified, the lesion region is further extracted; the commonly used extraction approaches are automatic computer extraction and manual delineation. Automatic extraction generally relies on gray-level, texture, or morphological feature extraction, but the complexity of real images often introduces substantial error, so it is suitable only as a diagnostic aid and is deficient as a surgical reference. Manual delineation requires outlining the lesion on every slice, which is labor-intensive and time-consuming.
Based on the above, the technical scheme of the present application extracts the initial lesion contour automatically from the image data to be detected, without manual delineation on every slice, which reduces the time cost of determining the lesion region and improves extraction efficiency. Moreover, because the target contour fully coincides with the real lesion region, it reflects the true lesion shape and its center point is the center point of the lesion; the lesion region is therefore extracted accurately, facilitating determination of the subsequent surgical-navigation end point and improving the accuracy of the navigation path.
In accordance with an embodiment of the present invention, an embodiment of a method for extracting a lesion region is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be executed in an order different from the one described here.
This embodiment provides a method for extracting a lesion region that may be used in an electronic device such as a host or a computer. Fig. 1 is a flowchart of a method for extracting a lesion region according to an embodiment of the present invention; as shown in Fig. 1, the flow includes the following steps:
s11, acquiring image data to be detected, wherein the image data to be detected comprises a real focus area.
The image data to be detected is Digital Imaging and Communications in Medicine (DICOM) data, which is obtained by Computed Tomography (CT) and is specific to a certain part of a human body. E.g., chest CT, brain CT, etc. The real lesion area is the location of a lesion on the human body determined by CT scanning.
After the CT scanning device finishes scanning a certain part of a human body, the CT scanning device can store the scanning data and can transmit the scanning data to an electronic device which is in communication connection with the CT scanning device. Accordingly, the electronic device can receive the image data to be detected.
S12, analyzing the image data to be detected and determining the region of interest corresponding to the real lesion region.
The region of interest (ROI) is a closed region to be processed, obtained by analyzing the image data to be detected, and it corresponds to the real lesion region. By analyzing the image data, lesion and non-lesion areas can be distinguished, and a closed region containing the lesion can be outlined around the lesion area as a box, circle, ellipse, irregular polygon, or other shape. The electronic device determines this closed region containing the lesion as the region of interest.
S13, extracting feature information from the region of interest and generating an initial lesion contour from the feature information.
The feature information represents the data characteristics of the lesion, and the initial lesion contour delimits the extent of the lesion area within the ROI. Because the lesion is contained in the region of interest, and the pixel values and boundaries of lesion and non-lesion areas differ markedly, the electronic device can analyze the image data within the ROI and extract feature information that characterizes the lesion region. The three-dimensional initial lesion contour can then be roughly computed from this feature information.
S14, adjusting the initial lesion contour to obtain a target contour that fully coincides with the real lesion region.
The target contour is the real contour corresponding to the real lesion region. Since the initial lesion contour is only a rough approximation of the real lesion region, after it is obtained, the difference between it and the real lesion region is measured. The initial contour is then adjusted according to this difference until the adjusted contour fully coincides with the real lesion region, and the contour that fully coincides is determined as the target contour.
It should be noted that the center point of the target contour is the center point of the lesion, so the lesion center point is obtained accurately, which in turn improves the accuracy of navigation-path planning.
In the method provided by this embodiment, the initial lesion contour is extracted by analyzing the image data to be detected and is adjusted until it fully coincides with the real lesion region, yielding the target contour of the lesion region. The initial contour is thus extracted automatically, without manual delineation on every slice, reducing the time cost of determining the lesion region and improving extraction efficiency. Because the target contour fully coincides with the real lesion region, it reflects the true lesion shape and its center point is the lesion center point; the lesion region is therefore extracted accurately, facilitating determination of the surgical-navigation end point and improving the accuracy of the navigation path.
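The patent does not prescribe how the difference (or degree of coincidence) in S14 is measured. One common choice, shown here purely as an illustrative sketch under that assumption, is the Dice coefficient between the initial-contour mask and the real-lesion mask:

```python
import numpy as np

def dice(initial_mask, true_mask):
    """Dice coefficient between two binary masks: 1.0 means the contours
    coincide completely, 0.0 means they do not overlap at all."""
    inter = np.logical_and(initial_mask, true_mask).sum()
    total = initial_mask.sum() + true_mask.sum()
    return 2.0 * inter / total if total else 1.0
```

A Dice value below 1.0 would signal that the initial contour still needs editing before it can serve as the target contour.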
This embodiment provides a method for extracting a lesion region that may be used in an electronic device such as a host or a computer. Fig. 2 is a flowchart of a method for extracting a lesion region according to an embodiment of the present invention; as shown in Fig. 2, the flow includes the following steps:
S21, acquiring image data to be detected, wherein the image data contains a real lesion region. For details, refer to the corresponding description in the embodiment above, which is not repeated here.
S22, analyzing the image data to be detected and determining the region of interest corresponding to the real lesion region.
Specifically, step S22 may include:
s221, extracting a two-dimensional image sequence of the image data to be detected, and converting the two-dimensional image sequence into a three-dimensional data structure.
One image data to be detected comprises a two-dimensional image formed by arranging a plurality of pixels with different gray scales according to a matrix, namely a two-dimensional image sequence. Specifically, the electronic device may automatically extract two-dimensional image information in the DICOM data, such as length, width, number of frames, pixel spacing, data format, image orientation, etc. of each two-dimensional image. And arranging the extracted two-dimensional image sequences into a three-dimensional image sequence according to the extracted two-dimensional image sequences, and generating a corresponding three-dimensional data structure according to the three-dimensional image sequence.
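The conversion from a two-dimensional image sequence to a three-dimensional data structure can be sketched as stacking the slices along a new axis, ordered by their position along the scan axis. In practice the slices would first be decoded from DICOM with a DICOM library; here plain NumPy arrays stand in for the decoded slices, and the function name is illustrative:

```python
import numpy as np

def build_volume(slices, positions):
    """Stack 2D gray-level slices into a 3D (z, y, x) volume,
    ordered by their slice position along the scan axis."""
    order = np.argsort(positions)            # sort slices by position
    return np.stack([slices[i] for i in order], axis=0)
```

Sorting by slice position matters because DICOM files on disk are not guaranteed to arrive in anatomical order.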
S222, extracting the data center point of the three-dimensional data structure.
The data center point is the geometric center of the three-dimensional data structure. It depends on the shape of the structure, and the electronic device can calculate it from the shape of the currently constructed three-dimensional data structure.
S223, constructing a sagittal plane, a coronal plane, and a transverse plane of the three-dimensional data structure based on the data center point, and generating visualization images of the three planes.
From the center point of the three-dimensional data structure, the slices through it in the three directions (front-back, left-right, up-down) — the sagittal, coronal, and transverse planes — are constructed in turn. Visualization images of the three planes are then generated and shown on the display interface. For chest CT, as shown in Fig. 3, the visualization images are generated from the sagittal, coronal, and transverse planes of the lung.
Specifically, to ensure display quality, the sagittal, coronal, and transverse images can be scaled so that they are displayed completely on the interface, and processed with image enhancement, image filtering, and similar operations to make the lesion region in each plane more recognizable, so that the doctor can find it in the visualization images.
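Given the data center point, the three orthogonal slices are simply index slices of the volume. A sketch follows; the (z, y, x) axis convention is an assumption that depends on how the volume was stacked:

```python
import numpy as np

def center_planes(volume):
    """Return the transverse, coronal and sagittal slices through the
    geometric center of a (z, y, x) volume."""
    cz, cy, cx = (s // 2 for s in volume.shape)
    transverse = volume[cz, :, :]   # up-down axis fixed
    coronal    = volume[:, cy, :]   # front-back axis fixed
    sagittal   = volume[:, :, cx]   # left-right axis fixed
    return transverse, coronal, sagittal
```

Each returned 2D array is what would be scaled, enhanced, and rendered as the corresponding visualization image.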
S224, performing lesion detection on the sagittal, coronal, and transverse planes, and determining the region of interest that encloses the real lesion region.
The electronic device can perform lesion detection on the sagittal, coronal, and transverse planes according to the differences between lesion and non-lesion areas, so as to locate the lesion area in any of the three planes. A closed region that encloses the lesion area is then generated; this closed region is the determined region of interest. As shown in Fig. 3, a lesion is present in the transverse visualization image of the lung, and the lesion region can then be extracted.
As an optional implementation, step S224 may include:
(1) In response to a switching operation among the sagittal, coronal, and transverse planes, determining the target plane corresponding to the switching operation.
Since the sagittal, coronal, and transverse planes are slices in different directions, a lesion may appear only in a certain plane rather than in every plane.
Specifically, while observing the visualization images of the sagittal, coronal, and transverse planes on the display interface, the doctor can switch among them through an interactive device such as a mouse, keyboard, handle, touch screen, or trackball. The host responds to the doctor's switching operation, switches among the planes accordingly, and displays the switched-to target plane on the interface; the target plane is the sagittal, coronal, or transverse plane.
(2) When the focus exists in the target section, in response to the enclosing operation of the focus, generating a geometric body wrapping the focus based on the enclosing operation.
(3) The closed region of the geometry is determined as the region of interest.
There is an obvious difference between the image of a lesion region and that of normal tissues and organs, so as long as a lesion appears in the visualized image of the target section, the doctor can quickly find it.
When a doctor finds that a focus exists on a target section, a geometric body can be placed at the center of the focus through interactive equipment such as a mouse, a keyboard, a handle, a touch screen, a track ball and the like, and the focus is surrounded through the geometric body. The geometric body may be a sphere, a cube, a cuboid, etc., and is not limited herein.
Accordingly, the host computer can respond to the enclosing operation of the doctor for the focus, generate a geometric body wrapping the focus according to the enclosing operation, and determine the closed area of the geometric body as the area of interest.
If the focus exists in the visual image of each section, the focus area can be wrapped by the geometric body on each section. Accordingly, before determining the occlusion region of the geometry as the region of interest, the method may further comprise:
(31) Detecting whether the geometric body wraps the lesion in the sagittal plane, the coronal plane and the cross section.
(32) When the geometric body does not meet the condition that the focus is wrapped in the sagittal plane, the coronal plane and the cross section, responding to the adjustment operation of the geometric body, and controlling the geometric body to wrap the focus in the sagittal plane, the coronal plane and the cross section based on the adjustment operation.
After a geometric body capable of wrapping the lesion in the current target section is generated, the view is switched to the other two sections in turn to check whether the geometric body completely wraps the lesion there as well. If the geometric body cannot wrap the lesion in all of the sagittal plane, the coronal plane and the cross section, the doctor can adjust the wrapping range of the geometric body through the interactive device, taking the lesion regions on the three sections into account, so that the geometric body wraps the lesion in the sagittal plane, the coronal plane and the cross section. Accordingly, the host can respond to the doctor's adjustment operation on the geometric body and adjust its enclosing range according to the operation, generating a geometric body that wraps the lesion in every section.
For example, when the geometric body is a sphere, the doctor can adjust its diameter, and the host can respond to the adjustment operation on the diameter to obtain a sphere capable of wrapping the lesion in each section.
For example, when the geometric body is a cube, the doctor can adjust its side length, and the host can respond to the adjustment operation on the side length to obtain a cube capable of wrapping the lesion in each section.
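The wrap-and-adjust logic above can be sketched as follows, using an axis-aligned box as a hypothetical stand-in for the sphere or cube geometry and a list of voxel coordinates as the lesion; in practice the adjustment is interactive rather than automatic.

```python
def wraps_lesion(box, lesion_voxels):
    """Return True if the axis-aligned box (min_corner, max_corner) contains
    every lesion voxel; checking the x, y and z extents separately mirrors
    checking the sagittal, coronal and cross sections in turn."""
    (x0, y0, z0), (x1, y1, z1) = box
    return all(x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1
               for x, y, z in lesion_voxels)

def expand_to_wrap(box, lesion_voxels):
    """Grow the box just enough to wrap the whole lesion, mimicking the
    doctor's adjustment operation (an assumed automatic shortcut)."""
    (x0, y0, z0), (x1, y1, z1) = box
    for x, y, z in lesion_voxels:
        x0, y0, z0 = min(x0, x), min(y0, y), min(z0, z)
        x1, y1, z1 = max(x1, x), max(y1, y), max(z1, z)
    return ((x0, y0, z0), (x1, y1, z1))
```

The closed region of the grown box then plays the role of the region of interest described in step (3).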
And S23, extracting the characteristic information of the region of interest, and generating a focus initial contour according to the characteristic information.
Specifically, the step S23 may include:
and S231, determining a plurality of focus characteristic position points in the region of interest based on a preset rule.
The preset rule is a preset strategy for extracting the lesion features, specifically, the preset rule may be to calculate three-dimensional coordinates of each point in the ROI, sort data gray values of each point, calculate a data gray value histogram, calculate an upper threshold and a lower threshold of the data gray values, calculate a data gradient amplitude, calculate a curvature value of the data, and the like.
The lesion characteristic position points are coordinate points bearing the data characteristics of the lesion. By analyzing the image data in the ROI, the host determines the lesion characteristic position points according to the differences in pixel values and boundaries between the lesion region and the non-lesion region.
And S232, generating a focus initial contour according to the plurality of focus characteristic position points.
The host can roughly calculate the distribution range of the focus area in the ROI according to the distribution position of each focus characteristic position point to obtain the focus initial contour. Further, each focus characteristic position point on the outermost layer can form an enclosing body, the area enclosed by the enclosing body is a focus area, and the surface of the enclosing body is a focus initial contour.
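The idea of taking the outermost feature position points as the surface of the enclosing body can be sketched as follows; representing feature points as integer voxel coordinates and using 6-connectivity to define "outermost" are illustrative assumptions.

```python
def initial_contour_points(feature_points):
    """Given lesion feature position points as a set of integer (x, y, z)
    coordinates, keep only the outermost points: those with at least one
    6-connected neighbor outside the set. These boundary points form the
    surface of the enclosing body, i.e. the initial lesion contour."""
    pts = set(feature_points)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {p for p in pts
            if any((p[0] + dx, p[1] + dy, p[2] + dz) not in pts
                   for dx, dy, dz in offsets)}
```

For a solid 3×3×3 block of feature points, only the single interior voxel is dropped and the 26 surface voxels remain as the initial contour.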
As an optional implementation manner, when the preset rule is a gray threshold rule, the step S231 may include:
(1) And optionally selecting a focus position point meeting the gray threshold rule in the region of interest as a starting point for diffusion to obtain a plurality of diffusion points.
The gray threshold rule is used for detecting whether the gray value of the starting point is larger than a preset threshold, and if the gray value is larger than the preset threshold, the starting point is determined as a focus characteristic position point.
After the ROI is determined, the host selects one coordinate point in the ROI and calculates its gray value to determine whether the current coordinate point satisfies the gray threshold rule. If it does not, another coordinate point is selected from the ROI, and so on, until a coordinate point satisfying the gray threshold rule is found; this coordinate point is determined as a lesion position point and then used as a starting point for diffusion to a certain number of surrounding points. For example, the diffusion may take the shape of a cube, diffusing from the starting point to the 26 surrounding points, i.e., spreading outward as a cube with the starting point at its center. Of course, the diffusion may also be spherical; the diffusion shape and the number of diffusion points are not particularly limited and may be determined by those skilled in the art according to actual needs.
(2) And traversing the plurality of diffusion points in sequence, and judging whether a target diffusion point meeting the gray threshold rule exists in the plurality of diffusion points.
The target diffusion point is the next starting point that can be used for diffusion. The host machine traverses the diffusion points, and calculates whether the gray value of each diffusion point exceeds a preset threshold value in sequence, namely whether the gray value of each diffusion point meets the gray threshold value rule is judged. If there is a point satisfying the gray threshold rule, the point is diffused as a target diffusion point.
(3) And when the target diffusion point meeting the gray threshold rule exists, performing diffusion again by taking the target diffusion point as a starting point until the target diffusion point meeting the gray threshold rule does not exist.
(4) And determining all target diffusion points meeting the gray threshold rule as focus characteristic position points.
Diffusing with the target diffusion point as the starting point means spreading outward with the target diffusion point as the diffusion center. After each diffusion, whether a target diffusion point satisfying the gray threshold rule exists among the newly diffused points is again judged according to step (2), and if so, the next round of diffusion is performed. This process repeats until no target diffusion point satisfying the gray threshold rule remains. All target diffusion points in the ROI that satisfy the gray threshold rule are thus obtained and taken as the lesion characteristic position points of the lesion region.
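The diffusion steps (1)–(4) amount to a region-growing procedure, which could look roughly like this; the dictionary-based volume representation and the 26-neighborhood are assumptions consistent with the example above.

```python
from collections import deque

def grow_lesion_region(volume, seed, threshold):
    """Region growing ('diffusion') from a seed voxel: starting from a point
    that satisfies the gray threshold rule, repeatedly spread to the 26
    surrounding voxels and keep those whose gray value exceeds the threshold,
    until no new target diffusion point remains. `volume` maps (x, y, z)
    coordinates to gray values (an assumed representation for illustration).
    """
    accepted = set()
    queue = deque([seed])
    while queue:
        point = queue.popleft()
        if point in accepted:
            continue
        value = volume.get(point)
        if value is None or value <= threshold:
            continue  # outside the ROI or fails the gray threshold rule
        accepted.add(point)
        x, y, z = point
        # Diffuse to the 26 neighbors of a 3x3x3 cube centered on the point.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    if (dx, dy, dz) != (0, 0, 0):
                        queue.append((x + dx, y + dy, z + dz))
    return accepted
```

Seeding inside a bright 3×3×3 block surrounded by darker voxels recovers exactly that block as the set of lesion characteristic position points.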
It should be noted that the preset threshold in the gray threshold rule is determined as follows:
The pixels of an image A of size X × Y × Z are divided into L gray levels: 0, 1, 2, …, L−1, where X is the length of image A, Y is the width of image A, and Z is the number of frames of image A. Let n_i be the number of pixels with gray level i; then the total number of pixels N in image A is:
N = n_0 + n_1 + n_2 + … + n_(L−1)
The probability P_i of gray level i is:
P_i = n_i / N
wherein the probabilities satisfy:
P_0 + P_1 + … + P_(L−1) = 1, and P_i ≥ 0
Let k be the lower limit of the lesion pixel threshold. All pixels in image A are divided into a non-lesion region C_1 with gray values in [0, k] and a lesion region C_2 with gray values in [k+1, L−1]. Let P_1(k) and P_2(k) be the probabilities of C_1 and C_2, respectively; they can be calculated as:
P_1(k) = Σ_(i=0..k) P_i
P_2(k) = Σ_(i=k+1..L−1) P_i = 1 − P_1(k)
Let m_1(k) and m_2(k) be the mean gray levels of the voxels in C_1 and C_2, respectively; they can be calculated as:
m_1(k) = (1 / P_1(k)) · Σ_(i=0..k) i · P_i
m_2(k) = (1 / P_2(k)) · Σ_(i=k+1..L−1) i · P_i
Let m_G be the mean gray value of the whole image, m_G = Σ_(i=0..L−1) i · P_i, and let θ be the between-class variance, which can be calculated as:
θ = P_1(k) · (m_1(k) − m_G)² + P_2(k) · (m_2(k) − m_G)² = P_1(k) · P_2(k) · (m_1(k) − m_2(k))²
When θ is maximized, the corresponding k is the preset threshold of the lesion region, i.e.:
k* = argmax_(0 ≤ k ≤ L−1) θ(k)
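The threshold derivation above is, in essence, Otsu's method; a sketch that computes the preset threshold from a gray-level histogram might look like this.

```python
def otsu_threshold(histogram):
    """Compute the preset lesion threshold k from a gray-level histogram by
    maximizing the between-class variance theta(k), as derived above.
    `histogram[i]` is the number of voxels with gray level i."""
    total = sum(histogram)
    probs = [n / total for n in histogram]
    m_g = sum(i * p for i, p in enumerate(probs))  # global mean gray value
    best_k, best_theta = 0, -1.0
    p1 = 0.0     # P1(k): cumulative probability of the non-lesion class
    m_cum = 0.0  # cumulative sum of i * P_i up to level k
    for k, p in enumerate(probs):
        p1 += p
        m_cum += k * p
        p2 = 1.0 - p1
        if p1 == 0.0 or p2 == 0.0:
            continue  # one class is empty; theta is undefined here
        m1 = m_cum / p1
        m2 = (m_g - m_cum) / p2
        theta = p1 * p2 * (m1 - m2) ** 2
        if theta > best_theta:
            best_k, best_theta = k, theta
    return best_k
```

On a clearly bimodal histogram the returned k falls between the two modes, so gray values in [0, k] go to the non-lesion class and (k, L−1] to the lesion class.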
And S24, adjusting the initial focus contour to obtain a target contour completely coincident with the real focus region. For detailed description, reference is made to the corresponding related description of the above embodiments, and details are not repeated herein.
In the method for extracting a lesion area provided in this embodiment, the two-dimensional image sequence in the image data to be detected is extracted, and the two-dimensional image sequence is converted into the three-dimensional data structure, so as to extract the region of interest according to the three-dimensional data structure, thereby improving the extraction efficiency of the lesion area. By supporting the switching operation of the section and the surrounding operation of the focus, the region of interest wrapping the focus can be accurately determined, so that the focus region can be extracted in the region of interest. The focus characteristic position points are extracted from the region of interest, so that the focus initial contour is generated according to the focus characteristic position points, and the extraction efficiency of the focus region is greatly improved.
In this embodiment, a method for extracting a lesion area is provided, which may be used in an electronic device, such as a host, a computer, etc., fig. 4 is a flowchart of a method for extracting a lesion area according to an embodiment of the present invention, as shown in fig. 4, the flowchart includes the following steps:
s31, acquiring image data to be detected, wherein the image data to be detected comprises a real focus area. For a detailed description, refer to the corresponding related description of the above embodiments, which is not repeated herein.
And S32, analyzing the image data to be detected, and determining the region of interest corresponding to the real focus region. For a detailed description, refer to the corresponding related description of the above embodiments, which is not repeated herein.
And S33, extracting the characteristic information of the region of interest, and generating a focus initial contour according to the characteristic information. For detailed description, reference is made to the corresponding related description of the above embodiments, and details are not repeated herein.
And S34, adjusting the initial focus contour to obtain a target contour completely coincident with the real focus region.
Specifically, the step S34 may include:
And S341, fusing the initial lesion contour with the image data to be detected, and determining the degree of coincidence between the initial lesion contour and the real lesion region.
The degree of coincidence characterizes how well the initial lesion contour overlaps the real lesion region. The host performs image detection and image matching on the initial lesion contour and the image data to be detected, and fuses the initial lesion contour into the image data to be detected to obtain a fused image. The initial lesion contour and the real lesion region can be displayed simultaneously in the fused image, and the degree of coincidence between them can be determined by comparing the two.
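The patent does not fix a formula for the degree of coincidence; one plausible sketch uses the Dice coefficient over voxel sets, which equals 1 exactly when the two regions coincide completely.

```python
def coincidence_degree(contour_voxels, true_lesion_voxels):
    """Degree of coincidence between the region enclosed by the initial
    lesion contour and the real lesion region, measured here with the Dice
    coefficient (an assumed metric, not one named in the text). Returns 1.0
    only for complete coincidence and 0.0 for disjoint regions."""
    a, b = set(contour_voxels), set(true_lesion_voxels)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A value below 1.0 would signal that the doctor still needs to edit the initial contour, as described in step S342.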
As an optional embodiment, in step S341, the method for fusing the initial contour of the lesion with the image data to be detected may include:
(1) And respectively extracting focus sections generated by the focus in a sagittal plane, a coronal plane and a cross section based on the initial focus contour.
The host computer extracts the section of the focus area in three directions (front and back direction, left and right direction, up and down direction) according to the three-dimensional focus initial contour, namely extracting the focus section of the focus area in the sagittal plane, coronal plane and cross section respectively.
(2) And fusing the initial contour of the focus with the image data to be detected according to the corresponding relation between the focus section and the section of the image data to be detected to generate a fused image.
Wherein, the fused image displays the focus initial contour and the real focus area.
As described above, the host computer generates a corresponding three-dimensional data structure by analyzing the image data to be detected, and constructs sagittal, coronal and cross-sections for the three-dimensional data structure. The corresponding relation between the focus section and the section of the image data to be detected is the corresponding relation between the focus section and the section of the three-dimensional data structure.
According to the section correspondence, the host fuses the lesion sections with the three-dimensional data structure in the sagittal plane, the coronal plane and the cross section respectively by means of an image fusion algorithm, so that the fused image clearly distinguishes the lesion region enclosed by the initial lesion contour from the real lesion region contained in the image data to be detected. Both are displayed on the display interface, making it convenient to observe the degree of coincidence between the initial lesion contour and the real lesion region.
Wherein the image fusion algorithm can be expressed as:
F = Σ_(k=1..n) α_k · F_k
where the weights α_k satisfy:
Σ_(k=1..n) α_k = 1, and α_k ≥ 0
F is the fused image, and F_k (k = 1, 2, …, n) are the images before fusion.
And S342, in response to an editing operation on the initial lesion contour generated based on the degree of coincidence, adjusting the initial lesion contour according to the editing operation until the initial lesion contour completely coincides with the real lesion region.
Since the initial lesion contour may not completely coincide with the real lesion region, the doctor can adjust it through the interactive device so that it completely coincides with the real lesion region in the sagittal plane, the coronal plane and the cross section. Specifically, the host supports an editing function for the initial lesion contour; after the editing function is entered, the host fits the initial lesion contour of the lesion region with points and lines, and the doctor can then perform editing operations, such as dragging, on each point or line.
Accordingly, the host can respond to the doctor's editing operation on the initial lesion contour and adjust it according to the operation until it completely coincides with the real lesion region. For example, when the doctor selects any point, the point is highlighted, and when the point is dragged, the two lines connected to it change together with it; likewise, when any line is selected, the line is highlighted, and when the line is dragged, its endpoints and the adjacent lines change with it, as shown in fig. 8.
And S343, determining the adjusted contour which is overlapped with the real lesion area as a target contour.
After the doctor finishes the adjustment of the initial contour of the focus, the host computer exits the editing mode and takes the currently adjusted contour which is overlapped with the real focus area as the target contour. Meanwhile, the target contour capable of reflecting the real focus area is stored and is stored in a standard DICOM format, so that navigation system software can be directly introduced to determine the focus navigation end point in the later period conveniently.
As an alternative embodiment, after obtaining the initial contour of the lesion, data conversion is required to fuse the initial contour of the lesion with the image data to be detected. Specifically, the method may further include:
(1) And converting the data format of the initial outline of the focus into the data format corresponding to the image data to be detected.
The host machine compares the focus initial contour data with the image data to be detected to determine whether the data formats of the focus initial contour data and the image data are the same. If the data formats of the two are the same, the two can be subjected to data fusion. If the data formats of the two data formats are different, the host can convert the focus initial contour data into the data format corresponding to the image data to be detected through an image conversion algorithm, namely, the focus initial contour data is converted into the standard format of DICOM. The image transformation algorithm includes image scaling, image format conversion, and the like, and is not particularly limited as long as data format unification can be achieved.
(2) Target portion data is analyzed from image data to be detected.
The target region data is CT data for the target region. Because the computed tomography has a certain scanning range, the image data to be detected is CT data obtained by performing tomography on the target region, and therefore, the image data to be detected includes target region data. The host computer can divide the target part data from the image data to be detected by an image division algorithm. For example, pulmonary bronchial data is extracted from thoracic CT data.
(3) And rendering the data of the target part and the initial focus contour subjected to data format conversion, and generating a visual image of the initial focus contour and the target part on a display interface.
The host machine carries out volume rendering or surface rendering on the target part data and the focus initial contour through a three-dimensional rendering algorithm, renders a three-dimensional visual image which is formed by fusing the target part and the focus initial contour and has space information, and displays the visual image on a display interface.
In particular, the rendering algorithms may include a surface rendering algorithm (Marching Cubes) based on the surface of an object, and a volume rendering algorithm (Ray-Casting) that directly renders the three-dimensional voxels of an object. The Marching Cubes algorithm treats a series of two-dimensional slice data as a three-dimensional data field and constructs the surface mesh of a three-dimensional model by extracting an isosurface from the three-dimensional data. The Ray-Casting algorithm emits a ray from each pixel of the image plane along the viewing direction; the ray passes through the volume data set and is sampled at a certain step length, the color value and opacity of each sampling point are obtained by interpolation, and the accumulated color value and accumulated opacity are then computed point by point from front to back (or back to front) until the ray is fully absorbed or passes through the object. This method can reflect changes at material boundaries, and by using the Phong model and introducing specular reflection, diffuse reflection and ambient reflection, a good illumination effect is obtained, so that the physical attributes, shape characteristics and hierarchical relationships of tissues and organs can be expressed, enriching the image information.
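The front-to-back accumulation described for the Ray-Casting algorithm can be sketched for a single ray as follows; the scalar (color, opacity) sample pairs and the early-termination cutoff are illustrative assumptions.

```python
def composite_ray(samples):
    """Front-to-back compositing along one cast ray: accumulate color and
    opacity sample by sample and stop early once the ray is (almost) fully
    absorbed. `samples` is a list of (color, opacity) pairs ordered from the
    front of the volume to the back."""
    acc_color, acc_alpha = 0.0, 0.0
    for color, alpha in samples:
        # Each new sample is attenuated by the opacity already accumulated.
        acc_color += (1.0 - acc_alpha) * alpha * color
        acc_alpha += (1.0 - acc_alpha) * alpha
        if acc_alpha >= 0.999:  # ray effectively absorbed: early termination
            break
    return acc_color, acc_alpha
```

A fully opaque first sample hides everything behind it, which is exactly the "until the ray is completely absorbed" behavior described above.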
The host can automatically select a corresponding drawing algorithm for data rendering according to the image characteristics of the data to be rendered. Of course, the doctor can also specify the corresponding rendering algorithm according to the diagnosis requirement.
Taking the lung bronchus as an example, the host may select a corresponding rendering algorithm according to the image features of the lung bronchus and the image features of the initial contour of the lesion, and after rendering is completed, simultaneously display the three-dimensional lesion data and the bronchus mask data on the display interface, as shown in fig. 5.
Of course, the display interface may include a plurality of display windows, as shown in fig. 6, in which all the sectional visualized images of the lungs are displayed in the display window 1, and the three-dimensional visualized image is displayed in the display window 2. Of course, it is also possible to set a window switching tab on the display interface, as shown in fig. 7, to display different visual images in different windows, and to display different images on the display interface by clicking the window switching tab (display window 1 or display window 2).
The method for extracting the lesion area provided by this embodiment supports editing of the initial lesion contour, and when the initial lesion contour and the real lesion area are not completely overlapped, a doctor can flexibly adjust the initial lesion contour, so that the target contour and the real lesion area are completely overlapped, and it is ensured that the target contour can represent the shape of the real lesion area. The focus initial contour and the image data to be detected are fused, so that the focus initial contour and the real focus area can be distinguished conveniently. By rendering the initial outline of the focus and the target position data, a visual image is obtained and displayed in a display interface, so that a doctor can conveniently determine the approximate position of the focus area by observing the visual image, and the influence on the accurate extraction of the focus area due to the subjective imagination of the doctor is avoided.
In this embodiment, a device for extracting a lesion area is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, which have already been described and will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
The present embodiment provides an apparatus for extracting a lesion area, as shown in fig. 9, for use in a host of an electronic device, comprising:
the acquiring module 41 is configured to acquire image data to be detected, where the image data to be detected includes a real lesion area.
And the analyzing module 42 is configured to analyze the image data to be detected and determine an area of interest corresponding to the real lesion area.
And the extracting module 43 is configured to extract feature information of the region of interest, and generate an initial focus contour according to the feature information.
And the adjusting module 44 is used for adjusting the initial focus contour to obtain a target contour which is completely overlapped with the real focus area.
Optionally, the parsing module 42 may include:
and the first extraction submodule is used for extracting a two-dimensional image sequence of the image data to be detected and converting the two-dimensional image sequence into a three-dimensional data structure body.
And the second extraction submodule is used for extracting the data center point of the three-dimensional data structure body.
And the first visualization submodule is used for respectively constructing a sagittal plane, a coronal plane and a cross section aiming at the three-dimensional data structure based on the data center point and generating visualization images of the sagittal plane, the coronal plane and the cross section.
And the focus detection submodule is used for carrying out focus detection on the sagittal plane, the coronal plane and the cross section and determining an interested area wrapping the real focus area.
Optionally, the lesion detection submodule is further configured to: responding to the switching operation of the sagittal plane, the coronal plane and the cross section, and determining a target tangent plane corresponding to the switching operation; when the focus exists in the target section, responding to the surrounding operation of the focus, and generating a geometric body wrapping the focus based on the surrounding operation; the closed region of the geometry is determined as the region of interest.
Optionally, the lesion detection submodule is further configured to: detecting whether the geometric body covers the focus on the sagittal plane, the coronal plane and the cross section; when the geometric body does not meet the condition that the focus is wrapped in the sagittal plane, the coronal plane and the cross section, responding to the adjustment operation of the geometric body, and controlling the geometric body to wrap the focus in the sagittal plane, the coronal plane and the cross section based on the adjustment operation.
Optionally, the extracting module 43 may include:
the first determining submodule is used for determining a plurality of focus characteristic position points in the region of interest based on a preset rule.
And the generation submodule is used for generating a focus initial contour according to the plurality of focus characteristic position points.
Optionally, when the preset rule is a gray threshold rule, the first determining submodule is specifically configured to: optionally selecting a focus position point meeting the gray threshold rule in the region of interest as a starting point for diffusion to obtain a plurality of diffusion points; sequentially traversing a plurality of diffusion points, and judging whether a target diffusion point meeting the gray threshold rule exists in the plurality of diffusion points; when a target diffusion point meeting the gray threshold rule exists, performing diffusion again by taking the target diffusion point as a starting point until no target diffusion point meeting the gray threshold rule exists; and determining all target diffusion points meeting the gray threshold rule as focus characteristic position points.
Optionally, the adjusting module 44 may include:
and the data fusion submodule is used for fusing the focus initial contour with the image data to be detected and determining the coincidence degree between the focus initial contour and the real focus area.
And the editing submodule is used for responding to the editing operation aiming at the focus initial contour generated based on the contact ratio, and adjusting the focus initial contour according to the editing operation until the focus initial contour is completely overlapped with the real focus area.
And the second determining submodule is used for determining the adjusted contour which is coincident with the real lesion area as the target contour.
Optionally, the data fusion submodule is further specifically configured to: respectively extracting focus sections generated by the focus in a sagittal plane, a coronal plane and a cross section based on the initial focus contour; and fusing the initial focus contour with the image data to be detected according to the corresponding relation between the focus section and the section of the image data to be detected to generate a fused image, wherein the initial focus contour and the real focus area are displayed in the fused image.
Optionally, the device for extracting a lesion region may further include:
and the data conversion module is used for converting the data format of the initial outline of the focus into the data format corresponding to the image data to be detected.
And the target part analysis module is used for analyzing the target part data from the image data to be detected.
And the rendering module is used for rendering the data of the target part and the initial focus contour subjected to data format conversion and generating a visual image of the initial focus contour and the target part on a display interface.
The lesion area extraction device in this embodiment is in the form of a functional unit, where the unit refers to an ASIC circuit, a processor and a memory executing one or more software or fixed programs, and/or other devices capable of providing the above functions.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
The device for extracting a lesion area provided in this embodiment extracts an initial lesion contour from image data to be detected by analyzing the image data, and adjusts the initial lesion contour to be completely overlapped with a real lesion area, so as to obtain a target contour of the lesion area. The device realizes automatic extraction of the initial outline of the focus based on the image data to be detected, does not need to manually draw according to each layer of tangent plane, reduces the time cost for determining the focus area, and improves the extraction efficiency of the focus area. Meanwhile, the target contour is completely overlapped with the real focus area, so that the shape of the real focus area can be reflected, the central point of the target contour is the central point of the focus, the accurate extraction of the focus area is realized, the subsequent operation navigation end point is convenient to determine, and the accuracy of the operation navigation path is further improved.
An embodiment of the present invention further provides an electronic device, which includes the device for extracting a lesion area shown in fig. 9.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention, as shown in fig. 10, the electronic device may include: a host 50, at least one interactive interface 51 and a display 52. The interactive interface 51 is used for connecting an interactive device to the host, where the interactive device includes a mouse, a keyboard, a handle, a track ball, and the like. Wherein the display 52 is communicatively coupled to the host 50.
The host 50 may include: at least one processor 501, such as a Central Processing Unit (CPU), at least one communication interface 503, memory 504, and at least one communication bus 502. Wherein a communication bus 502 is used to enable connective communication between these components. The communication interface 503 may include a Display (Display) interface, a Keyboard (Keyboard) interface, and the like, and the optional communication interface 503 may also include a standard wired interface and a standard wireless interface. The Memory 504 may be a high-speed volatile Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The memory 504 may optionally be at least one storage device located remotely from the processor 501. Wherein the processor 501 may be in connection with the apparatus described in fig. 9, an application program is stored in the memory 504, and the processor 501 calls the program code stored in the memory 504 for performing any of the above-mentioned method steps.
The communication bus 502 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 502 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
The memory 504 may include a volatile memory (volatile memory), such as a random-access memory (RAM); the memory may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory 504 may also comprise a combination of the above-described types of memory.
The processor 501 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of the CPU and the NP.
The processor 501 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.
Optionally, the memory 504 is also used to store program instructions. The processor 501 may call program instructions to implement the method for extracting a lesion area as shown in the above embodiments of the present application.
Embodiments of the present invention further provide a non-transitory computer storage medium storing computer-executable instructions that can perform the method for extracting a lesion region in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also include a combination of the above types of memory.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (12)

1. A method for extracting a lesion region, characterized by comprising the following steps:
acquiring image data to be detected, wherein the image data to be detected comprises a real lesion region;
analyzing the image data to be detected, and determining a region of interest corresponding to the real lesion region;
extracting feature information of the region of interest, and generating an initial lesion contour according to the feature information; and
adjusting the initial lesion contour to obtain a target contour completely coinciding with the real lesion region.
2. The method according to claim 1, wherein the analyzing the image data to be detected and determining the region of interest corresponding to the real lesion region comprises:
extracting a two-dimensional image sequence from the image data to be detected, and converting the two-dimensional image sequence into a three-dimensional data structure;
extracting a data center point of the three-dimensional data structure;
constructing a sagittal plane, a coronal plane, and a cross section of the three-dimensional data structure based on the data center point, and generating visualized images of the sagittal plane, the coronal plane, and the cross section; and
performing lesion detection on the sagittal plane, the coronal plane, and the cross section, and determining a region of interest wrapping the real lesion region.
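For illustration only (not part of the claims), the slice-stacking and plane-construction steps of claim 2 can be sketched as follows. This is a minimal NumPy example; the function name, array layout, and use of the volume midpoint as the "data center point" are assumptions, not details given by the patent.

```python
import numpy as np

def build_planes(slices):
    """Stack a 2-D image sequence into a 3-D data structure and extract
    the cross (axial), coronal, and sagittal planes passing through the
    data center point of the volume."""
    volume = np.stack(slices, axis=0)            # shape: (depth, height, width)
    cz, cy, cx = (d // 2 for d in volume.shape)  # data center point (assumed midpoint)
    cross = volume[cz, :, :]                     # cross section (axial)
    coronal = volume[:, cy, :]                   # coronal plane
    sagittal = volume[:, :, cx]                  # sagittal plane
    return cross, coronal, sagittal
```

Each returned plane is a 2-D array that could then be rendered as the visualized image the claim refers to.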
3. The method according to claim 2, wherein the performing lesion detection on the sagittal plane, the coronal plane, and the cross section to determine a region of interest wrapping the real lesion region comprises:
in response to a switching operation on the sagittal plane, the coronal plane, and the cross section, determining a target section corresponding to the switching operation;
when a lesion exists in the target section, responding to an enclosing operation on the lesion, and generating, based on the enclosing operation, a geometric body wrapping the lesion; and
determining the enclosed region of the geometric body as the region of interest.
4. The method according to claim 3, further comprising, before determining the enclosed region of the geometric body as the region of interest:
detecting whether the geometric body wraps the lesion in each of the sagittal plane, the coronal plane, and the cross section; and
when the geometric body does not wrap the lesion in all of the sagittal plane, the coronal plane, and the cross section, controlling the geometric body, based on an adjustment operation on the geometric body, so that it wraps the lesion in all of the sagittal plane, the coronal plane, and the cross section.
5. The method according to claim 1, wherein the extracting feature information of the region of interest and generating an initial lesion contour according to the feature information comprises:
determining a plurality of lesion feature position points in the region of interest based on a preset rule; and
generating the initial lesion contour according to the plurality of lesion feature position points.
6. The method according to claim 5, wherein, when the preset rule is a gray threshold rule, the determining a plurality of lesion feature position points in the region of interest based on the preset rule comprises:
arbitrarily selecting, in the region of interest, a lesion position point satisfying the gray threshold rule as a starting point for diffusion, to obtain a plurality of diffusion points;
traversing the plurality of diffusion points in sequence, and judging whether a target diffusion point satisfying the gray threshold rule exists among the plurality of diffusion points;
when a target diffusion point satisfying the gray threshold rule exists, diffusing again with the target diffusion point as the starting point, until no target diffusion point satisfying the gray threshold rule remains; and
determining all target diffusion points satisfying the gray threshold rule as the lesion feature position points;
wherein the gray threshold rule is used for detecting whether the gray value of the starting point is greater than a preset threshold; if the gray value is greater than the preset threshold, the starting point is determined as a lesion feature position point.
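For illustration only (not part of the claims), the diffusion procedure of claim 6 is essentially region growing with a gray threshold. The sketch below assumes a 3-D volume, 6-connected neighbors, and a breadth-first traversal; none of these choices are specified by the patent.

```python
import numpy as np
from collections import deque

def grow_lesion_region(volume, seed, threshold):
    """Diffuse from a seed voxel: every 6-connected neighbor whose gray
    value exceeds `threshold` becomes a lesion feature position point and
    is itself diffused from, until no new point qualifies.

    Returns the set of (z, y, x) lesion feature position points."""
    if volume[seed] <= threshold:        # starting point must satisfy the rule
        return set()
    points = {seed}
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and (nz, ny, nx) not in points
                    and volume[nz, ny, nx] > threshold):
                points.add((nz, ny, nx))     # target diffusion point found
                queue.append((nz, ny, nx))   # diffuse again from it
    return points
```

The returned point set corresponds to "all target diffusion points satisfying the gray threshold rule", from which the initial lesion contour could then be generated.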
7. The method according to claim 1, wherein the adjusting the initial lesion contour to obtain a target contour completely coinciding with the real lesion region comprises:
fusing the initial lesion contour with the image data to be detected, and determining the degree of coincidence between the initial lesion contour and the real lesion region;
in response to an editing operation on the initial lesion contour generated based on the degree of coincidence, adjusting the initial lesion contour according to the editing operation until the initial lesion contour completely coincides with the real lesion region; and
determining the adjusted contour coinciding with the real lesion region as the target contour.
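For illustration only (not part of the claims), one common way to quantify the "degree of coincidence" in claim 7 is the Dice coefficient of two binary masks. The patent does not name a specific measure, so the metric choice and function name below are assumptions.

```python
import numpy as np

def overlap_ratio(contour_mask, lesion_mask):
    """Degree of coincidence between the initial lesion contour and the
    real lesion region, measured as the Dice coefficient of two binary
    masks. 1.0 means the contour completely coincides with the region."""
    contour_mask = np.asarray(contour_mask, dtype=bool)
    lesion_mask = np.asarray(lesion_mask, dtype=bool)
    intersection = np.logical_and(contour_mask, lesion_mask).sum()
    total = contour_mask.sum() + lesion_mask.sum()
    if total == 0:            # both masks empty: treat as full coincidence
        return 1.0
    return 2.0 * intersection / total
```

A value below 1.0 would signal that editing operations on the contour are still needed.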
8. The method according to claim 7, wherein the fusing the initial lesion contour with the image data to be detected comprises:
extracting, based on the initial lesion contour, the lesion sections generated by the lesion in the sagittal plane, the coronal plane, and the cross section, respectively; and
fusing the initial lesion contour with the image data to be detected according to the correspondence between the lesion sections and the sections of the image data to be detected, to generate a fused image;
wherein the fused image displays the initial lesion contour and the real lesion region.
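For illustration only (not part of the claims), the per-section fusion in claim 8 can be thought of as overlaying a binary contour mask onto the matching grayscale section. The overlay value and function name below are assumptions for the sketch.

```python
import numpy as np

def fuse_contour(section_img, contour_mask, overlay_value=255):
    """Overlay a binary lesion-contour mask onto one grayscale section of
    the image data, producing a fused section that displays both the
    original image content and the contour pixels."""
    fused = section_img.copy()                   # leave the source section intact
    fused[np.asarray(contour_mask, dtype=bool)] = overlay_value
    return fused
```

Repeating this for the sagittal, coronal, and cross sections, using the correspondence between lesion sections and image sections, would yield the fused images the claim describes.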
9. The method according to claim 7, further comprising:
converting the data format of the initial lesion contour into the data format corresponding to the image data to be detected;
parsing target part data from the image data to be detected; and
rendering the target part data and the format-converted initial lesion contour, and generating a visualized image of the initial lesion contour and the target part on a display interface.
10. An apparatus for extracting a lesion region, comprising:
an acquisition module, configured to acquire image data to be detected, wherein the image data to be detected comprises a real lesion region;
an analysis module, configured to analyze the image data to be detected and determine a region of interest corresponding to the real lesion region;
an extraction module, configured to extract feature information of the region of interest and generate an initial lesion contour according to the feature information; and
an adjustment module, configured to adjust the initial lesion contour to obtain a target contour completely coinciding with the real lesion region.
11. An electronic device, comprising:
a host, a display, and an interactive interface, wherein the display is communicatively connected with the host, and the interactive interface is used to connect an interactive device to the host;
the host comprising a memory and a processor communicatively connected with each other, wherein the memory stores computer instructions, and the processor executes the computer instructions to perform the method for extracting a lesion region according to any one of claims 1 to 9.
12. A computer-readable storage medium storing computer instructions for causing a computer to execute the method for extracting a lesion region according to any one of claims 1 to 9.
CN202211464801.4A 2022-11-22 2022-11-22 Method and device for extracting focus region, electronic equipment and readable storage medium Pending CN115861347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211464801.4A CN115861347A (en) 2022-11-22 2022-11-22 Method and device for extracting focus region, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115861347A (en) 2023-03-28

Family

ID=85664820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211464801.4A Pending CN115861347A (en) 2022-11-22 2022-11-22 Method and device for extracting focus region, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115861347A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120050278A1 (en) * 2010-08-31 2012-03-01 Canon Kabushiki Kaisha Image display apparatus and image display method
CN113034426A (en) * 2019-12-25 2021-06-25 飞依诺科技(苏州)有限公司 Ultrasonic image focus description method, device, computer equipment and storage medium
CN114943688A (en) * 2022-04-27 2022-08-26 江苏婷灏健康科技有限公司 Method for extracting interest region in mammary gland image based on palpation and ultrasonic data

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116779093A (en) * 2023-08-22 2023-09-19 青岛美迪康数字工程有限公司 Method and device for generating medical image structured report and computer equipment
CN116779093B (en) * 2023-08-22 2023-11-28 青岛美迪康数字工程有限公司 Method and device for generating medical image structured report and computer equipment

Similar Documents

Publication Publication Date Title
JP6877868B2 (en) Image processing equipment, image processing method and image processing program
US8423124B2 (en) Method and system for spine visualization in 3D medical images
USRE43225E1 (en) Method and apparatus for enhancing an image using data optimization and segmentation
JP4512586B2 (en) Volume measurement in 3D datasets
US7924279B2 (en) Protocol-based volume visualization
US9600890B2 (en) Image segmentation apparatus, medical image device and image segmentation method
US20050110791A1 (en) Systems and methods for segmenting and displaying tubular vessels in volumetric imaging data
EP1315125A2 (en) Method and system for lung disease detection
JP2008510499A (en) Anatomical visualization / measurement system
US7684602B2 (en) Method and system for local visualization for tubular structures
US20080107318A1 (en) Object Centric Data Reformation With Application To Rib Visualization
CN112861961B (en) Pulmonary blood vessel classification method and device, storage medium and electronic equipment
US20080117210A1 (en) Virtual endoscopy
JP2007061622A (en) System and method for automated airway evaluation for multi-slice computed tomography (msct) image data using airway lumen diameter, airway wall thickness and broncho-arterial ratio
US9530238B2 (en) Image processing apparatus, method and program utilizing an opacity curve for endoscopic images
US9019272B2 (en) Curved planar reformation
DE10255526B4 (en) Recognition of vascular-associated pulmonary nodules by volume projection analysis
CN115861347A (en) Method and device for extracting focus region, electronic equipment and readable storage medium
CN113506277B (en) Image processing method and device
CN108876783B (en) Image fusion method and system, medical equipment and image fusion terminal
EP3389006B1 (en) Rib unfolding from magnetic resonance images
WO2006055031A2 (en) Method and system for local visualization for tubular structures
JP2003265463A (en) Image diagnosis support system and image diagnosis support program
CN114596275A (en) Pulmonary vessel segmentation method, device, storage medium and electronic equipment
JPH0697466B2 (en) Device and method for displaying a two-dimensional image of an internal surface within an object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination