CN117476219A - Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis - Google Patents


Info

Publication number
CN117476219A
Authority
CN
China
Prior art keywords
positioning
image
picture
radiotherapy
cases
Prior art date
Legal status
Granted
Application number
CN202311821117.1A
Other languages
Chinese (zh)
Other versions
CN117476219B (en)
Inventor
应微
高绪峰
吴德全
梁黎
Current Assignee
Sichuan Cancer Hospital
Original Assignee
Sichuan Cancer Hospital
Priority date
Filing date
Publication date
Application filed by Sichuan Cancer Hospital
Priority to CN202311821117.1A
Publication of CN117476219A
Application granted
Publication of CN117476219B
Legal status: Active


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/0455: Auto-encoder networks; Encoder-decoder networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V 10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/54: Extraction of image or video features relating to texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Primary Health Care (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Radiation-Therapy Devices (AREA)

Abstract

The application relates to the technical field of artificial intelligence and discloses an auxiliary method and an auxiliary system for positioning CT (computed tomography) tomographic images based on big data analysis. The auxiliary method comprises the following steps. Step 1: collect radiotherapy cases from each medical system, extract the positioning CT pictures of the tumor target areas in those cases, and establish a CT image database. Step 2: acquire the tomographic images of the patient's positioning CT to obtain a plurality of images to be delineated. The auxiliary system comprises a data collection module, a data processing module and a display module. According to the technical scheme provided by the application, when delineating a target area, the target-area delineation staff can consult how the target areas of similar radiotherapy cases were delineated, and thus obtain feasible suggestions for delineating the target area.

Description

Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an auxiliary method and an auxiliary system for positioning CT (computed tomography) tomographic images based on big data analysis.
Background
Radiotherapy is a common scheme in tumor treatment, and positioning CT is a key link in radiotherapy. Generally, after the positioning CT scan is completed, a doctor is required to delineate the radiation area of the radiotherapy rays before the subsequent radiotherapy work can proceed.
After the doctor delineates the target area and the patient's radiotherapy is finished, the effect of the radiotherapy is not fed back immediately; only long-term follow-up reveals the final result. In addition, tumor size varies from patient to patient, and many combinations of tumor position and size lie outside a doctor's prior experience, so the radiotherapy target area can only be determined by intuition and experience. Doctors can in principle look up corresponding information in the cases of patients who have already undergone radiotherapy, but in actual treatment it is difficult to find, among massive radiotherapy cases, one that closely resembles the current patient; in practice such cases are consulted mainly during study. As a result, when delineating the radiotherapy target area in actual treatment, the doctor lacks effective information to assist the judgment.
Disclosure of Invention
The content of the present application is intended to introduce concepts in a simplified form that are described in more detail in the detailed description below. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
As a first aspect of the present application, in order to solve the technical problem that a doctor lacks auxiliary information to guide the delineation when delineating a target area, the present application provides an auxiliary method for positioning CT tomographic images based on big data analysis, comprising the following steps:
step 1: collecting radiotherapy cases in each medical system, extracting positioning CT pictures and case history data of tumor target areas in the radiotherapy cases, and establishing a CT image database and a case history database;
step 2: acquiring the patient history data of a patient and the tomographic images of the positioning CT, and processing the tomographic images of the positioning CT to obtain a plurality of images to be delineated;
step 3: matching the images to be delineated with the positioning CT pictures in the CT image database, and at the same time matching the patient's medical history data with the case history data in the case history database; acquiring, according to the matching results, a plurality of radiotherapy cases similar to the patient from the collected radiotherapy cases, and then sending those radiotherapy cases to a designated terminal.
In the technical scheme provided by the application, the radiotherapy cases in each medical system are collected, the positioning CT pictures and case history data in those cases are extracted, and the corresponding CT image database and case history database are established. When a patient's radiotherapy target area is to be delineated, the patient's data are matched against the information in the CT image database and the case history database to obtain several radiotherapy cases with a high degree of correlation, which are then sent to the target-area delineation staff, so that the staff can draw feasible suggestions for delineating the radiotherapy target area from those cases.
In the actual radiotherapy process, the similarity of tumor areas cannot be the only factor in designing the radiation area. For example, suppose two patients suffer from the same kind of tumor with essentially the same size and location; if the patients' ages and tumor biopsy reports differ, the radiotherapy regimens will certainly differ as well. Examining only a patient's positioning CT pictures may therefore cause the radiotherapy cases recommended to the target-area delineation staff not to match the patient's condition.
Aiming at the problem, the application provides the following technical scheme:
the step 1 specifically comprises the following steps:
step 11: collecting radiotherapy cases in each medical system, and setting a unique identification code for each radiotherapy case;
step 12: dividing each radiotherapy case into a positioning CT picture and case history data, and marking the positioning CT picture and the case history data by using an identification code;
step 13: collecting all positioning CT pictures, establishing a CT image database, collecting all case history data, and establishing a case history database.
According to the scheme provided by the application, after a unique identification code is set for each radiotherapy case, each case is divided into case history data and a positioning CT picture, from which the case history database and the CT image database are formed respectively. Once a positioning CT picture in the CT image database has been matched, the case history data corresponding to it can be retrieved immediately, so that a doctor consulting the information can better take the medical-history information of the radiotherapy case into account.
Even a single medical unit operates many different types of CT apparatus, and across different medical systems there are more still, so the CT pictures of different cases are not uniform, and neither is their accuracy. Some older radiotherapy cases have low-precision CT pictures (mainly a consequence of how the positioning CT pictures were stored): medical institutions do not necessarily keep the CT originals, especially for pictures embedded in user cases, so definition is easily lost, and when pictures of differing definition and format are matched, the matching results easily become inaccurate. Aiming at this problem, the application provides the following technical scheme:
step 13 comprises the following steps:
step 131: collecting all positioning CT pictures M_1, M_2, …, M_i, …, M_m, where M_i denotes the i-th positioning CT picture, i is an integer, 1 ≤ i ≤ m; m denotes the number of all collected radiotherapy cases, m is an integer, m ≥ 2;
step 132: setting a standard format and standard definition, reducing definition for a positioning CT picture which is scaled to the standard format and has definition higher than the standard definition until reaching the standard definition, and increasing definition for a positioning CT picture which is scaled to the standard format and has definition lower than the standard definition until reaching the standard definition.
According to the technical scheme, a standard format and a standard definition are preset, and both high-definition and low-definition positioning CT pictures are converted to pictures of the same format and definition, ensuring that positioning CT pictures are matched under identical format and definition, so that neither definition nor format influences the matching.
Directly discarding redundant pixel cells when reducing definition loses too many features, and after pixel cells are removed the gray-value curve between adjacent pixel cells can become very steep, which reduces the precision of subsequent picture matching. Aiming at this problem, the application provides the following technical scheme:
In step 132, the method for reducing definition is:
step S1: scaling the positioning CT picture to the standard format;
step S2: mapping the pixel-grid network of the standard definition onto the positioning CT picture, and calculating the gray value of each pixel grid under the standard definition, wherein the pixel-grid network under the standard definition comprises the 1st pixel grid, the 2nd pixel grid, …, the k-th pixel grid, …, the k_0-th pixel grid; k_0 and k are positive integers, k_0 ≥ k ≥ 1, k_0 ≥ 2, and k_0 denotes the total number of pixel grids under the standard definition;
the gray value of the k-th pixel grid is y_k = X_k / n_k, where y_k denotes the gray value of the k-th pixel grid, X_k is the sum of the gray values of all pixel cells of the original positioning CT picture covered by the k-th pixel grid, and n_k is the number of those pixel cells.
In the technical scheme provided by the application, calculating the average gray value takes every gray value of the positioning CT picture into account, which prevents severe feature loss from making the positioning CT picture too poor a reference and degrading the picture-matching precision.
In the restoration of positioning CT pictures, the key is to restore the boundary features of the tumor and the corresponding organs; positioning CT pictures contain hardly any other texture features. For this problem, the application provides the following technical solution:
in step 132, the method for increasing the sharpness is:
step S01: preparing a training data set, wherein the training data set comprises an image to be repaired, a sample image and an image edge map to be repaired corresponding to the image to be repaired;
step S02: establishing a basic model based on the generated countermeasure network;
step S03: training and verifying the basic model by using a training data set to obtain an image restoration model;
step S04: the low-definition CT image is scaled to a standard format and then input into an image restoration model to obtain a restored image with standard format and standard definition.
Introducing the edge features of the image to be repaired into the training process has a positive effect on the learning ability of the basic model, and this additional information plays an important auxiliary role in image restoration. The edge map to be repaired gives the network a clear constraint on the image edges, enabling the network to attend more accurately to edge features, which works well for recovering CT images.
Existing image-restoration models often fail to capture image content at different scales and levels of detail; in particular, they treat the contour and texture parts of image features generically and cannot focus on the edge features of the image. Aiming at this problem, the application provides the following technical scheme:
the basic model comprises the following structure: a network structure comprising at least 3 dimensions; and each scale includes a beginning and ending convolutional layer, and at least 10 incomplete blocks;
wherein the input of the basic model is { sample image A, image B to be repaired, image edge map E (B) to be repaired },
restored image F, f=b+g (B, E (B)); wherein G is a generator;
the loss function L in the scale is:
wherein alpha is e 、ɑ f 、ɑ g Respectively weight constraint items, L is preset d Representing a loss of multiscale structural similarity, L e Indicating loss of resistance, L f Representing edge-aware loss, L g Representing a perceived loss;
wherein Ls represents a repaired image of each scale, rs represents a sample image of each scale, S represents a plurality of scales, S.gtoreq.3, c s 、w s 、h s The channel number, width and height of each scale input image are respectively represented;
wherein D is a discriminator;
wherein +.A. indicates pixel multiplication;
wherein->(F) Feature map extracted representing the restored image F, < >>(A) A feature map extracted from the sample image a is shown.
The image-restoration method designs a multiscale processing scheme, so it can better capture and restore details of the image at different scales. Introducing an edge loss effectively recovers the edge information of the image, especially that of tumors and organs, and improves the readability of positioning CT pictures. Overall, the restoration method combines content loss, generative-adversarial loss and edge loss, so that restoration is considered from multiple angles and a better repair effect is obtained. The scheme also sets flexible weight constraints, so the weight of each loss can be adjusted to the actual requirements and the characteristics of the data set for a more flexible and targeted repair.
For most patients undergoing radiotherapy the tumor has spread only to a limited extent, yet the scan range of a positioning CT is generally rather large, so a large number of positioning images to be delineated are obtained. If all of these images were matched against the positioning CT pictures in the CT database, the amount of computation would be very large. Aiming at this problem, the application provides the following technical scheme:
in step 2, acquiring all tomographic images of the positioning CT of the patient, screening out tomographic images containing tumor, and taking the tomographic images containing tumor as the image to be sketched.
In the technical scheme provided by the application, the CT tomographic images without tumor are filtered away, which removes a large amount of superfluous information during image matching and avoids the technical problem of an excessive amount of computation.
In the actual course of treatment, two different patients will generally not grow the same tumor at the same location, so directly matching whole images cannot accurately screen out the pictures that have reference value. Aiming at this problem, the application provides the following technical scheme:
step 3 comprises the following steps:
step 31: extracting features of all positioning CT pictures M_1, M_2, …, M_i, …, M_m, with N features extracted from each picture, the dictionary D being the m × N matrix
D =
⎡ j_{1,1} … j_{1,m} ⎤
⎢    ⋮     ⋱    ⋮    ⎥
⎣ j_{N,1} … j_{N,m} ⎦
where j_{1,1} denotes the 1st feature of picture M_1, j_{1,m} the 1st feature of picture M_m, j_{N,1} the N-th feature of picture M_1, and j_{N,m} the N-th feature of picture M_m;
step 32: dividing the image to be delineated into a cut maps along the edge of the tumor, matching each cut map in turn with all positioning CT pictures, and ranking the positioning CT pictures by the number of cut maps they match to obtain a primary sequence, wherein a is preset, a > 1, and a is an integer;
step 33: selecting the first v positioning CT pictures of the primary sequence, wherein v is preset and is an integer greater than zero; retrieving the corresponding v case history records from the case history database according to the selected v positioning CT pictures; then calculating in turn the similarity score Q between each of the v case history records and the patient history data, sorting the radiotherapy cases by similarity score to obtain a final sequence, and sending the first 5 radiotherapy cases to the designated terminal; Q = T·U, where T is the similarity between the case history data of a radiotherapy case and the patient history data of the patient, and U is the number of cut maps matched by that radiotherapy case's positioning CT picture.
In the scheme provided by the application, after feature extraction is performed on all positioning CT pictures in the database, a corresponding dictionary is established; the image to be delineated is then divided into a cut maps along the edge of the tumor and matched with all positioning CT pictures. In this way the scheme effectively matches positioning CT pictures similar to the patient's positioning CT, and the similarity of the corresponding text data is then calculated, so that radiotherapy cases with reference value are finally output to the target-area delineation staff.
Step 32 includes the following steps:
step 321: extracting the feature vector of the cut map, and solving the sparse vector p of the cut map through a sparse coding algorithm using the extracted feature vector set and the dictionary D;
step 322: reconstructing the cut map from the sparse vector p and the dictionary D: z_rd = D·p, where z_rd is the reconstructed cut map, i.e. the picture reconstructed from the dictionary D and the sparse vector p;
step 323: computing the matching metric between the reconstructed cut map z_rd and the original cut map, and judging from it how well the cut map matches the pictures in the dictionary D.
As a second aspect of the present application, the present application provides an assistance system for locating CT tomographic images based on big data analysis, including a data collection module, a data processing module, and a display module; the data processing module is respectively connected with the display module and the data collecting module through signals;
the data collection module is used for collecting the radiotherapy cases of all hospital systems and the tomographic images of the positioning CT of a patient whose target area is to be delineated;
the display module is used for displaying the radiotherapy cases to the target-area delineation staff;
the data processing module sends the corresponding radiotherapy cases to the designated terminal according to the above auxiliary method for positioning CT tomographic images based on big data analysis.
Compared with the prior art, the invention has the following beneficial effects:
according to the technical scheme provided by the application, target zone sketching staff can consult how to sketch the target zone when sketching the target zone according to radiotherapy cases, and then feasible suggestions are provided for sketching the target zone.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, are included to provide a further understanding of the application; other features, objects and advantages of the application will become more apparent from them. The drawings of the illustrative embodiments of the present application and their descriptions serve to illustrate the application and are not to be construed as unduly limiting it.
In addition, the same or similar reference numerals denote the same or similar elements throughout the drawings. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
In the drawings:
fig. 1 is a flow chart of an auxiliary method of locating CT tomographic images based on big data analysis.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, example 1: an auxiliary method for positioning CT tomographic images based on big data analysis comprises the following steps:
step 1: and collecting radiotherapy cases in each medical system, extracting positioning CT pictures and case history data of tumor target areas in the radiotherapy cases, and establishing a CT image database and a case history database.
Collecting the radiotherapy cases in each medical system requires that the radiotherapy cases of patients be shared within each system. In the prior art, most hospitals that have long treated tumors and administered radiotherapy have implemented information management, so the medical systems already have a basis for information sharing; the specific sharing and information-interaction methods are not described here.
Step 1 comprises the following steps:
step 11: the radiotherapy cases in each medical system are collected, and each radiotherapy case is provided with a unique identification code.
Each hospital sets a patient number for each patient internally, but since several medical systems are involved, the collected cases must be given new identification codes so that they can be managed uniformly. Setting the identification code is in fact setting a unique label for each case.
Step 12: each radiotherapy case is divided into a positioning CT picture and case history data, and the positioning CT picture and case history data are marked with an identification code.
Radiotherapy cases can be broadly divided into case history data, positioning CT pictures, and other data. The case history data are mainly recorded by doctors and include patient identity information, the time of illness, the treatment scheme used during treatment and the corresponding symptoms; this information is mainly recorded as text and can be recognized and extracted (the specific recognition and extraction process is not described here). The positioning CT pictures are the CT pictures obtained for each patient when radiotherapy is performed, together with the corresponding delineated target areas; the other data are the patient's remaining examination data. Because the application mainly targets the comparison of positioning CT, the remaining examination data are not considered, and the case history data collected are limited to the basics of the radiotherapy case: age, sex, symptoms and the basic treatment regimen. Case history data that exist in picture form, i.e. records handwritten by a doctor in a medical record book and uploaded as photographs, are recognized and extracted with text-recognition techniques common in the prior art; the specific extraction method is not described here.
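By way of non-limiting illustration, such a text-recognition step could be realized with the open-source Tesseract engine through the pytesseract wrapper; the library choice, the grayscale preprocessing and the simplified-Chinese language pack are assumptions of this sketch, not choices made by the application:

```python
# Hedged sketch: extracting text from a photographed, handwritten medical
# record with Tesseract OCR. The tool and the "chi_sim" language pack are
# illustrative assumptions; the application only requires that some
# prior-art text-recognition technique be used.
from PIL import Image
import pytesseract

def extract_history_text(photo_path: str) -> str:
    """Return the raw text recognized on one photographed record page."""
    page = Image.open(photo_path).convert("L")  # grayscale often helps OCR
    return pytesseract.image_to_string(page, lang="chi_sim")
```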
Step 13: collecting all positioning CT pictures to establish a CT image database, and collecting all case history data to establish a case history database.
It should be noted that the previous technical solutions already set an identification code for each case, so the information collected into the CT image database and the corresponding text data are both marked with that identification code, which makes it possible to find the correspondence between the case history data and the positioning CT pictures across the two databases.
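A minimal sketch of this identification-code linkage is given below; the data structures and field names are hypothetical and only illustrate how a single unique code keys both databases:

```python
# Sketch of step 1: split each collected case into the CT image database and
# the case history database, both keyed by one unique identification code.
from dataclasses import dataclass
from typing import Dict, List, Tuple
import uuid

@dataclass
class RadiotherapyCase:          # hypothetical record layout
    history_text: str            # age, sex, symptoms, treatment regimen, ...
    ct_pictures: List[str]       # file paths of the positioning CT pictures

def build_databases(cases: List[RadiotherapyCase]) -> Tuple[Dict[str, List[str]], Dict[str, str]]:
    ct_db: Dict[str, List[str]] = {}
    history_db: Dict[str, str] = {}
    for case in cases:
        code = uuid.uuid4().hex  # the unique identification code of the case
        ct_db[code] = case.ct_pictures
        history_db[code] = case.history_text
    return ct_db, history_db     # a matched picture leads straight to its history
```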
Step 13 includes the following steps:
Step 131: collect all positioning CT pictures M_1, M_2, …, M_i, …, M_m, where M_i denotes the i-th positioning CT picture, i is an integer, 1 ≤ i ≤ m; m denotes the number of all collected radiotherapy cases, m is an integer, m ≥ 2.
Step 132: set a standard format and a standard definition; reduce the definition of a positioning CT picture which, after scaling to the standard format, has definition higher than the standard definition, and restore, i.e. increase the definition of, a positioning CT picture whose definition is below the standard definition.
In step 132, the method for reducing the definition of a high-definition picture is:
Step S1: convert the positioning CT picture to the standard format.
Step S2: map the pixel-grid network of the standard definition onto the positioning CT picture, and calculate the gray value of each pixel grid under the standard definition, wherein the pixel-grid network under the standard definition comprises the 1st pixel grid, the 2nd pixel grid, …, the k-th pixel grid, …, the k_0-th pixel grid; k_0 and k are positive integers, k_0 ≥ k ≥ 1, k_0 ≥ 2, and k_0 denotes the total number of pixel grids under the standard definition.
The gray value of the k-th pixel grid is y_k = X_k / n_k, where y_k denotes the gray value of the k-th pixel grid, X_k is the sum of the gray values of all pixel cells of the original positioning CT picture covered by the k-th pixel grid, and n_k is the number of those pixel cells.
Generally, a pixel picture can be reduced and enlarged, but enlarging or reducing the picture in fact enlarges or reduces each pixel cell correspondingly. In this scheme, the standard format refers to the side lengths of the picture, and the standard definition refers to the number of pixel cells.
Thus, after a high-definition picture is adjusted to the standard format, its pixel count is higher than that of a standard-definition picture, so each pixel grid of the standard definition maps onto several pixels of the high-definition picture, and the formula y_k = X_k / n_k above gives the gray value of each pixel grid under the standard definition.
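A minimal numpy sketch of this block-averaging rule follows; for simplicity it assumes the high-definition side lengths are integer multiples of the standard-definition grid, whereas a full implementation would also weight pixel cells that are only partially covered:

```python
import numpy as np

def block_average_downscale(img: np.ndarray, std_shape: tuple) -> np.ndarray:
    # y_k = X_k / n_k: each standard-definition pixel grid takes the mean gray
    # value of all original pixels it covers (integer-ratio case only).
    H, W = img.shape
    h, w = std_shape
    fh, fw = H // h, W // w
    assert H == h * fh and W == w * fw, "sketch assumes an integer ratio"
    return img.reshape(h, fh, w, fw).mean(axis=(1, 3))

# usage: reduce a 1024x1024 scan to a 512x512 standard-definition picture
hi_def = np.random.randint(0, 256, (1024, 1024)).astype(np.float64)
std_pic = block_average_downscale(hi_def, (512, 512))
```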
For a low definition picture, it is necessary to repair it. Specifically, in step 132, the method for increasing the sharpness includes:
step S01: preparing a training data set, wherein the training data set comprises an image to be repaired, a sample image and an image edge map to be repaired corresponding to the image to be repaired.
Step S02: a base model is built based on the generation of the antagonism network.
The basic model comprises the following structure: a network structure with at least 3 scales, each scale including a beginning and an ending convolutional layer and at least 10 residual blocks. The input of the basic model is {sample image A, image B to be repaired, edge map E(B) of the image to be repaired}; the repaired image is F = B + G(B, E(B)), where G is the generator. The loss function L within a scale is
L = L_d + α_e·L_e + α_f·L_f + α_g·L_g,
wherein α_e, α_f and α_g are preset weight-constraint terms; L_d denotes the multiscale structural-similarity loss, L_e the adversarial loss, L_f the edge-aware loss, and L_g the perceptual loss;
wherein L_s denotes the repaired image at each scale, R_s the sample image at each scale, and S the number of scales, S ≥ 3; c_s, w_s and h_s denote the channel number, width and height of the input image at each scale;
wherein D is the discriminator;
wherein ⊙ denotes pixel-wise multiplication;
wherein Φ(F) denotes the feature map of the repaired image F extracted with VGG16, and Φ(A) denotes the feature map of the sample image A extracted with VGG16.
Here, VGG16 refers to a deep convolutional neural network model proposed by the Visual Geometry Group (VGG) of the University of Oxford; VGG16 is the member of that model series with 16 weight layers. In this application VGG16 is used to extract feature maps: Φ(x) denotes inputting an image x into a pretrained VGG16 model and taking the resulting feature map. Feature maps contain the high-level semantic information of an image and give important guidance for edge restoration and texture generation in an image-restoration task. By computing the difference between the VGG16 feature maps of the repaired image and of the sample image, the model pays more attention to the structure and texture information of the image during training, yielding a more accurate and natural repair result.
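A hedged PyTorch sketch of how the four loss terms could be combined for one scale is given below; the weight values, the plain L1 stand-in for the multiscale structural term and the BCE form of the adversarial term are assumptions of this sketch, not forms fixed by the application:

```python
import torch
import torch.nn.functional as F

def scale_loss(repaired, sample, edge_mask, d_fake, feat_f, feat_a,
               a_e=0.1, a_f=1.0, a_g=0.25):
    # repaired, sample: (c, w, h) tensors for one scale; edge_mask: edge map;
    # d_fake: discriminator logits on the repaired image; feat_f, feat_a:
    # VGG16 feature maps of the repaired and the sample image.
    c, w, h = sample.shape
    L_d = (repaired - sample).abs().sum() / (c * w * h)   # structural term (L1 stand-in)
    L_e = F.binary_cross_entropy_with_logits(             # adversarial term
        d_fake, torch.ones_like(d_fake))
    L_f = ((repaired - sample) * edge_mask).abs().mean()  # edge-aware term (pixel-wise mult.)
    L_g = F.mse_loss(feat_f, feat_a)                      # perceptual term on VGG16 features
    return L_d + a_e * L_e + a_f * L_f + a_g * L_g        # L = L_d + a_e*L_e + a_f*L_f + a_g*L_g
```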
Step S03: and training and verifying the basic model by using the training data set to obtain the image restoration model.
Step S04: the low-definition CT image is scaled to a standard format and then input into an image restoration model to obtain a restored image with standard format and standard definition.
Step 2: acquire the patient history data of the patient and the positioning CT tomographic images to obtain a plurality of images to be delineated.
In step 2, all tomographic images of the patient's positioning CT are acquired, the CT tomographic images containing tumor are selected, and these tumor-containing tomographic images are taken as the images to be delineated.
Step 3: match the images to be delineated with the positioning CT pictures in the CT image database, and at the same time match the patient's medical history data with the case history data in the case history database; according to the matching results, acquire from the collected radiotherapy cases a plurality of radiotherapy cases similar to the patient, and then send them to the radiotherapy target-area delineation staff.
Step 31: extract features of all positioning CT pictures M_1, M_2, …, M_i, …, M_m, with N features extracted from each picture, the dictionary D being the m × N matrix
D =
⎡ j_{1,1} … j_{1,m} ⎤
⎢    ⋮     ⋱    ⋮    ⎥
⎣ j_{N,1} … j_{N,m} ⎦
where j_{1,1} denotes the 1st feature of picture M_1, j_{1,m} the 1st feature of picture M_m, j_{N,1} the N-th feature of picture M_1, and j_{N,m} the N-th feature of picture M_m.
Step 32: divide the image to be delineated into a cut maps along the edge of the tumor, match each cut map in turn with all positioning CT pictures, and rank the positioning CT pictures by the number of cut maps they match to obtain a primary sequence, wherein a is preset, a > 1, and a is an integer.
When dividing into cut maps, the edge of the tumor is generally divided into a segments by side length, and each segment is framed by a rectangular box; that rectangle is the cut map. As for recognizing the edge of the tumor, in fact no extra recognition is needed: before the positioning CT, the patient's tumor has already been diagnosed on ordinary CT, during which the diagnostician or attending doctor delineates the tumor target area; during the positioning CT that target area is located again and the radiation area is designed, so the boundary position of the tumor is already known and the tumor can be distinguished. Besides using the target area delineated by the attending doctor, the target-area delineation staff can also delineate directly along the boundary of the tumor; this is a simple process of determining the tumor extent and is very convenient.
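A small numpy sketch of that cutting rule follows: the delineated boundary is taken as an ordered list of points, split into a arc segments, and each segment's bounding rectangle is cropped as one cut map. Splitting by point count rather than by geometric arc length is an illustrative simplification:

```python
import numpy as np

def cut_maps_along_edge(image: np.ndarray, boundary: np.ndarray, a: int):
    # boundary: (P, 2) integer array of (row, col) points along the tumor edge.
    segments = np.array_split(boundary, a)     # a edge segments
    crops = []
    for seg in segments:
        r0, c0 = seg.min(axis=0)               # bounding rectangle of the segment
        r1, c1 = seg.max(axis=0)
        crops.append(image[r0:r1 + 1, c0:c1 + 1])
    return crops                               # each crop is one "cut map"
```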
In step 32, the matching method between the cut map and all the positioning CT pictures is as follows:
step 321: and extracting the characteristics of the cutting graph, and solving the sparse vector p of the cutting graph through a sparse coding algorithm by using the extracted characteristic vector set and the dictionary D.
For the sparse vector, the following condition is satisfied:
min_p ‖x − D·p‖₂² + ε·‖p‖₁,
where x is the feature vector of the cut map, ε is the regularization parameter controlling sparsity, and min_p denotes minimizing over the sparse vector p.
Step 322: reconstruct the cut map from the sparse vector p and the dictionary D: z_rd = D·p, where z_rd is the reconstructed cut map, i.e. the picture reconstructed from the dictionary D and the sparse vector p.
Step 323: compute the matching metric between the reconstructed cut map z_rd and the original cut map, and judge from it how well the cut map matches the pictures in the dictionary D.
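A runnable sketch of steps 321 to 323 is given below, with scikit-learn's Lasso standing in as the sparse coder; the toy feature data and the relative reconstruction error used as the matching metric are assumptions of this sketch:

```python
import numpy as np
from sklearn.linear_model import Lasso

def match_cut_map(x: np.ndarray, D: np.ndarray, eps: float = 0.01):
    # x: feature vector of one cut map; D: (n_features, m) dictionary whose
    # columns hold the feature vectors of the m positioning CT pictures.
    coder = Lasso(alpha=eps, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)                  # step 321: L1-regularised sparse coding
    p = coder.coef_                  # sparse vector p
    z_rd = D @ p                     # step 322: reconstruct the cut map features
    err = np.linalg.norm(z_rd - x) / (np.linalg.norm(x) + 1e-12)
    return p, err                    # step 323: relative error as matching metric

# toy usage: 64-dimensional features for m = 200 dictionary pictures
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 200))
x = D[:, 17] + 0.05 * rng.normal(size=64)   # a cut map close to picture 17
p, err = match_cut_map(x, D)
```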
Step 33: select the first v positioning CT pictures of the primary sequence, wherein v is preset and is an integer greater than zero; retrieve the corresponding v case history records from the case history database according to the selected v positioning CT pictures; then calculate in turn the similarity score Q between each of the v case history records and the patient history data, sort the radiotherapy cases by similarity score to obtain a final sequence, and send the first 5 radiotherapy cases to the designated terminal; Q = T·U, where T is the similarity between the case history data of a radiotherapy case and the patient history data of the patient, and U is the number of cut maps matched by that radiotherapy case's positioning CT picture.
Specifically, the method for calculating the similarity between case history data and patient history data is prior art and is not described here; it is a very mature natural-language technique. Generally, the texts are first encoded with a BERT model and the similarity is then calculated; see, for example, Reimers N. and Gurevych I., "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks". After calculation, the similarity is converted into a percentage according to the similarity measure used. In this way, the similarity between the patient history data and the case history data and the matching degree between the tomographic images and the positioning CT pictures jointly guide the output of the recommended radiotherapy cases.
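A hedged sketch of the Q = T·U ranking with the Sentence-BERT library cited above is shown below; the model name is an illustrative choice, and U is assumed to be supplied by the preceding image-matching step:

```python
from sentence_transformers import SentenceTransformer, util

def rank_cases(patient_history: str, candidates, top_k: int = 5):
    # candidates: iterable of (case_id, case_history_text, U), where U is the
    # number of cut maps matched by that case's positioning CT picture.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    q_emb = model.encode(patient_history, convert_to_tensor=True)
    scored = []
    for case_id, text, U in candidates:
        T = float(util.cos_sim(q_emb, model.encode(text, convert_to_tensor=True)))
        scored.append((T * U, case_id))        # Q = T * U
    scored.sort(reverse=True)
    return scored[:top_k]                      # the first 5 cases go to the terminal
```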
Example 2: an auxiliary system for positioning CT tomographic images based on big data analysis comprises a data collection module, a data processing module and a display module; the data processing module is respectively connected with the display module and the data collecting module through signals;
the data collection module is used for collecting the radiotherapy cases of all hospital systems and the tomographic images of the positioning CT of a patient whose target area is to be delineated;
the display module is used for displaying the radiotherapy cases to the target-area delineation staff;
the data processing module sends the corresponding radiotherapy cases to the designated terminal according to the auxiliary method for positioning CT tomographic images based on big data analysis described above.
The foregoing description covers only the preferred embodiments of the present disclosure and explains the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention involved in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example solutions in which the above features are replaced by (but not limited to) features with similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. An auxiliary method for positioning CT tomographic images based on big data analysis is characterized in that: the method comprises the following steps:
step 1: collecting radiotherapy cases in each medical system, extracting positioning CT pictures and case history data of tumor target areas in the radiotherapy cases, and establishing a CT image database and a case history database;
step 2: acquiring the patient history data of a patient and the tomographic images of the positioning CT, and processing the tomographic images of the positioning CT to obtain a plurality of images to be delineated;
step 3: matching the images to be delineated with the positioning CT pictures in the CT image database, and at the same time matching the patient's medical history data with the case history data in the case history database; acquiring, according to the matching results, a plurality of radiotherapy cases similar to the patient from the collected radiotherapy cases, and then sending those radiotherapy cases to a designated terminal.
2. The assistance method for locating a CT tomographic image based on big data analysis according to claim 1, wherein: the step 1 specifically comprises the following steps:
step 11: collecting radiotherapy cases in each medical system, and setting a unique identification code for each radiotherapy case;
step 12: dividing each radiotherapy case into a positioning CT picture and case history data, and marking the positioning CT picture and the case history data by using an identification code;
step 13: collecting all positioning CT pictures to establish a CT image database, and collecting all case history data to establish a case history database.
3. The assistance method for locating a CT tomographic image based on big data analysis according to claim 2, wherein: step 13 comprises the following steps:
step 131: collecting all positioning CT pictures M_1, M_2, …, M_i, …, M_m, where M_i denotes the i-th positioning CT picture, i is an integer, 1 ≤ i ≤ m; m denotes the number of all collected radiotherapy cases, m is an integer, m ≥ 2;
step 132: setting a standard format and standard definition, and reducing definition of a positioning CT picture which is scaled to the standard format and has definition higher than the standard definition until the standard definition is reached;
the sharpness is added to a positioned CT picture scaled to a standard format and having a sharpness below the standard sharpness until the standard sharpness is reached.
4. The assistance method for locating a CT tomographic image based on big data analysis according to claim 3, wherein:
in step 132, the method for reducing sharpness is:
step S1: scaling the positioning CT picture to a standard format;
step S2: mapping the pixel-grid network of the standard definition onto the positioning CT picture, and calculating the gray value of each pixel grid under the standard definition, wherein the pixel-grid network under the standard definition comprises the 1st pixel grid, the 2nd pixel grid, …, the k-th pixel grid, …, the k_0-th pixel grid; k_0 and k are positive integers, k_0 ≥ k ≥ 1, k_0 ≥ 2, and k_0 denotes the total number of pixel grids under the standard definition;
the gray value of the k-th pixel grid is y_k = X_k / n_k, where y_k denotes the gray value of the k-th pixel grid, X_k is the sum of the gray values of all pixel cells of the original positioning CT picture covered by the k-th pixel grid, and n_k is the number of those pixel cells.
5. The assistance method for locating a CT tomographic image based on big data analysis according to claim 4, wherein: in step 132, the method for increasing the sharpness is:
step S01: preparing a training data set, wherein the training data set comprises an image to be repaired, a sample image and an image edge map to be repaired corresponding to the image to be repaired;
step S02: establishing a basic model based on the generated countermeasure network;
step S03: training and verifying the basic model by using a training data set to obtain an image restoration model;
step S04: and scaling the CT image with low definition to a standard format, and inputting the CT image into an image restoration model to obtain a restored image with the standard format and standard definition.
6. The assistance method for locating CT tomographic images based on big data analysis according to claim 5, wherein: the basic model comprises a network structure with at least 3 scales; and each scale includes a beginning and an ending convolutional layer and at least 10 residual blocks;
the input of the basic model is {sample image A, image B to be repaired, edge map E(B) of the image to be repaired};
the repaired image is F = B + G(B, E(B)); wherein G is the generator;
the loss function L within a scale is
L = L_d + α_e·L_e + α_f·L_f + α_g·L_g,
wherein α_e, α_f and α_g are preset weight-constraint terms; L_d denotes the multiscale structural-similarity loss, L_e the adversarial loss, L_f the edge-aware loss, and L_g the perceptual loss;
wherein L_s denotes the repaired image at each scale, R_s the sample image at each scale, and S the number of scales, S ≥ 3; c_s, w_s and h_s denote the channel number, width and height of the input image at each scale;
wherein D is the discriminator;
wherein ⊙ denotes pixel-wise multiplication;
wherein Φ(F) denotes the feature map extracted from the repaired image F, and Φ(A) denotes the feature map extracted from the sample image A.
7. The assistance method for locating CT tomographic images based on big data analysis according to claim 6, wherein: in step 2, all tomographic images of the patient's positioning CT are acquired, the tomographic images containing tumor are selected, and these tumor-containing tomographic images are taken as the images to be delineated.
8. The assistance method for locating a CT tomographic image based on big data analysis according to claim 7, wherein: step 3 comprises the following steps:
step 31: extracting features of all positioning CT pictures M_1, M_2, …, M_i, …, M_m, with N features extracted from each picture, the dictionary D being the m × N matrix
D =
⎡ j_{1,1} … j_{1,m} ⎤
⎢    ⋮     ⋱    ⋮    ⎥
⎣ j_{N,1} … j_{N,m} ⎦
where j_{1,1} denotes the 1st feature of picture M_1, j_{1,m} the 1st feature of picture M_m, j_{N,1} the N-th feature of picture M_1, and j_{N,m} the N-th feature of picture M_m;
step 32: dividing the image to be delineated into a cut maps along the edge of the tumor, matching each cut map in turn with all positioning CT pictures, and ranking the positioning CT pictures by the number of cut maps they match to obtain a primary sequence, wherein a is preset, a > 1, and a is an integer;
step 33: selecting the first v positioning CT pictures of the primary sequence, wherein v is preset and is an integer greater than zero; retrieving the corresponding v case history records from the case history database according to the selected v positioning CT pictures; then calculating in turn the similarity score Q between each of the v case history records and the patient history data, sorting the radiotherapy cases by similarity score to obtain a final sequence, and sending the first 5 radiotherapy cases to the designated terminal;
Q = T·U, where T is the similarity between the case history data of a radiotherapy case and the patient history data of the patient, and U is the number of cut maps matched by that radiotherapy case's positioning CT picture.
9. The assistance method for locating CT tomographic images based on big data analysis according to claim 8, wherein: in step 32, the method for matching the cut maps with all positioning CT pictures is:
step 321: extracting the feature vector of the cut map, and solving the sparse vector p of the cut map through a sparse coding algorithm using the extracted feature vector set and the dictionary D;
step 322: reconstructing the cut map from the sparse vector p and the dictionary D: z_rd = D·p, where z_rd is the reconstructed cut map, i.e. the picture reconstructed from the dictionary D and the sparse vector p;
step 323: computing the matching metric between the reconstructed cut map z_rd and the original cut map, and judging from it how well the cut map matches the pictures in the dictionary D.
10. An auxiliary system for positioning CT tomographic images based on big data analysis is characterized in that: the system comprises a data collection module, a data processing module and a display module; the data processing module is respectively connected with the display module and the data collecting module through signals;
the data collection module is used for collecting the radiotherapy cases of all hospital systems and the tomographic images of the positioning CT of a patient whose target area is to be delineated;
the display module is used for displaying the radiotherapy cases to the target-area delineation staff;
the data processing module transmits a corresponding radiotherapy case to the specified terminal based on the auxiliary method for locating CT tomographic images based on big data analysis according to any one of claims 1 to 9.
CN202311821117.1A 2023-12-27 2023-12-27 Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis Active CN117476219B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311821117.1A CN117476219B (en) 2023-12-27 2023-12-27 Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311821117.1A CN117476219B (en) 2023-12-27 2023-12-27 Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis

Publications (2)

Publication Number Publication Date
CN117476219A (en) 2024-01-30
CN117476219B (en) 2024-03-12

Family

ID=89633370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311821117.1A Active CN117476219B (en) 2023-12-27 2023-12-27 Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis

Country Status (1)

Country Link
CN (1) CN117476219B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method
WO2022166800A1 (en) * 2021-02-02 2022-08-11 广州柏视医疗科技有限公司 Deep learning network-based automatic delineation method for mediastinal lymphatic drainage region
CN113288193A (en) * 2021-07-08 2021-08-24 广州柏视医疗科技有限公司 Automatic delineation method of CT image breast cancer clinical target area based on deep learning
CN114677378A (en) * 2022-05-31 2022-06-28 四川省医学科学院·四川省人民医院 Computer-aided diagnosis and treatment system based on ovarian tumor benign and malignant prediction model
CN116168097A (en) * 2022-11-02 2023-05-26 中国医学科学院肿瘤医院 Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image
CN115661107A (en) * 2022-11-07 2023-01-31 中国医学科学院北京协和医院 Image analysis method, system and equipment based on bladder cancer risk stratification
CN117152442A (en) * 2023-10-27 2023-12-01 吉林大学 Automatic image target area sketching method and device, electronic equipment and readable storage medium
CN117282047A (en) * 2023-11-24 2023-12-26 四川省肿瘤医院 Intelligent auxiliary system for tumor target area radiotherapy

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
余行; 蒋家良; 何奕松; 姜晓璇; 傅玉川: "Feasibility study of using convolutional neural networks to locate CT images", Chinese Journal of Medical Instrumentation, no. 06, 30 November 2019 (2019-11-30), pages 68-72 *
吴茜; 余永建; 汪志: "Three-dimensional segmentation algorithm for CT images under few-sample conditions and its radiotherapy application", Journal of Lanzhou University of Arts and Science (Natural Science Edition), no. 1, 28 February 2021 (2021-02-28), pages 65-70 *
应微; 李晓阳; 刘力豪; 李林涛: "Application of intra-fraction cone-beam CT scanning combined with the Fraxion radiotherapy *** fixation *** in stereotactic radiotherapy of intracranial tumors", Journal of Oncology, no. 5, 31 May 2020 (2020-05-31), pages 424-427 *
蒋聪; 梁黎; 高绪峰; 何友安; 刘秦嵩; 王先良: "Application effect of helical tomotherapy in the treatment of lower-thoracic esophageal cancer and its influence on patients' cardiopulmonary function", Chinese Journal of Medical Physics, no. 3, 31 March 2023 (2023-03-31), pages 297-302 *

Also Published As

Publication number Publication date
CN117476219B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
Zhuang et al. An Effective WSSENet-Based Similarity Retrieval Method of Large Lung CT Image Databases.
Tilve et al. Pneumonia detection using deep learning approaches
US10691980B1 (en) Multi-task learning for chest X-ray abnormality classification
CN107563383A (en) A kind of medical image auxiliary diagnosis and semi-supervised sample generation system
CN111553892B (en) Lung nodule segmentation calculation method, device and system based on deep learning
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
EP3654343A1 (en) Application of deep learning for medical imaging evaluation
CN111008974A (en) Multi-model fusion femoral neck fracture region positioning and segmentation method and system
JP7346553B2 (en) Determining the growth rate of objects in a 3D dataset using deep learning
CN111462049A (en) Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
CN113743463B (en) Tumor benign and malignant recognition method and system based on image data and deep learning
CN112950552B (en) Rib segmentation marking method and system based on convolutional neural network
CN116563533A (en) Medical image segmentation method and system based on target position priori information
CN116703837B (en) MRI image-based rotator cuff injury intelligent identification method and device
CN117476219B (en) Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis
CN116580198A (en) Medical image instance segmentation method based on trans-scale interactive fusion transducer model
Najeeb et al. Brain Tumor Segmentation Utilizing Generative Adversarial, Resnet And Unet Deep Learning
CN115471512A (en) Medical image segmentation method based on self-supervision contrast learning
CN114612381A (en) Medical image focus detection algorithm with scale enhancement and attention fusion
CN113592029A (en) Automatic medical image labeling method and system under small sample condition
CN112967295A (en) Image processing method and system based on residual error network and attention mechanism
Zhang et al. Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach
Xue et al. Pathology-based vertebral image retrieval
CN112634221B (en) Cornea hierarchy identification and lesion positioning method and system based on images and depth
CN116759052B (en) Image storage management system and method based on big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant