CN113689926A - Case identification processing method, case identification processing system, electronic device and storage medium - Google Patents
- Publication number
- CN113689926A (application CN202111166351.6A)
- Authority
- CN
- China
- Prior art keywords
- pathological image
- focus area
- image
- identification processing
- pathological
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
Abstract
The embodiments of the present application provide a case identification processing method, a case identification processing system, an electronic device, and a storage medium, relating to the technical field of image analysis. The method comprises the following steps: acquiring permission to access a case identification processing system; after the system permission is obtained, acquiring a pathological image to be identified, wherein the pathological image to be identified contains a lesion area; identifying the lesion area in the pathological image to be identified through a trained neural network model; and displaying the identification result. The method and the system address the heavy workload and manual errors that currently arise from hand-labeling lesion areas in pathological images, improving the efficiency and accuracy of labeling and identifying lesion areas in pathological images.
Description
Technical Field
Embodiments of the present disclosure relate to the field of image analysis technologies, and in particular, to a case identification processing method, a case identification processing system, an electronic device, and a storage medium.
Background
Medical image processing serves human health, and research on it has important practical significance. Hospitals currently generate large numbers of pathological images every day. In related pathological image identification methods, experienced doctors generally must label each pathological image manually, one by one, to ensure that pathological images are identified and judged accurately.
In view of the above related technologies, the inventors consider that the current approach of identifying lesion regions in pathological images by manual labeling results in a huge workload and is prone to human error.
Disclosure of Invention
Embodiments of the present application provide a case identification processing method, system, electronic device, and storage medium, which address the heavy workload and human error currently caused by manually labeling lesion regions in pathological images.
In a first aspect of the present application, there is provided a case identification processing method including:
acquiring the authority of accessing a case identification processing system;
after the system authority is obtained, obtaining a pathological image to be identified, wherein the pathological image to be identified comprises a focus area;
according to the pathological image to be recognized, recognizing a focus area in the pathological image to be recognized through a trained neural network model;
and displaying the recognition result.
By adopting the above technical solution, after permission to access the case identification processing system is obtained, a pathological image to be identified that contains a lesion area is acquired; the trained neural network model then identifies the lesion area in the image, and the identification result is displayed. This addresses the heavy workload and manual errors that currently arise from hand-labeling lesion areas in pathological images, and improves the efficiency and accuracy of labeling and identifying lesion areas in pathological images.
In one possible implementation, the neural network model is obtained by training positive samples and negative samples;
the positive samples are pixel points on the pathological image whose offset from the center point of the lesion area is within a preset distance, and the negative samples are pixel points on the pathological image whose offset from the center point of the lesion area is outside the preset distance;
the method for determining the central point of the focal region comprises the following steps:
determining a boundary of the lesion area;
calculating the linear distance between any two pixel points on the boundary of the focus area;
and drawing a circle by taking the linear distance between the two pixel points with the maximum linear distance value as the diameter, and taking the center of the circle as the central point of the focus area.
In one possible implementation, the offset includes a lateral offset and a longitudinal offset;
the lateral offset is calculated using the following equation:
X = x0 - xij

the longitudinal offset is calculated using the following equation:

Y = y0 - yij

where X is the lateral offset, Y is the longitudinal offset, (x0, y0) are the coordinates of the center point of the lesion area, and (xij, yij) are the coordinates of the pixel point in row i and column j of the image, with i and j positive integers.
In one possible implementation, the determining the boundary of the lesion area includes:
screening the focus area in the pathological image through semantic segmentation according to the pathological image;
after determining the focal region, determining a boundary of the focal region.
In one possible implementation, the screening the lesion region in a pathology image by semantic segmentation according to the pathology image includes:
extracting the characteristics of the pathological image to obtain characteristic data of the pathological image;
obtaining the focus region segmentation frame information based on the characteristic data;
and obtaining the focus region semantic segmentation information of the pathological image based on the feature data and the focus region segmentation frame information.
In a second aspect of the present application, there is provided a case identification processing system comprising a login module, an acquisition module, an analysis module, and a display module;
the login module is used for acquiring the authority of accessing the case identification processing system;
the acquisition module is used for acquiring a pathological image to be identified after the system permission is obtained, wherein the pathological image to be identified contains a lesion area;
the analysis module is used for identifying a focus area in the pathological image to be identified through the trained neural network model according to the pathological image to be identified;
and the display module is used for displaying the identification result.
In one possible implementation, the analysis module includes:
a first determination unit for determining a boundary of the lesion region;
the first calculation unit is used for calculating the linear distance between any two pixel points on the boundary of the focus area;
and the second determining unit is used for drawing a circle by taking the linear distance between the two pixel points with the maximum linear distance value as the diameter, and taking the center of the circle as the center point of the focus area.
In one possible implementation, the analysis module further includes:
and the second calculating unit is used for calculating the offset of the pixel point on the pathological image from the central point of the focus area.
In a third aspect of the present application, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the computer program.
In a fourth aspect of the application, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method.
It should be understood that this summary is not intended to identify key or critical features of the embodiments of the application, nor to limit the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present application will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
fig. 1 shows a flowchart of a case identification processing method in an embodiment of the present application;
FIG. 2 is a block diagram showing a case identification processing system in the embodiment of the present application;
fig. 3 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The case identification processing method provided by the embodiment of the application can be applied to the technical field of image analysis, such as scenes of pathological identification processing and the like. In some embodiments, the case identification processing method may be performed by an electronic device.
Fig. 1 shows a flowchart of a case identification processing method in the embodiment of the present application. Referring to fig. 1, the case identification processing method in the present embodiment includes:
step 101: and acquiring the authority of accessing the case identification processing system.
Step 102: and after the system authority is acquired, acquiring a pathological image to be identified, wherein the pathological image to be identified comprises a focus area.
Step 103: and according to the pathological image to be recognized, recognizing a focus area in the pathological image to be recognized through the trained neural network model.
Step 104: and displaying the recognition result.
By adopting the above technical solution, after permission to access the case identification processing system is obtained, a pathological image to be identified that contains a lesion area is acquired; the trained neural network model then identifies the lesion area in the image, and the identification result is displayed. This addresses the heavy workload and manual errors that currently arise from hand-labeling lesion areas in pathological images, and improves the efficiency and accuracy of labeling and identifying lesion areas in pathological images.
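The four steps above can be sketched as a minimal pipeline. All names, the role-based permission check, and the dummy model output below are illustrative assumptions, not the patent's actual implementation:

```python
# Illustrative sketch of steps 101-104; every name here is hypothetical.

def has_access(user_role: str) -> bool:
    # Step 101: acquire/check permission before any image is loaded.
    return user_role in {"pathologist", "admin"}

def load_pathology_image(path: str) -> dict:
    # Step 102: a real system would read a digitized slide here
    # (e.g. via an OpenSlide-style reader); a placeholder suffices.
    return {"source": path}

def identify_lesion(image: dict) -> dict:
    # Step 103: the trained neural network model would run here; we
    # return a fixed dummy region to keep the sketch self-contained.
    return {"lesion_box": (10, 10, 50, 50)}

def run_case_identification(user_role: str, image_path: str) -> dict:
    if not has_access(user_role):
        raise PermissionError("no permission to access the case identification system")
    image = load_pathology_image(image_path)
    result = identify_lesion(image)
    # Step 104: display (here, simply return) the identification result.
    return result
```

In a real deployment the permission check would be backed by account-level rules, and step 104 would render the result in a user interface rather than return it.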
In the present embodiment, in step 101: pathology concerns the causes of disease, its pathogenesis, and the structural, functional, and metabolic changes that occur in cells, tissues, and organs during disease, together with their regularities. Pathological diagnosis is therefore significant in clinical diagnosis and treatment, and because of that importance it is essential to set access permissions on the case identification processing system.
In the embodiment of the application, setting access permissions on the case identification processing system serves three main purposes: first, restricting applications other than the case identification processing system, thereby protecting the system; second, restricting logins from unauthorized accounts; and third, applying tiered restrictions to accounts according to account level.
In the embodiment of the present application, in step 102: a pathological section is a type of pathological specimen examined under a microscope to observe pathological changes and make a pathological diagnosis. A pathological image is a very large image obtained by digitizing such a section.
In the embodiment of the application, the pathological image to be identified can be acquired from a case. After the user acquires the corresponding system authority, the acquired pathological image to be identified is stored in the case identification processing system according to the authority requirement, so that the pathological image to be identified can be identified conveniently.
In the embodiment of the application, in the pathological images to be identified, the pixel characteristics presented by the focus area in different types of pathological images are different.
In this embodiment of the application, in step 103: the trained neural network model can identify the lesion area in the pathological image to be identified. It should be noted that, within pathological images of the same type, the pixels of a lesion area are distinguishable from pixels outside it and have identifiable characteristics, but the way lesion areas present can suffer from occluded contours and unclear extents.
In the embodiment of the present application, a target detection neural network is used to identify the lesion area in the case image to be identified; optionally, the target detection neural network may be a convolutional neural network such as U-Net, a generic CNN, a residual network, or any other neural network capable of performing target detection. Specifically, feature extraction (for example, at least one layer of convolution) may be performed on the input pathological image to obtain image features, and classification, segmentation, and detection may then be performed on these features to obtain the location of the lesion.
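As a toy illustration of the feature-extraction step (a single convolution layer), a naive pure-Python "valid" convolution over a nested-list image might look like the following; this is an assumption-laden sketch, and a real model would use a deep-learning framework with many such layers:

```python
def conv2d_valid(image, kernel):
    # Naive 'valid' 2-D cross-correlation (what deep-learning libraries
    # usually call convolution) over nested lists; illustration only.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 2x2 all-ones kernel acts as a local brightness sum -- a crude
# feature map that a downstream classifier head could consume.
feature_map = conv2d_valid([[1, 1, 1], [1, 1, 1], [1, 1, 1]],
                           [[1, 1], [1, 1]])
```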
In some embodiments, the trained neural network model is obtained by training on positive and negative samples. The positive samples are pixel points on the pathological image whose offset from the center point of the lesion area is within a preset distance, and the negative samples are pixel points whose offset from the center point of the lesion area is outside the preset distance. The center point of the lesion area can then be determined as follows:
step a1, determine the boundary of the lesion area.
And A2, calculating the linear distance between any two pixel points on the boundary of the focus area.
And A3, drawing a circle by taking the straight line distance between two pixel points with the largest straight line distance value as the diameter, and taking the center of the circle as the center point of the focus area.
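Steps A1-A3 can be sketched directly. The brute-force search over all boundary-pixel pairs and the function names are illustrative assumptions; the idea is that the boundary pair at maximum straight-line distance defines the diameter, and the circle's center is the midpoint of that segment:

```python
import math
from itertools import combinations

def lesion_center(boundary_points):
    # Step A2: compute straight-line distances between all pairs of
    # boundary pixels and keep the pair with the largest distance.
    p, q = max(combinations(boundary_points, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))
    # Step A3: draw a circle with that segment as diameter; its center
    # (the segment midpoint) is taken as the lesion-area center point.
    center = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    diameter = math.dist(p, q)
    return center, diameter
```

For a boundary such as [(0, 0), (4, 0), (2, 2)], the farthest pair is (0, 0) and (4, 0), giving center (2.0, 0.0) and diameter 4.0.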
In the embodiment of the application, adjacent pixels in a historical pathological image that have the same value, or whose values fall within a preset range of relative mean deviation, are treated as sharing the same feature; a region is determined from these same-feature pixels, and the boundary of that region is taken as the lesion boundary.
In the embodiment of the application, a pixel value is the value assigned by the computer when the historical pathological image is digitized, and it represents the average brightness of a small square of the image. Pixels whose values fall within a preset relative mean deviation range of 0-5% are defined as same-feature pixels. Groups of adjacent same-feature pixels form the boundary of the lesion area.
In some embodiments, the offset comprises a lateral offset and a longitudinal offset. Then, the following method can be adopted for calculating the offset amount:
in the step B1, the step B,
the lateral offset is calculated using the following equation:
X=x0-xij
the longitudinal offset is calculated using the following equation:
Y=y0-yij
wherein X is a transverse offset, Y is a longitudinal offset, (X)0,y0) Is the coordinate of the central point of the lesion area, (x)ij,yij) The coordinates of the pixel points are shown, i is the row of the image, j is the column of the image, and i and j are positive real numbers.
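The two offset formulas translate directly into code; the function name below is a hypothetical choice for illustration:

```python
def pixel_offset(center, pixel):
    # X = x0 - xij (lateral offset), Y = y0 - yij (longitudinal offset),
    # where center = (x0, y0) and pixel = (xij, yij) at row i, column j.
    x0, y0 = center
    xij, yij = pixel
    return (x0 - xij, y0 - yij)
```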
In the embodiment of the application, the positive samples include all pixel points on the historical pathological images whose offset from the center point of the lesion area is within the preset distance, and the negative samples include all pixel points whose offset from the center point of the lesion area is outside the preset distance.
For example, when a patient's CT value is below -950 HU, the pathological image may show a nodular high-density shadow beneath the pleura of the right lung, closely attached to the horizontal fissure pleura, about 2-3 mm in size and clearly bounded, with no obvious parenchymal lesions elsewhere in either lung; the preset distance for the lesion-area center-point offset of this pathological image is then 2-3 mm.
In some embodiments, the boundary of the lesion region is determined by semantic segmentation, using the following method:
and step C1, according to the pathological image, screening the focus area in the pathological image through semantic segmentation.
Step C2, after determining the lesion area, determining the boundary of the lesion area.
In the embodiment of the present application, optionally, each pixel in the historical pathological image is labeled with its corresponding category using image semantic segmentation, and pixels belonging to the same category are grouped together. Before deep learning, classifiers for image semantic segmentation were optionally built with methods such as texton forests and random forests.
Image semantic segmentation classifies pixel points in the historical pathological image whose offset from the center point of the lesion area is within the preset distance as positive samples, and pixel points whose offset from the center point of the lesion area is outside the preset distance as negative samples.
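The positive/negative labeling rule described above can be sketched as follows. Interpreting "offset within a preset distance" as the Euclidean magnitude of the (X, Y) offset is an assumption, as are the names:

```python
def label_samples(center, pixels, preset_distance):
    # Pixels whose offset magnitude from the lesion-area center point is
    # within the preset distance become positive samples; the rest
    # become negative samples.
    positives, negatives = [], []
    for (x, y) in pixels:
        dx, dy = center[0] - x, center[1] - y
        if (dx * dx + dy * dy) ** 0.5 <= preset_distance:
            positives.append((x, y))
        else:
            negatives.append((x, y))
    return positives, negatives
```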
In some embodiments, the following method may be adopted for screening the lesion region in the pathology image by semantic segmentation:
and D1, extracting the characteristics of the pathological image to obtain the characteristic data of the pathological image.
And D2, obtaining the focus area segmentation frame information based on the characteristic data.
And D3, obtaining the focus area semantic segmentation information of the pathological image based on the feature data and the focus area segmentation frame information.
In the embodiment of the application, a semantic segmentation model is adopted to screen pathological images. The semantic segmentation model comprises an encoding unit, a segmentation frame decoding unit and a semantic decoding unit. And performing feature extraction on the pathological image through an encoding unit to obtain feature data of the pathological image. And obtaining focus region segmentation frame information based on the characteristic data through a segmentation frame decoding unit. And obtaining the focus region semantic segmentation information of the pathological image based on the feature data and the focus region segmentation frame information through a semantic decoding unit.
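Structurally, the three-unit semantic segmentation model can be sketched as below. The stub bodies are placeholders, since the patent does not specify the network internals; only the data flow (encoding unit, then segmentation-frame decoding unit, then semantic decoding unit) follows the description:

```python
class SemanticSegmentationModel:
    # Encoding unit: extract feature data from the pathological image.
    def encode(self, image):
        return {"features": image}

    # Segmentation-frame decoding unit: derive lesion-area segmentation
    # frame (box) information from the feature data.
    def decode_boxes(self, encoded):
        return [{"box": (0, 0, 1, 1)}]

    # Semantic decoding unit: combine feature data and box information
    # into per-pixel semantic segmentation information.
    def decode_semantics(self, encoded, boxes):
        return {"masks": [b["box"] for b in boxes]}

    def segment(self, image):
        encoded = self.encode(image)
        boxes = self.decode_boxes(encoded)
        return self.decode_semantics(encoded, boxes)
```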
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the following is a further description of the scheme described in the present application by way of system embodiments.
Fig. 2 is a block diagram showing a case identification processing system according to an embodiment of the present application. Referring to fig. 2, the case identification processing system includes a login module 201, an acquisition module 202, an analysis module 203, and a display module 204.
And the login module 201 is used for acquiring the authority of accessing the case identification processing system.
The obtaining module 202 is configured to obtain a pathological image to be identified after obtaining the system permission, where the pathological image to be identified includes a focus area.
And the analysis module 203 is configured to identify a focus region in the pathological image to be identified through the trained neural network model according to the pathological image to be identified.
And a display module 204 for displaying the recognition result.
In some embodiments, the analysis module 203 comprises:
a first determination unit for determining a boundary of a lesion region;
the first calculation unit is used for calculating the linear distance between any two pixel points on the boundary of the focus area;
and the second determining unit is used for drawing a circle by taking the linear distance between the two pixel points with the maximum linear distance value as the diameter, and taking the center of the circle as the central point of the focus area.
In some embodiments, the analysis module 203 further comprises:
and the second calculating unit is used for calculating the offset of the pixel point on the pathological image from the central point of the focus area.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Fig. 3 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present application. As shown in fig. 3, the electronic device 300 shown in fig. 3 includes: a processor 301 and a memory 303. Wherein the processor 301 is coupled to the memory 303. Optionally, the electronic device 300 may also include a transceiver 304. It should be noted that the transceiver 304 is not limited to one in practical applications, and the structure of the electronic device 300 is not limited to the embodiment of the present application.
The Processor 301 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 301 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
The Memory 303 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic Disc storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 303 stores the application program code for executing the scheme of the present application, and execution is controlled by the processor 301. The processor 301 is configured to execute the application program code stored in the memory 303 to implement what is shown in the foregoing method embodiments.
The electronic device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in fig. 3 is only an example and should not limit the functions or the scope of use of the embodiments of the present application.
The present application provides a computer-readable storage medium on which a computer program is stored; when run on a computer, the program enables the computer to execute the corresponding content of the foregoing method embodiments. Compared with the prior art, in the embodiments of the present application, after the authority to access the case identification processing system is obtained, the pathological image to be identified, which includes a lesion area, is acquired; the lesion area in that image is then identified by the trained neural network model and the identification result is displayed. This alleviates the huge workload and the manual errors of annotating lesion areas in pathological images by hand, and achieves the effect of improving the efficiency and accuracy of annotating and identifying lesion areas in pathological images.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.
Claims (10)
1. A case identification processing method, comprising:
acquiring the authority of accessing a case identification processing system;
after the system authority is obtained, obtaining a pathological image to be identified, wherein the pathological image to be identified comprises a focus area;
according to the pathological image to be recognized, recognizing a focus area in the pathological image to be recognized through a trained neural network model;
and displaying the recognition result.
2. The method of claim 1, wherein the neural network model is trained on positive and negative samples;
the positive samples are pixel points on the pathological image whose offset from the center point of the focus area is within a preset distance, and the negative samples are pixel points on the pathological image whose offset from the center point of the focus area is beyond the preset distance;
the method for determining the central point of the focal region comprises the following steps:
determining a boundary of the lesion area;
calculating the linear distance between any two pixel points on the boundary of the focus area;
and drawing a circle by taking the linear distance between the two pixel points with the maximum linear distance value as the diameter, and taking the center of the circle as the central point of the focus area.
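The center-point construction in claim 2 can be sketched in a few lines: find the two boundary pixels with the greatest straight-line distance, and take the midpoint of that chord (the center of the circle drawn on it as diameter). A minimal illustration, assuming boundary pixels are given as (x, y) tuples; the helper name `lesion_center` is hypothetical and not from the source.

```python
from itertools import combinations
import math

def lesion_center(boundary):
    """Center point of a focus area per the claimed construction:
    the midpoint of the longest chord between any two boundary pixels
    (i.e. the center of the circle whose diameter is that chord)."""
    # Find the pair of boundary pixels with the maximum straight-line distance.
    p, q = max(combinations(boundary, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))
    # The circle's center is the midpoint of the diameter segment.
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

# Toy boundary: the farthest pair is (0, 0)-(4, 0), so the center is (2.0, 0.0).
print(lesion_center([(0, 0), (4, 0), (2, 1), (2, -1)]))  # -> (2.0, 0.0)
```

This brute-force pairwise search is O(n²) in the number of boundary pixels; for large boundaries a rotating-calipers diameter computation would be faster, but the claim only fixes the geometric definition, not the algorithm.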
3. The method of claim 2, wherein the offset amount comprises a lateral offset amount and a longitudinal offset amount;
the lateral offset is calculated using the following equation:
X = x0 - xij
the longitudinal offset is calculated using the following equation:
Y = y0 - yij
wherein X is the lateral offset, Y is the longitudinal offset, (x0, y0) are the coordinates of the center point of the lesion area, (xij, yij) are the coordinates of the pixel point at row i and column j of the image, and i and j are positive integers.
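The offsets of claim 3, combined with the preset-distance rule of claim 2, can be sketched directly. The function names and the use of the Euclidean norm to compare the (X, Y) offset against the preset distance are assumptions for illustration; the claims define the offsets but do not fix the distance metric.

```python
def offsets(center, pixel):
    """Lateral and longitudinal offsets of a pixel (xij, yij) from the
    lesion center (x0, y0): X = x0 - xij, Y = y0 - yij (claim 3)."""
    x0, y0 = center
    xij, yij = pixel
    return (x0 - xij, y0 - yij)

def is_positive_sample(center, pixel, preset_distance):
    """Claim 2's labeling rule, sketched: a pixel is a positive sample
    when its offset from the lesion center lies within the preset
    distance (Euclidean here -- an assumption, not from the source)."""
    X, Y = offsets(center, pixel)
    return (X * X + Y * Y) ** 0.5 <= preset_distance

print(offsets((5, 5), (3, 2)))  # -> (2, 3)
```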
4. The method of claim 3, wherein the determining the boundary of the focal region comprises:
screening the focus area in the pathological image through semantic segmentation according to the pathological image;
after determining the focal region, determining a boundary of the focal region.
5. The method of claim 4, wherein screening the lesion area in the pathology image by semantic segmentation according to the pathology image comprises:
extracting the characteristics of the pathological image to obtain characteristic data of the pathological image;
obtaining the focus region segmentation frame information based on the characteristic data;
and obtaining the focus region semantic segmentation information of the pathological image based on the feature data and the focus region segmentation frame information.
6. A case identification processing system, comprising: the device comprises a login module, a reading module, an analysis module and a display module;
the login module is used for acquiring the authority of accessing the case identification processing system;
the reading module is configured to acquire the pathological image to be identified after the system authority is obtained, wherein the pathological image to be identified includes a focus area;
the analysis module is used for identifying a focus area in the pathological image to be identified through the trained neural network model according to the pathological image to be identified;
and the display module is used for displaying the identification result.
7. The system of claim 6, wherein the analysis module comprises:
a first determination unit for determining a boundary of the lesion region;
the first calculation unit is used for calculating the linear distance between any two pixel points on the boundary of the focus area;
and the second determining unit is used for drawing a circle by taking the linear distance between the two pixel points with the maximum linear distance value as the diameter, and taking the center of the circle as the center point of the focus area.
8. The system of claim 6, wherein the analysis module further comprises:
and the second calculating unit is used for calculating the offset of the pixel point on the pathological image from the central point of the focus area.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the computer program, implements the method of any of claims 1-5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111166351.6A CN113689926A (en) | 2021-09-30 | 2021-09-30 | Case identification processing method, case identification processing system, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113689926A true CN113689926A (en) | 2021-11-23 |
Family
ID=78587537
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111166351.6A Pending CN113689926A (en) | 2021-09-30 | 2021-09-30 | Case identification processing method, case identification processing system, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113689926A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008289916A (en) * | 2000-06-30 | 2008-12-04 | Hitachi Medical Corp | Image diagnosis supporting device |
CN107563123A (en) * | 2017-09-27 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for marking medical image |
CN110599476A (en) * | 2019-09-12 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Disease grading method, device, equipment and medium based on machine learning |
US20200085382A1 (en) * | 2017-05-30 | 2020-03-19 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
Non-Patent Citations (1)
Title |
---|
ZHANG GANG et al.: "A Machine Learning Method for Automatic Annotation of Pathological Images", Journal of Computer Research and Development * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110059697B (en) | Automatic lung nodule segmentation method based on deep learning | |
CN108133476B (en) | Method and system for automatically detecting pulmonary nodules | |
CN110245657B (en) | Pathological image similarity detection method and detection device | |
CN112465834B (en) | Blood vessel segmentation method and device | |
CN111062955A (en) | Lung CT image data segmentation method and system | |
CN111028246A (en) | Medical image segmentation method and device, storage medium and electronic equipment | |
CN111340827A (en) | Lung CT image data processing and analyzing method and system | |
CN113298831A (en) | Image segmentation method and device, electronic equipment and storage medium | |
CN110570425A (en) | Lung nodule analysis method and device based on deep reinforcement learning algorithm | |
CN110992439A (en) | Fiber bundle tracking method, computer device and storage medium | |
CN114170440A (en) | Method and device for determining image feature points, computer equipment and storage medium | |
CN114359671A (en) | Multi-target learning-based ultrasonic image thyroid nodule classification method and system | |
CN115170795B (en) | Image small target segmentation method, device, terminal and storage medium | |
CN113689926A (en) | Case identification processing method, case identification processing system, electronic device and storage medium | |
CN109800820A (en) | A kind of classification method based on ultrasonic contrast image uniform degree | |
CN113408595B (en) | Pathological image processing method and device, electronic equipment and readable storage medium | |
EP4327333A1 (en) | Methods and systems for automated follow-up reading of medical image data | |
CN115115967A (en) | Video motion analysis method, device, equipment and medium for model creature | |
CN114170415A (en) | TMB classification method and system based on histopathology image depth domain adaptation | |
Kaur et al. | A survey on medical image segmentation | |
CN113255756A (en) | Image fusion method and device, electronic equipment and storage medium | |
CN112633405A (en) | Model training method, medical image analysis device, medical image analysis equipment and medical image analysis medium | |
CN116862930B (en) | Cerebral vessel segmentation method, device, equipment and storage medium suitable for multiple modes | |
CN116664594A (en) | Three-dimensional medical image two-stage segmentation method and device based on sharing CNN | |
CN114093504A (en) | Information generation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20211123 |