CN113920095A - Apparatus and method for analysis management of cervical images, apparatus and storage medium - Google Patents

Apparatus and method for analysis management of cervical images, apparatus and storage medium

Info

Publication number
CN113920095A
CN113920095A (application number CN202111197630.9A)
Authority
CN
China
Prior art keywords
image
processor
analysis
cervical
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111197630.9A
Other languages
Chinese (zh)
Inventor
宋敏敏
赵鹏飞
龚业勇
李育威
曹坤琳
宋麒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Keya Medical Technology Corp
Original Assignee
Shenzhen Keya Medical Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Keya Medical Technology Corp filed Critical Shenzhen Keya Medical Technology Corp
Priority to CN202111197630.9A priority Critical patent/CN113920095A/en
Publication of CN113920095A publication Critical patent/CN113920095A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0012: Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06N 3/045: Combinations of networks (under G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
    • G16H 15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 70/60: ICT specially adapted for the handling or processing of medical references relating to pathologies
    • G06T 2207/10061: Microscopic image from scanning electron microscope (image acquisition modality)
    • G06T 2207/30096: Tumor; Lesion (biomedical image processing)
    • G06T 2207/30168: Image quality inspection


Abstract

The present disclosure relates to an apparatus and method, a device, and a storage medium for the analysis management of cervical images. The apparatus includes at least one processor configured to: acquire at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histological image of a subject as the image to be analyzed; determine an analysis result based on the image to be analyzed by using a learning network; present at least part of the image to be analyzed; and present an intermediate analysis interface in association with that partial image based on the analysis result, wherein the intermediate analysis interface presents the corresponding image block and attributes of each detected object. Because the analysis result is presented simultaneously with the related part of the image to be analyzed, the possibility of misdiagnosis or missed diagnosis during analysis management of cervical images is reduced, the accuracy and convenience of that analysis management are improved, and the quality of medical service is thereby improved.

Description

Apparatus and method for analysis management of cervical images, apparatus and storage medium
The present application is a divisional application of the Chinese patent application with application number 202110754716.0, filed on July 5, 2021, entitled "Apparatus and method for analysis management of cervical images, apparatus and storage medium".
Technical Field
The present disclosure relates to the field of cervical image analysis technology, and more particularly, to an apparatus and method, a device, and a computer-readable storage medium for analysis management of cervical images.
Background
Cervical cancer is the most common gynecological malignancy, and in recent years its incidence has trended toward younger patients. There are approximately 528,000 new cervical cancer cases and 266,000 deaths worldwide, with 85% of cervical cancer deaths occurring in low- and middle-income regions where screening rates are low. Cervical cancer is, however, a disease that can be prevented and cured, and the cure rate for early-stage disease can reach 90%. Early screening and diagnosis are therefore critical to its prevention and treatment. Current cervical cancer screening techniques fall into two categories: morphology-based methods, which examine samples at the cellular or tissue level to identify abnormalities, and molecular-biology-based methods, which test cervical epithelial samples for cervical cancer markers. Traditional cervical cancer screening relies mainly on manual interpretation by a physician; the workload is large, the efficiency is low, and the misdiagnosis rate is high, so large-scale screening is impractical.
In recent years, with the development of artificial intelligence technology, intelligent screening and analysis of cervical cancer cells has become possible: pathological images are captured automatically, and cancer cells are analyzed and identified automatically, which effectively reduces the diagnostic workload of physicians and improves diagnostic accuracy. However, existing deep-learning-based intelligent cervical screening and analysis systems provide only a single analysis result or screening image, so the physician cannot reach a correct diagnosis by consulting the surrounding pathological context, and misdiagnosis or missed diagnosis is likely during analysis management of cervical images. This problem poses a significant challenge to the application of intelligent cervical screening and analysis systems.
Disclosure of Invention
The present disclosure is provided to solve the above-mentioned problems occurring in the prior art.
The present disclosure provides an apparatus and method, a device, and a computer-readable storage medium for the analysis management of cervical images. The apparatus acquires at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histology image of a subject as the image to be analyzed; determines an analysis result based on the image to be analyzed by using a learning network; presents at least part of the image to be analyzed; and presents an intermediate analysis interface in association with that partial image based on the analysis result, wherein the intermediate analysis interface presents the corresponding image block and attributes of each detected object. Because the analysis result and the intermediate analysis interface are presented in association, a physician can review the relevant part of the image to be analyzed against the corresponding image blocks and attributes of the detected objects, and the diagnostic information confirmed by the physician can subsequently be used to generate an analysis report. This reduces the possibility of misdiagnosis or missed diagnosis during analysis management of cervical images, improves the accuracy, convenience, and user-friendliness of that analysis management, and thereby improves the quality of medical services.
According to a first aspect of the present disclosure, an apparatus for analysis management of cervical images is provided. The apparatus includes at least one processor configured to: acquiring at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image and a cervical histological image of a subject as an image to be analyzed; determining an analysis result based on the image to be analyzed and by utilizing a learning network; presenting at least part of an image to be analyzed; presenting an intermediate analysis interface in association with at least a portion of the image based on the analysis results, wherein the intermediate analysis interface presents corresponding image blocks and attributes of the respective detected objects.
According to a second aspect of the present disclosure, a method for analysis management of cervical images is provided. The method comprises the following steps: acquiring, by at least one processor, at least one of a cervical fluid-based cell image, a cervical cellular immunohistochemistry image, and a cervical histology image of a subject as an image to be analyzed; determining, by at least one processor, an analysis result based on the image to be analyzed and using a learning network; presenting, by at least one processor, at least a portion of an image to be analyzed; presenting, by the at least one processor, an intermediate analysis interface in association with at least a portion of the image based on the analysis results, wherein the intermediate analysis interface presents corresponding image patches and attributes of each detected object.
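As a concrete illustration of the four claimed steps, the method of the second aspect can be sketched in Python. This is a minimal, hypothetical sketch only; the patent does not specify an API, so every name here (`DetectedObject`, `AnalysisResult`, `analyze_and_present`, the display helpers) is an assumption.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DetectedObject:
    patch_id: str    # reference to the corresponding image block
    attribute: str   # e.g. "low-grade", "high-grade", "microorganism"

@dataclass
class AnalysisResult:
    objects: List[DetectedObject] = field(default_factory=list)

def display_partial_image(image) -> None:
    # Placeholder for presenting at least part of the image to be analyzed.
    print(f"showing image region with {len(image)} rows")

def display_intermediate_interface(objects: List[DetectedObject]) -> None:
    # Placeholder for the intermediate analysis interface: list each
    # detected object's image block and attribute.
    for obj in objects:
        print(f"{obj.patch_id}: {obj.attribute}")

def analyze_and_present(image, learning_network: Callable) -> AnalysisResult:
    """Mirror the claimed steps: the image is assumed already acquired;
    run the learning network; present part of the image; present the
    intermediate analysis interface in association with it."""
    result = learning_network(image)                # determine analysis result
    display_partial_image(image)                    # present partial image
    display_intermediate_interface(result.objects)  # present interface
    return result
```

In practice the learning network would be a trained model rather than the stub callable shown here.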
According to a third aspect of the present disclosure, there is provided a device for analysis management of cervical images, comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, and the one or more computer program instructions are executed by the processor to implement the method for analysis management of cervical images.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement a method for analysis management of cervical images.
With the apparatus, method, device, and computer-readable storage medium for analysis management of cervical images according to the various embodiments of the present disclosure, at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histological image of a subject is acquired as the image to be analyzed; an analysis result is determined based on the image to be analyzed by using a learning network; at least part of the image to be analyzed is presented; and an intermediate analysis interface is presented in association with that partial image based on the analysis result, wherein the intermediate analysis interface presents the corresponding image block and attributes of each detected object. By presenting the analysis result in association with the intermediate analysis interface, a physician can make a correct diagnosis based on the corresponding image blocks and attributes of the detected objects. This reduces the possibility of misdiagnosis or missed diagnosis during analysis management of cervical images, improves the accuracy, convenience, and user-friendliness of that analysis management, and thereby improves the quality of medical services.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different instances of similar components. The drawings illustrate various embodiments generally by way of example and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. The same reference numbers will be used throughout the drawings to refer to the same or like parts, where appropriate. Such embodiments are illustrative, and are not intended to be exhaustive or exclusive embodiments of the present apparatus or method.
Fig. 1 shows a block diagram of an apparatus for analysis management of cervical images according to an embodiment of the present disclosure;
FIG. 2(a) shows a schematic view of at least a portion of an image to be analyzed and an intermediate analysis interface presented in a correlated manner in accordance with an embodiment of the present disclosure;
fig. 2(b) shows a schematic diagram of at least a partial image and a preview image displayed in a picture-in-picture manner according to an embodiment of the present disclosure;
fig. 3 shows a flow diagram of a method for analysis management of cervical images in accordance with an embodiment of the present disclosure;
fig. 4 shows a flow diagram of another method for analysis management of cervical images in accordance with an embodiment of the present disclosure;
fig. 5 shows a flow diagram of yet another method for analysis management of cervical images, in accordance with an embodiment of the present disclosure;
fig. 6 shows a flow diagram of yet another method for analysis management of cervical images in accordance with an embodiment of the present disclosure; and
fig. 7 shows a block diagram of an apparatus for analysis management of cervical images according to an embodiment of the present disclosure.
Detailed Description
For a better understanding of the technical aspects of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings. Embodiments of the present disclosure are described in further detail below with reference to the figures, but the present disclosure is not limited thereto. The order in which the various steps are described herein is exemplary; where no dependency exists between steps, that order should not be construed as a limitation, and those skilled in the art will appreciate that the order may be adjusted as long as the logical relationships among the steps are preserved and the overall process remains practicable.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present disclosure, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
An apparatus and method for analysis management of cervical images according to an embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 shows a block diagram of an apparatus for analysis management of cervical images according to an embodiment of the present disclosure. As shown in fig. 1, the apparatus may include at least one processor 102.
Specifically, the at least one processor 102 is configured to acquire at least one of a cervical fluid-based cell image, a cervical cell immunohistochemistry image, and a cervical histology image of the subject as the image to be analyzed, and to determine an analysis result based on the image to be analyzed by using a learning network. Further, the at least one processor 102 presents at least a partial image of the image to be analyzed and presents an intermediate analysis interface in association with that partial image based on the analysis result, wherein the intermediate analysis interface presents the corresponding image block and attributes of each detected object. An example of such an associated presentation is shown in fig. 2(a), which is only an example; the present disclosure is not limited thereto. The physician can check and view both the intermediate analysis interface and the corresponding part of the image to be analyzed in the same user interface (screen), without opening different user interfaces or switching among several of them. It is thus easy to review the AI's preliminary screening result (the corresponding image block and attribute of each detected object) against the corresponding partial image (the details in the original slide image). In fig. 2(a), the left side may present at least a partial image of the image to be analyzed, while the right side may present single cells, cell clumps, and fuzzy clumps as detected objects, together with the corresponding image blocks and attributes such as low-grade, high-grade, and microorganism for each detected object.
The physician may edit the intermediate analysis interface according to the review result: for example, image blocks with certain attributes may be selected to confirm the analysis of the detected object and then saved, so that they proceed to the "next step" together with the corresponding image blocks and attributes of other saved detected objects the physician has reviewed, for example, but not limited to, generation of an analysis report. Whether or not an analysis report is generated, presenting the intermediate analysis interface in the same user interface in association with at least part of the image reduces the physician's operational burden during analysis management of cervical images, reduces the possibility of misdiagnosis or missed diagnosis, improves the accuracy, convenience, and user-friendliness of that analysis management, and thereby improves the quality of medical services.
In some embodiments, the presentation of at least part of the image in association with the intermediate analysis interface is further such that, once the user edits one of them, the other automatically changes correspondingly. For example, the user may mark (e.g., frame) at least part of the image with an editing tool, such as framing the corresponding image block of a manually identified detected object, or may manually enter the attribute of that image block. If that image block was missed in the AI's preliminary screening result, the image block of the manually identified detected object is automatically added to the intermediate analysis interface, and the manually entered text describing its attribute can be processed with NLP so that the attribute of the added image block is presented in the intermediate analysis interface. As another example, the user may discard the AI pre-screening results for image blocks with certain attributes in the intermediate analysis interface; and once the user provides a manual attribute label for a corresponding image block on at least part of the image, both presentations may be highlighted at the same time to indicate the association between them, drawing the user's attention to both so that they can be compared and a more accurate review result reached.
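The bidirectional synchronization described above can be sketched as a small linked-view structure. This is a hypothetical design for illustration (the class and attribute names are assumptions, not the patent's implementation): an edit on either side is propagated to the other, and both presentations of the edited image block are highlighted.

```python
class LinkedViews:
    """Minimal sketch of the associated presentation: editing either the
    image view or the intermediate analysis interface propagates the
    change to the other, and both sides are highlighted."""

    def __init__(self):
        self.image_marks = {}     # patch_id -> attribute marked on the image
        self.interface_rows = {}  # patch_id -> attribute listed in the interface
        self.highlighted = set()  # patch_ids currently highlighted on both sides

    def mark_on_image(self, patch_id: str, attribute: str) -> None:
        # User frames a patch on the image; a patch missed by the AI
        # pre-screen is automatically added to the interface as well.
        self.image_marks[patch_id] = attribute
        self.interface_rows[patch_id] = attribute
        self.highlighted.add(patch_id)

    def edit_in_interface(self, patch_id: str, attribute: str) -> None:
        # User edits the interface row; the image annotation follows.
        self.interface_rows[patch_id] = attribute
        self.image_marks[patch_id] = attribute
        self.highlighted.add(patch_id)
```

A production implementation would likely use an observer/event mechanism rather than direct dictionary writes, but the invariant is the same: the two views never disagree about a patch's attribute.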
Here, the at least one processor 102 may be a processing device including one or more general-purpose processing devices, such as a microprocessor, a Central Processing Unit (CPU), or a Graphics Processing Unit (GPU). More specifically, the at least one processor 102 may be a Complex Instruction Set Computing (CISC) microprocessor, a Reduced Instruction Set Computing (RISC) microprocessor, a Very Long Instruction Word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The at least one processor 102 may also be one or more special-purpose processing devices, such as an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), or a System on a Chip (SoC). Further, the at least one processor 102 may also present a user interface prompting the user to enter settings for the structure of the learning network. For example, a list or menu of available learning-network structures may be presented on the user interface for selection by the user, and the learning network with the structure the user selects is used for the subsequent analysis.
Cervical liquid-based cytology, also called cervical liquid-based thin-layer cytology, uses a liquid-based thin-layer cell detection system to examine cervical cells and perform cytological classification and diagnosis; it is currently the most advanced cervical cancer cytology technique internationally. Compared with the conventional Pap smear examination, cervical liquid-based cytology significantly improves both sample adequacy and the detection rate of abnormal cervical cells. It can also reveal some precancerous lesions and microbial infections such as mold, trichomonas, virus, and chlamydia, and is the most advanced technology applied to cervical cancer screening.
Immunohistochemistry (or immunocytochemistry) is a method for localizing, characterizing, and relatively quantifying antigens (polypeptides and proteins) in tissue cells. It applies the principle of the antigen-antibody reaction, that is, the specific binding of antigens and antibodies, which is the basic principle of immunology, and uses chemical reactions to develop the color agents (fluorescein, enzymes, metal ions, or isotopes) that label the antibodies. Cervical immunohistochemistry is used to further confirm the staging of the disease.
Histology is the study of the fine structure of the normal human body and its related functions; it is a branch of anatomy in medical science, where "fine structure" refers to structure that can be observed clearly under a microscope. The images of normal microstructures obtained from histological studies are the essential basis of histopathology, which can discuss abnormal changes of these microstructures in the course of disease only if the normal microstructures are clearly known.
The cervical fluid-based cell image, cervical cellular immunohistochemistry image, and cervical histology image may be in the format of "SVS", "ndpi", "Kfb", "mrxs", and the like. The cervical cell immunohistochemistry image and the cervical histology image may be images obtained by staining for the same set of cancer specific genes and/or antigens. Here, the cancer-specific gene refers to a gene that is specifically expressed or significantly overexpressed only in cancer cells, and the cancer-specific antigen refers to a novel antigen that is expressed only on the surface of certain cancer cells and is not present on normal cells. It should be noted that the cervical fluid-based cell image, the cervical cellular immunohistochemistry image, and the cervical histology image may be images acquired in real time from a medical imaging apparatus such as a microscope, a camera, or the like via a communication interface, or may be full-field digital pathological section images acquired from a server, which is not limited by the embodiment of the present disclosure.
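A loader for such images might first recognize the whole-slide formats named above by file suffix. The following is a minimal sketch under that assumption only; real whole-slide readers also inspect file contents, and the function name is hypothetical.

```python
from pathlib import Path

# Whole-slide suffixes named in the text, lower-cased for comparison.
WSI_SUFFIXES = {".svs", ".ndpi", ".kfb", ".mrxs"}

def is_supported_slide(path_str: str) -> bool:
    """Return True if the file looks like a supported whole-slide image
    (suffix check only; a real reader would also validate the contents)."""
    return Path(path_str).suffix.lower() in WSI_SUFFIXES
```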
Here, the communication interface may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter (such as an optical fiber, USB 3.0, a thunderbolt interface, etc.), a wireless network adapter (such as a WiFi adapter), a telecommunications (such as a 3G, 4G/LTE, etc.) adapter, and the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like.
The learning network may be a Deep Learning (DL) network with multiple analysis classes. Further, the learning network may include one or a combination of a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Convolutional neural networks are a class of feed-forward neural networks (FNNs) that contain convolutional computations and have a deep structure. A convolutional neural network has representation-learning ability and can perform translation-invariant classification of input information according to its hierarchical structure. The convolutional layers of a convolutional neural network model may include at least one filter or kernel, and one or more parameters of that filter, such as kernel weights, size, shape, and structure, may be determined by, for example, a back-propagation-based training process. A recurrent neural network takes sequence data as input, recurses in the evolution direction of the sequence, and connects all nodes (recurrent units) in a chain. A recursive neural network, by contrast, is an artificial neural network (ANN) with a tree-like hierarchical structure in which network nodes process input information recursively according to their connection order.
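To make the convolutional computation mentioned above concrete, the following pure-Python sketch implements a single valid-mode convolution (strictly, cross-correlation, as in most deep-learning frameworks) followed by a ReLU non-linearity. It is illustrative only and is not the patent's learning network.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation over nested lists: slide the
    kernel across the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh)
                for b in range(kw)
            )
    return out

def relu(feature_map):
    """Elementwise ReLU non-linearity: clamp negatives to zero."""
    return [[max(0.0, v) for v in row] for row in feature_map]
```

A full CNN stacks many such filter applications with pooling and fully connected layers; frameworks implement the same operation with learned kernel weights.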
Further, the structure of the deep learning network is preset as a network model, and a loss function is set to train the learning network. Deep learning is a branch of Machine Learning (ML) and a necessary path toward Artificial Intelligence (AI); it combines low-level features to form more abstract high-level representations of attribute categories or features, thereby discovering distributed feature representations of the data. The network model may be trained using supervised learning. The architecture of the network model may include a stack of different blocks and layers, each converting one or more inputs into one or more outputs. Examples of different layers include one or more convolutional or fully convolutional layers, non-linear operator layers, pooling or sub-sampling layers, fully connected layers, and/or a final loss layer. Each layer may connect one upstream layer and one downstream layer. The network model may include a Residual Network (ResNet) model, a segmentation network (UNet) model, an AlexNet model, a *** net model, a Visual Geometry Group (VGG) model, a Pyramid Scene Parsing Network (PSPNet) model, a DeepLab v3 network model, etc., which are not limited by the embodiments of the present disclosure. The loss function maps the value of a random event or its related random variables to a non-negative real number representing the "risk" or "loss" of that event.
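The disclosure does not fix a particular loss function; cross-entropy over softmax probabilities is a common choice for classification networks and can be sketched as follows (illustrative only).

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities; subtract the max for
    numerical stability before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, target_index):
    """Cross-entropy loss for one sample: -log p(target class).
    Lower is better; zero only when the target probability is 1."""
    return -math.log(probs[target_index])
```

During training, this loss (averaged over a batch) is what back-propagation differentiates to update the kernel weights and other parameters.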
The analysis result may be a binary analysis result over the analysis classes of the learning network, for example, normal cells versus abnormal cells, or negative cells versus positive cells. Here, the categories of cells may include single cells and clump cells. Single cells may include one or more of low-grade squamous epithelial lesion cells, high-grade squamous epithelial lesion cells, microbial cells, metaplastic cells, cervical canal cells, and inflammatory cells; clump cells may include one or more of low-grade squamous epithelial lesion cells, high-grade squamous epithelial lesion cells, adenocarcinoma cells, microbial cells, metaplastic cells, cervical canal cells, and inflammatory cells. Further, the analysis result may also include image satisfaction, inflammatory cell grade, and fuzzy clumps.
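The mapping from detected cell categories to a binary (negative/positive) analysis result can be sketched as below. The grouping of categories into positive versus negative here is an assumption for illustration only, not the patent's definition.

```python
SINGLE_CELL_TYPES = {
    "low-grade squamous epithelial lesion",
    "high-grade squamous epithelial lesion",
    "microbial", "metaplastic", "cervical canal", "inflammatory",
}
# Clump cells add adenocarcinoma to the single-cell categories.
CLUMP_CELL_TYPES = SINGLE_CELL_TYPES | {"adenocarcinoma"}

def binary_result(cell_type: str) -> str:
    """Collapse a detected cell category to a binary analysis result.
    The lesion/carcinoma categories are treated as positive here;
    this grouping is illustrative, not normative."""
    abnormal = {
        "low-grade squamous epithelial lesion",
        "high-grade squamous epithelial lesion",
        "adenocarcinoma",
    }
    return "positive" if cell_type in abnormal else "negative"
```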
In some embodiments, the intermediate analysis interface refers to an interface that presents the corresponding image block and attributes of each detected object to the user based on the analysis result and the at least partial image of the image to be analyzed. As shown in fig. 2(a), the intermediate analysis interface may present the image block reference numerals and attribute reference numerals of each detected object in a list. Through the list presentation, the user can view the corresponding image blocks and attributes of all detected objects at once and conveniently accept or discard the image blocks of the various attributes detected by the AI. Specifically, for image blocks of each attribute, the user may approve (select) or reject (discard) the AI detection results. In some embodiments, a corresponding intermediate analysis interface is presented for each slice (or target region). Besides the detected objects, slice-level evaluation results, such as "slice satisfaction evaluation", "inflammation grade", and "AI prescreening opinion", can be presented in the intermediate analysis interface for the user's reference or manual correction.
For example, if the user is not satisfied with the slice quality of the at least partial image to be analyzed, the result of the "slice satisfaction evaluation" may be corrected from "satisfactory" to "unsatisfactory", and the AI prescreening analysis results corresponding to that intermediate analysis interface may be discarded without further consideration. For another example, if the AI-prescreened "inflammation grade" is "none" but the user has obtained a different manual screening result, a corresponding correction may be made, and so on.
In some embodiments, the rows and columns of the list respectively represent the physical morphology type and the physiological level of each detected object, and a selection box is provided in association with each corresponding image block. The user can accept the AI detection result for an image block by checking its selection box, or reject and discard it by unchecking the box.
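The list described above — rows for morphology type, columns for physiological level, each image block carrying a selection box — might be modeled as below; all names and field choices are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    patch_id: str
    morphology: str        # row key: physical morphology type, e.g. "single"/"clump"
    grade: str             # column key: physiological level, e.g. "LSIL"/"HSIL"
    approved: bool = True  # selection-box state; True = user accepts the AI result

def build_grid(objects):
    """Group image blocks into the row/column cells shown by the interface."""
    grid = {}
    for obj in objects:
        grid.setdefault((obj.morphology, obj.grade), []).append(obj)
    return grid

def accepted(objects):
    """Image blocks whose box is still checked, i.e. AI results the user kept."""
    return [o for o in objects if o.approved]
```

Unchecking a box simply flips `approved` to `False`, so the rejected block drops out of `accepted()` while remaining visible in the grid.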
Further, the intermediate analysis interface may present fuzzy clusters as detected objects. The fuzzy clusters may serve as regions for the doctor to review with particular attention, and the number of fuzzy clusters may be used as one of the parameters for image quality evaluation.
The detected object may be a cell or a tissue in the image. In the cervical fluid-based cell image, the cervical fluid-based cells are disc-shaped (round-cake shaped) and reddish blue in color; in the cervical cell immunohistochemical image, the cervical cells are also disc-shaped but dark tan in color; in the cervical histological image, the cervical tissue has an irregular, non-disc shape. Thus, the cervical fluid-based cell image can be distinguished from the cervical cell immunohistochemical image based on the color of the cervical fluid-based cells in the former and the color of the cervical cells in the latter. In addition, the cervical fluid-based cell image and the cervical cell immunohistochemical image can be distinguished from the cervical histological image based on the morphology of the cervical fluid-based cells, the morphology of the cervical cells, and the morphology of the cervical tissue in their respective images.
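The color- and morphology-based distinction described above can be sketched as a simple heuristic; the RGB rule, the roundness metric, and the thresholds are assumptions for illustration only:

```python
def classify_cervical_image(mean_rgb, cell_roundness):
    """Heuristic triage of the three cervical image types.

    mean_rgb: average (R, G, B) of detected cell regions, 0-255.
    cell_roundness: 0-1 shape metric (1.0 = perfect disc); threshold assumed.
    """
    r, g, b = mean_rgb
    if cell_roundness < 0.6:       # irregular, non-disc tissue -> histology
        return "histology"
    if b > r:                      # reddish-blue stain -> liquid-based cytology
        return "liquid_based"
    return "immunohistochemistry"  # dark tan (brown, red-dominant) stain
```

A production system would learn this distinction rather than hard-code it, but the sketch shows how the stated color and shape cues suffice to separate the three modalities.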
According to the apparatus for analysis management of cervical images of the embodiment of the present disclosure, at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histological image of a subject is acquired as the image to be analyzed; an analysis result is determined based on the image to be analyzed and by using a learning network; at least part of the image to be analyzed is presented; and an intermediate analysis interface presenting the corresponding image block and attributes of each detected object is presented in association with the at least partial image based on the analysis result. Because the analysis result is presented together with such an intermediate analysis interface, a doctor can make a correct diagnosis based on the corresponding image blocks and attributes of the detected objects. This reduces the possibility of misdiagnosis or missed diagnosis in the analysis management of cervical images, improves the accuracy and convenience of that analysis management, and further improves the quality of medical services.
In some embodiments, the at least one processor 102 is further configured to: receiving a first operation of editing a target area in at least part of an image by a user; and under the condition that the first operation is received, presenting an intermediate analysis interface corresponding to the target area, so that the intermediate analysis interface presents corresponding image blocks and attributes of all detected objects in the target area.
Specifically, upon receiving a first operation in which the user edits a target area in the at least partial image, the at least one processor 102 jumps from the currently presented analysis-state interface to the intermediate analysis interface corresponding to the target area, so as to present to the user the corresponding image block and attributes of each detected object in the target area. This allows the user to edit the target area in a targeted manner, improves the accuracy of analysis management of cervical images, saves time, and further improves the quality of medical services. Here, the editing operation may include one or more of a click operation, an addition operation, a modification operation, a deletion operation, a sketch operation, and an association operation.
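The editing operations enumerated above, and the jump to the target region's intermediate analysis interface, could be sketched as follows; all names and the dispatch shape are hypothetical:

```python
from enum import Enum, auto

class EditOp(Enum):
    """The editing operations enumerated in the disclosure."""
    CLICK = auto()
    ADD = auto()
    MODIFY = auto()
    DELETE = auto()
    SKETCH = auto()
    ASSOCIATE = auto()

def on_region_edit(op, region, interfaces):
    """On any edit of a target region (the 'first operation'), return the
    intermediate analysis interface to present in place of the current view."""
    if not isinstance(op, EditOp):
        raise ValueError("unknown editing operation")
    return interfaces[region]
```

Whichever of the six operations arrives, the response is the same: the region's intermediate analysis interface replaces the currently presented analysis-state view.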
In some embodiments, the at least one processor 102 is further configured to: receiving a second operation of editing each detected object in the intermediate analysis interface by the user; and under the condition that the second operation is received, adjusting at least partial image to present the edited corresponding image block and attribute of each detected object.
Specifically, upon receiving a second operation in which the user edits each detected object in the intermediate analysis interface, the at least one processor 102 adjusts the at least partial image to present the corresponding image block and attributes of each detected object after editing; that is, the adjusted image includes all detected objects edited by the second operation together with their peripheral region.
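Adjusting the image so that it contains all edited objects together with a peripheral region amounts to computing an enclosing crop; a minimal sketch, assuming axis-aligned bounding boxes and a fixed pixel margin:

```python
def enclosing_view(boxes, margin, image_w, image_h):
    """Smallest crop containing every edited object's box plus a margin.

    boxes: list of (x0, y0, x1, y1) in image coordinates.
    margin: peripheral context to keep around the objects, in pixels (assumed).
    """
    x0 = max(0, min(b[0] for b in boxes) - margin)
    y0 = max(0, min(b[1] for b in boxes) - margin)
    x1 = min(image_w, max(b[2] for b in boxes) + margin)
    y1 = min(image_h, max(b[3] for b in boxes) + margin)
    return (x0, y0, x1, y1)
```

The clamping to the image bounds ensures the adjusted view never requests pixels outside the slide, while the margin preserves the peripheral region the doctor needs for context.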
In some embodiments, the at least one processor 102 is further configured to: marking each detected object in at least part of the image and presenting the associated analysis result; receiving a third operation of editing the detected object and/or the associated analysis result by the user; and under the condition that the third operation is received, adjusting the intermediate analysis interface to present the edited detected object and/or the associated analysis result.
Specifically, the at least one processor 102 marks each detected object in at least a portion of the image and presents an associated analysis result, and upon receiving a third operation from the user to edit the detected object and/or the associated analysis result, the at least one processor 102 adjusts the intermediate analysis interface to present the edited detected object and/or the associated analysis result.
In some embodiments, the at least one processor 102 is further configured to: present a peripheral image block that includes, together with its surrounding area, all detected objects presented by the intermediate analysis interface, and take the peripheral image block as the at least partial image.
In some embodiments, the apparatus for analysis management of cervical images further comprises: a receiving unit 104 configured to receive a fourth operation in which the user confirms each intermediate analysis interface; and a generating unit 106 configured to generate an analysis report in a case where the fourth operation is received for all the intermediate analysis interfaces.
Specifically, the receiving unit 104 receives the fourth operation in which the user confirms each intermediate analysis interface, and when the fourth operation has been received for all the intermediate analysis interfaces, the generating unit 106 generates an analysis report. The analysis report may be an independent report or a joint report, but the embodiments of the present disclosure are not limited thereto. Here, the independent reports may include an analysis report generated based on the cervical fluid-based cell image, an analysis report generated based on the cervical cell immunohistochemical image, and an analysis report generated based on the cervical histological image; the joint report may be a report generated by fusing the cervical fluid-based cell image, the cervical cell immunohistochemical image, and the cervical histological image.
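The independent-versus-joint report distinction might be sketched as below; the modality keys and the output shape are illustrative assumptions:

```python
def generate_report(findings_by_modality, joint=False):
    """Assemble the final report(s) once every interface is confirmed.

    findings_by_modality: modality name -> list of confirmed findings, e.g.
        {"liquid_based": [...], "immunohistochemistry": [...], "histology": [...]}
    """
    if joint:
        # one fused report across all modalities
        merged = [f for fs in findings_by_modality.values() for f in fs]
        return {"type": "joint", "findings": merged}
    # one independent report per modality
    return [{"type": "independent", "modality": m, "findings": fs}
            for m, fs in findings_by_modality.items()]
```

In the independent mode each modality keeps its own report; in the joint mode the confirmed findings are fused into a single document, matching the two report styles described above.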
In some embodiments, the apparatus for analysis management of cervical images further comprises: a display unit 108 configured to display, in a picture-in-picture manner, a reduced preview image of the detected object in the at least partial image.
Specifically, the display unit 108 may display the at least partial image and a preview image of the detected object in a picture-in-picture manner, as shown in fig. 2(b). Further, upon receiving a click operation by the user on any position in the preview image, the display unit 108 may automatically locate the corresponding position in the at least partial image, so that the user can perform a comprehensive analysis with reference to the pathological condition of the region surrounding that position, thereby improving the accuracy of the analysis result of the analysis management of the cervical image. Here, the preview image may be an image of a detected object such as a cell or tissue, as shown in the upper left corner of fig. 2(b); the at least partial image may be the portion of fig. 2(b) other than the preview image.
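Locating the full-image position that corresponds to a click in the reduced preview is a simple coordinate scaling; a sketch assuming the preview is a uniformly scaled copy of the image (names are illustrative):

```python
def preview_to_image(click_xy, preview_size, image_size):
    """Map a click in the reduced preview to the corresponding full-image point.

    click_xy:     (x, y) of the click within the preview.
    preview_size: (width, height) of the preview thumbnail.
    image_size:   (width, height) of the at-least-partial image.
    """
    (cx, cy), (pw, ph), (iw, ih) = click_xy, preview_size, image_size
    return (cx * iw / pw, cy * ih / ph)
```

The display unit can then center or highlight this mapped point so the user immediately sees the clicked object in its full-resolution surroundings.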
Fig. 3 shows a flow chart of a method for analysis management of cervical images according to an embodiment of the present disclosure. As shown in fig. 3, the method for analysis management of a cervical image may include acquiring, by at least one processor, at least one of a cervical fluid-based cell image, a cervical cellular immunohistochemistry image, and a cervical histology image of a subject as an image to be analyzed (step 302). The method may also include determining, by the at least one processor, an analysis result based on the image to be analyzed and using the learning network (step 304). The method may also include presenting, by the at least one processor, at least a portion of the image to be analyzed (step 306). The method may also include presenting, by the at least one processor, an intermediate analysis interface in association with at least a portion of the image based on the analysis results, wherein the intermediate analysis interface presents corresponding image patches and attributes of each detected object (step 308).
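Steps 302-308 can be sketched as a minimal pipeline in which each step is an injected callable; the signatures are hypothetical, and step 302 (image acquisition) is represented by the `image` argument:

```python
def analysis_pipeline(image, infer, show_image, show_interface):
    """Steps 304-308 of the method, with each step as an injected callable."""
    analysis = infer(image)                     # step 304: learning-network analysis
    view = show_image(image)                    # step 306: present at least part of the image
    interface = show_interface(view, analysis)  # step 308: intermediate analysis interface
    return analysis, interface
```

Keeping each step injectable mirrors the claim structure: any processor-backed implementation of acquisition, inference, and presentation can be slotted in without changing the flow.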
According to the method for analysis management of cervical images of the embodiment of the present disclosure, at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histological image of a subject is acquired as the image to be analyzed; an analysis result is determined based on the image to be analyzed and by using a learning network; at least part of the image to be analyzed is presented; and an intermediate analysis interface presenting the corresponding image block and attributes of each detected object is presented in association with the at least partial image based on the analysis result. Because the analysis result is presented together with such an intermediate analysis interface, a doctor can make a correct diagnosis based on the corresponding image blocks and attributes of the detected objects. This reduces the possibility of misdiagnosis or missed diagnosis in the analysis management of cervical images, improves the accuracy and convenience of that analysis management, and further improves the quality of medical services.
In some embodiments, presenting, by the at least one processor, an intermediate analysis interface in association with at least a portion of the image based on the analysis results includes: receiving, by at least one processor, a first operation of a user to edit a target region in at least a portion of an image; and presenting, by the at least one processor, an intermediate analysis interface corresponding to the target area in a case where the first operation is received, so that the intermediate analysis interface presents corresponding image blocks and attributes of each detected object in the target area.
In some embodiments, presenting, by the at least one processor, an intermediate analysis interface in association with at least a portion of the image based on the analysis results includes: receiving, by the at least one processor, a second operation of the user to edit each detected object in the intermediate analysis interface; and adjusting, by the at least one processor, at least a portion of the image to present the edited corresponding image blocks and attributes of each detected object in the event that the second operation is received.
In some embodiments, presenting, by the at least one processor, an intermediate analysis interface in association with at least a portion of the image based on the analysis results includes: marking, by at least one processor, each detected object in at least a portion of the image and presenting an associated analysis result; receiving, by the at least one processor, a third operation of the user to edit the detected object and/or the associated analysis result; and adjusting, by the at least one processor, the intermediate analysis interface to present the edited detected object and/or the associated analysis result in the event that the third operation is received.
In some embodiments, the editing operations include one or more of click operations, add operations, modify operations, delete operations, sketch operations, and associate operations.
In some embodiments, presenting, by the at least one processor, an intermediate analysis interface in association with at least a portion of the image based on the analysis results includes: presenting, by the at least one processor, a peripheral image block that includes, together with its surrounding area, all detected objects presented by the intermediate analysis interface, and taking the peripheral image block as the at least partial image.
In some embodiments, adjusting, by the at least one processor and upon receiving the second operation, at least a portion of the image to present the edited corresponding image blocks and attributes of the respective detected objects includes: at least one processor adjusts at least a portion of the image to include all of the detected objects edited by the second operation together with the peripheral region.
In some embodiments, the intermediate analysis interface presents the corresponding image patches and attributes of each detected object in a list.
In some embodiments, the intermediate analysis interface presents fuzzy clusters as detected objects.
In some embodiments, the rows and columns of the list respectively represent the physical morphology type and the physiological level of each detected object, and a selection box is provided in association with each corresponding image block.
In some embodiments, the method for analytical management of cervical images further comprises: receiving, by the at least one processor, a fourth operation confirmed by the user for each intermediate analysis interface; generating, by the at least one processor, an analysis report upon receiving a fourth operation on all of the intermediate analysis interfaces.
In some embodiments, the method for analysis management of cervical images further comprises: displaying, in a picture-in-picture manner, a reduced preview image of the detected object in the at least partial image.
Fig. 4 shows a flow diagram of another method for analysis management of cervical images according to an embodiment of the present disclosure. As shown in fig. 4, the method for analysis management of cervical images may include the following steps. The method begins by acquiring, by at least one processor, at least one of a cervical fluid-based cell image, a cervical cellular immunohistochemistry image, and a cervical histology image of a subject as an image to be analyzed (step 402). The method may also include determining, by the at least one processor, an analysis result based on the image to be analyzed and using a learning network (step 404). The method may also include presenting, by the at least one processor, at least a portion of the image to be analyzed (step 406). The method may also include receiving, by the at least one processor, a first operation by the user to edit the target area in at least a portion of the image (step 408). The method may further include presenting, by the at least one processor, an intermediate analysis interface corresponding to the target area if the first operation is received, such that the intermediate analysis interface presents corresponding image blocks and attributes of each detected object in the target area (step 410). The method may also include receiving a fourth operation from the user to confirm each of the intermediate analysis interfaces (step 412). The method may further include generating an analysis report if a fourth operation is received for all of the intermediate analysis interfaces (step 414).
According to the method for analysis management of cervical images of the embodiment of the present disclosure, at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histological image of a subject is acquired as the image to be analyzed; an analysis result is determined based on the image to be analyzed and by using a learning network, and at least part of the image to be analyzed is presented; a first operation in which the user edits a target area in the at least partial image is received; upon receiving the first operation, an intermediate analysis interface corresponding to the target area is presented, so that the intermediate analysis interface presents the corresponding image block and attributes of each detected object in the target area; a fourth operation in which the user confirms each intermediate analysis interface is received; an analysis report is generated when the fourth operation has been received for all the intermediate analysis interfaces; and a reduced preview image of the at least partial image is displayed in a picture-in-picture manner. The analysis result, and an intermediate analysis interface associated with it that presents each detected object's image block and attributes, can thus be presented, so that a doctor can make a correct diagnosis based on them. This reduces the possibility of misdiagnosis or missed diagnosis in the analysis management of cervical images, improves the accuracy and convenience of that analysis management, and further improves the quality of medical services.
Fig. 5 shows a flow chart of yet another method for analysis management of cervical images according to an embodiment of the present disclosure. As shown in fig. 5, the method for analysis management of cervical images may include the following steps. The method begins by acquiring, by at least one processor, at least one of a cervical fluid-based cell image, a cervical cellular immunohistochemistry image, and a cervical histology image of a subject as an image to be analyzed (step 502). The method may also include determining, by the at least one processor, an analysis result based on the image to be analyzed and using a learning network (step 504). The method may also include presenting, by the at least one processor, at least a portion of the image to be analyzed (step 506). The method may also include receiving, by the at least one processor, a second operation by the user to edit each detected object in the intermediate analysis interface (step 508). The method may further include adjusting, by the at least one processor, at least a portion of the image to present the edited corresponding image blocks and attributes of the respective detected objects upon receipt of the second operation (step 510). The method may further include receiving a fourth operation of user confirmation of each intermediate analysis interface (step 512). The method may further include generating an analysis report in the event that a fourth operation is received for all of the intermediate analysis interfaces (step 514).
According to the method for analysis management of cervical images of the embodiment of the present disclosure, at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histological image of a subject is acquired as the image to be analyzed; an analysis result is determined based on the image to be analyzed and by using a learning network; at least part of the image to be analyzed is presented; a second operation in which the user edits each detected object in the intermediate analysis interface is received; upon receiving the second operation, the at least partial image is adjusted to present the corresponding image block and attributes of each detected object after editing; a fourth operation in which the user confirms each intermediate analysis interface is received; an analysis report is generated when the fourth operation has been received for all the intermediate analysis interfaces; and a reduced preview image of the at least partial image is displayed in a picture-in-picture manner. The analysis result, and an intermediate analysis interface associated with it that presents each detected object's image block and attributes, can thus be presented, so that a doctor can make a correct diagnosis based on them. This reduces the possibility of misdiagnosis or missed diagnosis in the analysis management of cervical images, improves the accuracy and convenience of that analysis management, and further improves the quality of medical services.
Fig. 6 shows a flow chart of yet another method for analysis management of cervical images in accordance with an embodiment of the present disclosure. As shown in fig. 6, the method for analysis management of cervical images may include the following steps. The method begins by acquiring, by at least one processor, at least one of a cervical fluid-based cell image, a cervical cellular immunohistochemistry image, and a cervical histology image of a subject as an image to be analyzed (step 602). The method may also include determining, by the at least one processor, an analysis result based on the image to be analyzed and using a learning network (step 604). The method may also include presenting, by at least one processor, at least a portion of the image to be analyzed (step 606). The method may also include marking, by the at least one processor, each detected object in at least a portion of the image and presenting the associated analysis results (step 608). The method may also include receiving, by the at least one processor, a third operation of the user to edit the detected object and/or the associated analysis results (step 610). The method may also include adjusting, by the at least one processor, the intermediate analysis interface to present the edited detected object and/or associated analysis results if the third operation is received (step 612). The method may also include receiving a fourth operation from the user to confirm each of the intermediate analysis interfaces (step 614). The method may further include generating an analysis report in the event that a fourth operation is received for all of the intermediate analysis interfaces (step 616).
According to the method for analysis management of cervical images of the embodiment of the present disclosure, at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image, and a cervical histological image of a subject is acquired as the image to be analyzed; an analysis result is determined based on the image to be analyzed and by using a learning network; at least part of the image to be analyzed is presented; each detected object in the at least partial image is marked and the associated analysis result is presented; a third operation in which the user edits the detected object and/or the associated analysis result is received; upon receiving the third operation, the intermediate analysis interface is adjusted to present the edited detected object and/or the associated analysis result; a fourth operation in which the user confirms each intermediate analysis interface is received; an analysis report is generated when the fourth operation has been received for all the intermediate analysis interfaces; and a reduced preview image of the at least partial image is displayed in a picture-in-picture manner. The analysis result, and an intermediate analysis interface associated with it that presents each detected object's image block and attributes, can thus be presented, so that a doctor can make a correct diagnosis based on them. This reduces the possibility of misdiagnosis or missed diagnosis in the analysis management of cervical images, improves the accuracy and convenience of that analysis management, and further improves the quality of medical services.
Fig. 7 shows a block diagram of an apparatus for analysis management of cervical images according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus is a general-purpose data processing device with a general-purpose computer hardware structure, comprising at least a processor 702 and a memory 704 connected by a bus 706. The memory 704 is adapted to store instructions or programs executable by the processor 702. The processor 702 may be a stand-alone microprocessor or a collection of one or more microprocessors. The processor 702 thus implements the processing of data and the control of other devices by executing the instructions stored in the memory 704, so as to perform the method flows of the disclosed embodiments as described above. The bus 706 couples the components described above together and to a display controller 708, a display device, and input/output (I/O) devices 710. An input/output (I/O) device 710 may be a mouse, keyboard, modem, network interface, touch input device, motion sensing input device, printer, or other device known in the art. Typically, the input/output (I/O) devices 710 are connected to the system through an input/output (I/O) controller 712.
The memory 704 may store, among other things, software components such as an operating system, communication modules, interaction modules, and applications. Each of the modules and applications described above corresponds to a set of executable program instructions that perform one or more functions and methods described in embodiments of the invention.
In some embodiments, the apparatus for analysis management of cervical images may be located in a single place or distributed across many places, for example deployed in the cloud, which is not limited by the embodiments of the present disclosure. Accordingly, the display for presenting the results of the analysis management of the cervical image may be local or remote to the apparatus, without limitation.
The above-described flowchart and/or block diagrams of methods, systems, and computer program products according to embodiments of the present disclosure describe various aspects of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Also, as will be appreciated by one skilled in the art, various aspects of the disclosed embodiments may be embodied as a system, method, or computer program product. Accordingly, various aspects of embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects, all of which may generally be referred to herein as a "circuit," "module," or "system." Further, aspects of the disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Any combination of one or more computer-readable media may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to: electromagnetic, optical, or any suitable combination thereof. The computer readable signal medium may be any of the following computer readable media: is not a computer readable storage medium and may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, C++, PHP, Python, and the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer; partly on the user's computer as a stand-alone software package; partly on the user's computer and partly on a remote computer; or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Moreover, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments based on the disclosure with equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations. The elements of the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. Other embodiments may be devised by those of ordinary skill in the art upon reading the above description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (36)

1. An apparatus for analysis management of cervical images, the apparatus comprising at least one processor configured to:
acquiring at least one of a cervical fluid-based cell image, a cervical cell immunohistochemical image and a cervical histological image of a subject as an image to be analyzed;
determining an analysis result based on the image to be analyzed and by utilizing a learning network;
separately presenting, based on the analysis result, at least a partial image of the image to be analyzed in association with an intermediate analysis interface, wherein the intermediate analysis interface presents a corresponding image block and attributes of each detected object in the at least partial image in a table, and the rows and columns of the table respectively represent, as the attributes, the physiological level and the physical morphology type of each detected object.
2. The apparatus of claim 1, wherein the physical morphology type comprises fuzzy clusters.
3. The apparatus of claim 1, wherein at least a portion of the image to be analyzed is presented separately in association with the intermediate analysis interface in the same frame.
4. The apparatus of claim 1, wherein the at least one processor is further configured to:
causing, in a case where one of the at least partial image and the intermediate analysis interface is edited, a corresponding change to be generated in the other.
5. The apparatus of claim 1, wherein the at least one processor is further configured to: receive a review operation of the user.
6. The apparatus of claim 1, wherein the at least one processor is further configured to: generate an analysis report using the analysis result after review by the user.
7. The apparatus according to claim 4, wherein causing the other to generate a corresponding change, in a case where one of the at least partial image and the intermediate analysis interface is edited, specifically includes at least one of:
in a case where the user annotates the at least partial image to mark a detected object that the intermediate analysis interface has missed, automatically supplementing a corresponding image block of the manually identified detected object into the intermediate analysis interface;
receiving manual input, by the user, of attributes for the supplemented image block of the manually identified detected object;
performing NLP processing on the text of the manually input attributes so as to display the attributes of the supplemented image block of the detected object in the intermediate analysis interface;
in a case where the user intends to discard an image block with attributes in the intermediate analysis interface, highlighting both simultaneously once the user provides a manual annotation of the attributes for the corresponding image block on the at least partial image.
8. The apparatus of claim 1, wherein the intermediate analysis interface further presents a slice-level evaluation result, and the at least one processor is further configured to:
receive an editing operation of the user on the slice-level evaluation result;
and, in a case where an editing operation on the slice-level evaluation result is received, correspondingly adjust all analysis results corresponding to the intermediate analysis interface.
9. The apparatus of claim 1, wherein the intermediate analysis interface further provides, in the table, a checkbox in association with each corresponding image block, and the at least one processor is further configured to:
receive a check or uncheck operation of the user on the checkbox;
and, in a case where a check operation on the checkbox is received, confirm the analysis result of the corresponding image block, and in a case where an uncheck operation on the checkbox is received, reject the analysis result of the image block.
10. The apparatus of claim 1, wherein the at least one processor is further configured to:
receiving a first operation of editing a target area in the at least partial image by a user;
and, in a case where the first operation is received, presenting an intermediate analysis interface corresponding to the target area, so that the intermediate analysis interface presents the corresponding image block and attributes of each detected object in the target area.
11. The apparatus of claim 1, wherein the at least one processor is further configured to:
receiving a second operation of editing each detected object in the intermediate analysis interface by the user;
and, in a case where the second operation is received, adjusting the at least partial image to present the edited corresponding image blocks and attributes of the respective detected objects.
12. The apparatus of claim 1, wherein the at least one processor is further configured to:
marking each detected object in the at least partial image and presenting the associated analysis result;
receiving a third operation of the user editing the detected object and/or the associated analysis result;
and, in a case where the third operation is received, adjusting the intermediate analysis interface to present the edited detected object and/or the associated analysis result.
13. The apparatus according to any one of claims 10 to 12, wherein the editing operation comprises one or more of a click operation, an add operation, a modify operation, a delete operation, a sketch operation, and an associate operation.
14. The apparatus of claim 1, wherein the at least one processor is further configured to:
presenting a peripheral image block that includes all detected objects presented by the intermediate analysis interface together with their peripheral area, and taking the peripheral image block as the at least partial image.
15. The apparatus of claim 11, wherein the at least one processor is further configured to:
adjust the at least partial image to include all detected objects edited by the second operation together with their peripheral area.
16. The apparatus of any one of claims 1 to 12, further comprising:
a receiving part configured to receive a fourth operation by which the user confirms each intermediate analysis interface;
a generating part configured to generate an analysis report in a case where the fourth operation is received for all the intermediate analysis interfaces.
17. The apparatus of any one of claims 1 to 12, further comprising:
a display unit configured to display, in a picture-in-picture manner, a reduced-size preview image of a detected object within the at least partial image.
18. A method for analytical management of cervical images, the method comprising:
acquiring, by at least one processor, at least one of a cervical fluid-based cell image, a cervical cellular immunohistochemistry image, and a cervical histology image of a subject as an image to be analyzed;
determining, by the at least one processor, an analysis result based on the image to be analyzed and using a learning network;
presenting, by the at least one processor and based on the analysis result, at least a partial image of the image to be analyzed separately in association with an intermediate analysis interface, wherein the intermediate analysis interface presents a corresponding image block and attributes of each detected object in the at least partial image in a table, and the rows and columns of the table respectively represent, as the attributes, the physiological level and the physical morphology type of each detected object.
19. The method of claim 18, wherein the physical morphology types comprise fuzzy clusters.
20. The method of claim 18, wherein at least a portion of the image to be analyzed is presented separately in association with the intermediate analysis interface in the same frame.
21. The method of claim 18, wherein the at least one processor is further configured to:
causing, in a case where one of the at least partial image and the intermediate analysis interface is edited, a corresponding change to be generated in the other.
22. The method of claim 18, wherein the at least one processor is further configured to: receive a review operation of the user.
23. The method of claim 18, wherein the at least one processor is further configured to: generate an analysis report using the analysis result after review by the user.
24. The method according to claim 21, wherein causing the other to generate a corresponding change, in a case where one of the at least partial image and the intermediate analysis interface is edited, specifically includes at least one of:
in a case where the user annotates the at least partial image to mark a detected object that the intermediate analysis interface has missed, automatically supplementing a corresponding image block of the manually identified detected object into the intermediate analysis interface;
receiving manual input, by the user, of attributes for the supplemented image block of the manually identified detected object;
performing NLP processing on the text of the manually input attributes so as to display the attributes of the supplemented image block of the detected object in the intermediate analysis interface;
in a case where the user intends to discard an image block with attributes in the intermediate analysis interface, highlighting both simultaneously once the user provides a manual annotation of the attributes for the corresponding image block on the at least partial image.
25. The method of claim 18, wherein the intermediate analysis interface further presents a slice-level evaluation result, and the at least one processor is further configured to:
receive an editing operation of the user on the slice-level evaluation result;
and, in a case where an editing operation on the slice-level evaluation result is received, correspondingly adjust all analysis results corresponding to the intermediate analysis interface.
26. The method of claim 18, wherein the intermediate analysis interface further provides, in the table, a checkbox in association with each corresponding image block, and the at least one processor is further configured to:
receive a check or uncheck operation of the user on the checkbox;
and, in a case where a check operation on the checkbox is received, confirm the analysis result of the corresponding image block, and in a case where an uncheck operation on the checkbox is received, reject the analysis result of the image block.
27. The method of claim 18, wherein presenting, by the at least one processor, an intermediate analysis interface in association with the at least partial image based on the analysis results comprises:
receiving, by the at least one processor, a first operation of a user to edit a target region in the at least part of the image;
and presenting, by the at least one processor and in a case where the first operation is received, an intermediate analysis interface corresponding to the target area, so that the intermediate analysis interface presents the corresponding image block and attributes of each detected object in the target area.
28. The method of claim 18, wherein presenting, by the at least one processor, an intermediate analysis interface in association with the at least partial image based on the analysis results comprises:
receiving, by the at least one processor, a second operation of a user to edit each detected object in the intermediate analysis interface;
and, in a case where the second operation is received, adjusting, by the at least one processor, the at least partial image to present the edited corresponding image blocks and attributes of the respective detected objects.
29. The method of claim 18, wherein presenting, by the at least one processor, an intermediate analysis interface in association with the at least partial image based on the analysis results comprises:
marking, by the at least one processor, each detected object in the at least partial image and presenting an associated analysis result;
receiving, by the at least one processor, a third operation of the user to edit the detected object and/or the associated analysis result;
adjusting, by the at least one processor, the intermediate analysis interface to present the edited detected object and/or the associated analysis result if the third operation is received.
30. The method of any one of claims 27 to 29, wherein the editing operation comprises one or more of a click operation, an add operation, a modify operation, a delete operation, a sketch operation, and an associate operation.
31. The method of claim 18, wherein presenting, by the at least one processor, an intermediate analysis interface in association with the at least partial image based on the analysis results comprises:
presenting, by the at least one processor, a peripheral image block that includes all detected objects presented by the intermediate analysis interface together with their peripheral area, and taking the peripheral image block as the at least partial image.
32. The method according to claim 28, wherein said adjusting, by the at least one processor and upon receiving the second operation, the at least partial image to present the edited corresponding image blocks and attributes of each detected object comprises:
adjusting, by the at least one processor, the at least partial image to include all detected objects edited by the second operation together with their peripheral area.
33. The method of any one of claims 18 to 29, further comprising:
receiving a fourth operation by which the user confirms each intermediate analysis interface;
and, in a case where the fourth operation is received for all the intermediate analysis interfaces, generating an analysis report.
34. The method of any one of claims 18 to 29, further comprising:
displaying, in a picture-in-picture manner, a reduced-size preview image of a detected object within the at least partial image.
35. An apparatus for analysis management of a cervical image, comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executable by the processor to perform operations performed by the method for analysis management of a cervical image of any of claims 18 to 34.
36. A computer storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the operations performed by the method for analysis management of cervical images of any of claims 18 to 34.
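As a purely illustrative sketch of the table described in claims 1 and 18 — where each detected object's image block is organized by its physiological level (rows) and physical morphology type (columns), and where a checkbox (claim 9) confirms or rejects each result — the following Python fragment shows one possible data arrangement. All names here (`DetectedObject`, `build_attribute_table`, the example level and morphology labels) are hypothetical and not taken from the patent; the learning-network inference itself is stubbed out with mock detections.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class DetectedObject:
    """One object detected by the learning network (hypothetical structure)."""
    patch_id: str           # identifier of the corresponding image block
    level: str              # physiological level attribute (row of the table)
    morphology: str         # physical morphology type attribute (column of the table)
    confirmed: bool = True  # checkbox state: True = analysis result accepted

def build_attribute_table(objects):
    """Arrange detected objects into a {level: {morphology: [patch_ids]}} grid,
    mirroring a table whose rows are physiological levels and whose columns
    are physical morphology types. Unconfirmed objects are excluded, as when
    the user unchecks a checkbox to reject an analysis result."""
    table = defaultdict(lambda: defaultdict(list))
    for obj in objects:
        if obj.confirmed:
            table[obj.level][obj.morphology].append(obj.patch_id)
    return {level: dict(columns) for level, columns in table.items()}

# Mock detections standing in for the learning network's output.
detections = [
    DetectedObject("p001", "HSIL", "single cell"),
    DetectedObject("p002", "HSIL", "fuzzy cluster"),
    DetectedObject("p003", "LSIL", "single cell", confirmed=False),  # rejected
]
table = build_attribute_table(detections)
# table → {"HSIL": {"single cell": ["p001"], "fuzzy cluster": ["p002"]}}
```

A user interface built over such a grid could re-render the table whenever an object is added, edited, or unchecked, which is one way the synchronized editing recited in claims 4 and 7 might be realized.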
CN202111197630.9A 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium Pending CN113920095A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111197630.9A CN113920095A (en) 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111197630.9A CN113920095A (en) 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium
CN202110754716.0A CN113256628B (en) 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110754716.0A Division CN113256628B (en) 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium

Publications (1)

Publication Number Publication Date
CN113920095A true CN113920095A (en) 2022-01-11

Family

ID=77190606

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111197630.9A Pending CN113920095A (en) 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium
CN202110754716.0A Active CN113256628B (en) 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110754716.0A Active CN113256628B (en) 2021-07-05 2021-07-05 Apparatus and method for analysis management of cervical images, apparatus and storage medium

Country Status (1)

Country Link
CN (2) CN113920095A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114171167B (en) * 2022-02-11 2022-06-03 广州安必平医药科技股份有限公司 Image display method, device, terminal and storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2006254689B2 (en) * 2005-06-02 2012-03-08 Salient Imaging, Inc. System and method of computer-aided detection
US8989458B2 (en) * 2007-10-17 2015-03-24 Kabushiki Kaisha Toshiba Image diagnosis support system and image diagnosis support method
CN101396283A (en) * 2008-10-07 2009-04-01 深圳市蓝韵实业有限公司 Ultrasonic image assistant diagnostic system
JP5486364B2 (en) * 2009-09-17 2014-05-07 富士フイルム株式会社 Interpretation report creation apparatus, method and program
JP4931027B2 (en) * 2010-03-29 2012-05-16 富士フイルム株式会社 Medical image diagnosis support apparatus and method, and program
AU2011202211A1 (en) * 2011-05-12 2012-11-29 Jonathan Chernilo User interface for medical diagnosis
CN102682305B (en) * 2012-04-25 2014-07-02 深圳市迈科龙医疗设备有限公司 Automatic screening system and automatic screening method using thin-prep cytology test
CN103558404A (en) * 2013-11-08 2014-02-05 麦克奥迪(厦门)医疗诊断***有限公司 Automatic cell DNA (deoxyribonucleic acid) detecting and rechecking method based on digital slides
EP3291173A1 (en) * 2016-09-02 2018-03-07 Casio Computer Co., Ltd. Diagnosis assisting device, image processing method in diagnosis assisting device, and program
CN108416379A (en) * 2018-03-01 2018-08-17 北京羽医甘蓝信息技术有限公司 Method and apparatus for handling cervical cell image
CN109978826A (en) * 2019-02-20 2019-07-05 程俊美 A kind of cervical cancer cell pathology arrange negative method intelligence screening system and method
CN110163928B (en) * 2019-05-22 2021-07-09 数坤(北京)网络科技股份有限公司 Image linkage method, device and storage equipment based on blood vessel segmentation and focus
CN112380900A (en) * 2020-10-10 2021-02-19 深圳视见医疗科技有限公司 Deep learning-based cervical fluid-based cell digital image classification method and system
CN112813162B (en) * 2021-01-05 2021-09-21 中山大学附属第五医院 Application of DDX 19A-based method for promoting cervical squamous cell carcinoma metastasis
CN112861916A (en) * 2021-01-13 2021-05-28 武汉希诺智能医学有限公司 Invasive cervical carcinoma pathological image classification method and system
CN112817755B (en) * 2021-01-22 2023-12-19 西安交通大学 Edge cloud cooperative deep learning target detection method based on target tracking acceleration
CN112991360A (en) * 2021-03-01 2021-06-18 北京邮电大学 Pre-cervical cancer pathological histology segmentation optimization method

Also Published As

Publication number Publication date
CN113256628B (en) 2021-10-26
CN113256628A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
Awan et al. Glandular morphometrics for objective grading of colorectal adenocarcinoma histology images
Rawat et al. Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images
Khened et al. A generalized deep learning framework for whole-slide image segmentation and analysis
US11682192B2 (en) Deep-learning systems and methods for joint cell and region classification in biological images
Aubreville et al. Deep learning algorithms out-perform veterinary pathologists in detecting the mitotically most active tumor region
Sheikhzadeh et al. Automatic labeling of molecular biomarkers of immunohistochemistry images using fully convolutional networks
CN109791693B (en) Digital pathology system and related workflow for providing visualized whole-slice image analysis
Sunny et al. A smart tele-cytology point-of-care platform for oral cancer screening
EP3692497B1 (en) Histopathological image analysis
US20200388033A1 (en) System and method for automatic labeling of pathology images
Shi et al. Automated Ki-67 quantification of immunohistochemical staining image of human nasopharyngeal carcinoma xenografts
Marzahl et al. Deep learning-based quantification of pulmonary hemosiderophages in cytology slides
Singhal et al. A deep learning system for prostate cancer diagnosis and grading in whole slide images of core needle biopsies
Negahbani et al. PathoNet introduced as a deep neural network backend for evaluation of Ki-67 and tumor-infiltrating lymphocytes in breast cancer
CN111488921A (en) Panoramic digital pathological image intelligent analysis system and method
JP2018502279A (en) Classification of nuclei in histological images
US20220351860A1 (en) Federated learning system for training machine learning algorithms and maintaining patient privacy
Govind et al. Improving the accuracy of gastrointestinal neuroendocrine tumor grading with deep learning
CN111527519B (en) System and method for generating selective stain segmentation images of cell types of interest
Han et al. Histologic tissue components provide major cues for machine learning-based prostate cancer detection and grading on prostatectomy specimens
CN111095352B (en) Automated method and system for detecting cells in stained sample images
Stenman et al. Antibody supervised training of a deep learning based algorithm for leukocyte segmentation in papillary thyroid carcinoma
Joseph et al. Proliferation Tumour Marker Network (PTM-NET) for the identification of tumour region in Ki67 stained breast cancer whole slide images
JP2021524575A (en) Quantification of signals in stain aggregates
Lanng et al. Quality assessment of Ki67 staining using cell line proliferation index and stain intensity features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination