WO2023121510A1 - Determination of pathology of the chest organs on the basis of x-ray images - Google Patents

Determination of pathology of the chest organs on the basis of x-ray images

Info

Publication number
WO2023121510A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pathology
data
neural network
ray
Prior art date
Application number
PCT/RU2022/050306
Other languages
English (en)
Russian (ru)
Inventor
Александр Сергеевич МОНГОЛИН
Тамерлан Айдын Оглы МУСТАФАЕВ
Original Assignee
Автономная некоммерческая организация высшего образования "Университет Иннополис"
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from RU2021137783A (external priority; published as RU2782518C1)
Application filed by Автономная некоммерческая организация высшего образования "Университет Иннополис"
Publication of WO2023121510A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • a neural network that has been previously trained on a labeled set of images receives a digitized x-ray image of a patient's chest, analyzes it, and makes a prediction about whether a given patient has a pathology, thereby helping the doctor to make an optimal informed decision.
  • the present invention is directed to improving the accuracy of devices and methods for determining chest pathology based on x-ray images.
  • a data receiving unit configured to receive data to be processed comprising a frontal chest x-ray image from the device requesting data processing by communicating therewith;
  • an image preparation unit configured to prepare, using at least one processor, an x-ray image contained in the data to be processed by performing one or more preliminary transformations on it for use by a neural network operating on the basis of at least one processor;
  • the image preparation unit is configured to automatically crop the image so that it covers only the lungs;
  • a report generation unit configured to generate, using at least one processor, a report on the examination conducted in the device, the examination report comprising at least one file with an indication of the result of the processing performed in the pathology prediction unit;
  • the reporting unit is further configured to:
  • the identification of possible areas with signs of pathologies is performed using computer vision methods that analyze the input image of the neural network and the activations within the neural network of the pathology prediction unit that were obtained by running this image through it.
  • a combination of the Grad-CAM (gradient-weighted class activation mapping) and saliency map (significance map) methods is used as the computer vision methods (a sketch of such a combination is given after this list).
  • the indication of the found areas with signs of pathologies is made in the form of a heat map or an outline of the boundaries of the areas.
  • the report generation unit is configured to overlay an indication of the found areas with signs of pathologies on a copy of the input image of the neural network of the pathology prediction unit.
  • the indication of the found areas with signs of pathologies is linked to the size, shape and position of the image, which was directly analyzed by the neural network;
  • the image preparation unit is additionally configured to save the parameters of all transformations performed on the original image obtained in the data to be processed and associated with its resizing, shifting, rotation and cropping;
  • a method for determining the pathology of the chest organs based on the analysis of x-ray images comprising the steps of:
  • an x-ray image contained in the data to be processed is prepared by performing one or more preliminary transformations on it for use by a neural network operating on the basis of at least one processor, and when preparing the image, the image is automatically cropped so that it covers only the lungs;
  • FIG. 1 shows a schematic representation of a medical decision support system according to the present invention.
  • a further figure shows an example of an image to be processed in the pathology detection device.
  • a further figure shows a block diagram of the device for determining the pathology according to the present invention.
  • a further figure shows a flowchart of the method for determining the pathology according to the present invention.
  • the chest health decision support system 100 will be generally described.
  • the medical organization (MO) 110 contains an x-ray imaging device 120 (which may also be referred to interchangeably as an x-ray diagnostic machine, x-ray machine, x-ray unit, x-ray, etc.).
  • a medical specialist using X-ray machine 120 performs an x-ray examination of the chest organs of the examined patient (including fluorography).
  • the medical organization 110 can be a clinic, a polyclinic, a doctor's office, a hospital, a sanatorium, a medical care center, a pharmacy, a mobile unit, a mobile fluorography room, or any other organization, room or installation equipped with a device 120 for obtaining x-ray images.
  • the generated image is stored in a local data store 131, such as a store based on or as part of a PACS (DICOM Image Archiving and Communication System), RIS (Radiological Information System), MIS (Medical Information System) 130, or other system or device suitable for storage of medical data. If necessary, additional target information can be added to the saved image, such as information about the patient, about the study, about the medical organization, about the medical specialist, etc.
  • the data to be processed is then generated using a data communication device 132 such as PACS, RIS, MIS 130, or other suitable open or proprietary system.
  • although the communication device 132 is shown in FIG. 1 as part of the MIS 130, it can also be separate from the MIS 130 or be part of another internal storage system of the medical organization.
  • depersonalization (anonymization, de-identification) is applied to patient information that can in one way or another be considered personal, producing anonymized data from which a third party will not be able to restore the original data without having proper access to it.
  • the fields “PatientName” (patient name), “OtherPatientNames” (other patient names), “PatientID” (patient identifier) and, if necessary, other fields containing personal data or related to such, are anonymized.
  • the possible values of the depersonalized data may be predefined and known to all devices in the system, only to trusted devices, or only to the pathology device 160 so that they can determine whether the transmitted data is depersonalized.
  • the depersonalized data to be processed are transmitted from the medical organization 110 directly or through the central medical information system 150 to the device 160 for determining the pathology.
  • depersonalization can also be performed in the central medical information system 150; in this case, non-anonymized data can first be transmitted from the medical organization 110 to the central medical information system 150, and the depersonalized data to be processed can then be transmitted from it to the device 160 for determining the pathology.
  • the depersonalization process can occur both completely automatically, and if necessary, part of the depersonalization process or the entire depersonalization process can be performed with the participation of a person who can delete or edit data through the appropriate interface of the central medical information system 150.
  • the pathology detecting device 160 analyzes an X-ray image contained in the received data based on AI techniques and makes a prediction as to whether the image contains a pathology. If so, the pathology detecting device 160 indicates areas in the image that contain pathologies. If there is no pathology, then the image does not change.
  • the device 160 for determining the pathology can generate a report (or protocol) on the result of the work, containing a description of the result in the form of textual information.
  • the generated data (one or more images and/or report) is sent back (directly or indirectly) from the pathology device 160 to the medical organization 110 that requested data processing.
  • the results of the work received from the device 160 for determining the pathology are provided, using the viewer 140, to the responsible person, for example a radiologist, the attending physician or another medical specialist who has access to such information and is responsible for receiving and processing it in the medical organization 110; using these results, that person makes a decision about the state of the chest organs, namely the presence or absence of a particular pathology. If necessary, the medical specialist, taking into account the results of the device 160 for determining the pathology, can make a decision on the treatment of the patient. The viewer 140 in FIG. 1 is shown simply as a display 140; however, in the preferred embodiment, it is a physician's automated workstation (AWP).
  • a physician's workstation may be a computer based on an Intel Core i3 processor or equivalent, having 8 GB of RAM, 40 GB of free disk space, a DVD-R/RW CD-ROM drive, a network connection speed of 5 Mbps and a monitor with a screen resolution of 1920x1080; images can be viewed on it using DICOM image viewers or through a web browser.
  • the system 100 helps to improve the accuracy of medical decision making.
  • the data generated by the device 160 for determining the pathology is not sent directly to the medical organization 110 that requested data processing, but first (directly or through the central medical information system 150) to a specialized expert organization 170 that produces medical reports using the results of the operation of the pathology device 160, or to an external radiologist 180 acting as an expert or consultant.
  • the medical organization 110, in response to the sent x-ray image, can receive from the expert organization 170 or from the external radiologist 180 (again, directly or through the central medical information system 150) a ready-made conclusion, or a preliminary conclusion that can be used to make a medical decision.
  • the term "external" in relation to a radiologist means that this doctor is not on the staff of the medical organization 110 that conducted the X-ray examination and requested the processing of the resulting image, and / or is not physically located in this organization and / or does not have access to MIS 130 of this organization.
  • the term “radiologist” in the context of the present invention implies that this is a medical specialist who has a proven qualification (knowledge, skills, abilities and experience) in the analysis (interpretation) of the results of an x-ray examination.
  • Specialists of the expert organization 170 and external radiologists 180 can access data from a specialized workstation (doctor's workstation) or using another suitable device, such as a computer, laptop, smartphone, tablet, VR helmet (virtual reality helmet), VR glasses, etc.
  • when the medical decision support system 100 includes a plurality of medical organizations 110 and/or a plurality of pathology devices 160, as well as expert organizations 170 or external radiologists 180, it is advisable to use a central medical information system 150.
  • the term “central” in this case indicates, first of all, not that this is a single central server that closes all possible connections, but that the central medical information system 150 occupies a place in the middle, in the center between the other participants in the medical decision support system 100, acting as an intermediate system for collecting, storing and redistributing data.
  • the central medical information system 150 can be both concentrated (centralized) and distributed, including those implemented in the cloud.
  • Images awaiting processing may be grouped into batches (series) for transfer to the device 160 for determining the pathology.
  • the pathology detecting device 160 can perform batch processing of the received images. Grouping into packages can be performed both in the medical organization 110 and in the central medical information system 150.
  • the central medical information system 150 can change the size and content of the packages received from the medical organizations 110 - for example, sort images by their resolution in pixels, by priority or other parameters and form new packages, and if necessary, images from different medical organizations 110 can be added to the same package.
  • the device 200 for detecting chest pathology based on x-ray images will be described in detail. It should be noted that the chest pathology device 200 fully corresponds to the pathology device 160 shown in FIG. 1, and has the same functions and capabilities insofar as they are applicable and do not conflict with the descriptions in this section.
  • the pathology determination device 200 includes a data receiving unit 210, a data storage unit 220, a data validation unit 230, an image preparation unit 240, a pathology prediction unit 250, a report generation unit 260, a data transmission unit 270, and a learning unit 280. Depending on the particular application, some of these blocks may be omitted, as will be explained in more detail later in this document.
  • the data receiving unit 210 receives data to be processed containing an x-ray image of the chest of the patient being examined.
  • the receiving unit 210 may be a separate chip, network card, or other suitable means capable of communicating with external devices in a wired and/or wireless manner, such as over a local area network (LAN) protocol, the Internet, etc. using Ethernet, fiber, WiFi, 4G, etc. technologies.
  • the data storage unit 220 stores the data received by the data receiving unit 210 so that other units of the apparatus 200 can use it at the appropriate time. Received data can only be retained while it is being processed and erased when its processing is completed. For these purposes, in one embodiment, a short-term storage device such as RAM or the like is used. In another embodiment, the received data may be stored for a longer period of time than the immediate processing time, if necessary, and then a long-term storage device such as a hard disk or the like may be used.
  • the data storage unit 220 may store short-term or long-term data and/or files resulting from or during the operation of other units of the device.
  • the data validator 230 receives the data to be processed directly from the data receiving unit 210 or retrieves it from the data storage unit 220. The data validator 230 then checks whether the received data is suitable for processing.
  • part of the validation operations can be performed on the side of the medical organization itself or on the side of the central medical information system 150.
  • the data validator 230 may not perform these operations, which can simplify and speed up processing, and thereby improve productivity.
  • the pathology device 200 may not know which validations have already been performed, or it may re-perform them for further revalidation. This can improve the processing quality.
  • an attempt is made to extract an image from the data to be processed. If the attempt fails, then it is concluded that the image file is corrupted or cannot be read. The reason for this may be factors such as the absence of an image in the data, the impossibility of reading metadata, the presence of anomalous or unaccounted-for tag values, etc. In this case, the data is not transferred for processing, and a corresponding indication is created for it. It is preferable to perform this operation in the data validation block 230, since even if it has already been performed by other devices, the data may be corrupted in the process of being sent to the pathology device 200 or may be in a format that, for one reason or another, is not currently supported by the device 200.
  • in the pathology detecting apparatus 200, it can also be checked whether the data to be processed matches the type of processing that is performed in this device 200 to determine the pathology. For example, if the pathology detecting apparatus 200 is to process AP chest X-rays, a check may be made whether the attached data contains an AP chest X-ray. Various implementations of such a check are known to those skilled in the art and may include, for example, a pre-trained neural network that produces the appropriate classification, or other computer vision methods. If the check fails, then such data is not transferred for processing, and a corresponding indication is created for it. Examples of such errors, when the study is claimed to be a chest X-ray in a direct projection but in fact is not and cannot be processed, are shown in FIG. 4 and FIG. 5.
  • a check can be made whether the data is depersonalized. To do this, it is checked whether the fields that relate to personal data contain any values, and if so, whether these values are depersonalized. For example, if the "PatientName" field is empty or contains the predefined value "0" or "Anonymous", as indicated in FIG. 2 and FIG. 5, then this field is considered to be depersonalized, while a field with the value "Venus de Milo", as indicated in FIG. 4, is not depersonalized. If the data is not depersonalized, then the corresponding image is not transferred for processing, and an indication of the impossibility of processing is generated for it.
  • the pathology device 200 may support several different valid values for the depersonalized data, in which case the data validator 230 may check whether the field values in the received data match at least one of the respective supported values. If a value does not match a predefined valid value, then it is assumed that the received data is not depersonalized. In this case, the data is not transferred for processing, and a corresponding indication is created for it (a minimal validation sketch is given after this list).
  • it can also be checked whether the captured image has a size (pixel resolution) equal to or greater than a predetermined minimum size supported by the device 200, such as 1024x1024 pixels or 800x800 pixels, depending on the requirements of the particular application. If the source image is smaller than the minimum size, then the device 200 may not be accurate enough, so the image is not submitted for processing and an indication is generated for it.
  • a check can be made whether the image is a positive or a negative.
  • Various implementations of such a check are known to those skilled in the art and may include, for example, a pre-trained neural network that produces the appropriate classification, or other computer vision methods. If the inspection reveals that the image does not meet the input requirements of the pathology detection device 200, then the image preparation unit 240 can subsequently perform an appropriate conversion of the image to a negative or a positive.
  • Validation makes it possible to filter out data that cannot be analyzed or whose processing would have a predictably low accuracy. Accordingly, the load on the most resource-intensive part of the analysis is reduced and the prediction accuracy is increased. In addition, screening out images that are not depersonalized ensures that no personal data is processed on the device 200 side, which reduces the requirements for its implementation and certification.
  • the image preparation unit 240 receives from the data storage unit 220 and/or from the data validation unit 230 the validated chest x-ray image and performs preliminary transformations on it in order to prepare it for direct use in the pathology prediction unit 250.
  • the image preparation performed in block 240 may be as follows.
  • the aspect ratio of the cropped lung image depends on the size and shape of the lungs. For further processing, it is required to bring it to a single format. To do this, the image is resized to the second size.
  • the second image size is predefined, for example as a square of 224x224, 320x320 or 512x512 pixels. The selected value depends on the requirements of the specific neural network used further (see the image preparation sketch after this list).
  • the learning process of the neural network is controlled by the learning block 280 by applying a learning algorithm to the trained neural network using the training data.
  • Instructions (markup) from the doctor are used as the ground truth by the neural network being trained.
  • the markup may be a number: "1" (sick).
  • the markup may be a different value, such as "healthy".
  • images are prepared in the same way as described above with respect to block 240, and fed to the input of the neural network, using images with both the presence and absence of pathologies.
  • the neural network calculates predictions for one or more images. These predictions are compared with the indication of the true presence/absence of pathology, and the value of the loss function is calculated (how badly the neural network was wrong in detecting the presence of pathology). Then, using the gradient descent method and the backpropagation algorithm, all weights (weight coefficients) of the neural network are changed, in accordance with the selected learning rate parameter, in the direction opposite to the calculated gradient in order to minimize the error on the current image(s) (a minimal training loop sketch is given after this list).
  • a pathology image can be generated for those x-ray images for which the analysis revealed a probability of pathology that exceeds a predetermined threshold probability of pathology, which indicates that the radiologist should pay attention to such an image.
  • the threshold probability of the presence of a pathology in the general case is 0.5 (or 50%), but depending on the accuracy of a particular trained model and the degree of confidence in it, it can vary in practical application both up and down.
  • a medical organization may specialize in the treatment of diseases whose early detection is critical in terms of treatment prognosis, so it may request pathology visualization images if the device detects a probability of pathology of only 10% or more.
  • an indication of the found signs of pathologies in the form of a heat map or the boundaries of pathologies can be superimposed on the original x-ray image obtained in the data to be processed.
  • the parameters of all transformations performed on the original image and associated with resizing, shifting, rotating and cropping it are stored by the image preparation block 240, for example in the storage block 220 (these parameters are used in the overlay sketch given after this list).
  • a visualization of the pathology is obtained in the form of a heat map or the boundaries of pathologies with reference to the size, shape and position of the image, which was directly analyzed by the neural network.
  • the text protocol of the study contains the results of the work and a description of the operation of the device 200 in the form of textual information, for example, in CSV format.
  • the protocol can be generated in the form of a DICOM Structured Report (structured DICOM report).
  • the text protocol of the study may contain:
  • the pathology determination method 400 may comprise step S410, in which data to be processed is received containing an X-ray image of the chest of the patient being examined.
  • the pathology determination method 400 may also include step S440, which prepares an image contained in the data to be processed for use by a neural network by performing one or more preliminary transformations on it.
  • the pathology determination method 400 may also comprise a step S460 of generating an examination report containing at least one of the following: a) a pathology visualization image; b) a text protocol of the study.
  • the proposed method for determining the pathology of the chest organs provides increased accuracy of automatic determination of the probability of pathology, reduces the requirements for the qualification of medical personnel and reduces the influence of the human factor (attentiveness, fatigue, responsibility).
  • the result of the study provides a comprehensive set of information necessary for making the correct medical decision with increased speed and accuracy.
  • Devices and methods according to the present invention can be used to process x-ray images of chest organs in order to detect signs of pathologies in them.
  • One or more transmission units or devices (transmitters) described herein and one or more receiving units or devices (receivers) may be physically implemented in the same transceiver unit or device or in different blocks or devices.
  • a device or a transmission unit in this document may be referred to as a device or unit having the functions of not only transmitting, but also receiving data, information and/or signals.
  • similarly, a receiving device or unit referred to in this document may also have the functions of not only receiving but also transmitting data, information and/or signals.
  • DSP: digital signal processor; ASIC: application-specific integrated circuit; FPGA: field-programmable gate array; PLD: programmable logic device.
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices (eg, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors together with a DSP core, or any other similar configuration).
  • the functions described herein may be implemented in hardware, software running on one or more processors, firmware, or any combination of the foregoing.
  • the hardware and software implementing the functions may also be physically located in different locations, including such a distribution that parts of the functions are implemented in different physical locations, that is, distributed processing or distributed computing may be performed.
  • multi-threaded data processing can be performed, which in a simple representation can be expressed in the fact that the entire set of data to be processed is divided into a set of subsets , and each processor core performs processing on its assigned subset of data.
  • computer-readable media may include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electronically erasable programmable read-only memory (EEPROM) and flash memory.
  • forms of random access memory (RAM) mentioned herein include static RAM (SRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate SDRAM (DDR SDRAM), accelerated-speed synchronous dynamic RAM (ESDRAM), synchronous link DRAM (SLDRAM) and direct access bus RAM (DR RAM).
  • At least one of the steps in the method or blocks in the device may use an artificial intelligence (AI) model to perform the respective operations.
  • the AI-related function may be executed via a processor and non-volatile and/or volatile memory.
  • One or more processors direct the processing of input data in accordance with a predetermined operating rule or artificial intelligence (AI) model stored in non-volatile and/or volatile memory.
  • a predetermined operating rule or artificial intelligence model can be obtained through training.
  • the processor may perform a pre-processing operation on the data to convert it into a form suitable for use as input to the artificial intelligence model.
  • An artificial intelligence model may include multiple neural network layers.
  • Each of the multiple layers of the neural network includes a plurality of weights (coefficients) and performs the operation of that layer by computing, using the plurality of weights of that layer, on the input data or on the calculation result of the previous layer.
  • neural networks include, but are not limited to, the Convolutional Neural Network (CNN), Deep Neural Network (DNN), Recurrent Neural Network (RNN), Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), Bidirectional Recurrent Deep Neural Network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
  • a learning algorithm is a method of training a predetermined target device (e.g., a GPU- or NPU-based neural network) using a set of training data to call, enable, or control the target device to perform a determination or prediction.
  • Examples of learning algorithms include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
  • some or all elements/blocks/modules of the proposed device are located in a common housing, can be placed on the same frame/structure/printed circuit board/chip, and are structurally connected to each other through assembly operations and functionally through communication lines.
  • the mentioned communication lines or channels are typical communication lines known to specialists, the material implementation of which does not require creative efforts.
  • the communication link may be a wire, a set of wires, a bus, a track, a wireless link (inductive, RF, infrared, ultrasonic, etc.). Communication protocols over communication lines are known to those skilled in the art and are not disclosed separately.
  • the functional connection of elements should be understood as a connection that ensures the correct interaction of these elements with each other and the implementation of one or another functionality of the elements.
  • Particular examples of functional communication may be communication with the ability to exchange information, communication with the ability to transmit electric current, communication with the ability to transmit mechanical motion, communication with the ability to transmit light, sound, electromagnetic or mechanical vibrations, etc.
  • the specific type of functional connection is determined by the nature of the interaction of the mentioned elements, and, unless otherwise indicated, is provided by well-known means, using principles well-known in the art.
  • the design of the elements of the proposed device is known to specialists in this field of technology and is not described separately in this document, unless otherwise indicated.
  • the elements of the device can be made from any suitable material. These components can be manufactured using known methods including, by way of example only, machining, investment casting, crystal growth. Assembly, connection, and other operations as described herein are also within the knowledge of a person skilled in the art, and thus will not be explained in more detail here.
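For illustration, the sketches below restate several of the steps referenced above as minimal Python examples; they are assumptions based on the description, not the claimed implementation. The first sketch covers the validation performed by the data validation unit 230: trying to extract the image, checking that personal-data fields such as "PatientName", "OtherPatientNames" and "PatientID" contain only accepted depersonalized values, and checking the minimum pixel resolution. It assumes DICOM input read with the pydicom library; the function name, the accepted anonymized values and the size threshold follow the examples given above but are otherwise illustrative.

```python
# Minimal sketch of the data validation described for unit 230 (assumptions noted above).
import pydicom

PERSONAL_TAGS = ("PatientName", "OtherPatientNames", "PatientID")
ACCEPTED_ANON_VALUES = {"", "0", "Anonymous"}   # predefined depersonalized values from the examples
MIN_SIZE = (1024, 1024)                         # example minimum supported resolution


def validate_study(path: str) -> tuple[bool, str]:
    """Return (is_valid, reason); any failure means the image is not sent for processing."""
    # 1. Try to extract the image; a failure means the file is corrupted or unreadable.
    try:
        ds = pydicom.dcmread(path)
        pixels = ds.pixel_array
    except Exception as exc:
        return False, f"image cannot be read: {exc}"

    # 2. Check that personal-data fields are absent or contain only accepted anonymized values.
    for tag in PERSONAL_TAGS:
        value = str(getattr(ds, tag, "") or "")
        if value not in ACCEPTED_ANON_VALUES:
            return False, f"field {tag} is not depersonalized"

    # 3. Check the minimum pixel resolution supported by the device.
    if pixels.shape[0] < MIN_SIZE[0] or pixels.shape[1] < MIN_SIZE[1]:
        return False, "image resolution is below the supported minimum"

    # (A projection-type check, e.g. with a pre-trained classifier, would also be performed here.)
    return True, "ok"
```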
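The second sketch covers image preparation in unit 240: cropping the image so that it covers only the lungs, resizing it to the predefined second size (224x224, 320x320 or 512x512 pixels are the examples given above) and saving the transformation parameters. The `segment_lungs` helper is a hypothetical placeholder for whatever lung-detection step is actually used; the rest is likewise an illustration under stated assumptions.

```python
# Minimal sketch of image preparation for unit 240 (the lung detector is a placeholder).
import numpy as np
import cv2

TARGET_SIZE = (512, 512)   # the predefined "second size"; 224x224 or 320x320 are also mentioned


def segment_lungs(image: np.ndarray) -> tuple[int, int, int, int]:
    """Hypothetical placeholder returning (y0, x0, y1, x1) of the lung bounding box;
    in practice this would be a segmentation model or another computer vision method."""
    h, w = image.shape[:2]
    return 0, 0, h, w   # trivial fallback: the whole image


def prepare_image(image: np.ndarray) -> tuple[np.ndarray, dict]:
    """Crop the image to the lungs, resize it to TARGET_SIZE and return the transform parameters."""
    y0, x0, y1, x1 = segment_lungs(image)
    cropped = image[y0:y1, x0:x1]
    resized = cv2.resize(cropped, TARGET_SIZE, interpolation=cv2.INTER_AREA)

    # The parameters of all transformations are saved so that heat maps or boundaries
    # produced on the network input can later be mapped back onto the original image.
    transform = {"crop_box": (y0, x0, y1, x1),
                 "scale_x": (x1 - x0) / TARGET_SIZE[0],
                 "scale_y": (y1 - y0) / TARGET_SIZE[1]}
    return resized, transform
```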
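The third sketch corresponds to the training procedure of learning unit 280, in which the network's predictions are compared with the doctor's markup, a loss value is calculated and all weights are updated by backpropagation and gradient descent with a chosen learning rate. PyTorch is assumed; the loss function, optimizer and hyperparameters are not specified in the description and are illustrative only.

```python
# Minimal supervised training loop matching the description of learning unit 280.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader


def train(model: nn.Module, loader: DataLoader, epochs: int = 10, lr: float = 1e-4) -> None:
    criterion = nn.BCEWithLogitsLoss()                    # measures how badly the network was wrong
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        for images, labels in loader:                     # labels: 1 = pathology, 0 = no pathology
            optimizer.zero_grad()
            logits = model(images).squeeze(1)             # predictions for the batch of images
            loss = criterion(logits, labels.float())      # compare with the doctor's markup
            loss.backward()                               # backpropagation of the error
            optimizer.step()                              # gradient descent step on all weights
```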
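The fourth sketch illustrates localization of possible pathology areas as a combination of Grad-CAM and a saliency map computed from the network input and its internal activations, together with the probability threshold (0.5 in the general case). How exactly the two maps are merged, and which layer is used, is not specified in the description; the choices below are assumptions.

```python
# Sketch of combining Grad-CAM and a saliency map to highlight suspected pathology areas.
import torch
import torch.nn.functional as F


def localize(model, image, target_layer, threshold: float = 0.5):
    """image: tensor of shape (1, C, H, W) prepared as for the pathology prediction unit."""
    activations, gradients = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

    image = image.clone().requires_grad_(True)
    model.eval()
    logit = model(image).squeeze()
    prob = torch.sigmoid(logit)
    model.zero_grad()
    logit.backward()
    h1.remove(); h2.remove()

    # Grad-CAM: weight each activation channel of the target layer by its average gradient.
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)

    # Saliency map: absolute gradient of the prediction with respect to the input pixels.
    saliency = image.grad.abs().max(dim=1, keepdim=True).values

    # Combine the two maps and normalize to [0, 1] to obtain a heat map.
    heat = cam * saliency
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)

    has_pathology = prob.item() >= threshold   # the 0.5 threshold mentioned above
    return has_pathology, prob.item(), heat[0, 0].detach()
```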
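Finally, because the transformation parameters are saved by the image preparation unit, the heat map produced on the network input can be mapped back and overlaid on the original x-ray image. The last sketch shows that back-mapping, reusing the hypothetical `transform` dictionary from the preparation sketch and assuming an 8-bit grayscale original; the color map and blending weight are arbitrary choices.

```python
# Sketch of overlaying the heat map on the original image using the saved transform parameters.
import numpy as np
import cv2


def overlay_on_original(original: np.ndarray, heat: np.ndarray,
                        transform: dict, alpha: float = 0.4) -> np.ndarray:
    """heat: map in [0, 1] with the size of the network input; transform: see prepare_image()."""
    y0, x0, y1, x1 = transform["crop_box"]

    # Resize the heat map back to the cropped region and place it into an empty map
    # with the shape of the original image.
    heat_crop = cv2.resize(heat.astype(np.float32), (x1 - x0, y1 - y0))
    full = np.zeros(original.shape[:2], dtype=np.float32)
    full[y0:y1, x0:x1] = heat_crop

    # Blend a color-mapped version of the heat map with the grayscale original.
    color = cv2.applyColorMap((full * 255).astype(np.uint8), cv2.COLORMAP_JET)
    base = cv2.cvtColor(original.astype(np.uint8), cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(base, 1.0 - alpha, color, alpha, 0.0)
```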

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention relates to the field of image processing, and more specifically to a device and a method for determining a pathology based on the analysis of x-ray images. A device is provided that comprises: a unit for receiving data to be processed comprising an x-ray image of the chest organs; a unit for preparing the image contained in the data to be processed for use by a neural network by performing one or more preliminary transformations on it; a pathology prediction unit for determining the presence or absence of a pathology in the image using said neural network; a unit for generating a report on the examination conducted in said device; and a unit for transmitting the report to the device that requested the data processing. The invention makes it possible to increase the accuracy of automatic determination of the probability of a pathology and to make a medical decision more quickly and more accurately.
PCT/RU2022/050306 2021-12-20 2022-09-28 Determination of pathology of the chest organs on the basis of x-ray images WO2023121510A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
RU2021137783 2021-12-20
RU2021137783A RU2782518C1 (ru) 2021-12-20 Device and method for determining the pathology of the chest organs on the basis of x-ray images

Publications (1)

Publication Number Publication Date
WO2023121510A1 (fr)

Family

ID=86903460

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/RU2022/050306 WO2023121510A1 (fr) 2021-12-20 2022-09-28 Determination of pathology of the chest organs on the basis of x-ray images

Country Status (1)

Country Link
WO (1) WO2023121510A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180068083A1 (en) * 2014-12-08 2018-03-08 20/20 Gene Systems, Inc. Methods and machine learning systems for predicting the likelihood or risk of having cancer
US20200058123A1 (en) * 2018-08-19 2020-02-20 Chang Gung Memorial Hospital, Linkou Method and system of analyzing medical images
WO2021248187A1 (fr) * 2020-06-09 2021-12-16 Annalise-Ai Pty Ltd Systèmes et procédés d'analyse automatisée d'images médicales

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ABOULHOSN JAMIL: "Rotational angiography and 3D overlay in transcatheter congenital interventions", INTERVENTIONAL CARDIOLOGY, vol. 5, no. 4, 1 August 2013 (2013-08-01), pages 405 - 410, XP093077447, ISSN: 1755-5302, DOI: 10.2217/ica.13.30 *

Similar Documents

Publication Publication Date Title
US11553874B2 (en) Dental image feature detection
US11416747B2 (en) Three-dimensional (3D) convolution with 3D batch normalization
US9846938B2 (en) Medical evaluation machine learning workflows and processes
US20190088359A1 (en) System and Method for Automated Analysis in Medical Imaging Applications
US11651850B2 (en) Computer vision technologies for rapid detection
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
Fareed et al. ADD-Net: an effective deep learning model for early detection of Alzheimer disease in MRI scans
CN113889229A Method for constructing medical imaging diagnostic criteria based on human-machine combination
Jain et al. Early detection of brain tumor and survival prediction using deep learning and an ensemble learning from radiomics images
RU2782518C1 Device and method for determining the pathology of the chest organs on the basis of x-ray images
WO2023121510A1 Determination of pathology of the chest organs on the basis of x-ray images
RU2789260C1 Medical decision support system based on the analysis of medical images
EA045328B1 Device and method for determining the pathology of the chest organs on the basis of x-ray images
RU2813938C1 Device and method for determining the boundaries of a pathology in a medical image
RU2806982C1 Device and method for the analysis of medical images
Sangulagi et al. Detection of Covid-19 from Chest X-ray images
Beniwal et al. COVID Detection Using Chest X-ray Images Using Ensembled Deep Learning
EA044868B1 Device and method for determining pathology based on the analysis of medical images
Padmapriya et al. Computer-Aided Diagnostic System for Brain Tumor Classification using Explainable AI
EP4273796A1 Processing of spectral image data generated by a computed tomography scanner
EP4369284A1 Image enhancement using generative machine learning
Dubey et al. Enhancing Brain Tumor Detection Using Convolutional Neural Networks in Medical
Gancheva et al. Medical X-ray Image Classification Method Based on Convolutional Neural Networks
Bejarano The Benefits of Artificial Intelligence in Radiology: Transforming Healthcare through Enhanced Diagnostics and Workflow Efficiency
Lu et al. Improving Brain Tumor MRI Image Classification Prediction based on Fine-tuned MobileNet.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22912085

Country of ref document: EP

Kind code of ref document: A1