WO2014072928A1 - Enabling interpretation of a medical image - Google Patents

Enabling interpretation of a medical image

Info

Publication number
WO2014072928A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
task
interpretation
user
medical image
Prior art date
Application number
PCT/IB2013/059967
Other languages
French (fr)
Inventor
Iwo Willem Oscar Serlie
Merlijn Sevenster
Zarko Aleksovski
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2014072928A1 publication Critical patent/WO2014072928A1/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00 ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the invention relates to a system and method for assisting a user in carrying out an interpretation of a medical image.
  • the invention further relates to a workstation and an imaging apparatus comprising said system, and to a computer program product comprising instructions for causing a processor system to perform said method.
  • the interpretation of medical images is a frequently occurring task in medical practice.
  • a clinician may interpret a medical image to answer his/her own clinical question.
  • Another example is radiology reporting.
  • a radiologist may be tasked with generating a radiology report to address a clinical question of a clinician.
  • the radiologist typically provides input to the radiology report by reading the medical image and dictating observations and findings.
  • a problem of relying on guidelines is that an image interpreter can easily deviate, inadvertently or advertently, from the structured image interpretation.
  • a first aspect of the invention provides a system for assisting a user in carrying out an interpretation of a medical image, the system comprising:
  • a task interface for obtaining task data representing a plurality of image interpretation tasks, the plurality of image interpretation tasks, when carried out by the user, providing the interpretation of the medical image;
  • a progress unit for enabling the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task;
  • an image processor for generating an output image based on the medical image, the output image including a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task;
  • a display output connectable to a display for displaying the output image.
  • a workstation and imaging apparatus comprising the system set forth.
  • a method for assisting a user in carrying out an interpretation of a medical image comprising:
  • said generating comprises: establishing a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task.
  • a computer program product comprising instructions for causing a processor system to perform the method set forth.
  • the aforementioned measures provide an image interface which obtains the medical image, e.g., from an internal or external storage medium.
  • a task interface is provided which obtains task data.
  • the task data constitutes a data representation of a plurality of image interpretation tasks.
  • Each of the plurality of image interpretation tasks is a task to be carried out by the user, with the task involving the user interpreting (parts of) the medical image.
  • the plurality of image interpretation tasks effectively constitutes a workflow.
  • a progress unit is provided to enable the user to progress through the plurality of image interpretation tasks, e.g., automatically or based on user input. For example, the progress unit may estimate or learn from user input that an image interpretation task has been completed by the user.
  • a current image interpretation task is established every time, i.e., after each progression.
  • An image processor is provided which generates an output image based on the medical image.
  • the output image represents or comprises the medical image.
  • the image processor establishes a visual guidance in the output image.
  • the visual guidance is image-based, i.e., constituted by pixels, voxels, etc., and is established such that it visually guides the user towards carrying out the current image interpretation task.
  • the image processor 160 may obtain the current image interpretation task, or information indicative of said task, from the progress unit 140.
  • the aforementioned measures have the effect that an output image is generated from the medical image, with the output image comprising a visual guidance which guides the user towards carrying out the current image interpretation task.
  • the visual guidance is thus part of the output image.
  • the user does not need to remember the current image interpretation task as he is provided with the visual guidance automatically when viewing the output image.
  • the user does not need to divert his/her attention from the output image, e.g., to view a guideline or report template provided separately from the medical image.
  • it is convenient for the user to adhere to a structure when carrying out an interpretation of the medical image.
  • the task data is indicative of a region of interest that is of relevance in the current image interpretation task, and the visual guidance is arranged for visually guiding the user towards the region of interest.
  • the task data may be indicative of a region of interest in that it may task the user with, e.g., inspecting a particular organ.
  • the image processor is arranged for establishing the visual guidance by modifying a content of the medical image.
  • the visual guidance is thus provided in the output image by modified content of the medical image, i.e., content having been modified so as to visually guide the user towards carrying out the current image interpretation task.
  • the user obtains the visual guidance automatically when viewing the content of the medical image. It is thus not needed to provide the visual guidance separately.
  • the image processor is arranged for establishing the visual guidance by detecting the region of interest in the medical image, and highlighting the region of interest in the medical image.
  • the part of the medical image which is of relevance to the current image interpretation task is thus highlighted to enable the user to easily carry out the current image interpretation task.
  • the user is less likely to view other parts of the medical image which may otherwise result in an unstructured interpretation.
  • the image processor is arranged for highlighting the region of interest by masking parts of the medical image which do not comprise the region of interest.
  • the user is discouraged or even prevented from viewing parts of the medical image which are of no or lesser relevance to the current image interpretation task.
  • an unstructured interpretation of the medical image, for example due to the user experiencing the aforementioned 'instant happiness' bias, is less likely, or even entirely prevented.
  • the region of interest itself is not modified, which otherwise may hinder a medical interpretation of said region.
  • the image processor is arranged for detecting characters in the medical image to enable detecting patient identifiers embedded in the medical image as the region of interest.
  • the system thus is enabled to visually guide the user towards patient identifiers which are deemed to be of relevance to the current inspection task.
  • the image processor is arranged for establishing the visual guidance by establishing a visual representation of the current image interpretation task in or next to the medical image.
  • the visual guidance is thus provided by a visual representation of the current image interpretation task, e.g., a textual or graphical representation.
  • the image processor is arranged for establishing the visual guidance by including a plurality of visual representations in the output image, each representing a respective one of the plurality of image interpretation tasks, and highlighting the visual representation of the current image interpretation task.
  • the user is provided with an overview of the plurality of image interpretation tasks, enabling the user to see, e.g., previous and next tasks.
  • the current image interpretation task is highlighted so as to visually guide the user towards carrying out the current image interpretation task.
  • the progress unit is arranged for i) obtaining user input indicative of a completion of the current image interpretation task, and ii) progressing through the plurality of image interpretation tasks based on said completion. The user is thus provided with control over when to progress through the plurality of image interpretation tasks.
  • the user input is further indicative of an outcome of the current image interpretation task, and the progress unit is arranged for progressing through the plurality of image interpretation tasks further based on said outcome.
  • Certain image interpretation tasks may be conditional on an outcome of a previous image interpretation task.
  • the progress unit is arranged to enable said conditional progressing.
  • the task data is indicative of a potential outcome of the current image interpretation task, and the progress unit is arranged for querying the user for said potential outcome.
  • a potential outcome of an image interpretation task may be identified automatically, i.e., in separation of the user manually carrying out the image interpretation tasks. For example, if the user is tasked with verifying a patient's identity by interpreting patient identifiers comprised in the medical image, the presumed patient identity may be available to the system, e.g., by means of image metadata provided as part of the task data.
  • the progress unit can control the progressing through the plurality of image interpretation tasks accordingly.
  • the progress unit may abort progressing through the remainder of the plurality of interpretation tasks.
  • the system comprises a radiology input for enabling the user to generate a structured report based on the plurality of image interpretation tasks.
  • the report is structured in that it is based on the plurality of image interpretation tasks.
  • the user can generate a structured report on the basis of carrying out the structured image interpretation.
  • the structured report is obtained by structuring a manner in which the user interprets the medical image and thus obtains input for the report.
  • the radiology input is arranged for automatically filling in the outcome of the current image interpretation task in a report template for the structured report.
  • the task interface is arranged for obtaining the task data from at least one of the group of: an image interpretation guideline, a report template, and a reporting guideline. Said sources are well suited for structuring the image interpretation.
  • a person skilled in the art will appreciate that the method may be applied to multi-dimensional image data, e.g. to two-dimensional (2-D), three-dimensional (3-D) or four-dimensional (4-D) images.
  • a dimension of the multi-dimensional image data may relate to time.
  • a three-dimensional image may comprise a time domain series of two-dimensional images.
  • the image may be acquired by various acquisition modalities such as, but not limited to, standard X-ray Imaging, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Nuclear Medicine (NM).
  • Fig. 1 shows a system for enabling a user to carry out a structured image interpretation, and a display for displaying an output image of the system;
  • Fig. 2 shows a method according to the present invention;
  • Fig. 3 shows a computer program product according to the present invention;
  • Fig. 4 shows a workflow from a guideline for interpreting a shoulder X-ray image, the workflow being constituted by a plurality of image interpretation tasks;
  • Fig. 5a shows a shoulder X-ray image comprising patient identifiers;
  • Fig. 5b shows the shoulder being masked so as to highlight the patient identifiers, with the user being queried on the patient identifiers;
  • Fig. 5c shows parts of the shoulder being masked to highlight the humerus, with the user being queried on a completion of a current image interpretation task; and
  • Fig. 6 shows a textual representation of a plurality of image interpretation tasks next to the medical image, with a current image interpretation task being highlighted.
  • Fig. 1 shows a system 100 for enabling a user to carry out a structured interpretation of a medical image.
  • the system 100 comprises an image interface 110 for obtaining the medical image 112.
  • the image interface 110 may be connectable to an external storage database 115 such as, e.g., a Picture Archiving and Communication System (PACS).
  • the system 100 further comprises a task interface 120 for obtaining task data 122 representing a plurality of image interpretation tasks.
  • the plurality of image interpretation tasks, when carried out by the user, provide a structured interpretation of the medical image.
  • the task interface 120 may be connectable to an external medical database 125.
  • the task interface 120 may obtain the task data 122 from, or in the form of, e.g., an image interpretation guideline, a report template, or a reporting guideline as stored on the external medical database 125.
  • the system 100 further comprises a progress unit 140 for enabling the user to progress through the plurality of image interpretation tasks.
  • the progress unit 140 is shown to receive the task data 122 from the task interface 120.
  • a current image interpretation task 310, 410 is established by the progress unit 140.
  • the system 100 further comprises an image processor 160 for generating an output image 162-166 based on the medical image.
  • the image processor 160 is shown to receive the medical image 112 from the image interface 110.
  • the image processor 160 is arranged for, as part of said generating, establishing a visual guidance in the output image 162-166 for visually guiding the user towards carrying out the current image interpretation task.
  • the image processor 160 is shown to receive the current image interpretation task 310, 410 from the progress unit 140.
  • the image processor 160 may receive the task data 122 from the task interface 120, whilst receiving a task identifier from the progress unit 140 enabling the image processor 160 to identify the current image interpretation task 310, 410 from the task data 122.
  • the system 100 further comprises a display output 170 which is connectable to a display 175.
  • the display output 170 is shown to provide display data 172 of the output image 162-166 to the display 175, thereby enabling the display 175 to display the output image 162-166 to a user.
  • the progress unit 140 may be arranged for progressing through the plurality of image interpretation tasks based on user input 142.
  • the progress unit may obtain user input 142 indicative of a completion of the current image interpretation task 310, 410, and progress through the plurality of image interpretation tasks based on said completion.
  • the system 100 may further comprise a radiology input 180 for enabling the user to generate a structured report 184 based on the plurality of image interpretation tasks.
  • the radiology input 180 is shown to receive the current image interpretation task 310, 410 from the progress unit 140 and radiology input 182 from the user, e.g., via a radiology input device such as a dictation device (not shown).
  • the operation of the system 100 may be briefly explained as follows.
  • the image interface 110 obtains the medical image 112.
  • the task interface 120 obtains task data 122 representing the plurality of image interpretation tasks.
  • the progress unit 140 enables the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task 310, 410.
  • the image processor 160 generates an output image 162-166 based on the medical image 112. As part of said generating, the image processor 160 establishes a visual guidance in the output image 162-166 for visually guiding the user towards carrying out the current image interpretation task 310, 410.
  • the display output 170 provides display data 172 of the output image 162-166 to the display 175.
  • Fig. 2 shows a method 200 for enabling a user to carry out an interpretation of a medical image.
  • the method 200 comprises, in a step titled "OBTAINING MEDICAL IMAGE", obtaining 210 the medical image.
  • the method 200 further comprises, in a step titled “OBTAINING TASK DATA”, obtaining 220 task data representing a plurality of image interpretation tasks, the plurality of image interpretation tasks, when carried out by the user, providing a structured interpretation of the medical image.
  • the method 200 further comprises, in a step titled "ESTABLISHING CURRENT IMAGE INTERPRETATION
  • the method 200 further comprises, in a step titled “GENERATING OUTPUT IMAGE”, generating 240 an output image based on the medical image.
  • Said step of generating 240 comprises, in a sub-step titled “ESTABLISHING VISUAL GUIDANCE", establishing 250 a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task.
  • the method 200 may correspond to an operation of the system 100. However, the method 200 may also be performed in separation of the system 100.
  • Fig. 3 shows a computer program product 270 comprising instructions for causing a processor system to perform the aforementioned method 200.
  • the computer program product 270 may be comprised on a computer readable medium 260, for example in the form of a series of machine readable physical marks and/or as a series of elements having different electrical, e.g., magnetic, or optical properties or values.
  • Fig. 4 shows a workflow 300 from a guideline for interpreting a shoulder X-ray image.
  • the workflow 300 is constituted by a plurality of image interpretation tasks 310-360 to be carried out by the user, namely a first task 310 titled “Verify patient and laterality”, a second task 320 titled “Check humerus”, a third task 330 titled “Check coracoid process”, a fourth task 340 titled “Check clavicle”, a fifth task 350 titled “Check scapula”, and a sixth task 360 titled “Check ribs”.
  • the workflow 300 is, by way of example, a linear workflow in that the image interpretation tasks are to be carried out sequentially by the user.
  • the task interface 120 may obtain task data 122 representing the workflow 300.
  • the task data 122 may be a structured language document, such as an Extensible Markup Language (XML) document.
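As a minimal sketch of how such task data might be encoded and parsed, the following assumes a hypothetical XML schema; the element and attribute names are illustrative and not prescribed by the patent:

```python
# Hypothetical XML encoding of the shoulder workflow 300 (schema assumed).
import xml.etree.ElementTree as ET

TASK_XML = """
<workflow name="Shoulder X-ray interpretation">
  <task id="310" title="Verify patient and laterality" roi="identifiers"/>
  <task id="320" title="Check humerus" roi="humerus"/>
  <task id="330" title="Check coracoid process" roi="coracoid"/>
  <task id="340" title="Check clavicle" roi="clavicle"/>
  <task id="350" title="Check scapula" roi="scapula"/>
  <task id="360" title="Check ribs" roi="ribs"/>
</workflow>
"""

def parse_task_data(xml_text):
    """Parse task data into (id, title, region-of-interest) tuples."""
    root = ET.fromstring(xml_text)
    return [(t.get("id"), t.get("title"), t.get("roi")) for t in root.iter("task")]

tasks = parse_task_data(TASK_XML)  # six tasks, in workflow order
```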
  • the progress unit 140 enables the user to progress through the plurality of image interpretation tasks 310-360, thereby establishing a current image interpretation task.
  • the current image interpretation task may be the first task 310 requiring the user to verify the patient and laterality.
  • the user may progress to the second task 320, requiring the user to check the humerus.
  • the progress unit 140 may establish the transition between the tasks, i.e., the user's progress, based on user input 142.
  • the user input 142 may be obtained from a user input device such as, e.g., a mouse or a keyboard, or a user input system such as, e.g., a dictation system or an eyeball tracker system.
  • the workflow 300 may be represented by a state diagram, in which each task of the workflow 300 is represented by a state, and in which a completion of a task by the user corresponds to a transition, i.e., a progression, from one state to another state.
  • the workflow 300 may be modeled by a state machine.
  • the state machine may be implemented or executed by the progress unit 140.
  • the state machine may advance in state based on user input 142 and/or the contextual parameters. In the example of Fig. 4, the transitions between states are effected by user input 142 representing an "OK", with the "OK" being indicative that the user carried out the current image interpretation task.
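A minimal sketch of such a state machine, assuming the linear workflow 300 and the abort-on-rejection behavior described further below; the class and method names are assumptions, not taken from the patent:

```python
class ProgressStateMachine:
    """Sketch of a progress unit: each task is a state; an "OK" input
    transitions to the next state; a rejection aborts the workflow."""

    def __init__(self, tasks):
        self.tasks = tasks       # e.g., the output of parse_task_data above
        self.index = 0           # the current state, i.e., the current task
        self.aborted = False

    @property
    def current_task(self):
        return None if self.is_done() else self.tasks[self.index]

    def is_done(self):
        return self.aborted or self.index >= len(self.tasks)

    def on_user_input(self, confirmed):
        """Advance on "OK" (True); abort the remaining tasks on rejection."""
        if confirmed:
            self.index += 1      # transition to the next state
        else:
            self.aborted = True  # e.g., the image is not of the intended patient
```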
  • Fig. 5a shows a shoulder X-ray image 112. In addition to showing a shoulder, the X-ray image further comprises patient identifiers 117, i.e., in the form of image data.
  • the patient identifiers identify the patient by name, i.e., "John Doe", as well as by identity number, i.e. ID 123456.
  • the X-ray image 112 comprises a marking 118 "L" identifying the X-ray image as a left lateral image.
  • the system 100 may enable the user to interpret the X-ray image 112 in a structured manner, e.g., according to the workflow 300 as shown in Fig. 4, as follows.
  • the first task 310 in the workflow 300 is to verify the patient and the laterality of the X-ray image 112.
  • the patient identifiers 117 and the marker 118 constitute regions of interest which are of relevance in the current image interpretation task 310.
  • a reason for this is that the user has to interpret the patient identifiers 117 and the marker 118 in order to verify the patient and the laterality of the X-ray image 112.
  • the image processor 160 may be arranged for generating the visual guidance so as to visually guide the user towards the region of interest, e.g., the patient identifiers 117 and the marker 118.
  • Fig. 5b shows an example of this.
  • an output image 162 of the system 100 is shown in which the patient identifiers 117 and the marker 118 are highlighted with respect to other parts of the medical image.
  • the image processor 160 may be arranged for generating the output image 162 by modifying a content of the medical image 112 so as to establish the visual guidance.
  • the image processor 160 may be arranged for generating the output image 162 by detecting the region of interest 117, 118 in the medical image 112, and highlighting the region of interest 117, 118 in the medical image 112.
  • the image processor 160 may detect this type of region of interest, e.g., textual information, by detecting characters in the medical image 112.
  • the image processor 160 may employ Optical Character Recognition (OCR) to identify the region of interest 117, 118. Having identified the region of interest 117, 118, the image processor 160 may highlight the region of interest by masking parts of the medical image 112 which do not comprise the region of interest, as sketched below. In the example of Fig. 5b, the shoulder 114 is masked since it is of no or little relevance to the current image interpretation task 310 of verifying the patient and the laterality of the X-ray image 112.
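A minimal sketch of such OCR-based detection and masking, assuming the pytesseract package and an 8-bit grayscale image held as a NumPy array; the function name is an assumption:

```python
# Sketch: highlight character regions by suppressing everything else.
import numpy as np
import pytesseract
from PIL import Image

def highlight_characters(image):
    """Detect character boxes via OCR and mask the non-character parts."""
    data = pytesseract.image_to_data(Image.fromarray(image),
                                     output_type=pytesseract.Output.DICT)
    keep = np.zeros(image.shape[:2], dtype=bool)
    for left, top, w, h, text in zip(data["left"], data["top"], data["width"],
                                     data["height"], data["text"]):
        if text.strip():                 # a detected character region
            keep[top:top + h, left:left + w] = True
    output = image.copy()
    output[~keep] //= 4                  # suppress, e.g., the shoulder 114
    return output
```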
  • Fig. 5b also shows a result of an optional aspect of the present invention, namely that the progress unit 140 may obtain user input 142 which is indicative of an outcome of the current image interpretation task 310.
  • the progress unit 140 may obtain said user input 142 by establishing, via the image processor 160, a user interface element 144 in the output image 162 which prompts the user to indicate said outcome.
  • the progress unit 140 may be arranged for querying the user for a potential outcome of the current image interpretation task 310.
  • the user may be specifically queried whether or not the current image interpretation task 310 resulted in the potential outcome.
  • the potential outcome may be available to the system.
  • the task data 122 may be indicative of the potential outcome of the current image interpretation task.
  • the task data 122 may be indicative of the patient's name and the patient's identity number.
  • the potential outcome may be determined from other sources, such as, e.g., metadata of the medical image 112.
  • the progress unit 140 may query 146 the user whether the X-ray image 112 shown to the user corresponds to the patient "John Doe" having an identity number 123456. The user may confirm or reject said potential outcome by selecting 'Yes' Y or 'No' N within the user interface element 144.
  • the progress unit 140 may be arranged for progressing through the plurality of image interpretation tasks 310-360 based on said outcome.
  • the progress unit 140 may progress to a second task 320 of the workflow 300 in case the user confirms the potential outcome by selecting 'Yes' Y. However, if the user were to reject the potential outcome, i.e., by selecting 'No' N, the progress unit 140 may abort progressing through the remainder of the plurality of interpretation tasks 310-360.
  • Fig. 5c shows a result of the progress unit 140 progressing to the second task 320 of the workflow 300.
  • the second task 320 thus becomes the current image interpretation task 320 to be carried out by the user.
  • the second task 320 requires the user to check the humerus.
  • the humerus therefore constitutes a region of interest which is of relevance for the current interpretation task 320.
  • the image processor 160 may visually guide the user towards the region of interest by detecting the region of interest in the X-ray image 112, i.e., the humerus 115, and by highlighting the region of interest.
  • the image processor 160 may employ suitable detection algorithms as are known per se from the field of medical image analysis.
  • the detection algorithm may be selected from a plurality of detection algorithms based on the current image interpretation task 320.
  • Fig. 5c shows the humerus 115 being highlighted by masking parts of the medical image 112 which do not comprise the region of interest. In this example, all other parts of the shoulder are masked. However, the patient identifiers 117 and marking 118 are not masked, thereby enabling the user to easily verify the patient's name, identity, laterality, etc., during the structured image interpretation. It is noted that, in general, various ways of masking may be advantageously used. For example, the image intensity of the other parts may be reduced. The image intensity may also be reduced completely, i.e., set to black. Alternatively or additionally, the other parts may be blurred. Alternatively or additionally, the image intensity of the region of interest may be increased.
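The masking variants just mentioned might be implemented as follows; a sketch assuming an 8-bit grayscale NumPy image and a boolean region-of-interest mask, with mode names made up for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highlight_roi(image, roi_mask, mode="dim"):
    """Highlight the region of interest by altering the other parts."""
    out = image.astype(np.float32)
    outside = ~roi_mask
    if mode == "dim":        # reduce the image intensity of the other parts
        out[outside] *= 0.5
    elif mode == "black":    # reduce the intensity completely, i.e., set to black
        out[outside] = 0.0
    elif mode == "blur":     # blur the other parts
        out[outside] = gaussian_filter(out, sigma=5)[outside]
    elif mode == "boost":    # increase the intensity of the region of interest
        out[roi_mask] = np.minimum(out[roi_mask] * 1.3, 255.0)
    return out.astype(image.dtype)
```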
  • Fig. 5c also shows a result of an optional aspect of the present invention, namely that the progress unit 140 may be arranged for obtaining user input 142 indicative of a completion of the current image interpretation task 320.
  • the progress unit 140 may be arranged for progressing through the plurality of image interpretation tasks 310-360 based on said completion.
  • the progress unit 140 may obtain said user input 142 by establishing, via the image processor 160, the user interface element 144 in the output image 164.
  • the user interface element 144 may prompt 148 the user to indicate whether he/she wishes to proceed to the next task.
  • the next task may correspond to an inspection of a next bone.
  • By selecting 'OK', the user thus indicates that he/she has completed the current image interpretation task 320 and wishes to proceed to the next task.
  • the progress unit 140 may then proceed to the third task 330. It is noted that the above process may be repeated until all tasks of the workflow 300 have been completed. As a result, the user has carried out a structured interpretation of the X-ray image 112.
  • the image processor 160 may be arranged for generating the output image by establishing a visual representation of the current image interpretation task in or next to the medical image 112.
  • Fig. 6 shows an example of this.
  • a portion of an output image 166 is shown which comprises a plurality of visual representations, each representing a respective one of a plurality of image interpretation tasks 400.
  • the visual representations are textual representations of the respective image interpretation tasks, i.e., denoting the organs which the user has to inspect.
  • the visual representations may also take a different form, e.g., do not need to be representations of organs but may rather represent other aspects of the tasks and/or visually represent the tasks in different ways.
  • the visual representation of a current image interpretation task 410 is highlighted.
  • the highlighting is by means of font attributes, e.g., underlining the textual representation, establishing a higher text intensity, etc.
  • various alternatives may be used, e.g., providing an arrow besides the current interpretation task, using color coding, etc.
  • the progress unit 140 may progress through the plurality of image interpretation tasks 400, each time causing the image processor 160 to highlight a different one of the visual representations which corresponds to a current image interpretation task.
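A minimal sketch of building such a task overview as plain text, with the current task marked; an actual implementation would rasterize this into the output image 166, and the marker style here is an assumption:

```python
def render_task_list(tasks, current_index):
    """Return one line per task; the current task carries an arrow marker."""
    lines = []
    for i, (_, title, _) in enumerate(tasks):   # task tuples from the parsing sketch
        marker = "-> " if i == current_index else "   "
        lines.append(marker + title)
    return "\n".join(lines)

# e.g., render_task_list(tasks, 1) marks "Check humerus" as the current task
```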
  • Fig. 6 further shows, by way of example, DICOM attributes 420 of the medical image 112 being displayed in the output image 166, with the plurality of visual representations being displayed nearby the DICOM attributes 420.
  • the mechanism to display the DICOM attributes 420 may also be used to display said visual representations.
  • the present invention may be advantageously used in radiology reporting.
  • a radiologist may be mandated by a reporting guideline, such as those of the Radiological Society of North America, to report on certain structures in a medical image, e.g., certain organs, tissue, etc.
  • a corresponding plurality of image interpretation tasks may be derived from the reporting guideline. Each task may require an inspection of a structure and thus be represented by the name of the structure.
  • a textual representation of all structures may be displayed in or next to the medical image. While progressing through the plurality of image interpretation tasks, the radiologist may dictate results of the image interpretation, e.g., medical findings and observations. To visually guide the radiologist towards carrying out the current image interpretation task, the corresponding textual representation may be highlighted. Additionally or alternatively, the corresponding structure may be highlighted in the medical image.
  • the textual representation of previous tasks may be automatically removed.
  • the textual representations of pending tasks, i.e., structures which have not been inspected yet, may be displayed.
  • the results of a current image interpretation task may be available to the system in the form of recorded speech.
  • the system may employ speech recognition to obtain the results in text-form.
  • the dictation means and/or the speech recognition may be part of a radiology input of the system.
  • the system may automatically generate a structured report, e.g., by including the results in a part of a report template. The part may be automatically identified by the system based on the current image interpretation task.
  • the system may employ natural language processing, lexicons and/or ontologies to identify said part from the current image interpretation task and/or the results of the radiology reporting.
  • the plurality of image interpretation tasks may be obtained from a report template, e.g., based on sections of the report template.
  • a direct relation exists between a part of the report template, e.g., a section, and a current image interpretation task.
  • the reporting input may thus directly fill in the outcome of the current image interpretation task in the appropriate part of the report template.
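A sketch of such direct filling-in, assuming the report template is held as a mapping from task identifiers to section text; this representation is an assumption, not a structure defined by the patent:

```python
def fill_in_outcome(template_sections, task_id, outcome_text):
    """Write the dictated outcome into the template section tied to the task."""
    report = dict(template_sections)   # copy, keeping other sections intact
    report[task_id] = outcome_text
    return report

# e.g., fill_in_outcome({"310": "", "320": ""}, "320",
#                       "Humerus intact; no fracture visible.")
```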
  • the present invention may be implemented in various ways.
  • the present invention may be implemented as a software plug-in for a PACS.
  • the plug-in, or other computer-implementation may be established as follows.
  • a plurality of image annotation modules may be provided, implementing selected functionality of the image processor.
  • the image annotation modules may comprise:
  • Organ detection modules for detecting organs in a radiology image of a particular protocol (e.g., MR, X-ray, CT) and anatomy (e.g., neuro, breast, abdomen, knee).
  • a particular module may detect several organs, or a series of modules may be used to detect one organ.
  • the module may label each pixel of the medical image with a label corresponding to the organ, e.g., "knee", and with the label "none" if it depicts no organ known by the module.
  • a module may label each pixel with a binary value indicating if it depicts the organ at hand.
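One way such a module interface might look; a sketch in which each detector returns a boolean per-pixel mask, and the detector functions named in the comment are hypothetical:

```python
def annotate(image, detectors):
    """Run per-organ detector modules; each returns a boolean pixel mask.

    A pixel may carry several labels where organs overlap; pixels matched
    by no detector implicitly carry the label "none".
    """
    return {organ: detect(image) for organ, detect in detectors.items()}

# e.g., masks = annotate(img, {"humerus": detect_humerus,
#                              "clavicula": detect_clavicula})
# where detect_humerus and detect_clavicula are hypothetical segmentation routines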
  • Tissue detection modules for detecting particular tissue types (e.g., skin, bone, cartilage, muscle, fat, water) in the radiology image.
  • the detection may again be relative to the particular protocol and anatomy.
  • Optical character recognition modules for detecting characters printed on the radiology image that convey patient identifiers (e.g., name, birth date and gender).
  • a guideline engine may be provided, implementing selected functionality of the progress unit.
  • the guideline engine may comprise:
  • An internal means of representing an image interpretation or reporting guideline by means of, for instance, a state-based mechanism.
  • a first mapping device that associates a set of pixels with a state.
  • a second mapping device that associates a state with zero or more actions required by the user.
  • An input stream e.g., a user input, by which the state of the engine can be changed, e.g., when the user presses a button "OK" to confirm that the image actually belongs to the patient he/she intends to diagnose.
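A minimal sketch of these elements of the guideline engine; the states and mapping values shown are illustrative assumptions mirroring the shoulder example:

```python
# First mapping device: state -> pixel labels of interest (values assumed).
PIXEL_LABELS = {
    "Verify patient and laterality": ["char"],
    "Check humerus": ["humerus"],
}

# Second mapping device: state -> zero or more actions required by the user.
ACTIONS = {
    "Verify patient and laterality": ["confirm patient identity"],
    "Check humerus": ["confirm completion"],
}

def handle_state(state):
    """Emit the pixel labels and required user actions for a given state."""
    return PIXEL_LABELS.get(state, []), ACTIONS.get(state, [])
```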
  • an image alteration engine may be provided, implementing selected functionality of the image processor.
  • the image alteration engine may comprise: An input stream that receives the state of the guideline engine and its associated pixel labels.
  • An input stream that receives the radiology image.
  • a mapping device that sends each pixel label and state received through the input stream to an image manipulation command and/or a pop-up or any other device for obtaining user input.
  • Image manipulation commands may include: decreasing signal intensity by 50%; increasing signal intensity by 30%; setting signal to black; blurring the signal by taking into account neighboring pixels; etc.
  • An aggregation device that manipulates the radiology image based on the image manipulation commands received from the mapping device.
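A corresponding sketch of the mapping and aggregation devices; the command vocabulary follows the examples above, and the dictionary keys are assumptions:

```python
import numpy as np

# Mapping device: (state, pixel label) -> image manipulation command.
COMMANDS = {
    ("Verify patient and laterality", "char"): "do nothing",
    ("Verify patient and laterality", "non-char"): "decrease 50%",
}

def aggregate(image, label_masks, state):
    """Aggregation device: apply the mapped commands to the radiology image."""
    out = image.astype(np.float32)
    for label, mask in label_masks.items():
        command = COMMANDS.get((state, label), "do nothing")
        if command == "decrease 50%":        # decrease signal intensity by 50%
            out[mask] *= 0.5
        elif command == "increase 30%":      # increase signal intensity by 30%
            out[mask] = np.minimum(out[mask] * 1.3, 255.0)
        elif command == "set black":         # set signal to black
            out[mask] = 0.0
    return out.astype(image.dtype)
```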
  • the image may be annotated by means of the image annotation modules.
  • the pixels depicting the patient name and ID may be recognized, as well as the circled "L" and the bone structures.
  • the output of the annotation modules may be a labeling of the pixels.
  • the pixels depicting an alphanumerical character may be labeled "char"; the pixels depicting the humerus (bone of upper arm) may be labeled "humerus".
  • the clavicula (collarbone) overlaps with the humerus. Accordingly, some pixels of the medical image may be labeled both "humerus" and "clavicula".
  • the guideline engine may then be set in motion.
  • the guideline engine may start at the first state: "Verify patient and laterality”.
  • the guideline engine may map this state to one or more pixel labels, in this case "char” and "non-char”.
  • the state-pixel label combination may be sent through the guideline engine's output stream to the image alteration engine.
  • the image alteration engine may read the state and pixel label combination from its input stream. This information may be mapped to image modification commands. In this case, "char” may be mapped to "do nothing"; and “non-char” may be mapped to "suppress".
  • the radiology image may be subjected to these commands.
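Tying the sketches above together for this first state; dummy inputs stand in for the annotation step, and all helper names are the hypothetical ones introduced earlier:

```python
import numpy as np

# Dummy stand-ins for the annotation output (illustration only).
image = np.zeros((512, 512), dtype=np.uint8)
char_mask = np.zeros_like(image, dtype=bool)
char_mask[10:40, 10:200] = True                     # pretend OCR found text here

state = "Verify patient and laterality"
labels, actions = handle_state(state)               # guideline engine output stream
masks = {"char": char_mask, "non-char": ~char_mask}
output_image = aggregate(image, masks, state)       # non-char pixels suppressed
```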
  • the result may be displayed on a display in the form of an output image, e.g., as shown in Fig. 5b.
  • the guideline engine may map the state to the request for the user to confirm that this is actually the intended patient. A popup may appear on the screen asking for confirmation.
  • the guideline engine may take the response of the user into account when determining the next state. In this particular example, the user may indeed want to read the case of John Doe. Moreover, the laterality may be correct. As a result, the bone of the upper arm may be highlighted by suppressing the other bones and tissues, as shown in Fig. 5c.
  • the alphanumerical characters are shown not to be suppressed, but may alternatively also be suppressed if they are not considered valuable according to the guidelines for interpreting the image.
  • the user may proceed to the next task by pressing the "OK" button.
  • the medical image may revert to an all-normal state, i.e., the output image may show the medical image in an unaltered form.
  • the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice.
  • the program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention.
  • a program may have many different architectural designs.
  • a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person.
  • the subroutines may be stored together in one executable file to form a self-contained program.
  • Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions).
  • one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time.
  • the main program contains at least one call to at least one of the sub-routines.
  • the sub-routines may also comprise function calls to each other.
  • An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing step of at least one of the methods set forth herein. These instructions may be sub-divided into subroutines and/or stored in one or more files that may be linked statically or dynamically.
  • Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.
  • the carrier of a computer program may be any entity or device capable of carrying the program.
  • the carrier may include a storage medium, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk.
  • the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means.
  • the carrier may be constituted by such a cable or other device or means.
  • the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A system (100) for enabling a user to carry out an interpretation of a medical image, the system comprising: -an image interface (110) for obtaining the medical image (112); -a task interface (120) for obtaining task data (122) representing a plurality of image interpretation tasks (310-360, 400), the plurality of image interpretation tasks, when carried out by the user, providing a structured interpretation of the medical image; -a progress unit (140) for enabling the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task (310, 410); -an image processor (160) for generating an output image (162-166) based on the medical image, wherein the image processor is arranged for, as part of said generating, establishing a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task; and -a display output (170) connectable to a display (175) for displaying the output image.

Description

Enabling interpretation of a medical image
FIELD OF THE INVENTION
The invention relates to a system and method for assisting a user in carrying out an interpretation of a medical image. The invention further relates to a workstation and an imaging apparatus comprising said system, and to a computer program product comprising instructions for causing a processor system to perform said method.
BACKGROUND OF THE INVENTION
The interpretation of medical images, also referred to as 'reading' of medical images, is a frequently occurring task in medical practice. For example, a clinician may interpret a medical image to answer his/her own clinical question. Another example is radiology reporting. Here, a radiologist may be tasked with generating a radiology report to address a clinical question of a clinician. The radiologist typically provides input to the radiology report by reading the medical image and dictating observations and findings.
There is a need for structured image interpretation, i.e., image interpretation in which the user follows a predetermined structure, as provided by, e.g., guidelines. A reason for this may be to ensure that an image interpreter does not miss potentially relevant aspects of the medical image, such as certain organs. Another reason for this is to reduce the occurrence of the so-termed 'instant happiness' bias that may occur when an image interpreter catches an (obvious) finding and correlates it immediately with the clinical history of the patient and the clinical question. Satisfied by his/her own wit, the interpreter fails to detect other - more covert - findings that may also be pertinent to the clinical question.
For example, in Dutch hospitals, orthopedics residents are instructed to adhere to the following guidelines regardless of the clinical data of the patient:
1. Check name of patient - is this an image of the intended patient?
2. Follow contours of the bone - is a fracture visible?
3. Follow contours of cartilage - is it present and/or damaged?
4. Interpret other tissue and organs.
Other guidelines may exist that enforce image interpreters to view the organs in a medical image in a particular order. In case of radiology reporting, a radiologist may use a structure of a report template as a basis for interpreting the medical image in a structured manner.
SUMMARY OF THE INVENTION
A problem of relying on guidelines is that an image interpreter can easily deviate, inadvertently or advertently, from the structured image interpretation.
It would be advantageous to provide a system or method which enables a user to better adhere to a structured interpretation of a medical image.
To better address this concern, a first aspect of the invention provides a system for assisting a user in carrying out an interpretation of a medical image, the system comprising:
an image interface for obtaining the medical image;
a task interface for obtaining task data representing a plurality of image interpretation tasks, the plurality of image interpretation tasks, when carried out by the user, providing the interpretation of the medical image;
a progress unit for enabling the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task;
an image processor for generating an output image based on the medical image, the output image including a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task; and
a display output connectable to a display for displaying the output image. In a further aspect of the invention, a workstation and imaging apparatus is provided comprising the system set forth.
In a further aspect of the invention, a method is provided for assisting a user in carrying out an interpretation of a medical image, the method comprising:
obtaining the medical image;
obtaining task data representing a plurality of image interpretation tasks, the plurality of image interpretation tasks, when carried out by the user, providing a structured interpretation of the medical image;
enabling the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task;
generating an output image based on the medical image;
wherein said generating comprises: establishing a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task.
In a further aspect of the invention, a computer program product is provided comprising instructions for causing a processor system to perform the method set forth.
The aforementioned measures provide an image interface which obtains the medical image, e.g., from an internal or external storage medium. Moreover, a task interface is provided which obtains task data. The task data constitutes a data representation of a plurality of image interpretation tasks. Each of the plurality of image interpretation tasks is a task to be carried out by the user, with the task involving the user interpreting (parts of) the medical image. The plurality of image interpretation tasks effectively constitutes a workflow. A progress unit is provided to enable the user to progress through the plurality of image interpretation tasks, e.g., automatically or based on user input. For example, the progress unit may estimate or learn from user input that an image interpretation task has been completed by the user. As the user progresses through the plurality of image interpretation tasks, a current image interpretation task is established every time, i.e., after each progression.
An image processor is provided which generates an output image based on the medical image. The output image represents or comprises the medical image. When generating the output image, the image processor establishes a visual guidance in the output image. The visual guidance is image-based, i.e., constituted by pixels, voxels, etc., and is established such that it visually guides the user towards carrying out the current image interpretation task. For that purpose, the image processor 160 may obtain the current image interpretation task, or information indicative of said task, from the progress unit 140.
The aforementioned measures have the effect that an output image is generated from the medical image, with the output image comprising a visual guidance which guides the user towards carrying out the current image interpretation task. The visual guidance is thus part of the output image.
Consequently, the user does not need to remember the current image interpretation task as he is provided with the visual guidance automatically when viewing the output image. Advantageously, the user does not need to divert his/her attention from the output image, e.g., to view a guideline or report template provided separately from the medical image. Advantageously, it is convenient for the user to adhere to a structure when carrying out an interpretation of the medical image.
Optionally, the task data is indicative of a region of interest that is of relevance in the current image interpretation task, and the visual guidance is arranged for visually guiding the user towards the region of interest. The task data may be indicative of a region of interest in that it may task the user with, e.g., inspecting a particular organ. By establishing a visual guidance that specifically guides the user towards the region of interest, the user can more easily carry out the current image interpretation task. Advantageously, by guiding the user towards the region of interest, the user is less likely to view other parts of the medical image which may result in an unstructured interpretation.
Optionally, the image processor is arranged for establishing the visual guidance by modifying a content of the medical image. The visual guidance is thus provided in the output image by modified content of the medical image, i.e., content having been modified so as to visually guide the user towards carrying out the current image interpretation task. Advantageously, the user obtains the visual guidance automatically when viewing the content of the medical image. It is thus not needed to provide the visual guidance separately.
Optionally, the image processor is arranged for establishing the visual guidance by detecting the region of interest in the medical image, and highlighting the region of interest in the medical image. The part of the medical image which is of relevance to the current image interpretation task is thus highlighted to enable the user to easily carry out the current image interpretation task. Advantageously, the user is less likely to view other parts of the medical image which may otherwise result in an unstructured interpretation.
Optionally, the image processor is arranged for highlighting the region of interest by masking parts of the medical image which do not comprise the region of interest. By masking the parts of the medical image which do not comprise the region of interest, the user is discouraged or even prevented from viewing parts of the medical image which are of no or lesser relevance to the current image interpretation task. Advantageously, an unstructured interpretation of the medical image, for example due to the user experiencing the aforementioned 'instant happiness' bias, is less likely, or even entirely prevented.
Advantageously, by masking other parts of the medical image, the region of interest itself is not modified, which otherwise may hinder a medical interpretation of said region.
Optionally, the image processor is arranged for detecting characters in the medical image to enable detecting patient identifiers embedded in the medical image as the region of interest. The system thus is enabled to visually guide the user towards patient identifiers which are deemed to be of relevance to the current inspection task.
Optionally, the image processor is arranged for establishing the visual guidance by establishing a visual representation of the current image interpretation task in or next to the medical image. The visual guidance is thus provided by a visual representation of the current image interpretation task, e.g., a textual or graphical representation. As a result of seeing the visual representation, the current image interpretation task is brought to the user's attention, thereby guiding the user towards carrying out the current image interpretation task.
Optionally, the image processor is arranged for establishing the visual guidance by including a plurality of visual representations in the output image, each representing a respective one of the plurality of image interpretation tasks, and highlighting the visual representation of the current image interpretation task. Advantageously, the user is provided with an overview of the plurality of image interpretation tasks, enabling the user to see, e.g., previous and next tasks. The current image interpretation task is highlighted so as to visually guide the user towards carrying out the current image interpretation task.
Optionally, the progress unit is arranged for i) obtaining user input indicative of a completion of the current image interpretation task, and ii) progressing through the plurality of image interpretation tasks based on said completion. The user is thus provided with control over when to progress through the plurality of image interpretation tasks.
Optionally, the user input is further indicative of an outcome of the current image interpretation task, and the progress unit is arranged for progressing through the plurality of image interpretation tasks further based on said outcome. Certain image interpretation tasks may be conditional on an outcome of a previous image interpretation task. The progress unit is arranged to enable said conditional progressing.
Optionally, the task data is indicative of a potential outcome of the current image interpretation task, and the progress unit is arranged for querying the user for said potential outcome. A potential outcome of an image interpretation task may be identified automatically, i.e., in separation of the user manually carrying out the image interpretation tasks. For example, if the user is tasked with verifying a patient's identity by interpreting patient identifiers comprised in the medical image, the presumed patient identity may be available to the system, e.g., by means of image metadata provided as part of the task data. By querying the user for said potential outcome, the progress unit can control the progressing through the plurality of image interpretation tasks accordingly. For example, if the user rejects the potential outcome, i.e., if the user determines that the patient's identity differs from the presumed patient identity queried for by the system, the progress unit may abort progressing through the remainder of the plurality of interpretation tasks.
Optionally, the system comprises a radiology input for enabling the user to generate a structured report based on the plurality of image interpretation tasks. The report is structured in that it is based on the plurality of image interpretation tasks. Thus, the user can generate a structured report on the basis of carrying out the structured image interpretation. Advantageously, the structured report is obtained by structuring a manner in which the user interprets the medical image and thus obtains input for the report.
Optionally, the radiology input is arranged for automatically filling in the outcome of the current image interpretation task in a report template for the structured report.
Optionally, the task interface is arranged for obtaining the task data from at least one of the group of: an image interpretation guideline, a report template, and a reporting guideline. Said sources are well suited for structuring the image interpretation.
It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or aspects of the invention may be combined in any way deemed useful.
Modifications and variations of the workstation, the imaging apparatus, the method, and/or the computer program product, which correspond to the described modifications and variations of the system, can be carried out by a person skilled in the art on the basis of the present description.
A person skilled in the art will appreciate that the method may be applied to multi-dimensional image data, e.g., to two-dimensional (2-D), three-dimensional (3-D) or four-dimensional (4-D) images. A dimension of the multi-dimensional image data may relate to time. For example, a three-dimensional image may comprise a time domain series of two-dimensional images. The image may be acquired by various acquisition modalities such as, but not limited to, standard X-ray Imaging, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Nuclear Medicine (NM).
The invention is defined in the independent claims. Advantageous yet optional embodiments are defined in the dependent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter. In the drawings,
Fig. 1 shows a system for enabling a user to carry out a structured image interpretation, and a display for displaying an output image of the system;
Fig. 2 shows a method according to the present invention;
Fig. 3 shows a computer program product according to the present invention;
Fig. 4 shows a workflow from a guideline for interpreting a shoulder X-ray image, the workflow being constituted by a plurality of image interpretation tasks;
Fig. 5a shows a shoulder X-ray image comprising patient identifiers;
Fig. 5b shows the shoulder being masked so as to highlight the patient identifiers, with the user being queried on the patient identifiers;
Fig. 5c shows parts of the shoulder being masked to highlight the humerus, with the user being queried on a completion of a current image interpretation task; and
Fig. 6 shows a textual representation of a plurality of image interpretation tasks next to the medical image, with a current image interpretation task being highlighted.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 shows a system 100 for enabling a user to carry out a structured interpretation of a medical image. The system 100 comprises an image interface 110 for obtaining the medical image 112. For that purpose, the image interface 110 may be connectable to an external storage database 115 such as, e.g., a Picture Archiving and Communication System (PACS). The system 100 further comprises a task interface 120 for obtaining task data 122 representing a plurality of image interpretation tasks. Here, the plurality of image interpretation tasks, when carried out by the user, provide a structured interpretation of the medical image. The task interface 120 may be connectable to an external medical database 125. The task interface 120 may obtain the task data 122 from, or in the form of, e.g., an image interpretation guideline, a report template, or a reporting guideline as stored on the external medical database 125. The system 100 further comprises a progress unit 140 for enabling the user to progress through the plurality of image interpretation tasks. For that purpose, the progress unit 140 is shown to receive the task data 122 from the task interface 120. As a result of progressing through the plurality of image interpretation tasks, a current image interpretation task 310, 410 is established by the progress unit 140.
The system 100 further comprises an image processor 160 for generating an output image 162-166 based on the medical image. For that purpose, the image processor 160 is shown to receive the medical image 112 from the image interface 110. The image processor 160 is arranged for, as part of said generating, establishing a visual guidance in the output image 162-166 for visually guiding the user towards carrying out the current image interpretation task. For that purpose, the image processor 160 is shown to receive the current image interpretation task 310, 410 from the progress unit 140. Alternatively, the image processor 160 may receive the task data 122 from the task interface 120, whilst receiving a task identifier from the progress unit 140 enabling the image processor 160 to identify the current image interpretation task 310, 410 from the task data 122. The system 100 further comprises a display output 170 which is connectable to a display 175. The display output 170 is shown to provide display data 172 of the output image 162-166 to the display 175, thereby enabling the display 175 to display the output image 162-166 to a user.
The progress unit 140 may be arranged for progressing through the plurality of image interpretation tasks based on user input 142. For example, the progress unit may obtain user input 142 indicative of a completion of the current image interpretation task 310, 410, and progress through the plurality of image interpretation tasks based on said completion. As is also shown in Fig. 1, the system 100 may further comprise a radiology input 180 for enabling the user to generate a structured report 184 based on the plurality of image interpretation tasks. The radiology input 180 is shown to receive the current image interpretation task 310, 410 from the progress unit 140 and radiology input 182 from the user, e.g., via a radiology input device such as a dictation device (not shown).
The operation of the system 100 may be briefly explained as follows. The image interface 110 obtains the medical image 112. The task interface 120 obtains task data 122 representing the plurality of image interpretation tasks. The progress unit 140 enables the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task 310, 410. The image processor 160 generates an output image 162-166 based on the medical image 112. As part of said generating, the image processor 160 establishes a visual guidance in the output image 162-166 for visually guiding the user towards carrying out the current image interpretation task 310, 410. Lastly, the display output 170 provides display data 172 of the output image 162-166 to the display 175.
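By way of illustration only, the above data flow may be sketched in Python as follows. All class and function names are hypothetical and merely mirror the reference numerals of Fig. 1; the sketch is not an implementation of the claimed system.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Task:
        title: str  # e.g., "Verify patient and laterality"

    class ProgressUnit:
        """Steps through the image interpretation tasks (cf. progress unit 140)."""
        def __init__(self, tasks: List[Task]):
            self.tasks = tasks
            self.index = 0

        @property
        def current_task(self) -> Task:
            return self.tasks[self.index]

        def complete_current(self) -> bool:
            """Advance to the next task; return False when all tasks are done."""
            self.index += 1
            return self.index < len(self.tasks)

    class ImageProcessor:
        """Generates an output image with visual guidance (cf. image processor 160)."""
        def generate_output(self, medical_image, task: Task) -> dict:
            # A real implementation would modify the image content; here the
            # guidance is merely attached to the image object for illustration.
            return {"image": medical_image, "guidance": task.title}

    tasks = [Task("Verify patient and laterality"), Task("Check humerus")]
    progress = ProgressUnit(tasks)
    processor = ImageProcessor()
    output = processor.generate_output("shoulder_xray.dcm", progress.current_task)
    print(output["guidance"])  # -> Verify patient and laterality

In this sketch, completing a task simply advances an index; conditional and aborted progressions are described further below.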
Fig. 2 shows a method 200 for enabling a user to carry out an interpretation of a medical image. The method 200 comprises, in a step titled "OBTAINING MEDICAL IMAGE", obtaining 210 the medical image. The method 200 further comprises, in a step titled "OBTAINING TASK DATA", obtaining 220 task data representing a plurality of image interpretation tasks, the plurality of image interpretation tasks, when carried out by the user, providing a structured interpretation of the medical image. The method 200 further comprises, in a step titled "ESTABLISHING CURRENT IMAGE INTERPRETATION TASK", enabling 230 the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task. The method 200 further comprises, in a step titled "GENERATING OUTPUT IMAGE", generating 240 an output image based on the medical image. Said step of generating 240 comprises, in a sub-step titled "ESTABLISHING VISUAL GUIDANCE", establishing 250 a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task.
The method 200 may correspond to an operation of the system 100. However, the method 200 may also be performed independently of the system 100.
Fig. 3 shows a computer program product 270 comprising instructions for causing a processor system to perform the aforementioned method 200. The computer program product 270 may be comprised on a computer readable medium 260, for example as a series of machine readable physical marks and/or as a series of elements having different electrical, e.g., magnetic, or optical properties or values.
The operation of the system 100 may be explained in more detail as follows.
Fig. 4 shows a workflow 300 from a guideline for interpreting a shoulder X-ray image. The workflow 300 is constituted by a plurality of image interpretation tasks 310-360 to be carried out by the user, namely a first task 310 titled "Verify patient and laterality", a second task 320 titled "Check humerus", a third task 330 titled "Check coracoid process", a fourth task 340 titled "Check clavicle", a fifth task 350 titled "Check scapula", and a sixth task 360 titled "Check ribs". It is noted that the medical term humerus refers to a bone of the upper arm, the medical term clavicle to a collarbone, and the medical term scapula to a shoulder blade. The workflow 300 is, by way of example, a linear workflow in that the image interpretation tasks are to be carried out sequentially by the user. The task interface 120 may obtain task data 122 representing the workflow 300. In general, the task data 122 may be a structured language document, such as an Extensible Markup Language (XML) document.
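Purely by way of illustration, such an XML document for the workflow 300 might look as follows, here parsed with Python's standard library. The element and attribute names are invented for this sketch and do not follow any standardized schema.

    import xml.etree.ElementTree as ET

    # Hypothetical XML encoding of the workflow 300 of Fig. 4.
    TASK_DATA = """
    <workflow name="Shoulder X-ray interpretation">
        <task id="310" title="Verify patient and laterality"/>
        <task id="320" title="Check humerus"/>
        <task id="330" title="Check coracoid process"/>
        <task id="340" title="Check clavicle"/>
        <task id="350" title="Check scapula"/>
        <task id="360" title="Check ribs"/>
    </workflow>
    """

    root = ET.fromstring(TASK_DATA)
    for task in root.iter("task"):
        print(task.get("id"), task.get("title"))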
The progress unit 140 enables the user to progress through the plurality of image interpretation tasks 310-360, thereby establishing a current image interpretation task. For example, when starting the interpretation of the medical image 112, the current image interpretation task may be the first task 310 requiring the user to verify the patient and laterality. After having completed the first task 310, the user may progress to the second task 320, requiring the user to check the humerus. The progress unit 140 may establish the transition between the tasks, i.e., the user's progress, based on user input 142. The user input 142 may be obtained from a user input device such as, e.g., a mouse or a keyboard, or a user input system such as, e.g., a dictation system or an eye tracker system.
It is noted that the workflow 300 may be represented by a state diagram, in which each task of the workflow 300 is represented by a state, and in which a completion of a task by the user corresponds to a transition, i.e., a progression, from one state to another state. Effectively, the workflow 300 may be modeled by a state machine. The state machine may be implemented or executed by the progress unit 140. The state machine may advance in state based on user input 142 and/or contextual parameters. In the example of Fig. 4, the transitions between states are effected by user input 142 representing an "OK", with the "OK" indicating that the user carried out the current image interpretation task.
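A minimal sketch of such a state machine, assuming a linear workflow in which each "OK" effects a transition to the next state; all names are hypothetical.

    class WorkflowStateMachine:
        """Linear state machine over the tasks of workflow 300 (cf. Fig. 4)."""
        def __init__(self, states):
            self.states = list(states)   # ordered task titles
            self.position = 0
            self.aborted = False

        @property
        def done(self):
            return self.aborted or self.position >= len(self.states)

        @property
        def state(self):
            return None if self.done else self.states[self.position]

        def on_input(self, user_input: str):
            """Advance on 'OK'; abort on 'NO', e.g., on a wrong patient identity."""
            if user_input == "OK" and not self.done:
                self.position += 1
            elif user_input == "NO":
                self.aborted = True

    sm = WorkflowStateMachine(["Verify patient and laterality", "Check humerus",
                               "Check coracoid process", "Check clavicle",
                               "Check scapula", "Check ribs"])
    sm.on_input("OK")   # patient and laterality confirmed
    print(sm.state)     # -> Check humerus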
Fig. 5a shows a shoulder X-ray image 112. In addition to showing a shoulder 114, the X-ray image further comprises patient identifiers 117, i.e., in the form of image data. In this example, the patient identifiers identify the patient by name, i.e., "John Doe", as well as by identity number, i.e., ID 123456. Moreover, the X-ray image 112 comprises a marker 118 "L" identifying the X-ray image as a left lateral image.
The system 100 may enable the user to interpret the X-ray image 112 in a structured manner, e.g., according to the workflow 300 as shown in Fig. 4, as follows.
The first task 310 in the workflow 300 is to verify the patient and the laterality of the X-ray image 112. In this case, the patient identifiers 117 and the marker 118 constitute regions of interest which are of relevance in the current image interpretation task 310. A reason for this is that the user has to interpret the patient identifiers 117 and the marker 118 in order to verify the patient and the laterality of the X-ray image 112. The image processor 160 may be arranged for generating the visual guidance so as to visually guide the user towards the region of interest, e.g., the patient identifiers 117 and the marker 118.
Fig. 5b shows an example of this. Here, an output image 162 of the system 100 is shown in which the patient identifiers 117 and the marker 118 are highlighted with respect to other parts of the medical image. For enabling said highlighting, the image processor 160 may be arranged for generating the output image 162 by modifying a content of the medical image 112 so as to establish the visual guidance. In particular, the image processor 160 may be arranged for generating the output image 162 by detecting the region of interest 117, 118 in the medical image 112, and highlighting the region of interest 117, 118 in the medical image 112. The image processor 160 may detect this type of region of interest, e.g., textual information, by detecting characters in the medical image 112. For example, the image processor 160 may employ Optical Character Recognition (OCR) to identify the region of interest 117, 118. Having identified the region of interest 117, 118, the image processor 160 may highlight the region of interest by masking parts of the medical image 112 which do not comprise the region of interest. In the example of Fig. 5b, the shoulder 114 is masked since it is of little or no relevance to the current image interpretation task 310 of verifying the patient and the laterality of the X-ray image 112.

Fig. 5b also shows a result of an optional aspect of the present invention, namely that the progress unit 140 may obtain user input 142 which is indicative of an outcome of the current image interpretation task 310. The progress unit 140 may obtain said user input 142 by establishing, via the image processor 160, a user interface element 144 in the output image 162 which prompts the user to indicate said outcome. In particular, the progress unit 140 may be arranged for querying the user for a potential outcome of the current image interpretation task 310. As such, the user may be specifically queried whether or not the current image interpretation task 310 resulted in the potential outcome. The potential outcome may be available to the system. For example, the task data 122 may be indicative of the potential outcome of the current image interpretation task. In the example of Fig. 5b, the task data 122 may be indicative of the patient's name and the patient's identity number. Additionally or alternatively, the potential outcome may be determined from other sources, such as, e.g., metadata of the medical image 112. As such, the progress unit 140 may query 146 the user whether the X-ray image 112 shown to the user corresponds to the patient "John Doe" having the identity number 123456. The user may confirm or reject said potential outcome by selecting 'Yes' Y or 'No' N within the user interface element 144.
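As a sketch of how such character-based highlighting might be realized, assuming the medical image is available as a grayscale NumPy array and assuming the open-source pytesseract package for the OCR step; the dimming factor is an arbitrary choice for this example.

    import numpy as np
    import pytesseract
    from PIL import Image

    def highlight_characters(image: np.ndarray, dim_factor: float = 0.2) -> np.ndarray:
        """Dim all pixels outside OCR-detected character boxes (cf. Fig. 5b)."""
        data = pytesseract.image_to_data(Image.fromarray(image),
                                         output_type=pytesseract.Output.DICT)
        mask = np.zeros(image.shape[:2], dtype=bool)
        for left, top, w, h, text in zip(data["left"], data["top"],
                                         data["width"], data["height"],
                                         data["text"]):
            if text.strip():                 # keep only non-empty detections
                mask[top:top + h, left:left + w] = True
        output = image.astype(np.float32)
        output[~mask] *= dim_factor          # suppress non-character pixels
        return output.astype(image.dtype)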
The progress unit 140 may be arranged for progressing through the plurality of image interpretation tasks 310-360 based on said outcome. In the example of Fig. 5b, the progress unit 140 may progress to a second task 320 of the workflow 300 in case the user confirms the potential outcome by selecting 'Yes' Y. However, if the user were to reject the potential outcome, i.e., by selecting 'No' N, the progress unit 140 may abort progressing through the remainder of the plurality of interpretation tasks 310-360.
Fig. 5c shows a result of the progress unit 140 progressing to the second task 320 of the workflow 300. The second task 320 thus becomes the current image interpretation task 320 to be carried out by the user. The second task 320 requires the user to check the humerus. The humerus therefore constitutes a region of interest which is of relevance for the current interpretation task 320. The image processor 160 may visually guide the user towards the region of interest by detecting the region of interest in the X-ray image 112, i.e., the humerus 115, and by highlighting the region of interest. In order to detect regions of interest such as organs and tissues, the image processor 160 may employ suitable detection algorithms as are known per se from the field of medical image analysis. The detection algorithm may be selected from a plurality of detection algorithms based on the current image interpretation task 320. Fig. 5c shows the humerus 115 being highlighted by masking parts of the medical image 112 which do not comprise the region of interest. In this example, all other parts of the shoulder are masked. However, the patient identifiers 117 and marking 118 are not masked, thereby enabling the user to easily verify the patient's name, identity, laterality, etc., during the structured image interpretation. It is noted that, in general, various ways of masking may be advantageously used. For example, the image intensity of the other parts may be reduced. The image intensity may also be reduced completely, i.e., set to black. Alternatively or additionally, the other parts may be blurred. Alternatively or additionally, the image intensity of the region of interest may be increased.
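The masking variants mentioned above may, purely as an illustration, be sketched as follows, assuming a NumPy image, a boolean region-of-interest mask, and SciPy's Gaussian filter for the blurring; all parameter values are arbitrary.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mask_outside_roi(image: np.ndarray, roi: np.ndarray,
                         mode: str = "dim") -> np.ndarray:
        """Suppress pixels outside the boolean region-of-interest mask `roi`."""
        out = image.astype(np.float32).copy()
        if mode == "dim":            # reduce the image intensity, e.g., by 50%
            out[~roi] *= 0.5
        elif mode == "black":        # reduce the intensity completely
            out[~roi] = 0.0
        elif mode == "blur":         # blur the other parts
            blurred = gaussian_filter(out, sigma=5)
            out[~roi] = blurred[~roi]
        elif mode == "boost":        # increase the intensity of the ROI instead
            out[roi] = np.clip(out[roi] * 1.3, 0, 255)
        return out.astype(image.dtype)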
Fig. 5c also shows a result of an optional aspect of the present invention, namely that the progress unit 140 may be arranged for obtaining user input 142 indicative of a completion of the current image interpretation task 320. Here, the progress unit 140 may be arranged for progressing through the plurality of image interpretation tasks 310-360 based on said completion. The progress unit 140 may obtain said user input 142 by establishing, via the image processor 160, the user interface element 144 in the output image 164. In this example, the user interface element 144 may prompt 148 the user to indicate whether he/she wishes to proceed to the next task. In this example, the next task may correspond to an inspection of a next bone. By selecting 'OK', the user thus indicates that he/she has completed the current image interpretation task 320 and wishes to proceed to the next task. The progress unit 140 may then proceed to the third task 330. It is noted that the above process may be repeated until all tasks of the workflow 300 have been completed. As a result, the user has carried out a structured interpretation of the X-ray image 112.
Alternatively or in addition to highlighting a region of interest in the medical image 112, the image processor 160 may be arranged for generating the output image by establishing a visual representation of the current image interpretation task in or next to the medical image 112. Fig. 6 shows an example of this. Here, a portion of an output image 166 is shown which comprises a plurality of visual representations, each representing a respective one of a plurality of image interpretation tasks 400. In this example, the visual representations are textual representations of the respective image interpretation tasks, i.e., denoting the organs which the user has to inspect. However, the visual representations may also take a different form, e.g., they need not denote organs but may represent other aspects of the tasks, or may represent the tasks in different, e.g., graphical, ways.
Moreover, in Fig. 6, the visual representation of a current image interpretation task 410 is highlighted. In this example, the highlighting is by means of font attributes, e.g., underlining the textual representation, using a higher text intensity, etc. It is noted that various alternatives may be used, e.g., providing an arrow beside the current interpretation task, using color coding, etc. Similarly as explained in reference to Figs. 5a-5c, the progress unit 140 may progress through the plurality of image interpretation tasks 400, each time causing the image processor 160 to highlight a different one of the visual representations, namely the one corresponding to the current image interpretation task.
Fig. 6 further shows, by way of example, DICOM attributes 420 of the medical image 112 being displayed in the output image 166, with the plurality of visual representations being displayed near the DICOM attributes 420. The mechanism used to display the DICOM attributes 420 may also be used to display said visual representations.
The present invention may be advantageously used in radiology reporting. Here, a radiologist may be mandated by a reporting guideline, such as those of the Radiological Society of North America (RSNA), to report on certain structures in a medical image, e.g., certain organs, tissues, etc. A corresponding plurality of image interpretation tasks may be derived from the reporting guideline. Each task may require an inspection of a structure and thus be represented by the name of the structure. A textual representation of all structures may be displayed in or next to the medical image. While progressing through the plurality of image interpretation tasks, the radiologist may dictate results of the image interpretation, e.g., medical findings and observations. To visually guide the radiologist towards carrying out the current image interpretation task, the corresponding textual representation may be highlighted. Additionally or alternatively, the corresponding structure may be highlighted in the medical image. Moreover, when said textual representations are provided, the textual representations of previous tasks may be automatically removed. Hence, only the textual representations of pending tasks, i.e., of structures which have not been inspected yet, may be displayed.

The results of a current image interpretation task may be available to the system in the form of recorded speech. The system may employ speech recognition to obtain the results in text form. The dictation means and/or the speech recognition may be part of a radiology input of the system. The system may automatically generate a structured report, e.g., by including the results in a part of a report template. The part may be automatically identified by the system based on the current image interpretation task. For that purpose, the system may employ natural language processing, lexicons and/or ontologies to identify said part from the current image interpretation task and/or the results of the radiology reporting. Optionally, the plurality of image interpretation tasks may be obtained from a report template, e.g., based on sections of the report template. Here, a direct relation exists between a part of the report template, e.g., a section, and a current image interpretation task. The radiology input may thus directly fill in the outcome of the current image interpretation task in the appropriate part of the report template.
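A toy sketch of the last-mentioned direct relation, in which the speech-recognized outcome of the current task is filled into the matching part of a report template; the template sections and the task-to-section mapping are invented for this example.

    # Hypothetical report template derived from the plurality of tasks.
    report_template = {
        "Patient verification": "",
        "Humerus": "",
        "Clavicle": "",
        "Scapula": "",
    }

    def fill_in_outcome(template: dict, current_task: str, dictated_text: str) -> None:
        """Place the (speech-recognized) outcome into the section matching
        the current image interpretation task."""
        # Direct task-to-section relation, as when tasks are derived from the template.
        section = current_task.replace("Check ", "").capitalize()
        if section in template:
            template[section] = dictated_text

    fill_in_outcome(report_template, "Check humerus", "No fracture or dislocation seen.")
    print(report_template["Humerus"])   # -> No fracture or dislocation seen.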
The present invention may be implemented in various ways. For example, the present invention may be implemented as a software plug-in for a PACS. The plug-in, or other computer-implementation, may be established as follows. A plurality of image annotation modules may be provided, implementing selected functionality of the image processor. For example, the image annotation modules may comprise:
- Organ detection modules for detecting organs in a radiology image of a particular protocol (e.g., MR, X-ray, CT) and anatomy (e.g., neuro, breast, abdomen, knee). A particular module may detect several organs, or a series of modules may be used to detect one organ. In the former case, the module may label each pixel of the medical image with a label corresponding to the organ, e.g., "knee", and with the label "none" if it depicts no organ known by the module. In the latter case, a module may label each pixel with a binary value indicating if it depicts the organ at hand.
- Tissue detection modules for detecting particular tissue types (e.g., skin, bone, cartilage, muscle, fat, water) in the radiology image. The detection may again be relative to the particular protocol and anatomy.
- Optical character recognition modules for detecting characters printed on the radiology image that convey patient identifiers (e.g., name, birth date and gender).
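The labeling produced by such annotation modules may, as a sketch, be represented as one boolean mask per label, so that a single pixel can carry multiple labels where structures overlap; the box coordinates below are arbitrary.

    import numpy as np

    def label_pixels(shape):
        """Return a hypothetical multi-label annotation: one boolean mask per label."""
        labels = {name: np.zeros(shape, dtype=bool)
                  for name in ("char", "humerus", "clavicula")}
        labels["char"][0:20, 0:120] = True          # e.g., the printed patient identifiers
        labels["humerus"][60:200, 40:110] = True    # e.g., the bone of the upper arm
        labels["clavicula"][60:80, 40:160] = True   # overlaps the humerus region
        return labels

    labels = label_pixels((256, 256))
    both = labels["humerus"] & labels["clavicula"]  # pixels carrying two labels
    print(int(both.sum()), "pixels depict both bones")   # -> 1400 pixels depict both bones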
Moreover, a guideline engine may be provided, implementing selected functionality of the progress unit. For that purpose, the guideline engine may comprise:
- An internal means of representing an image interpretation or reporting guideline, for instance a state-based mechanism.
- A first mapping device that associates a set of pixels with a state.
- A second mapping device that associates a state with zero or more actions required by the user.
- An input stream, e.g., a user input, by which the state of the engine can be changed, e.g., when the user presses a button "OK" to confirm that the image actually belongs to the patient he/she intends to diagnose.
- An output stream that communicates the state of the engine, and the pixels associated with that state, to external algorithms.
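A minimal sketch of such a guideline engine, with the two mapping devices represented as plain dictionaries; the states and mappings shown are those of the shoulder example, and all names are hypothetical.

    class GuidelineEngine:
        """State-based guideline engine (cf. progress unit 140)."""
        # First mapping device: state -> pixel labels of interest.
        STATE_TO_LABELS = {
            "Verify patient and laterality": ["char"],
            "Check humerus": ["humerus"],
        }
        # Second mapping device: state -> zero or more required user actions.
        STATE_TO_ACTIONS = {
            "Verify patient and laterality": ["confirm_patient"],
            "Check humerus": ["confirm_done"],
        }

        def __init__(self, states):
            self.states = list(states)
            self.position = 0

        def output(self):
            """Output stream: the current state and its associated pixel labels."""
            state = self.states[self.position]
            return state, self.STATE_TO_LABELS.get(state, [])

        def on_input(self, confirmed: bool):
            """Input stream: advance when the user confirms, e.g., presses 'OK'."""
            if confirmed and self.position < len(self.states) - 1:
                self.position += 1

    engine = GuidelineEngine(["Verify patient and laterality", "Check humerus"])
    print(engine.output())   # -> ('Verify patient and laterality', ['char'])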
Moreover, an image alteration engine may be provided, implementing selected functionality of the image processor. For that purpose, the image alteration engine may comprise:
- An input stream that receives the state of the guideline engine and its associated pixel labels.
- An input stream that receives the radiology image.
- A mapping device that maps each pixel label and state received through the input stream to an image manipulation command and/or a pop-up or other device for obtaining user input. Image manipulation commands may include: decreasing signal intensity by 50%; increasing signal intensity by 30%; setting the signal to black; blurring the signal by taking into account neighboring pixels; etc.
- An aggregation device that manipulates the radiology image based on the image manipulation commands received from the mapping device.
- An output stream that returns the manipulated image.
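Continuing the sketch, the image alteration engine may map (state, pixel label) pairs to manipulation commands and apply them in aggregate; the commands and values below are again arbitrary examples.

    import numpy as np

    # Mapping device: (state, pixel label) -> image manipulation command.
    COMMANDS = {
        ("Verify patient and laterality", "char"): "do_nothing",
        ("Verify patient and laterality", "non-char"): "suppress_50",
    }

    def apply_command(image: np.ndarray, mask: np.ndarray, command: str) -> np.ndarray:
        out = image.astype(np.float32).copy()
        if command == "suppress_50":    # decrease signal intensity by 50%
            out[mask] *= 0.5
        elif command == "set_black":    # set the signal to black
            out[mask] = 0.0
        return out.astype(image.dtype)  # "do_nothing" falls through unchanged

    def alter_image(image, state, label_masks):
        """Aggregation device: apply the command for each label in turn."""
        for label, mask in label_masks.items():
            command = COMMANDS.get((state, label), "do_nothing")
            image = apply_command(image, mask, command)
        return image                    # output stream: the manipulated image

    img = np.full((64, 64), 200, dtype=np.uint8)
    masks = {"char": np.zeros((64, 64), bool), "non-char": np.ones((64, 64), bool)}
    altered = alter_image(img, "Verify patient and laterality", masks)
    print(altered.max())                # -> 100: non-character pixels suppressed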
The operation of the above computer-implementation may be explained based on the aforementioned workflow 300 for interpreting a shoulder X-ray image.
Firstly, the image may be annotated by means of the image annotation modules. In particular, the pixels depicting the patient name and ID may be recognized, as well as the circled L and the bone structures. The output of the annotation modules may be a labeling of the pixels. For instance, the pixels depicting an alphanumerical character may be labeled "char"; the pixels depicting the humerus (bone of the upper arm) may be labeled "humerus", etc. Note that one pixel may have multiple labels, as some pixels may depict two bones which overlap in the medical image. For example, in the X-ray image shown in Fig. 5a, the clavicula (collarbone) overlaps with the humerus. Accordingly, some pixels of the medical image may be labeled both "humerus" and "clavicula".
The guideline engine may then be set in motion. The guideline engine may start at the first state: "Verify patient and laterality". The guideline engine may map this state to one or more pixel labels, in this case "char" and "non-char". The state-pixel label combination may be sent through the guideline engine's output stream to the image alteration engine. The image alteration engine may read the state and pixel label combination from its input stream. This information may be mapped to image modification commands. In this case, "char" may be mapped to "do nothing", and "non-char" may be mapped to "suppress". The radiology image may be subjected to these commands. The result may be displayed on a display in the form of an output image, e.g., as shown in Fig. 5b. Moreover, the guideline engine may map the state to a request for the user to confirm that this is actually the intended patient. A popup may appear on the screen asking for confirmation. The guideline engine may take the response of the user into account when determining the next state. In this particular example, the user may indeed want to read the case of John Doe. Moreover, the laterality may be correct. As a result, the bone of the upper arm may be highlighted by suppressing the other bones and tissues, as shown in Fig. 5c. The alphanumerical characters are not shown to be suppressed, but may alternatively also be suppressed as they may not be considered valuable according to the guidelines for interpreting the image. The user may proceed to the next task by pressing the "OK" button.
Finally, when all states in the guideline have been visited, the medical image may revert to an all-normal state, i.e., the output image may show the medical image in an unaltered form.
It will be appreciated that the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of a source code, an object code, a code intermediate between source and object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing step of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.
Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.
The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a storage medium, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

CLAIMS:
1. A system (100) for assisting a user in carrying out an interpretation of a medical image (112), the system comprising:
an image interface (110) for obtaining the medical image (112);
a task interface (120) for obtaining task data (122) representing a plurality of image interpretation tasks (310-360, 400), the plurality of image interpretation tasks, when carried out by the user, providing the interpretation of the medical image;
a progress unit (140) for enabling the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task (310, 410);
an image processor (160) for generating an output image (162-166) based on the medical image, the output image including a visual guidance for visually guiding the user towards carrying out the current image interpretation task; and
a display output (170) connectable to a display (175) for displaying the output image.
2. The system (100) according to claim 1, wherein the task data (122) is indicative of a region of interest (115) that is of relevance in the current image interpretation task (310, 410), and wherein the visual guidance is arranged for visually guiding the user towards the region of interest.
3. The system (100) according to claim 2, wherein the image processor (160) is arranged for establishing the visual guidance by modifying a content (114) of the medical image (112).
4. The system (100) according to claim 3, wherein the image processor (160) is arranged for establishing the visual guidance by:
detecting the region of interest (115) in the medical image (112); and
highlighting the region of interest in the medical image.
5. The system (100) according to claim 4, wherein the image processor (160) is arranged for highlighting the region of interest (115) by masking parts of the medical image (112) which do not comprise the region of interest.
6. The system (100) according to claim 4, wherein the image processor (160) is arranged for detecting characters in the medical image (112) to enable detecting patient identifiers (116, 118) embedded in the medical image as the region of interest.
7. The system (100) according to claim 1, wherein the image processor (160) is arranged for establishing the visual guidance by establishing a visual representation of the current image interpretation task (410) in or next to the medical image (112).
8. The system (100) according to claim 7, wherein the image processor (160) is arranged for establishing the visual guidance by:
- including a plurality of visual representations in the output image, each representing a respective one of the plurality of image interpretation tasks (400); and
highlighting the visual representation of the current image interpretation task (410).
9. The system (100) according to claim 1, wherein the progress unit (140) is arranged for i) obtaining user input (142) indicative of a completion of the current image interpretation task (310, 410), and ii) progressing through the plurality of image interpretation tasks (310-360, 400) based on said completion.
10. The system (100) according to claim 9, wherein the user input (142) is further indicative of an outcome of the current image interpretation task (310, 410), and wherein the progress unit (140) is arranged for progressing through the plurality of image interpretation tasks (310-360, 400) further based on said outcome.
11. The system (100) according to claim 10, wherein the task data (122) is indicative of a potential outcome of the current image interpretation task (310, 410), and wherein the progress unit (140) is arranged for querying the user for said potential outcome.
12. The system (100) according to claim 1, further comprising a radiology input (180) for enabling the user to generate a structured report (184) based on the plurality of image interpretation tasks (310-360, 400).
13. The system (100) according to claim 12, wherein the radiology input (180) is arranged for automatically filling in the outcome of the current image interpretation task (310, 410) in a report template for the structured report (184).
14. The system (100) according to claim 1, wherein the task interface (120) is arranged for obtaining the task data (122) from at least one of the group of: an image interpretation guideline, a report template, and a reporting guideline.
15. Workstation or imaging apparatus comprising the system of claim 1.
16. A method (200) for assisting a user in carrying out an interpretation of a medical image, the method comprising:
obtaining (210) the medical image;
obtaining (220) task data representing a plurality of image interpretation tasks, the plurality of image interpretation tasks, when carried out by the user, providing the interpretation of the medical image;
enabling (230) the user to progress through the plurality of image interpretation tasks, thereby establishing a current image interpretation task; and
generating (240) an output image based on the medical image; wherein said generating comprises:
establishing (250) a visual guidance in the output image for visually guiding the user towards carrying out the current image interpretation task.
17. A computer program product (270) comprising instructions for causing a processor system to perform the method according to claim 16.
PCT/IB2013/059967 2012-11-08 2013-11-07 Enabling interpretation of a medical image WO2014072928A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261723837P 2012-11-08 2012-11-08
US61/723,837 2012-11-08

Publications (1)

Publication Number Publication Date
WO2014072928A1 true WO2014072928A1 (en) 2014-05-15

Family

ID=49709785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/059967 WO2014072928A1 (en) 2012-11-08 2013-11-07 Enabling interpretation of a medical image

Country Status (1)

Country Link
WO (1) WO2014072928A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070274585A1 (en) * 2006-05-25 2007-11-29 Zhang Daoxian H Digital mammography system with improved workflow
US20120172700A1 (en) * 2010-05-21 2012-07-05 Siemens Medical Solutions Usa, Inc. Systems and Methods for Viewing and Analyzing Anatomical Structures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13799390

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13799390

Country of ref document: EP

Kind code of ref document: A1