EP1565796A2 - Method and device for 3D image documentation and navigation - Google Patents

Method and device for 3D image documentation and navigation

Info

Publication number
EP1565796A2
Authority
EP
European Patent Office
Prior art keywords
ego
view
user
data object
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP03724726A
Other languages
English (en)
French (fr)
Inventor
Patrick Dube
Alexandre Boudreau
Eric Fournier
Claude Kauffmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dynapix Intelligence Imaging Inc
Original Assignee
Dynapix Intelligence Imaging Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dynapix Intelligence Imaging Inc filed Critical Dynapix Intelligence Imaging Inc
Publication of EP1565796A2


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005Tree description, e.g. octree, quadtree
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Definitions

  • the present invention relates to 3D image documentation. More specifically, the invention relates to a computer controlled graphical user interface for documenting and navigating through a 3D image using a network of embedded graphical objects (EGO).
  • 3D digital images are becoming commonly used in various fields of application as they provide more information and a richer analysis context for specific tasks at hand.
  • image interpretation and analysis is essential and involves multidisciplinary expertise
  • efficient sharing is difficult to attain with existing technologies since it involves applying multiple intricate steps, which makes the sharing process highly time-consuming and error-prone.
  • Classical image knowledge integration strategies usually involve a combination of the following steps: selection of one or more visualization perspectives; extraction of some snapshots from the perspectives; manual/automatic identification of the structures that are to be documented from the perspectives; insertion of the snapshots in a written report that describes the structures observed in the image; and archiving of reports and raw data independently for further consultation and/or management.
  • Knowledge management (KM) systems, such as mind-mapping software, are also available.
  • Such systems follow basic, well-known, hierarchical and/or associative methods to interconnect portions of information within a network representation.
  • U.S. Pat. No. 5,812,134 discloses a user interface navigational system and method for interactive representation of information contained within a database.
  • This system graphically depicts the organization of an information base as three-dimensional "molecules" consisting of structured parallel "threads" of connected nodes each encompassing a specific aspect of the overall database.
  • the component nodes, which share a commonality of subject, are arranged in a natural, linear progression which reflects the organizational structure of the information subject represented by the thread, thereby providing the user with a visual guide suggesting the appropriate sequence of nodes to be viewed.
  • U.S. Pat. No. 6,037,944 discloses a computer-user-interface navigational system for displaying a thought network from a thought perspective.
  • This system utilizes associative thought networks to organize and represent digitally-stored thoughts.
  • a graphical representation of the thought network is displayed, including a plurality of display icons corresponding to the thoughts, and a plurality of connecting lines corresponding to the relationships among the thoughts.
  • Users are able to select a current thought by interacting with the graphical representation, and the current thought is processed by automatically showing the thought related to the current thought and/or by invoking an application program associated with the current thought in a transparent manner.
  • the present invention relates to 3D image documentation. More specifically, the invention relates to a computer controlled graphical user interface for documenting and navigating through a 3D image using a network of embedded graphical objects (EGO).
  • any scene, perspective, or view plane from the 3D image can be taken as central focus within the context of the present invention.
  • This new focus is the principal axis that determines what subset of an information network will be accessible to the user. Therefore, there is provided an effective and fully integrative methodology for documenting a 3D image by explicitly embedding an information network within the spatial frame of reference of said 3D image; optimizing visual representation of the embedded information network in relationship to the 3D image by combining 2D and 3D representation approaches; and performing multiscale navigation through the 3D image by combining the use of the embedded information network, hierarchical multiscale image partitioning and non-linear volume slicing.
  • a method for annotating a 3D graphical data object comprising: identifying at least one spatial position or region of the graphical data object; providing annotation information associated with the at least one spatial position or region, the at least one spatial position or region and the associated annotation information forming an embedded graphical object (EGO); defining a view of the graphical data object; and generating a display of the view of the graphical data object and of at least some of the EGOs within the desired view.
  • the method also comprises associative link data defining relations between EGOs.
  • a method for annotating a 3D graphical data object that also comprises data-mining search capabilities in which the search is performed on databases containing EGOs or other information relevant to a graphical data object.
  • methods for automatically identifying spatial positions or regions within a graphical data object are provided.
  • a method for displaying a 3D graphical data object comprising: defining a non-planar 3D surface referenced with respect to the 3D graphical data object by manipulating it in 3D to fit a 3D object within the graphical data object; and generating a display of a view of the 3D surface in the context of the 3D graphical data object.
  • the method of the present invention can be delivered as a computer program product as would be obvious to one skilled in the art.
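For illustration only, the following Python sketch shows one possible data model for the EGO concept described above (a spatial anchor, a multimedia annotation, and associative links); the class and field names are assumptions and are not taken from the patent.

    # Illustrative sketch only: a minimal data model for the EGO concept described
    # above. Class and field names are assumptions, not taken from the patent.
    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class EGO:
        position: Optional[Tuple[float, float, float]]  # 3D anchor; None for a floating EGO (F-EGO)
        region: Optional[List[Tuple[float, float, float]]] = None  # optional voxel/vertex list the EGO documents
        annotation: dict = field(default_factory=dict)   # multimedia payload: text, audio/video references, etc.
        links: List[Tuple["EGO", str]] = field(default_factory=list)  # associative links as (target, link type)

        @property
        def is_attached(self) -> bool:
            # An A-EGO has a spatial anchor; an F-EGO does not.
            return self.position is not None

    # Example: annotate one spatial position and link it to a floating EGO.
    a_ego = EGO(position=(12.0, 40.5, 7.25), annotation={"text": "possible aneurysm"})
    f_ego = EGO(position=None, annotation={"text": "saved view of the aortic arch"})
    a_ego.links.append((f_ego, "generic"))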
  • Fig. 1 is a flow chart of an embodiment of the method of the present invention
  • Fig. 2 is an embodiment of a graphical data object with associated EGOs
  • Fig. 3 is an embodiment of a graphical data object with associated EGOs
  • Fig. 4 is an embodiment of graphical data objects with associated EGOs
  • Fig. 5 is a flow chart of the steps involved in the "snapping" of an EGO to a surface
  • Fig. 6 is a flow chart of steps in morphology association annotation
  • Fig. 7A is a data object in a frustum
  • Fig. 7B is a data object in a frustum showing a user-defined contour
  • Fig. 7C is a data object in a frustum showing a user-defined contour extended in 3D
  • Fig. 7D is a data object in a frustum showing intersecting sub-volume of contour
  • Fig. 7E shows the final associated sub-volume
  • Fig. 8 is a flow chart of an embodiment of automatic image annotation process
  • Fig. 9 is a diagram of an embodiment of the system of the present invention.
  • Fig.10 illustrates an embodiment of an information storage subsystem
  • Fig. 11 is a flow chart of an embodiment of the steps involved in return on information process.
  • Fig. 12 is a flow chart of another embodiment of the steps involved in return on information process.
  • Scene is intended to mean a graphical representation of the considered 3D image, wherein said graphical representation may be achieved through operations leading to a modified view of said image with the objective of emphasizing certain structures or portions of said image.
  • a scene may further comprise graphical objects of various nature.
  • View refers to a specific representation of the 3D image.
  • View may be used interchangeably with the term Scene.
  • EGO is intended to mean embedded graphical object which is a 2D or 3D pictogram incorporated in the 3D frame of reference of the 3D image.
  • A-EGO is intended to mean attached EGO.
  • An attached EGO is an EGO that is directly associated to a visual structure within a 3D image.
  • An A-EGO can be associated to a visual structure within a specific view or scene.
  • F-EGO is intended to mean Floating embedded graphical object. This specific kind of EGO encompasses all EGOs that are not associated with visible structures within the image, view, or scene. We note that an A-EGO can temporarily become an F-EGO if it is to be displayed without being associated with a structure in the current scene.
  • HROI is intended to mean Hierarchical Region Of Interest.
  • An HROI is a multilevel 3D regionalization of a 3D image: one or a plurality of 3D regions of interest may be defined within a 3D image, wherein a 3D region of interest may further comprise one or a plurality of 3D regions of interest. Each subregion is associated with its own scale domain.
  • Scale Domain is intended to mean a "zoom level" of the 3D image at which objects within the image and the attributes of these objects are explicitly visible without further requiring zooming to simultaneously view both objects and attributes.
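As a purely illustrative sketch, the hierarchical partitioning into HROIs and scale domains described above could be modelled as a tree of nested 3D boxes; the names and the scale-matching rule below are assumptions, not the patent's implementation.

    # Illustrative sketch only: a hierarchical region of interest (HROI) as a tree of
    # nested 3D boxes, each tied to its own scale domain. Names are assumptions.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class HROI:
        bounds: Tuple[Tuple[float, float], Tuple[float, float], Tuple[float, float]]  # (x, y, z) extents
        scale_domain: float                      # "zoom level" at which this region is meant to be viewed
        children: List["HROI"] = field(default_factory=list)  # nested sub-regions

        def add_subregion(self, sub: "HROI") -> None:
            self.children.append(sub)

        def regions_at_scale(self, scale: float, tol: float = 0.5) -> List["HROI"]:
            # Collect every region whose scale domain is close to the requested zoom level.
            hits = [self] if abs(self.scale_domain - scale) <= tol else []
            for child in self.children:
                hits.extend(child.regions_at_scale(scale, tol))
            return hits

    # Example: a whole-thorax region containing a heart sub-region viewed at a finer scale.
    thorax = HROI(bounds=((0, 512), (0, 512), (0, 300)), scale_domain=1.0)
    heart = HROI(bounds=((180, 330), (160, 320), (90, 200)), scale_domain=3.0)
    thorax.add_subregion(heart)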
  • Segmentation or “Image Segmentation” is intended to mean the manual or automatic identification, delineation, and quantification of objects within images.
  • a system and methods for annotating 3D graphical data objects are provided which advantageously facilitate knowledge management associated with, and visualization of, 3D graphical data objects.
  • the system and methods also favor the exchange and the contextual integration of information relative to 3D images, or images of higher dimensions, being analyzed.
  • a multidimensional data visualization system and method that relates matrix data, multimedia information, vectorial information and graphical objects.
  • the matrix data is the image/volume to be visualized (e.g. MRI medical data).
  • the multimedia information is the contextual or global knowledge, associated with particular structures within the image being analyzed and visualized, added by the user or automatically (e.g. audio note describing a tumor).
  • the vectorial information adds an extra layer of representation that can be associated with the image data itself or that can act as interactive tools.
  • the vectorial information relates to all graphical data that is to be displayed using vectorial rendering, a term commonly used in computer graphics.
  • a method in which multimedia information is provided and associated with a spatial position or region of a 3D graphical data object to form an embedded graphical data object or EGO.
  • a spatial position or region is first defined at 10 and information pertinent to the region is provided automatically or by the user and associated with the region to form an EGO at 12.
  • a view of the graphical object is then defined at 14 and a display of the view with one or more EGOs is generated at 16.
  • an EGO is a pictographic element (icon) that can be freely positioned and saved within a portion of a 3D image and/or within the graphical monitor in general, and that represents a multimedia information structure. Through an EGO, existing information can be consulted and new information can be added.
  • An A-EGO is an EGO that is positioned at a specific coordinate within a 3D frame of reference. It is preferably linked to a particular voxel, line, polygon, or polyhedron within the volume.
  • the A-EGO's role is to document visual structures within a volume such as segmented objects, visual image objects and/or hierarchical regions of interest (HROI, to be further defined below). Interacting with this type of EGO enables, for instance, easy access to its related multimedia information of interest.
  • An F-EGO is an EGO that is, at a particular instance or time, not directly linked to a specific image structure (voxel, etc.) at a precise coordinate.
  • the F-EGO refers to, without limitation, an analysis project, a particular scene or view, a website, or any piece of relevant information that is not within the volume itself or that cannot be spatially positioned within a frame of reference.
  • An example of an F-EGO is one that is related to a scene or view that was previously created and saved. In this case, the F-EGO acts as a shortcut to this scene/view. When the user navigates through this F-EGO, a new graphical representation is displayed from the perspective of the scene or view this particular F-EGO relates to.
  • EGOs can have various pictographic representations.
  • In one embodiment, the A-EGO pictogram is represented as a pyramid (20).
  • Fig. 2 is an example showing a heart 20 with A-EGOs 22 and F-EGOs 24.
  • a computer generated surface 26 is also shown within heart 20.
  • Such a geometric pictographic representation makes it possible to precisely target a structure in the image by orienting the pictogram so that the apex of the pyramid points in the direction of the structure.
  • a pictogram facilitates and accelerates the visual recognition of this EGO by the user.
  • To distinguish F-EGOs from A-EGOs, different geometric shapes are used.
  • A further example of an A-EGO is shown in Fig. 3, wherein a blood vessel 30, an aneurysm 32 and an A-EGO 34 are schematically represented.
  • a knowledge management system that enables the construction/recording of multimedia information and its association to a specific context/region within a multidimensional image.
  • the multimedia information can be constructed by integrating text documents, audio files, images and videos.
  • the recording of new information can be achieved interactively through electronic data-capture: digital audio recording for audio information, and standard keyboard input for textual information.
  • the multimedia information can be generated automatically, when associated with specialized algorithms.
  • the association of the multimedia information with a spatial context within the image is achieved by spatially positioning graphical objects in the 3D image's frame of reference and by associating the appropriate information to the graphical objects.
  • Spatial navigation can be conducted in multiple ways. For instance through the manipulation of and interaction with 3D surfaces (planar or nonlinear) within the volume as described in one embodiment of the invention. Semantic navigation involves the traversal of a network of semantically associated EGOs, permitting the exploration of information and associated contextual structures within a 3D image.
  • the information can be partitioned as a function of its position in a spatio-semantic scale continuum, advantageously permitting the display of desired information and avoiding the display of an excessive or unnecessary amount of information.
  • the 3D image can be partitioned into Hierarchical Regions of Interest (HROI) to generate scale domains. This partitioning enables a user to focus on a particular portion of the 3D image at a specific scale domain, therefore only taking into account the information that is of interest.
  • one or more spatio-semantic scale domain(s) may be defined.
  • three sublevels of information can be documented: region, objects and object attributes.
  • a region can be considered as a part of a volume having a significant meaning for the user at the desired scale of the analysis and that holds one or more clusters of objects expressed at the desired scale from the perspective of the user.
  • a user may select a specific scale domain by displaying a first current HROI and each HROI embedded inside the current HROI. The user may then select a new current HROI from the ones available using the cursor/control device. This operation may be performed either in the graphical window and/or in an inheritance graph window, illustrating the relationship between the plurality of HROI defining the hierarchical partitioning of space in 3D image.
  • HROIs are of rectangular shape.
  • the hierarchical subdivision of the 3D graphical data object enables the association of particular clusters of properties with a specific scale domain rather than the whole image, allowing for instance the user to represent the same area of a 3D image at two different scales and to associate each scale domain with its own view, annotations, and embedded graphical objects.
  • a view can be created by selecting a subset of voxels from the 3D image being visualized and thereafter modifying the spectral properties of the selected voxels.
  • the spectral modification can be of various nature, such as, but not limited to, the alteration of the level of transparency or the alteration of contrast.
  • the means of modifying the spectral properties of a subset of voxels in order to create a scene are multiple and include, without restriction, 3D surfaces that intersect the 3D image, voxel thresholding, and object segmentation methods.
  • Views can be generated by the process of defining, automatically or manually, a linear (plane) or non-linear surface geometry, its size, orientation, and positioning within the 3D image.
  • a user may manually modify the surface's properties and parameters (can also be an automatic process) which has a direct impact on the graphical representation of the 3D image and 3D surface.
  • a scene may be created by displaying only the portion of the 3D image that is intersected by the surface.
  • a scene may also be generated by removing the portion of the volume that resides either in front or behind the surface, which allows viewing of a sub-volume of the 3D graphical data object.
  • the process of removing a portion of the volume is denoted by the term "Slicing".
  • the volume-intersecting surface is a tool that is used for spatial volume navigation, as well as for scene creation, in either an automated or manual (interactive) fashion.
  • the 3D surface is a discrete approximation to a continuous mathematical surface equation. Discretization of continuous functions can be achieved in many ways.
  • the discrete surface is obtained by using a Thin-Plate-Spline (TPS), which is the 3D equivalent to spline curves.
  • the TPS uses a set of control points, from which a smooth surface approximation is generated.
  • the data structure uses a set of matrices to store the control points.
  • the control points By storing only the control points associated with a surface, the amount of information needed by the system to generate the surface is significantly reduced.
  • the TPS function When the surface is displayed, the TPS function generates the complete surface from the set of control points.
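The following sketch illustrates, under the assumption that the surface can be treated as a height field z = f(x, y), how a dense surface might be regenerated from a sparse set of stored control points with an off-the-shelf thin-plate-spline interpolator; it is not the patent's own implementation and all values are stand-ins.

    # Illustrative sketch only: reconstructing a full surface from a sparse set of
    # control points with a thin-plate-spline interpolator, as described above.
    # Treating the surface as a height field z = f(x, y) is an assumption.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Sparse control points stored by the system: (x, y) -> z.
    control_xy = np.array([[0, 0], [0, 10], [10, 0], [10, 10], [5, 5]], dtype=float)
    control_z = np.array([0.0, 0.0, 0.0, 0.0, 3.0])  # a bump in the middle

    tps = RBFInterpolator(control_xy, control_z, kernel="thin_plate_spline")

    # When the surface is displayed, the dense geometry is regenerated on demand.
    gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
    grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
    surface_z = tps(grid_xy).reshape(gx.shape)  # 50x50 vertex heights for rendering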
  • surfaces can be deformed to make navigation and scene creation processes as intuitive and flexible as possible.
  • the deformation process can originate from user instructions or can be automated.
  • User-generated surface deformation can be achieved in various ways, such as, but not limited to, parameter-based deformation and on-screen visual deformation.
  • parameter-based deformation the user specifies a set of parameters, such as the spatial displacement of control points, which will induce a surface deformation.
  • the on-screen deformation process is highly intuitive. This method permits a user to visualize the deformation process of a surface in the graphical display by simply using a control device, such as a computer mouse, to spatially displace the surface's control points. This is a real-time process, which means that the displacement of a control point immediately changes the surface geometry, in which case the changes are instantly shown in the graphical display.
  • Automated surface deformation process can be used in conjunction with segmentation algorithms. These algorithms extract information from the 3D image, which is thereafter used to obtain an insight on the volume's structure. In turn, this structural information is used to set a surface's parameters.
  • the automated surface deformation process can use information of various sources, in order to automatically and properly "parameterize” the surface.
  • Surface tension can be introduced in the surface deformation process to provide varying degrees of deformation.
  • the concept of surface deformation and tension is well known in the field of 3D computer assisted design (used by software such as 3D Studio Max). If the surface has zero tension, the displacement of a control point will have no impact on neighboring control points, which generates a local deformation only. On the other hand, by setting a certain level of tension, the displacement of a control point will in this case have a direct impact on the spatial positioning of the neighboring control points.
  • the tension can be modified according to a selected function such as, but not limited to, an exponential function.
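As an illustration of the tension behaviour described above, the sketch below displaces one control point and drags its neighbours along with an exponentially decaying weight; the falloff law, parameter names and values are assumptions.

    # Illustrative sketch only: propagating a control-point displacement to its
    # neighbours with an exponential falloff, one possible reading of the "tension"
    # behaviour described above.
    import numpy as np

    def displace_with_tension(control_points, index, displacement, tension):
        """Move control_points[index] by `displacement` and drag neighbours along.

        tension = 0  -> purely local deformation (neighbours unaffected)
        tension > 0  -> neighbours move with exponentially decaying weight
        """
        pts = np.asarray(control_points, dtype=float).copy()
        displacement = np.asarray(displacement, dtype=float)
        dists = np.linalg.norm(pts - pts[index], axis=1)
        if tension <= 0.0:
            weights = np.zeros(len(pts))
            weights[index] = 1.0
        else:
            weights = np.exp(-dists / tension)  # 1.0 at the moved point, decaying with distance
        return pts + weights[:, None] * displacement

    grid = [(x, y, 0.0) for x in range(5) for y in range(5)]
    deformed = displace_with_tension(grid, index=12, displacement=(0, 0, 2.0), tension=1.5)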
  • the surface resolution controls, to a certain degree, the precision of the surface deformation.
  • the change in surface resolution can be of local or global nature.
  • a local change of resolution is made by the addition of a certain number of control points within a defined neighboring region with respect to the current control point. This local change of resolution permits more precise deformation within a specific region of the surface.
  • For a global change of resolution the number of control points is increased over the entire surface.
  • the change of resolution can be either isotropic or anisotropic, and can follow a certain distribution function such as, but not limited to, a Gaussian distribution.
  • the surface can undergo rigid alterations, such as rotations and translations. This process is achieved by applying the transformation on the overall surface data-structure.
  • Scenes can be generated by automatically or manually thresholding selected voxels so that a threshold value defines which voxels will be displayed, according to their spectral values.
  • the newly generated scene is composed of a subset of voxels of the original volume.
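A minimal sketch of scene creation by voxel thresholding, as described above; the threshold value and the use of NaN to mark hidden voxels are assumptions.

    # Illustrative sketch only: creating a scene by thresholding voxel intensities.
    import numpy as np

    volume = np.random.rand(64, 64, 64)          # stand-in for the 3D image (e.g. normalized CT data)
    threshold = 0.7

    scene_mask = volume >= threshold             # voxels that remain visible in the new scene
    scene = np.where(scene_mask, volume, np.nan) # hidden voxels marked fully transparent here

    print(f"{scene_mask.sum()} of {volume.size} voxels kept in the scene")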
  • Specific objects/structures within a 3D image can be automatically or manually segmented to allow their removal or alternatively, permit these objects/structures to be emphasized so as to create a precise and/or custom scene. This method therefore permits the manual/automatic creation of a scene containing only relevant information.
  • a scene may be stored in a database (scene-database) so that every element associated with this scene is permanently kept. This enables the retrieval and visualization of a scene at later times.
  • Spatial navigation allows progressive and intuitive volume visualization. This process uses one or multiple deformable or non deformable surfaces.
  • Such surfaces allow users to dynamically visualize portions of a volume through simple interactions such as translations, rotations, and deformations.
  • a progressive change in the spatial context being displayed is achieved. This progressive change enables a user to visualize one region of the 3D graphical data object at a time, in a smooth and continuous manner, thus achieving spatial navigation within a volume.
  • Spatial navigation can be achieved by using one or multiple non-deformable linear surfaces (planes), one or multiple non-linear deformable surfaces, or any combination thereof.
  • Plane-based visualization allows standard sectional views that can be of interest for specific contexts or applications and provides a familiar tool to users.
  • Deformable surface based visualization provides a way to achieve intricate sectional views that can be of great value.
  • Their nonlinear geometry allows certain structures to be circumvented that may not be of interest or that may hide underlying relevant structures within the volume at a certain instance.
  • EGOs can be linked to form a network of EGOs. Since there may be associative links of varying nature between EGOs, a generic type of link, the Generic Associative Link (GAL), is defined, from which specific types can inherit.
  • Types of specific associative links can be, without limitation, of causality or similarity.
  • a "causality" association may be of interest for emphasizing that the considered structure may have been caused by another.
  • an aortic stenosis may cause serious damage to the heart's ventricles.
  • Newly created links can be assigned a default type, such as basic generic type, or can be defined by the user, or automatically, to any specific type.
  • GALs can be graphically represented, which directly illustrates the interrelation between EGOs. Furthermore, since there may be multiple types of links simultaneously displayed, a method for link type discrimination must be provided.
  • the associative links are represented by graphical lines positioned within the 3D frame of reference such as represented in Fig. 4 where A-EGO 34 is linked to A-EGO 40 which is related to aneurysm 42 on blood vessel 44.
  • the different types of links can be discriminated using separate colors. Additional information, such as pictograms, can be added to the line display and may provide directional cues, for example.
  • New EGOs are preferably created by interacting directly with the graphical window.
  • the user either defines a current scene by manually or automatically generating a desired view of the image (such as through image segmentation), or selects an existing scene amongst a list of registered scenes for the considered image.
  • a scene can be generated by simply thresholding the image and displaying the pixels/voxels that remain, or even by simply rendering the image without modifications.
  • the user may create a new EGO by activating the EGO button and thereafter interactively positioning the newly created EGO within the graphical window.
  • the positioning of an EGO within the 3D image is facilitated by a novel 3D image annotation mechanism.
  • the image annotation mechanism comprises a snapping mechanism, which automatically snaps an EGO to an intersecting 3D mesh surface.
  • This mechanism facilitates the positioning of an EGO by automatically positioning the EGO perpendicularly to the mesh surface. This process is real-time and is applied while the EGO is moved within the 3D mesh representation of the image, which further facilitates the positioning of an EGO.
  • a scene can be generated at 50, a mesh rendering at 52 and an EGO created and/or displayed at 54.
  • the annotation snapping process can comprise three main steps:
  • the first step occurs when a user positions a pointing device at 55, such as a mouse cursor, within the screen where the image mesh representation is displayed.
  • a 2D-to-3D conversion of the cursor's screen position is then necessary so as to identify the corresponding coordinate within the 3D image.
  • a ray picking algorithm is applied in the 3D image so as to identify the closest intersection point of the cursor and the 3D image mesh.
  • the closest detected intersection point is thereafter used as the snapping point, where the EGO is automatically positioned at 69.
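The snapping step above can be illustrated by casting a ray from the cursor into the scene and keeping the nearest mesh intersection; the sketch below assumes the ray has already been unprojected from 2D screen coordinates and uses a standard ray/triangle test, which is one possible realization rather than the patent's own.

    # Illustrative sketch only: pick the nearest mesh intersection along a cursor ray
    # and use it as the EGO snapping point.
    import numpy as np

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        # Moller-Trumbore ray/triangle intersection; returns distance t or None.
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = e1.dot(p)
        if abs(det) < eps:
            return None
        inv = 1.0 / det
        s = origin - v0
        u = s.dot(p) * inv
        if u < 0 or u > 1:
            return None
        q = np.cross(s, e1)
        v = direction.dot(q) * inv
        if v < 0 or u + v > 1:
            return None
        t = e2.dot(q) * inv
        return t if t > eps else None

    def snap_point(origin, direction, triangles):
        # Nearest intersection along the ray becomes the EGO snapping point.
        hits = [t for tri in triangles if (t := ray_triangle(origin, direction, *tri)) is not None]
        return origin + min(hits) * direction if hits else None

    tri = (np.array([0., 0., 5.]), np.array([4., 0., 5.]), np.array([0., 4., 5.]))
    print(snap_point(np.zeros(3), np.array([0.2, 0.2, 1.0]), [tri]))  # -> [1. 1. 5.]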
  • the annotation system is based on the linear algebra property of the renderer's viewport frustum 70 (Fig. 7A-E). In terms of 3D space, the frustum is the region of space that is currently visible through the camera.
  • the view frustum is the volume of space that includes everything that is currently visible from a given viewpoint. It is defined by six planes arranged in the shape of a pyramid with the apex chopped off. If a point is inside this volume then it is in the frustum and it is visible. If a point is outside of the frustum then it is not visible.
  • by visible it is meant all structures, such as structure 72, that are potentially visible. For example, a structure might be behind another structure that obscures it, but it is still in the frustum.
  • a second aspect is the automatic morphology association.
  • This mechanism automatically associates an EGO with the image sub-volume it is positioned on. This process is real-time.
  • the objective is to provide the possibility of performing morphology-based annotation data-mining.
  • the user annotation and morphology association process is composed of the following steps:
  • the user draws on the display a 2D contour 74 representing the projected region of interest
  • the contour is then transformed from screen coordinates to viewport coordinates on the near plane of the frustum;
  • the contour represented in the viewport is then extended 76 in the viewport space frustum to form a volume;
  • an intersection operation is then performed to clip the subset 78 of the dataset that lies inside the volume;
  • a spatial distribution is computed along the z axis (of the viewport space) of the vertices of the dataset's mesh representation;
  • the elements of the distribution are weighted by their z-buffer test result during rendering;
  • the maximum value (Vm) of the spatial distribution is computed;
  • the dataset sub-region 79 represented by the elements of the distribution is associated with a corresponding EGO, wherein said sub-region represents the region of interest.
  • Steps 61 to 68 are summarized in Fig. 6.
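The depth-distribution part of the morphology association can be sketched as follows, assuming the mesh vertices have already been transformed to viewport space and clipped against the extruded contour; the binning strategy and the visibility weights standing in for the z-buffer test are assumptions.

    # Illustrative sketch only: find the depth where visible, clipped geometry peaks
    # and associate the vertices near that depth with the EGO.
    import numpy as np

    def dominant_depth_region(z_values, visibility, n_bins=32, keep=1.0):
        """Return indices of vertices near the depth where visible geometry peaks."""
        z = np.asarray(z_values, dtype=float)
        w = np.asarray(visibility, dtype=float)          # 1.0 if the vertex passed the z-buffer test
        hist, edges = np.histogram(z, bins=n_bins, weights=w)
        peak = np.argmax(hist)                           # maximum value Vm of the weighted distribution
        lo, hi = edges[peak] - keep, edges[peak + 1] + keep
        return np.nonzero((z >= lo) & (z <= hi))[0]      # vertex indices of the sub-region for the EGO

    z = np.concatenate([np.random.normal(10, 0.5, 200), np.random.normal(25, 3.0, 50)])
    vis = np.ones_like(z)
    sub_region = dominant_depth_region(z, vis)           # mostly the vertices clustered near z = 10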
  • the system simply saves the associated morphology information and associates it to the considered EGO.
  • An existing EGO can be deleted from a scene or the network of EGOs by interacting directly with the graphical scene. The user first overlays the graphical cursor over a desired EGO and then presses a control button. The user is then offered the option to remove the EGO. Another means for deleting the EGO is by browsing or searching through a list of EGOs and deleting the considered EGO from the list. In a preferred embodiment, after deletion of an EGO from within the graphical window or a list, the system automatically checks for the presence of any GALs associated with the selected EGO. If any are found, the GALs related to the deleted EGO are removed from the EGO network and graphical window.
  • EGOs may exist as independent entities, i.e. having no association with other EGOs.
  • EGOs can be linked either graphically or using a text-based approach.
  • a user may define a link between two EGOs by clicking over a first EGO within the graphical window which then displays a "properties" dialog box. From this dialog box, the user may select from a list the EGO(s) to which the current EGO will be linked. The next step requires the user to select the type of Generic Associative Link that defines the newly created link. It is also possible to graphically link two EGOs by clicking a first EGO and then dragging the mouse cursor to a second EGO.
  • While the linking of EGOs, either using a graphical approach or a text approach, can be made interactively by the user, it is also possible to use an automated linking method.
  • the latter can use algorithms to define network relationships of various types among EGOs.
  • a criterion used by the algorithm can be, but is not limited to, semantic or spatial information.
  • EGOs that document structures that are spatially near to one another could be linked by a GAL of type "Proximity”.
  • Such automated methods are particularly useful in that they allow linking different EGOs, from one or multiple 3D images and from the same or different scenes, without the need for user intervention.
  • Deleting a link between two existing EGOs: At any moment, the user can remove a link between two existing EGOs. This operation is preferably performed by interacting directly with the graphical scene. The user first overlays the graphical cursor over a desired GAL and then presses a control button. The user is then offered the option to remove the link.
  • Another mechanism can be the deletion of links within the multimedia information panel, from which the GAL are removed from the EGO network and from the graphical window.
  • Associating an EGO to a Scene: Defining and/or selecting a scene allows the user to visualize a portion of the 3D image as well as its associated EGOs and GALs.
  • the process of associating an EGO with a scene is important in order to enable efficient navigation through the 3D image.
  • Associating an EGO to a particular scene can be achieved in several ways.
  • the system automatically associates an existing EGO to a scene using an automated algorithm "plug-in".
  • the default algorithm searches in the EGO database for every A-EGO that intersects with the 3D surface that defines a scene. This default algorithm can be triggered each time a new scene is created.
  • Alternative algorithms can be used to select EGOs according to scene properties.
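One possible reading of the default association rule above is sketched below: every attached EGO whose anchor lies within a small distance of the scene-defining surface is kept. The tolerance value and data layout are assumptions, not part of the patent.

    # Illustrative sketch only: keep A-EGOs that "intersect" (lie close to) the
    # 3D surface that defines a scene.
    import numpy as np

    def egos_for_scene(ego_positions, surface_vertices, tolerance=2.0):
        """Return indices of A-EGOs considered to intersect the scene surface."""
        egos = np.asarray(ego_positions, dtype=float)       # (n, 3) anchor points of A-EGOs
        surf = np.asarray(surface_vertices, dtype=float)    # (m, 3) sampled surface vertices
        # Distance from every EGO to its nearest surface vertex.
        d = np.linalg.norm(egos[:, None, :] - surf[None, :, :], axis=2).min(axis=1)
        return np.nonzero(d <= tolerance)[0]

    surface = np.array([[x, y, 50.0] for x in range(0, 100, 5) for y in range(0, 100, 5)])
    egos = np.array([[10, 10, 50.5], [40, 60, 120.0]])
    print(egos_for_scene(egos, surface))   # only the first EGO is near the surface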
  • a possible interactive approach is to create a new EGO in the current scene, the newly created EGO being associated with the current scene.
  • Another way is to select one or a plurality of EGOs from a list of existing EGOs and associating the selected EGOs to the current scene.
  • an automated algorithm can also be used to associate a default scene to a particular EGO.
  • such an algorithm would define a new scene and/or select a scene from the plurality of already available scenes according to specific spatial criteria, such as defining/selecting a scene that maximizes the visible portion of the image structure documented by the EGO.
  • semantic criteria could also be used to define/select a default scene to be associated with a particular EGO (i.e.: defining/selecting a scene that allows the simultaneous view of every A-EGO related to the currently selected EGO by a particular type of GAL).
  • Removing the association between an EGO and a Scene: At any time, the user is allowed to remove the association between an EGO and a scene. In the present embodiment, this operation is performed by interactively activating the scene properties panel, as described previously, for the desired scene. Once the panel is activated, the user may select an existing EGO associated with the scene and remove its association.
  • Display of the EGO network is preferably configured according to the current scene. More specifically, the selection, the positioning, and the display of particular subsets of the EGO network can be directly determined from the parameters that define the scene, the relative position of the structures within the scene and the network relationships between the selected EGOs that are to be displayed. Such a method constitutes an effective way to quickly retrieve clusters of information by automatically determining which EGO is to be visible in the Graphical Window at a particular time. EGOs not directly associated with the current scene, or not visible within it, may also be displayed as F-EGOs positioned outside the spatial reference of the current scene. This allows for the efficient selection of external EGOs.
  • the F-EGO acts as a link (shortcut) to the associated external EGO visible in another scene or even in another image.
  • the user has an indication of the type of associative link defined between any current EGO and the external EGO.
  • a user can view the external EGO's associated multimedia annotation information, such as diagnostic text written by a radiologist, without requiring viewing the EGO in its actual image context. This can be achieved by selecting the corresponding F-EGO and selecting an option such as "show annotation text".
  • EGOs of a network may be hierarchical in nature. That is to say, an EGO may be related to another EGO based on commonly shared information of a similar nature. For example, the information may be more and more detailed as one moves down the hierarchical tree from parent to child EGOs. In this respect it will be appreciated that EGOs of a same hierarchical level can relate to the same scale. It will be further appreciated that a user may select related (parent or child) EGOs to be displayed within a current view.
  • The contents of each EGO's multimedia information structure are displayed in the multimedia editing window.
  • the user can add or remove the content of the multimedia Information structure.
  • the user first right clicks on the desired EGO in the scene graphical window.
  • the system displays the multimedia editing panel on the screen, presenting text-based, audio-based, and video-based editing widgets to the user.
  • the EGO database can preferably be searched according to the EGO ID or any descriptors present in the multimedia information structure.
  • Automated Image Annotation: Regions or positions of interest to the user within an image can be automatically identified and annotated by automated image annotation.
  • This automatic identification and annotation can also comprise automatic positioning of EGOs.
  • the objective is to reduce the time and effort required by a user for the analysis and annotation of images.
  • the automated image annotation process can be used by the system to automatically identify possible aneurysms within 3D thoracic CT images thereby facilitating and accelerating the specialist's diagnostic process.
  • the general automated image annotation process may comprise the following steps:
  • Steps 1 to 5 allow for basic annotation, where only a visual marker (EGO) is positioned in the image to visually identify the object of interest.
  • Steps 6 to 8 allow for adding relevant textual information to the EGO (such as quantitative or qualitative information on the object), defining pertinent EGO networks based on associative information, as well as defining a view that allows users to efficiently view each EGO and their associated image objects, respectively.
  • the previously defined step 2 uses a Level-Set based segmentation method.
  • the automated annotation process works as follows: The system first loads a patient's CT scan image, and then launches the well known 3D level-set segmentation method.
  • the segmentation method is based on the concept of having a 2D surface that evolves within 3D space until equilibrium of the surface is attained.
  • the surface is deformed by structures of varying intensity value within the 3D image until it is broken into separate sub-surfaces that eventually "wrap" each of the objects of interest.
  • By setting initial surface parameters and constraints, such as curvature, force and speed, the method will segment only the objects that are of interest. In the presently considered embodiment, these objects are vascular aneurysms.
  • each segmented object forms an object of interest. For each of these objects the following quantitative information is computed: the center of mass of each object defines the coordinates of the object and the voxels that represent the object define its structure and volume.
  • a textual annotation can have the following content and structure: "The identified object is a possible aneurysm with a volume of 250 units and longest diameter of 12 units”.
  • This textual and quantitative information is added to the EGO's data structure so that a user may view this associated information at all times by simply activating the considered EGO.
  • the association can be of various nature, such as an association based on distance. For instance, in a large volume where multiple aneurysms have been identified and where some are very distant to one another, the automatic linking can generate links between EGOs that are the most distant apart so as to facilitate subsequent visualization and navigation within the image from one object to the other. To do so, the system analyzes the spatial coordinates of every segmented object and creates an associative link between EGOs that surpass a certain distance threshold.
  • each segmented object is rendered in mesh (vectorial surface rendering) while the remaining volume data is rendered in transparency. This allows for the simultaneous visualization of raw image data, segmented objects and represented EGOs.
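The quantitative annotation and distance-based linking described above might look as follows; the label image, the annotation wording and the "most distant pair" rule are illustrative stand-ins under the assumption that a prior segmentation step has already produced labelled objects.

    # Illustrative sketch only: per-object centre of mass and voxel volume, a textual
    # annotation, and an associative link between the most distant pair of objects.
    import numpy as np
    from itertools import combinations

    def annotate_objects(label_image):
        annotations = []
        for label in np.unique(label_image):
            if label == 0:               # background
                continue
            coords = np.argwhere(label_image == label)
            centre = coords.mean(axis=0)
            volume = len(coords)
            text = f"The identified object is a possible aneurysm with a volume of {volume} units."
            annotations.append({"label": int(label), "position": centre, "text": text})
        return annotations

    def link_most_distant(annotations):
        # Create an associative link between the pair of EGOs that are farthest apart.
        if len(annotations) < 2:
            return None
        return max(combinations(annotations, 2),
                   key=lambda pair: np.linalg.norm(pair[0]["position"] - pair[1]["position"]))

    labels = np.zeros((32, 32, 32), dtype=int)
    labels[2:6, 2:6, 2:6] = 1
    labels[25:30, 25:30, 25:30] = 2
    egos = annotate_objects(labels)
    print(link_most_distant(egos)[0]["text"])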
  • An embodiment of the automated annotation system of the present invention is generally described in Figure 9.
  • the overall operations require the system's main processor (98) to first invoke the image loader (90) for reading the digital image from a storage media.
  • the image loader reads the image (91) and saves it in non-permanent memory (92).
  • the main processor then instantiates the segmentation method (93) which segments the image saved in memory.
  • the segmentation method then saves the segmentation results in memory (92).
  • the main processor invokes the Annotator (94) which uses the information saved in memory by the segmentation method to create and position the required number of EGO's.
  • the annotator collects the quantitative information associated with each object and generates a textual annotation for each related EGO.
  • the main processor invokes the linker (95) to generate associative link data.
  • the main processor invokes the View Generator (97) to generate optimized views.
  • Automatic report generation is the process of automatically gathering information, structuring it, and formatting the generated content so as to build a report that follows certain standards and regulations. Automatic report generation can be implemented in a straightforward manner using the method of the present invention.
  • the first step is to specify a report template that defines how the content should be formatted, structured, and ordered.
  • the next step consists in manually/automatically selecting the EGOs containing the content to be incorporated in the report. Following the content selection, the order in which this content is to be integrated within the report can be chosen.
  • An interesting aspect of this report generation method is that the insertion of contextual images/snapshots is an automatic and precise process, due to the fact that each EGO can be associated with a default scene/view.
  • when inserting the content of an EGO within a report, a scene/view-plane can be obtained as an image snapshot and thereafter directly integrated within the report.
  • This feature is quite interesting since the user is not required to manually take a snapshot of the visual context associated with this particular EGO's information, which may be imprecise and time consuming.
  • the final step is to extract this information from the multimedia information database and insert it in a new report according to the format template.
  • This newly created and formatted report is then saved in a particular file format, such as but not limited to, Adobe PDF, HTML, Rich Text, or Microsoft Word.
  • the generated reports can be used for, without limitation, archival and administrative purposes, or for sharing and consulting of specific information in hardcopy format.
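A minimal sketch of the report-generation flow described above, assembling selected EGO content and default-view snapshots into an HTML file; the template fields, snapshot handling and file names are assumptions.

    # Illustrative sketch only: build a simple HTML report from selected EGOs.
    from datetime import date

    TEMPLATE = "<html><body><h1>{title}</h1><p>Generated {when}</p>{sections}</body></html>"

    def build_report(title, selected_egos, path="report.html"):
        sections = []
        for ego in selected_egos:                               # order chosen by the user or automatically
            snapshot = ego.get("default_view_snapshot", "")     # image of the EGO's default scene/view
            img = f'<img src="{snapshot}"/>' if snapshot else ""
            sections.append(f"<h2>EGO {ego['id']}</h2><p>{ego['text']}</p>{img}")
        html = TEMPLATE.format(title=title, when=date.today(), sections="".join(sections))
        with open(path, "w", encoding="utf-8") as fh:
            fh.write(html)
        return path

    build_report("Thoracic CT analysis", [
        {"id": 1, "text": "Possible aneurysm, volume 250 units.", "default_view_snapshot": "ego1_view.png"},
        {"id": 2, "text": "Normal vessel segment.", "default_view_snapshot": ""},
    ])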
  • Semantic navigation is the process of traversing a network of semantically associated EGOs, permitting the exploration of information and associated contextual structures within a 3D image.
  • Such a network can be defined as described above.
  • the semantic navigation and network allow the easy recovery of knowledge previously acquired during the analysis, as it maps the semantic processes performed by the user while performing an analysis.
  • semantic navigation is performed by means of two distinct but related steps: semantic scene exploration (SSE) and, semantic scene transition (SST).
  • SSE is the step through which the user will visualize the GALs relating EGOs that are associated with the current Scene. This may be achieved using the following methods:
  • Such a method includes the following steps:
  • This step may be achieved by setting the "-" sign on the left of each EGO to a "+" state using the graphical cursor/control device, or by activating a global link display function from a dialog box; the latter includes the operation of displaying/hiding the GALs of tagged EGOs associated with the current scene.
  • when one of the selected EGOs represents the current region, the latter is depicted by a wiremesh sphere.
  • all EGOs associated with the current region become visible within the 3D frame of reference of the current region, whether or not they are associated with the current scene.
  • SST allows the user to navigate forward and/or backward from one EGO scene to another.
  • the user can perform this step by repeating the following operations:
  • an A-EGO is activated, its current state appearing on the screen;
  • a "default" scene associated with the activated A-EGO is set and appears on the screen;
  • the pictographic content of the Multimedia information structure associated with the activated A-EGO is displayed;
  • every new A-EGO associated with the new scene is displayed;
  • the shaded links between the activated A-EGO and the other EGOs are displayed.
  • the system can record a navigation sequence and either replay dynamically a recorded sequence or store the sequence in the multimedia information structure of a desired EGO for further consultation.
  • the Global Network Display provides a means for displaying and navigating among every EGO within a current project from a global perspective, using prior art mind mapping techniques.
  • the global mind-mapping window offers a one-click access to any knowledge integrated within a project. This permits the user to have a complete view of the overall integrated EGOs along with their interrelations.
  • the contextual network of EGOs can be synchronized with the global Network window, which means that when a link is modified in either representation, the change is simultaneously displayed in the complementing representation. Furthermore, when a user navigates within the global network window, the navigation process is coordinated with the 3D image scene being displayed. When an EGO is selected within the global network window, its associated default scene is simultaneously displayed in the 3D image window.
  • An information storage subsystem is provided to temporarily or permanently store the information/data generated and/or recorded.
  • This data can comprise, without restriction:
  • Scene/View parameter data; Object Property data; EGO data; EGO Link data; EGO Annotation history (EGO creation and modification) data, comprising: Author; Date and Time; Revision number; Image Processing parameter data; Image processing history data; 3D graphical data object (Raw image data); 3D graphical data object storage location data; User Account data; User profile data; Network Node registry and directory service data.
  • the scene and view parameter data refers to the information that defines the latter, such as, but not limited to, volume orientation, 3D intersecting surfaces, volume slicing, 3D region of interests, surface rendering, volume rendering, and voxel transparency.
  • the EGO annotation history data is a recording of the EGO creation and modification process. Each new EGO creation and modification instance is recorded in a database, with information concerning the author, the date and time the EGO was created/modified, the revision number of the modified EGO with link information to the previous EGO.
  • the image processing parameter data refers to the specific parameters related to image processing algorithms and methods that are to be used with the associated 3D graphical data object and defined analysis protocol.
  • the image processing (or editing) history data is information that relates to the different processing steps applied to the 3D graphical data object, the order in which they were applied and link information to the appropriate image processing parameter data for each processing step.
  • these image processing parameters can be stored in EGOs. Selection of an EGO containing one or more histories of object editing can automatically cause the editing operations to be applied to a selected graphical data object. This advantageously allows a user to obtain a different view of an object treated according to a pre-stored protocol.
  • a preferred embodiment (Figure 10) of the present invention stores the project-related data (Scene, EGO, etc.) in a first database 100 (project database), the 3D graphical data objects in a second database, preferably a PACS 102 (picture archiving and communication system), and the user account and user profile data in a third database (user database) 104.
  • This type of configuration ensures that each data element is stored in a specialized archiving system, providing a secure, robust, and efficient information storage mechanism.
  • Preferably, but not necessarily, the databases are remote to the user. In a clinical context, where the present invention may be used in one or a plurality of hospitals, a secure and remotely accessible information storage system is mandatory.
  • the 3D graphical data objects are frequently stored in a central repository such as PACS, with restricted access, where the data is required to remain uncorrupted and unmodified at all times.
  • the patient related data will remain unmodified and will be kept in its current storage medium, by providing a new database specific for the storage of project and analysis related data, as previously described.
  • the project data is gathered from the project database, and if required, a working copy of the related 3D graphical data objects is gathered from the appropriate database and transferred to the user's local system.
  • the project data contains all the necessary information to open an in-progress analysis project or a completed analysis project.
  • This allows a user to view the associated EGO network and associated multimedia data, any instance of a view, scene, and region of interest, as well as any other information described in the present invention, without the need to modify the original source 3D graphical data object.
  • This allows, for instance, for the following of an analysis process (such as a diagnosis process), to dynamically review a proposed analysis, and permits the user to validate an analysis by explicitly selecting the view/scene of interest and thereafter modifying the view, scene, and segmentation parameters and/or algorithms, without having to redo the entire process that leads to the current view.
  • This includes the processing steps/operations that led to a final view/scene.
  • These processing steps can be of various nature, such as, without limitation, image processing operations (brightness, contrast, threshold adjustments), volume manipulation (rotation, translation, scaling, slicing), rendering operations (mesh rendering, voxel rendering etc.), as well as specialized image segmentation operations.
  • the segmentation algorithm can be for the automated segmentation of vascular aneurysms.
  • the system automatically creates and saves a Process History that tracks every operation, as described above, applied to the current multidimensional image. Every parameter associated to the performed operation is saved in an organized data structure.
  • the data structure takes into account the order in which the operations were performed and allows for the modification of any node.
  • the data structure is a linked list, where every node of the list can point to any given number of nodes or to any number of linked lists.
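The branching process history described above can be sketched as a node type that records one operation and spawns a new branch when an earlier step is modified; the class, field and operation names are assumptions.

    # Illustrative sketch only: a process-history node that records one operation and
    # branches when an intermediate step is modified.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class HistoryNode:
        operation: str                      # e.g. "threshold", "rotate", "segment aneurysms"
        parameters: dict
        children: List["HistoryNode"] = field(default_factory=list)  # branches created by later edits
        parent: Optional["HistoryNode"] = None

        def apply_next(self, operation: str, parameters: dict) -> "HistoryNode":
            node = HistoryNode(operation, parameters, parent=self)
            self.children.append(node)      # modifying an earlier node simply adds a new branch
            return node

    root = HistoryNode("load", {"image": "ct_scan_001"})
    step1 = root.apply_next("threshold", {"value": 0.7})
    step2 = step1.apply_next("mesh_render", {})
    # Reviewing the threshold and changing its value starts a new branch from the same point,
    # preserving the original path to the final view.
    alt = root.apply_next("threshold", {"value": 0.55})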
  • Figure 11 summarizes the steps of return on information.
  • a 3D image is displayed, at 112 the image is manipulated and the process history is saved at 113.
  • the final view is saved at 114 and a visual cue is created at 116.
  • the visual cue can be activated at 117 to review 3D data and intermediate views can be reviewed at 118 and the final view can be validated at 119.
  • Figure 12 illustrates the efficiency of this data structure for the review and modification of a current history of operations.
  • This data structure also offers the possibility to directly point to an object, that is in volatile memory (RAM) or on a permanent physical storage medium, which stores the specific parameters associated with the current node's scene/view. In this manner, a specific scene/view can easily be generated in real-time by simply accessing these parameters and building the scene/view accordingly.
  • a new process history branch will be created at the point where the modification was applied.
  • the linked list composed of software objects is saved on a permanent storage medium with the corresponding object state. This allows for the subsequent dynamic return on information with the possibility of scene/view review and modifications. This information is at any time associated with a specific analysis project file which holds all related information.
  • the storage may be on a typical computer hard-drive or on a remote archiving system, server, or database.
  • the herein described information storage subsystem allows a user or an automated system to intelligently search and mine information contained in a plurality of local and/or remote databases.
  • the databases contain a wide variety of information.
  • a user may search in the databases to which he/she has access, for information related to a current analysis project, in both a quantitative and qualitative manner, such as by mining for specific keywords contained in any EGOs along with the type of association to other EGOs.
  • a user may also refer to previously stored images, by means of searching or by navigating through a network of EGOs that reference such images.
  • a specialized data-mining system can find intricate relations and patterns by mining the images and their contained structures, the plurality of EGOs and their content and associations, and any other source of information.
  • the EGOs identified during the search can be selected and displayed within their associated views.
  • the invention provides users with the possibility of searching and performing data-mining on both quantitative and qualitative data, associated to one or a plurality of analysis projects.
  • Textual data-mining requires the user to specify a keyword that is to be searched for within the database containing the textual annotation of every EGO.
  • the mining concept by itself is straightforward, as it only requires searching for a keyword in a database, as with standard database querying.
  • the concept of annotating an image, then searching for keywords present in contextual annotations, and thereafter having the possibility to view the annotations containing this keyword within the 3D image context itself is innovative.
  • the data-mining of annotations and the contextual review process comprises the following steps (a minimal sketch of this flow is given after the list):
  • a specific keyword is input to the system's data-mining user interface
  • the user interface dispatches the event to the Mining component
  • the Mining component searches the specified database(s) for the specified keyword
  • the visual manager displays a series of F-EGO's that are linked to the actual EGO's containing the specified keyword
  • the user activates the F-EGO of choice which displays the linked EGO's associated 3D image in the appropriate view, allowing for the simultaneous visualization of the EGO and its associated image view;
  • the EGO's textual annotation is displayed to the user.
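The following Python sketch illustrates the keyword-mining step of this flow. The EGO record fields and the databases argument are assumptions used only for illustration; they are not part of the invention's specification.

    from dataclasses import dataclass, field

    @dataclass
    class EGO:
        ego_id: str
        annotation: str                                       # textual annotation content
        view_parameters: dict = field(default_factory=dict)   # orientation, processing ops, ...
        links: list = field(default_factory=list)             # (other_ego_id, association_type) pairs

    def mine_keyword(keyword, databases):
        """Return every EGO whose textual annotation contains the keyword.

        'databases' is any collection of iterables of EGO records (local or
        remote); the visual manager would then depict one F-EGO marker per
        returned EGO.
        """
        keyword = keyword.lower()
        hits = []
        for db in databases:
            for ego in db:
                if keyword in ego.annotation.lower():
                    hits.append(ego)
        return hits

    # Example (hypothetical databases): egos = mine_keyword("aneurism", [project_db, archive_db])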
  • each EGO is directly associated with its multimedia content and its related project, and contains link data that discloses which other EGOs a specific EGO is associated with, along with the type of association.
  • the visual manager has all the information required to display the found EGO's.
  • the first level of visual representation is the depiction of F-EGO's in the main display.
  • These F-EGO's are visual markers that are directly linked to the actual EGO's positioned in the current or an external 3D image. In this way, to display a specific EGO within its image context, the user is simply required to activate the F-EGO using for instance the pointing device to click on the visual representation of the F-EGO.
  • the system loads the corresponding project and image, if it is external to the current project, and displays the image in the view associated with the considered EGO. To do so, the system reads the EGO's associated data, which holds specific viewing parameters such as image processing operations and image orientation. At that point, the main display contains a view of the considered 3D image along with the associated network of EGO's. If the newly selected EGO is associated with a plurality of EGO's, then the association links will also be displayed. A new F-EGO will also be depicted, allowing the user to return to the previous view or project.
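A hypothetical sketch of what activating an F-EGO might trigger; every name here (fego, system, load_project, build_view, and so on) is an assumption used only to illustrate the sequence described above, not an API defined by the invention.

    def activate_fego(fego, current_project, system):
        """Hypothetical handler for clicking an F-EGO marker."""
        ego = fego.linked_ego
        # Load the target project/image if the linked EGO lives in another project.
        project = (current_project if ego.project_id == current_project.project_id
                   else system.load_project(ego.project_id))
        image = system.load_image(project, ego.image_id)
        # Rebuild the stored view from the EGO's viewing parameters
        # (image processing operations, orientation, ...).
        view = system.build_view(image, ego.view_parameters)
        # Show the view together with the image's EGO network and association links,
        # and leave a return F-EGO pointing back to the previous view.
        system.show(view, egos=project.egos_for(image), return_to=system.current_view)
        return view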
  • the Mining component allows the user to search for keywords contained in EGOs of the current scene/project, in external projects, as well as in local or remote databases containing the EGO-associated information. This provides the user with a high degree of control over the specificity of the data-mining process.
  • the herein described invention provides the possibility to mine annotations based on their associated object morphology, as well as to automatically discover recurrent or similar image processing protocols.
  • the morphology-oriented data-mining can be achieved by using segmentation algorithms to extract specific objects and compute their morphology.
  • the considered EGO is associated to object properties in addition to multi-media information.
  • a user simply needs to anchor the EGO to the object. This process is based on the herein described automated snapping algorithm, where an EGO is automatically associated to an intersecting mesh surface.
  • a user can therefore search for EGO's associated to objects of specific morphology (volume, diameter, surface). For instance, a user may search for every EGO with an associated morphology volume greater than a specified value.
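For illustration, a minimal sketch of such a morphology query, assuming each EGO record is a simple dict with a "morphology" entry (the field names and units are assumptions):

    def mine_by_morphology(egos, min_volume=None, min_diameter=None):
        """Return the EGO records whose anchored object satisfies the criteria.

        Each record is assumed to look like:
        {"morphology": {"volume": 850.0, "diameter": 11.2, "surface": 430.0}, ...}
        """
        results = []
        for ego in egos:
            morphology = ego.get("morphology")
            if not morphology:
                continue
            if min_volume is not None and morphology.get("volume", 0.0) <= min_volume:
                continue
            if min_diameter is not None and morphology.get("diameter", 0.0) <= min_diameter:
                continue
            results.append(ego)
        return results

    # Example: every EGO with an associated volume greater than 500 (in the stored units).
    # large = mine_by_morphology(all_egos, min_volume=500.0)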
  • the annotation mechanism also provides the possibility of automatically associating a 3D region of interest to a specific EGO. This selected region of interest can also be used for the morphology-oriented data-mining, as opposed to regions of interest obtained using automated segmentation algorithms.
  • the morphological data-mining allows users to visually appreciate the data-mining results by visualizing the considered images in their optimal views, rather than simply reviewing a quantitative data-mining result.
  • the herein described data-mining system is of great value in a clinical context: a specialist may batch process a series of images without requiring user intervention, allowing for efficient pre-processing and pre-analysis.
  • the automated segmentation algorithm identifies possible aneurisms and segments the structures. This information is thereafter used by the automated annotation system, where EGO's are placed in the vicinity of the segmented structures, with associated information automatically generated.
  • EGO's are associated with the segmented structures, textual information, as well as with the generated views, allowing efficient visualization of these EGO's and their associated structures.
  • the specialist may query the system to visualize specific clinical cases. For instance, the specialist may decide to first view and inspect critical cases, in which case the specialist can use the data-mining system to discover the images that contain aneurisms of a certain critical volume. The data-mining results will display to the user a series of EGO's that can be activated in order to view their associated images and content.
  • the user interface dispatches the event to the Mining component
  • the Mining component searches the specified database(s) for EGO's with associated regions of interest of similar morphology
  • the visual manager displays a series of F-EGO's that are linked to the actual EGO's whose associated regions of interest are of similar morphology
  • the user activates the F-EGO of choice, which displays the linked EGO's associated 3D image in the appropriate view, allowing for the simultaneous visualization of the EGO and its associated image view.
  • Protocol Data-Mining: image processing protocols are of significant value in the field of medical imaging, where standardized procedures can be applied for the analysis of specific image modalities and anatomical regions.
  • image processing protocols provide a standardized methodology. These protocols are most commonly defined through “trial and error” experimentation, a process that is error-prone and time-consuming. Newly defined and efficient protocols are in general shared with other specialists in the community through publications and scientific symposia. The sharing of new protocols is therefore inefficient and sporadic. Furthermore, as the resolution and scope of images increase, the protocols become more complex and even more difficult to share within the community.
  • the method of the present invention provides a novel mechanism for the efficient discovery and sharing of complex image processing protocols for any type of image modality and anatomical structure.
  • the system of the present invention records a history of operations and annotations, providing the possibility of mining these user-defined operations. For instance, to achieve a manual segmentation of a specific structure, a user may have used various image processing operations such as thresholds, morphological operations and modifications of visualization and rendering properties. These intermediate operations leading to the final view of the image are automatically recorded by the system and as a whole form a specific protocol. Based on this information, it becomes possible for the system to mine recurrent or similar image processing protocols. In this context, a user can for instance simply select an existing protocol and mine for similar ones.
  • the data-mining system takes as input the various steps forming the protocol and searches through the various histories saved in a database. Similarity criteria can be specified, where for instance a user defines that a similar protocol should have no more than 2 differing operations. This specific type of mining allows a user to verify whether this specific protocol has also been used for the same application by other specialists, therefore providing a certain level of quality assurance. In another embodiment, a user can simply search for non-specific recurrent protocols, in which case the data-mining system exhaustively searches through the histories of operations in order to identify histories with similar operations. This type of data-mining is further specialized by specifying morphological criteria to the system, where similar histories are also required to have a similar final segmented object morphology.
  • for instance, the operations should lead to a somewhat spherical object.
  • the morphology criteria can be based on parameters such as, without limitation, long and short diameters, coherence, volume, and density.
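A minimal sketch of one possible similarity test between two histories of operations, following the "no more than 2 differing operations" example above; the position-by-position comparison is an assumption, and other distance measures could equally be used.

    def differing_operations(history_a, history_b):
        """Count positions where two ordered lists of operation names differ."""
        diffs = abs(len(history_a) - len(history_b))
        for op_a, op_b in zip(history_a, history_b):
            if op_a != op_b:
                diffs += 1
        return diffs

    def is_similar_protocol(history_a, history_b, max_differences=2):
        return differing_operations(history_a, history_b) <= max_differences

    # Example: a threshold/erode/render protocol vs. one using a different filter.
    # is_similar_protocol(["threshold", "erode", "mesh_render"],
    #                     ["threshold", "median_filter", "mesh_render"])   # -> True (1 difference)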
  • the discovered protocols can be directly applied to a new image, generating a view of the image according to the series of operations.
  • the user interface dispatches the event to the Mining component
  • the Mining component searches the specified database(s) for similar histories of operations and/or morphological parameters
  • the visual manager displays a list of found histories of operations
  • the user selects a specific history of operations of interest
  • upon activation, the image is modified according to the selected history of operations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
EP03724726A 2002-05-24 2003-05-26 Verfahren und gerät zur 3d bilddokumentation und navigation Withdrawn EP1565796A2 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US38259102P 2002-05-24 2002-05-24
US382591P 2002-05-24
PCT/CA2003/000760 WO2003100542A2 (en) 2002-05-24 2003-05-26 A method and apparatus for integrative multiscale 3d image documentation and navigation

Publications (1)

Publication Number Publication Date
EP1565796A2 true EP1565796A2 (de) 2005-08-24

Family

ID=29584432

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03724726A Withdrawn EP1565796A2 (de) 2002-05-24 2003-05-26 Verfahren und gerät zur 3d bilddokumentation und navigation

Country Status (5)

Country Link
EP (1) EP1565796A2 (de)
JP (1) JP2005528681A (de)
CN (1) CN1662933A (de)
AU (1) AU2003229193A1 (de)
WO (1) WO2003100542A2 (de)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1941452B1 (de) * 2005-10-21 2019-12-11 Koninklijke Philips N.V. Verfahren und system zur interaktiven sondierung und annotation von medizinischen abbildungen mit profil-flaggen
US8179396B2 (en) * 2006-08-02 2012-05-15 General Electric Company System and methods for rule-based volume rendition and navigation
EP2054829A2 (de) * 2006-08-11 2009-05-06 Koninklijke Philips Electronics N.V. Anatomie-bezogene bildinhalt-abhängige anwendungen zur effizienten diagnose
US20080117225A1 (en) * 2006-11-21 2008-05-22 Rainer Wegenkittl System and Method for Geometric Image Annotation
US11228753B1 (en) 2006-12-28 2022-01-18 Robert Edwin Douglas Method and apparatus for performing stereoscopic zooming on a head display unit
US10795457B2 (en) 2006-12-28 2020-10-06 D3D Technologies, Inc. Interactive 3D cursor
US11315307B1 (en) 2006-12-28 2022-04-26 Tipping Point Medical Images, Llc Method and apparatus for performing rotating viewpoints using a head display unit
US11275242B1 (en) 2006-12-28 2022-03-15 Tipping Point Medical Images, Llc Method and apparatus for performing stereoscopic rotation of a volume on a head display unit
JP4868186B2 (ja) * 2007-01-23 2012-02-01 日本電気株式会社 マーカ生成及びマーカ検出のシステム、方法とプログラム
WO2008138140A1 (en) * 2007-05-15 2008-11-20 Val-Chum, Societe En Commandite A method for tracking 3d anatomical and pathological changes in tubular-shaped anatomical structures
JP2010015497A (ja) * 2008-07-07 2010-01-21 Konica Minolta Medical & Graphic Inc プログラム、可搬型記憶媒体及び情報処理装置
JP5274305B2 (ja) 2009-02-27 2013-08-28 キヤノン株式会社 画像処理装置、画像処理方法、コンピュータプログラム
CN101504775B (zh) * 2009-03-19 2011-08-31 浙江大学 一种基于图像集的漫游视频自动生成方法
EP2635182B1 (de) * 2010-11-02 2020-12-02 Covidien LP Bildanzeigeanwendung und verfahren für ausrichtungsempfindliche anzeigevorrichtungen
US9202012B2 (en) * 2011-06-17 2015-12-01 Covidien Lp Vascular assessment system
EP3814874A1 (de) 2018-06-27 2021-05-05 Colorado State University Research Foundation Verfahren und vorrichtung zur effizienten darstellung, verwaltung, aufzeichnung und wiedergabe interaktiver multinutzer-erfahrungen der virtuellen realität
CN109903261B (zh) * 2019-02-19 2021-04-09 北京奇艺世纪科技有限公司 一种图像处理方法、装置及电子设备
TW202207007A (zh) * 2020-08-14 2022-02-16 新穎數位文創股份有限公司 物件辨識裝置與物件辨識方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO03100542A3 *

Also Published As

Publication number Publication date
WO2003100542A3 (en) 2004-11-18
WO2003100542A2 (en) 2003-12-04
JP2005528681A (ja) 2005-09-22
AU2003229193A1 (en) 2003-12-12
AU2003229193A8 (en) 2003-12-12
CN1662933A (zh) 2005-08-31

Similar Documents

Publication Publication Date Title
EP3380966B1 (de) Strukturierte fundobjekte zur integration von drittanwendungen in den bildinterpretationsarbeitsablauf
WO2003100542A2 (en) A method and apparatus for integrative multiscale 3d image documentation and navigation
US7688318B2 (en) Reusable data constructs for a modeling system
JP5405678B2 (ja) 医用レポート作成装置、医用レポート参照装置及びそのプログラム
US9690831B2 (en) Computer-implemented system and method for visual search construction, document triage, and coverage tracking
US8774560B2 (en) System for manipulation, modification and editing of images via remote device
US7737995B2 (en) Graphical user interface system and process for navigating a set of images
US6968511B1 (en) Graphical user interface, data structure and associated method for cluster-based document management
Schaer et al. Deep learning-based retrieval system for gigapixel histopathology cases and the open access literature
US9390236B2 (en) Retrieving and viewing medical images
US20070106633A1 (en) System and method for capturing user actions within electronic workflow templates
US20060112142A1 (en) Document retrieval method and apparatus using image contents
US20050237324A1 (en) Method and system for panoramic display of medical images
JP5242022B2 (ja) 医用レポート作成装置、医用レポート参照装置及びそのプログラム
US10976899B2 (en) Method for automatically applying page labels using extracted label contents from selected pages
Crissaff et al. ARIES: enabling visual exploration and organization of art image collections
JP2008090644A (ja) 医用レポート作成システム、医用レポート作成方法
WO2012117103A2 (en) System and method to index and query data from a 3d model
JP3369734B2 (ja) 3次元計算機支援設計装置及び方法
WO2007050962A2 (en) Method for capturing user actions within electronic template
Bäuerle et al. Semantic Hierarchical Exploration of Large Image Datasets.
Wernert et al. PViN: a scalable and flexible system for visualizing pedigree databases
US20120191720A1 (en) Retrieving radiological studies using an image-based query
US11061919B1 (en) Computer-implemented apparatus and method for interactive visualization of a first set of objects in relation to a second set of objects in a data collection
EP4310852A1 (de) Systeme und verfahren zum modifizieren von bilddaten eines medizinischen bilddatensatzes

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20041223

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20061201