JP2007014435A - Image processing device, method and program - Google Patents


Info

Publication number
JP2007014435A
Authority
JP
Japan
Prior art keywords
dimensional
image
density value
standard
projection image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP2005196932A
Other languages
Japanese (ja)
Inventor
Yoshiyuki Moriya
禎之 守屋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Holdings Corp
Original Assignee
Fujifilm Holdings Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Holdings Corp filed Critical Fujifilm Holdings Corp
Priority to JP2005196932A priority Critical patent/JP2007014435A/en
Publication of JP2007014435A publication Critical patent/JP2007014435A/en
Withdrawn legal-status Critical Current

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To improve the detection accuracy of abnormal shadows in an image processing device, method, and program that perform image processing on a medical image.

SOLUTION: The image processing device estimates an imaging direction v from a two-dimensional radiographic image 21, projects a three-dimensional standard image 11, which represents the three-dimensional structure of a plurality of anatomical structures at standard density values, in the imaging direction v, and generates a two-dimensional projection image 41. The proportion of the density value at each point on the two-dimensional projection image 41 contributed by a predetermined anatomical structure is calculated according to the amount of that structure among the plural anatomical structures of the three-dimensional standard image 11 that overlap in the imaging direction v. The two-dimensional projection image 41 is aligned with the two-dimensional radiographic image 21, and the density value contributed by the predetermined anatomical structure is calculated from the density values on the two-dimensional radiographic image 21 according to the proportion of the density value of the aligned two-dimensional projection image 41 contributed by that structure.

COPYRIGHT: (C)2007,JPO&INPIT

Description

The present invention relates to an image processing apparatus, an image processing method, and a program for performing image processing on medical images.

Conventionally, in the medical field, two or more medical images of the same patient taken at different times are read comparatively: the differences between the images are examined, abnormal shadows are detected on the basis of those differences, and the progression or healing of a disease is assessed in order to plan treatment.

A chest X-ray image contains background patterns caused by various anatomical structures such as the ribs and clavicles, and under their influence abnormal shadows are sometimes not detected correctly. A method has therefore been proposed that automatically detects abnormal shadows using an image from which the background has been removed by filtering (for example, Patent Document 1).

Also, when abnormal shadows are detected by comparative reading of medical images, and the shadow is small and easily overlooked, as in early lung cancer, a technique has been proposed (for example, Patent Document 2) in which two medical images taken at different times are subtracted with their pixels placed in correspondence, and in the resulting difference image (temporal subtraction image) any region whose pixel values are at or above a predetermined value, or which has a characteristic shape such as a roughly circular one, is detected as an abnormal-shadow candidate.
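The subtraction step just described can be sketched as follows. This is a minimal illustration of temporal subtraction screening, not the patented method; the threshold, image size, and pixel values are arbitrary assumptions:

```python
import numpy as np

def subtraction_candidates(previous, current, threshold=50):
    """Flag pixels whose change between two already-registered images
    exceeds `threshold` (an assumed value) as abnormal-shadow candidates."""
    diff = current.astype(np.int32) - previous.astype(np.int32)
    return np.abs(diff) >= threshold

past = np.zeros((4, 4), dtype=np.uint8)
now = past.copy()
now[1:3, 1:3] = 120            # a new, roughly circular bright region
mask = subtraction_candidates(past, now)
print(int(mask.sum()))         # 4 candidate pixels
```

A real system would then keep only candidate regions with a characteristic shape, such as roughly circular ones.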

In the above techniques, the two time-series images to be compared are misaligned because the subject shifts with changes in the patient's posture between acquisitions, so the images to be compared are normally registered first.

As a registration method, it has been proposed (for example, Patent Document 3) to apply global alignment, a linear transformation such as an affine transformation covering rotation, translation, and scaling, to the whole of at least one of the two images of the same subject, and then to perform local alignment by a nonlinear distortion transformation (warping), for example one using curve fitting with a two-dimensional polynomial, based on the corresponding positions of local regions.

However, since each of the above methods performs alignment on images that have already been projected into two dimensions, when a large three-dimensional change in the patient's posture (forward tilt, backward tilt, or axial rotation) occurs between the time-series images, deformation in the two-dimensional plane cannot follow the three-dimensional change, and artifacts (false images) may appear in the difference image. Patent Document 4 therefore proposes creating a plurality of standard models with a statistical technique such as principal component analysis and selecting, from among them, the model most similar to the captured image of the subject.
Patent Document 1: JP-A-6-121792. Patent Document 2: JP-A-2002-158923. Patent Document 3: JP-A-2002-32735. Patent Document 4: JP-A-2004-41694.

However, although Patent Document 1 tries to remove the background by filtering, the ribs, clavicles, and similar structures are so complicated that removing the background accurately is difficult.

The techniques of Patent Documents 2 and 3 could not be used for a patient who has no previously captured medical image: with no image to compare against, they cannot be applied to the comparative reading that is effective in detecting small, easily overlooked abnormal shadows.

Also, because the technique of Patent Document 4 estimates the rib image and the soft-tissue image from what is ultimately a model of a two-dimensional image, it cannot accurately reproduce regions where ribs overlap, and accurate determination in such regions was difficult.

The present invention has been made in view of the above circumstances, and its object is to provide an image processing apparatus, method, and program capable of improving the detection accuracy of abnormal shadows.

The image processing apparatus of the present invention comprises:
three-dimensional standard image storage means for storing a three-dimensional standard image in which the three-dimensional structures of a plurality of anatomical structures of a subject are represented by standard density values;
two-dimensional captured image storage means for storing a two-dimensional captured image obtained by imaging the subject from a predetermined imaging direction;
imaging direction estimation means for estimating the imaging direction from the two-dimensional captured image;
two-dimensional projection image generation means for generating a two-dimensional projection image by projecting the three-dimensional standard image in the imaging direction, the density value at each point on the two-dimensional projection image being calculated from the density values of the plurality of anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
density contribution ratio calculation means for calculating the proportion of the density value at each point on the two-dimensional projection image that is contributed by a predetermined anatomical structure, according to the amount of that structure among the plurality of anatomical structures of the three-dimensional standard image overlapping in the imaging direction;
alignment means for aligning the two-dimensional projection image with the two-dimensional captured image; and
density value calculation means for calculating the density value contributed by the predetermined anatomical structure from the density values on the two-dimensional captured image, in accordance with the proportion of the density value of the aligned two-dimensional projection image contributed by that structure.

The image processing method of the present invention comprises:
an imaging direction estimation step of estimating the imaging direction from a two-dimensional captured image obtained by imaging the subject from a predetermined imaging direction;
a two-dimensional projection image generation step of generating a two-dimensional projection image by projecting, in the imaging direction, a three-dimensional standard image in which the three-dimensional structures of a plurality of anatomical structures of the subject are represented by standard density values, the density value at each point on the two-dimensional projection image being calculated from the density values of the anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
a density contribution ratio calculation step of calculating the proportion of the density value at each point on the two-dimensional projection image that is contributed by a predetermined anatomical structure, according to the amount of that structure among the plurality of anatomical structures of the three-dimensional standard image overlapping in the imaging direction;
an alignment step of aligning the two-dimensional projection image with the two-dimensional captured image; and
a density value calculation step of calculating the density value contributed by the predetermined anatomical structure from the density values on the two-dimensional captured image, in accordance with the proportion of the density value of the aligned two-dimensional projection image contributed by that structure.

The program of the present invention causes a computer to function as:
three-dimensional standard image reading means for reading the three-dimensional standard image from three-dimensional standard image storage means that stores a three-dimensional standard image in which the three-dimensional structures of a plurality of anatomical structures of a subject are represented by standard density values;
two-dimensional captured image reading means for reading the two-dimensional captured image from two-dimensional captured image storage means that stores a two-dimensional captured image obtained by imaging the subject from a predetermined imaging direction;
imaging direction estimation means for estimating the imaging direction from the two-dimensional captured image;
two-dimensional projection image generation means for generating a two-dimensional projection image by projecting the three-dimensional standard image in the imaging direction, the density value at each point on the two-dimensional projection image being calculated from the density values of the plurality of anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
density contribution ratio calculation means for calculating the proportion of the density value at each point on the two-dimensional projection image that is contributed by a predetermined anatomical structure, according to the amount of that structure among the plurality of anatomical structures of the three-dimensional standard image overlapping in the imaging direction;
alignment means for aligning the two-dimensional projection image with the two-dimensional captured image; and
density value calculation means for calculating the density value contributed by the predetermined anatomical structure from the density values on the two-dimensional captured image, in accordance with the proportion of the density value of the aligned two-dimensional projection image contributed by that structure.

The "density value" does not mean only an optical density value; any value that represents the lightness or darkness of an image will do.

A "three-dimensional standard image representing the three-dimensional structures of a plurality of anatomical structures by standard density values" is an image in which anatomical structures existing in three dimensions, such as bones and organs, are represented at their standard densities.

The predetermined anatomical structure may be bone or soft tissue.

"Soft tissue" refers to anatomical structures other than bone.

The three-dimensional standard image is preferably a standard grayscale image reconstructed from a large number of tomographic images obtained by tomography.

A "tomographic image" is one acquired by a tomographic imaging apparatus such as a CT or MRI apparatus, and carries three-dimensional image information.

According to the present invention, the proportion of each pixel value on the two-dimensional captured image contributed by each structure is calculated from the three-dimensional standard image, and the density value contributed by each anatomical structure is then calculated from the density values on the two-dimensional captured image. This makes it possible to calculate accurately, in accordance with the overlap and thickness of the anatomical structures, the density value that each anatomical structure contributes to the two-dimensional captured image.

For bone and soft tissue in particular, the densities of bone and soft tissue can be determined accurately with the overlap of bones taken into account, and the image can be separated into a bone-part image and a soft-part image.

Furthermore, by using as the three-dimensional standard image a grayscale image, created from a large number of tomographic images, in which the anatomical structures are represented at standard densities, the proportions of the density values on the two-dimensional projection image can be determined accurately.

An image processing apparatus of the present invention will now be described with reference to the drawings. As shown in FIG. 1, the image processing apparatus 1 of the present invention comprises: three-dimensional standard image storage means 10 that stores a three-dimensional standard image 11 representing the three-dimensional structures of the anatomical structures of a subject at standard densities; two-dimensional captured image storage means 20 that stores a two-dimensional captured image 21 obtained by imaging the subject from a predetermined imaging direction; projection direction estimation means 30 that estimates the imaging direction from the two-dimensional captured image; two-dimensional projection image generation means 40 that generates a two-dimensional projection image by projecting the three-dimensional standard image in the imaging direction; density contribution ratio calculation means 50 that calculates the proportion of the density values of the two-dimensional projection image contributed by bone; alignment means 60 that aligns the two-dimensional projection image with the two-dimensional captured image 21; and density value calculation means 70 that calculates the density value contributed by bone from the density values on the two-dimensional captured image 21, in accordance with the proportion of the density values of the aligned two-dimensional projection image contributed by bone. It generates a bone-part image 80 and a soft-part image 81.

First, how the three-dimensional standard image 11 is generated will be described. The three-dimensional standard image 11 is prepared in advance: from a large number of tomographic images of many subjects taken with a CT apparatus, the average density values and positions of anatomical structures such as organs and bones are determined, a grayscale image is created in which the anatomical structures of the whole subject have standard densities and positions, and this image is stored in a storage device such as a hard disk (the three-dimensional standard image storage means 10).
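The averaging step can be sketched as a voxel-wise mean over pre-aligned CT volumes. This is a minimal sketch under strong assumptions: the registration to a common anatomical frame that would precede averaging in practice is omitted, and the volumes are synthetic:

```python
import numpy as np

# Three pre-aligned CT volumes from different subjects (synthetic data);
# the standard volume holds the mean density at each voxel.
volumes = [np.full((2, 2, 2), v) for v in (90.0, 100.0, 110.0)]
standard = np.mean(volumes, axis=0)
print(standard[0, 0, 0])       # 100.0
```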

The two-dimensional captured image 21 is a two-dimensional image obtained by imaging the subject from a predetermined imaging direction with, for example, a plain X-ray apparatus. The captured two-dimensional image 21 is stored temporarily in a storage device such as an image server (the two-dimensional captured image storage means 20). In the present embodiment, the two-dimensional captured image 21 is described as a chest X-ray image obtained by imaging the chest.

The case in which the image processing apparatus 1 obtains the density value contributed by bone and the density value contributed by soft tissue will now be described concretely, following the flowchart of FIG. 2.

First, the two-dimensional captured image reading means 22 reads the chest X-ray image 21 of the subject to be examined from the two-dimensional captured image storage means 20 (S100), and the projection direction estimation means 30 estimates the imaging direction from the chest X-ray image 21 (S101).

When the subject's chest is imaged, as shown in FIG. 3, the distance D1 between the anterior and posterior portions of the same rib (points P1 and P2) and the offset between the center line L1 of the spine and the center line L2 of the sternum vary with the imaging direction.

First, therefore, the shape of the ribs is detected with an edge extraction filter, a Hough transform that detects parabolas, or the like, using, for example, the technique described in Peter de Souza, "Automatic Rib Detection in Chest Radiographs", Computer Vision, Graphics and Image Processing 23, 129-161 (1983). From the detected rib shape, the vertical angle is estimated from the vertical distance D1 between the posterior and anterior rib portions, the horizontal angle is estimated from the center line L1 of the spine and the center line L2 of the sternum, and the imaging direction is then obtained from the vertical and horizontal angles.
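The angle estimation could be sketched with a simple geometric model as below. The patent text gives no explicit formulas, so the arctangent mapping and the depth and width constants are purely illustrative assumptions, not the applicant's method:

```python
import math

RIB_DEPTH_MM = 150.0         # assumed front-to-back separation of one rib
CHEST_HALF_WIDTH_MM = 150.0  # assumed half-width of the chest

def vertical_angle(d1_mm):
    """Estimate forward/backward tilt from the projected vertical
    distance D1 between points P1 and P2 of the same rib."""
    return math.degrees(math.atan2(d1_mm, RIB_DEPTH_MM))

def horizontal_angle(offset_mm):
    """Estimate left/right rotation from the offset between the spine
    center line L1 and the sternum center line L2."""
    return math.degrees(math.atan2(offset_mm, CHEST_HALF_WIDTH_MM))

print(vertical_angle(0.0))              # 0.0 (frontal view, no tilt)
print(round(vertical_angle(150.0), 1))  # 45.0
```

The two angles together define the estimated imaging direction v.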

Next, the three-dimensional standard image reading means 12 reads the three-dimensional standard image 11 from the three-dimensional standard image storage means 10 (S102), and the two-dimensional projection image generation means 40 projects the three-dimensional standard image 11 in the projection direction to create a two-dimensional projection image 41 (S103).

As shown in FIG. 4, when the three-dimensional standard image 11 is projected onto a point Q on the two-dimensional projection image 41, the density value at Q is determined by the density values and thicknesses of the anatomical structures through which the line m, extending in the projection direction v, passes within the three-dimensional standard image 11. In the chest, for example, the line m crosses soft tissue A such as the fat and muscle covering the body, a bone portion B such as an anterior rib, soft tissue C such as a lung, a bone portion D such as a posterior rib, and further soft tissue E of the fat and muscle covering the body. On the chest X-ray image 21, anatomical structures such as bone, lung, muscle, and fat appear at different densities, and the density also varies with the thickness of the structure through which the X-rays pass. The density value of the two-dimensional projection image 41 is therefore calculated according to the density value and thickness of each of the anatomical structures that the line m crosses in the three-dimensional standard image 11. What density value appears on the projected two-dimensional projection image 41 is determined in advance, as a function of structure type and thickness, using the three-dimensional standard image 11 and sample images from many previously captured chest X-ray images 21. Using this density relationship, the density value at the point Q on the two-dimensional projection image 41 is calculated according to how the anatomical structures of the three-dimensional standard image, such as bone, lung, muscle, and fat, overlap in the imaging direction.
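The accumulation along the line m can be sketched as a weighted sum over the structures the ray crosses. The linear density-times-thickness model and the density table below are illustrative assumptions; the patent instead derives the mapping empirically from the 3-D standard image and sample chest X-ray images:

```python
# Assumed relative densities per structure type (illustrative only).
LABEL_DENSITY = {"soft": 1.0, "bone": 3.0, "lung": 0.2}

def project_ray(segments):
    """`segments` lists (structure label, thickness in mm) along the
    ray m; the projected density at Q accumulates density x thickness."""
    return sum(LABEL_DENSITY[label] * t for label, t in segments)

# Soft tissue A, anterior rib B, lung C, posterior rib D, soft tissue E.
ray = [("soft", 20), ("bone", 8), ("lung", 120), ("bone", 8), ("soft", 20)]
print(project_ray(ray))    # 20 + 24 + 24 + 24 + 20 = 112.0
```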

Next, the density contribution ratio calculation means 50 calculates the proportion of the density value at each point on the two-dimensional projection image 41 contributed by bone, according to the amount of bone among the anatomical structures of the three-dimensional standard image overlapping in the imaging direction (S104). For example, as shown in FIG. 4, if the density value at the point Q on the two-dimensional projection image consists of a% contributed by the soft tissue A (hereinafter, contribution ratio), b% by the bone portion B, c% by the soft tissue C, d% by the bone portion D, and e% by the soft tissue E (where a + b + c + d + e = 100%), then the contribution ratio of the bone portions (B, D) is b + d%.
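Step S104 can be sketched on a per-ray breakdown of the projected density. The density table is an illustrative assumption; the bone ratio is simply the bone terms' share of the total, so the per-structure percentages a + b + c + d + e sum to 100:

```python
# Assumed relative densities per structure type (illustrative only).
LABEL_DENSITY = {"soft": 1.0, "bone": 3.0, "lung": 0.2}

def bone_contribution(segments):
    """Return the bone contribution ratio b + d in percent for a ray
    given as (structure label, thickness in mm) segments."""
    terms = [(label, LABEL_DENSITY[label] * t) for label, t in segments]
    total = sum(v for _, v in terms)
    bone = sum(v for label, v in terms if label == "bone")
    return 100.0 * bone / total

ray = [("soft", 20), ("bone", 8), ("lung", 120), ("bone", 8), ("soft", 20)]
print(round(bone_contribution(ray), 1))   # 42.9 (= 100 * 48 / 112)
```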

Next, the alignment means 60 aligns the two-dimensional projection image 41 with the chest X-ray image 21 (S105). As long as the two images can be brought into register, the position transformation (affine transformation, warping, and so on) may be applied to either image (or to both), but it is preferable to align the two-dimensional projection image 41 to the chest X-ray image 21, because a two-dimensional projection image 41 representing a predetermined anatomical structure generated by the present invention can then easily be read in comparison with the original chest X-ray image 21. In the present embodiment, therefore, the method of aligning the two-dimensional projection image 41 to the chest X-ray image 21 is described concretely with reference to FIGS. 5 to 8.

First, as shown in FIG. 5, the two-dimensional projection image 41 (FIG. 5(a)) is globally aligned with the position of the chest in the chest X-ray image 21 (FIG. 5(b)). Then each of the many local regions obtained by dividing the globally aligned image is aligned individually.

大局的位置合わせでは、2次元投影画像41に対して回転・平行移動および拡大・縮小等のアフィン変換を施して胸部X線画像21に大雑把に大局的な位置合わせした2次元投影画像41を作成する(図6参照)。続いて、図7に示すように、2次元投影画像41を多数の矩形小領域であるテンプレート領域T2に分割して(図7(a)の例では、同図上の各点を中心にテンプレート領域T2を設定している)、2次元投影画像41の各テンプレート領域T2に対応する胸部X線画像21上の位置を探索するために、これらの各テンプレート領域T2よりも大きい領域の探索領域R1を胸部X線画像21上に設定し(図7(b)の例では、同上の各点を中心に探索領域R1を設定している)、各探索領域R1の中で2次元投影画像41の各テンプレート領域T2が最も一致する場所を求める。具体的には、正規化相互相関値などを用いてテンプレート領域T2の濃淡情報と探索領域内の濃淡情報とが最も一致する場所を求める。2次元投影画像41のテンプレート領域T2と、テンプレート領域T2に対応する胸部X線画像21上の対応位置を求め、その対応位置に基づいて、2次元投影画像41上の各テンプレート領域T2が胸部X線画像21の対応位置に一致するように、アフィン変換した2次元投影画像41全体を図8に示すようなワーピング(非線形歪変換)を行い、両画像を詳細に位置合わせする(詳細は、本願出願人が出願の特開2002-157593公報など参照)。   In the global alignment, the two-dimensional projection image 41 is subjected to affine transformation such as rotation, parallel movement, enlargement / reduction, and the like to create a two-dimensional projection image 41 roughly aligned with the chest X-ray image 21. (See FIG. 6). Subsequently, as shown in FIG. 7, the two-dimensional projection image 41 is divided into a plurality of small rectangular regions T2 (in the example of FIG. 7A, the template is centered on each point in the figure. In order to search for a position on the chest X-ray image 21 corresponding to each template region T2 of the two-dimensional projection image 41), a search region R1 that is larger than each template region T2 is set. Is set on the chest X-ray image 21 (in the example of FIG. 7B, the search region R1 is set around each point of the same), and the two-dimensional projection image 41 of each search region R1 is set. A place where the template regions T2 are the best match is obtained. Specifically, a location where the grayscale information in the template area T2 and the grayscale information in the search area most closely match is obtained using a normalized cross correlation value or the like. 
In this way, the corresponding position on the chest X-ray image 21 is obtained for each template region T2 of the two-dimensional projection image 41. Based on these corresponding positions, the entire affine-transformed two-dimensional projection image 41 is warped (nonlinear distortion transformation) as shown in FIG. 8 so that each template region T2 on the two-dimensional projection image 41 coincides with its corresponding position on the chest X-ray image 21, thereby aligning the two images in detail (for details, see, for example, Japanese Patent Application Laid-Open No. 2002-157593 filed by the present applicant).
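The warping step can likewise be sketched: given the control-point correspondences found by template matching, interpolate the displacements into a dense field and resample the image. The inverse-distance weighting and nearest-neighbour sampling below are illustrative simplifications, not the specific nonlinear transformation of the cited reference:

```python
import numpy as np

def warp(image, src_pts, dst_pts):
    """Warp `image` so that each control point in `src_pts` is carried to
    the matching point in `dst_pts`, interpolating the displacement field
    between control points by inverse-distance weighting."""
    h, w = image.shape
    src = np.asarray(src_pts, float)            # control points (row, col)
    shift = np.asarray(dst_pts, float) - src    # displacement at each control point
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    # inverse-distance weights from every pixel to every control point
    d = np.linalg.norm(grid[:, None, :] - src[None, :, :], axis=2)
    wgt = 1.0 / (d + 1e-6) ** 2
    wgt /= wgt.sum(axis=1, keepdims=True)
    disp = wgt @ shift                          # dense displacement field
    # backward mapping with nearest-neighbour sampling
    sample = np.rint(grid - disp).astype(int)
    sample[:, 0] = np.clip(sample[:, 0], 0, h - 1)
    sample[:, 1] = np.clip(sample[:, 1], 0, w - 1)
    return image[sample[:, 0], sample[:, 1]].reshape(h, w)
```

With identical source and destination points the image is returned unchanged; shifting a control point drags the surrounding pixels toward its new position.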

The two-dimensional projection image 41 is projected so as to resemble the chest X-ray image 21, but its density values deviate from those of the captured image depending on the imaging conditions and the subject. Therefore, the density value calculation means 70 calculates the bone density value by multiplying the density value of a point P on the chest X-ray image 21 by the bone contribution ratio obtained by the density contribution ratio calculation means 50 (S106). When a point Q on the two-dimensional projection image 41, which has been aligned with the chest X-ray image 21 by the alignment means 60, corresponds to the point P on the chest X-ray image 21, the density value contributing to bone is obtained from

(density value contributing to bone at point P) = (density value at point P) × (bone density contribution ratio at point Q) / 100.

Alternatively, the density value contributing to soft tissue can be obtained from

(density value contributing to soft tissue at point P) = (density value at point P) × (100 − (bone density contribution ratio at point Q)) / 100.

In this way, the bone and soft-tissue density values are calculated over the entire chest X-ray image 21 to create a bone image 80 and a soft-tissue image 81.
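A minimal sketch of this decomposition step (S106), assuming the captured image and the aligned bone contribution ratio are available as equally sized arrays; the function name `decompose` is a placeholder:

```python
import numpy as np

def decompose(xray, bone_ratio):
    """Split a chest X-ray into bone and soft-tissue images.

    xray       : 2-D array of density values (the captured image 21)
    bone_ratio : 2-D array of bone contribution ratios in percent (0-100),
                 taken from the aligned projection image 41
    """
    bone = xray * bone_ratio / 100.0            # density contributed by bone
    soft = xray * (100.0 - bone_ratio) / 100.0  # remainder: soft tissue
    return bone, soft
```

By construction, the bone and soft-tissue images sum back to the original captured image at every point.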

In the soft-tissue image created as described above, the ribs are removed in proportion to their overlap. Therefore, if abnormal shadows such as lung cancer are detected from this soft-tissue image, they can be detected accurately without interference from the ribs, even when a filter based on density gradients is used for the detection.

As for the three-dimensional standard image described above, different three-dimensional standard images may be prepared according to age, sex, height, weight, and the like, and the three-dimensional standard image appropriate to the subject may be used.

Although the above description deals with obtaining a bone image and a soft-tissue image, it is also possible to obtain the density of an individual anatomical structure, such as the heart, separately.

Also, while a chest image was described above in detail as the two-dimensional captured image, the bone density can be calculated in the same manner for other body parts.

Furthermore, if the subject has previously undergone tomography and those tomographic images remain, the subject's own images can be used as the three-dimensional standard image.

In addition, by installing on a computer, from a recording medium, a program that causes the computer to function as each of the above means, the computer can be operated as the image processing apparatus.

FIG. 1: Configuration diagram of the image processing apparatus
FIG. 2: Flowchart showing the processing flow of the image processing apparatus
FIG. 3: Diagram for explaining the method of estimating the imaging direction
FIG. 4: Diagram for explaining the relationship between the two-dimensional projection image, the two-dimensional captured image, and the three-dimensional standard image
FIG. 5: Example of a two-dimensional projection image and a two-dimensional captured image to be aligned
FIG. 6: Diagram for explaining global alignment
FIG. 7: Diagram for explaining local alignment
FIG. 8: Diagram for explaining warping

Explanation of Symbols

1 image processing apparatus
10 three-dimensional standard image storage means
11 three-dimensional standard image
12 three-dimensional standard image reading means
20 two-dimensional captured image storage means
21 two-dimensional captured image
22 two-dimensional captured image reading means
30 projection direction estimation means
40 two-dimensional projection image generation means
50 density contribution ratio calculation means
60 alignment means
70 density value calculation means
80 bone image
81 soft-tissue image

Claims (5)

1. An image processing apparatus comprising:
three-dimensional standard image storage means for storing a three-dimensional standard image representing the three-dimensional structure of a plurality of anatomical structures of a subject with standard density values;
two-dimensional captured image storage means for storing a two-dimensional captured image obtained by imaging the subject from a predetermined imaging direction;
imaging direction estimation means for estimating the imaging direction from the two-dimensional captured image;
two-dimensional projection image generation means for generating a two-dimensional projection image obtained by projecting the three-dimensional standard image in the imaging direction, by calculating the density value at each point on the two-dimensional projection image from the density values of the plurality of anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
density contribution ratio calculation means for calculating the proportion of the density value at each point on the two-dimensional projection image that is contributed by a predetermined anatomical structure, according to the amount of the predetermined anatomical structure among the plurality of anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
alignment means for aligning the two-dimensional projection image with the two-dimensional captured image; and
density value calculation means for calculating, from the density values on the two-dimensional captured image, the density value contributed by the predetermined anatomical structure, in accordance with the proportion of the density value of the aligned two-dimensional projection image that is contributed by the predetermined anatomical structure.
2. The image processing apparatus according to claim 1, wherein the predetermined anatomical structure is bone or soft tissue.

3. The image processing apparatus according to claim 1 or 2, wherein the three-dimensional standard image is a standard grayscale image reconstructed from a large number of tomographic images obtained by tomography.

4. An image processing method comprising:
an imaging direction estimation step of estimating an imaging direction from a two-dimensional captured image obtained by imaging a subject from a predetermined imaging direction;
a two-dimensional projection image generation step of generating a two-dimensional projection image obtained by projecting, in the imaging direction, a three-dimensional standard image representing the three-dimensional structure of a plurality of anatomical structures of the subject with standard density values, by calculating the density value at each point on the two-dimensional projection image from the density values of the anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
a density contribution ratio calculation step of calculating the proportion of the density value at each point on the two-dimensional projection image that is contributed by a predetermined anatomical structure, according to the amount of the predetermined anatomical structure among the plurality of anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
an alignment step of aligning the two-dimensional projection image with the two-dimensional captured image; and
a density value calculation step of calculating, from the density values on the two-dimensional captured image, the density value contributed by the predetermined anatomical structure, in accordance with the proportion of the density value of the aligned two-dimensional projection image that is contributed by the predetermined anatomical structure.
5. A program causing a computer to function as:
three-dimensional standard image reading means for reading a three-dimensional standard image from three-dimensional standard image storage means that stores the three-dimensional standard image, which represents the three-dimensional structure of a plurality of anatomical structures of a subject with standard density values;
two-dimensional captured image reading means for reading a two-dimensional captured image from two-dimensional captured image storage means that stores the two-dimensional captured image, which is obtained by imaging the subject from a predetermined imaging direction;
imaging direction estimation means for estimating the imaging direction from the two-dimensional captured image;
two-dimensional projection image generation means for generating a two-dimensional projection image obtained by projecting the three-dimensional standard image in the imaging direction, by calculating the density value at each point on the two-dimensional projection image from the density values of the plurality of anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
density contribution ratio calculation means for calculating the proportion of the density value at each point on the two-dimensional projection image that is contributed by a predetermined anatomical structure, according to the amount of the predetermined anatomical structure among the plurality of anatomical structures of the three-dimensional standard image that overlap in the imaging direction;
alignment means for aligning the two-dimensional projection image with the two-dimensional captured image; and
density value calculation means for calculating, from the density values on the two-dimensional captured image, the density value contributed by the predetermined anatomical structure, in accordance with the proportion of the density value of the aligned two-dimensional projection image that is contributed by the predetermined anatomical structure.
JP2005196932A 2005-07-06 2005-07-06 Image processing device, method and program Withdrawn JP2007014435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005196932A JP2007014435A (en) 2005-07-06 2005-07-06 Image processing device, method and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005196932A JP2007014435A (en) 2005-07-06 2005-07-06 Image processing device, method and program

Publications (1)

Publication Number Publication Date
JP2007014435A true JP2007014435A (en) 2007-01-25

Family

ID=37752112

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005196932A Withdrawn JP2007014435A (en) 2005-07-06 2005-07-06 Image processing device, method and program

Country Status (1)

Country Link
JP (1) JP2007014435A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011508620A (en) * 2007-12-18 2011-03-17 Koninklijke Philips Electronics N.V. 2D/3D image registration based on features
JP2012254243A (en) * 2011-06-10 2012-12-27 Mitsubishi Electric Corp Image collator, patient positioner and image verification method
JP2021053213A (en) * 2019-09-30 2021-04-08 Ryuichi Nakahara Medical image processing device, medical image processing program, and teacher data creation method
US11278257B2 (en) 2015-03-20 2022-03-22 Fujifilm Corporation Diagnostic auxiliary image generation apparatus, diagnostic auxiliary image generation method, and diagnostic auxiliary image generation program
JP7360651B2 (en) 2019-09-30 2023-10-13 Ryuichi Nakahara Medical image processing device, medical image processing program, and medical image processing method

Similar Documents

Publication Publication Date Title
JP5643304B2 (en) Computer-aided lung nodule detection system and method and chest image segmentation system and method in chest tomosynthesis imaging
JP5491174B2 (en) Deformable registration of images for image-guided radiation therapy
JP4510564B2 (en) Radiation imaging apparatus and program thereof
JP4647360B2 (en) DIFFERENTIAL IMAGE CREATION DEVICE, DIFFERENTIAL IMAGE CREATION METHOD, AND PROGRAM THEREOF
JP4104054B2 (en) Image alignment apparatus and image processing apparatus
KR101028365B1 (en) Multistage matching method and apparatus of pulmonary nodules in serial computed tomography scan
EP3201876B1 (en) Medical image processing apparatus, medical image processing method
JP6131161B2 (en) Image registration apparatus, method, program, and three-dimensional deformation model generation method
JP6361435B2 (en) Image processing apparatus and program
Baka et al. Statistical shape model-based femur kinematics from biplane fluoroscopy
WO2012008500A1 (en) Signal-processing device, signal-processing program, and computer-readable recording medium with a signal-processing program recorded thereon
El-Baz et al. A novel approach for automatic follow-up of detected lung nodules
JP2007029514A (en) Image analyzer, image analysis method and its program
JP4574500B2 (en) Alignment apparatus, alignment method and program thereof
JP2007014435A (en) Image processing device, method and program
JP4738970B2 (en) Image processing apparatus, image processing method and program thereof
JP4571378B2 (en) Image processing method, apparatus, and program
JP7273416B2 (en) Image processing device, image processing method and image processing program
JP2006175057A (en) Image processing apparatus, and its program and method
JP5537266B2 (en) Energy subtraction processing apparatus and method, and program
JP2004152043A (en) Method for correcting difference image, and image processor
JP2001291087A (en) Method and device for positioning image
Imamura et al. Registration of preoperative CTA and intraoperative fluoroscopic images for assisting aortic stent grafting
JP2014064606A (en) Image processing device and method for generating time-lapse difference image
US11696736B2 (en) Anatomical landmark detection and identification from digital radiography images containing severe skeletal deformations

Legal Events

Date Code Title Description
A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A712

Effective date: 20061209

A300 Application deemed to be withdrawn because no request for examination was validly filed

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20081007