WO2016208016A1 - Image-processing device, image-processing method, and image-processing program - Google Patents

Image-processing device, image-processing method, and image-processing program

Info

Publication number
WO2016208016A1
WO2016208016A1 (PCT/JP2015/068264)
Authority
WO
WIPO (PCT)
Prior art keywords
region
image
mucosal
unit
mucous membrane
Prior art date
Application number
PCT/JP2015/068264
Other languages
French (fr)
Japanese (ja)
Inventor
光隆 木村
北村 誠
都士也 上山
Original Assignee
Olympus Corporation (オリンパス株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation
Priority to PCT/JP2015/068264 (WO2016208016A1)
Priority to JP2017524509A (JPWO2016208016A1)
Publication of WO2016208016A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/044: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances for absorption imaging

Definitions

  • the present invention relates to an image processing apparatus, an image processing method, and an image processing program that perform image processing on an image obtained by imaging the inside of a lumen of a living body.
  • A technique is known in which a specific region, such as an abnormal region, is detected from an intraluminal image of a living body lumen (the digestive tract) captured by a medical observation apparatus such as an endoscope, using an identification criterion.
  • The identification criterion used here is usually created from learning samples: images of mucosal regions and abnormal regions, in various forms, extracted from intraluminal images.
  • Patent Document 1 discloses a technique in which a new image is generated from an image acquired as a learning sample by changing the position, direction, or appearance of an arbitrary region of interest, or by scaling or rotating the region of interest; feature amounts are then calculated from the new image and the original image, and an identification criterion is created.
  • However, when the technique disclosed in Patent Document 1 is applied to intraluminal images, simply performing such geometric processing on a region of interest makes it difficult to obtain learning samples that appropriately reflect the state inside the lumen.
  • The present invention has been made in view of the above, and aims to provide an image processing apparatus, an image processing method, and an image processing program capable of acquiring learning samples that appropriately reflect the state inside a lumen.
  • An image processing apparatus according to the present invention includes a mucosal region extraction unit that extracts a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body, and an image generation unit that acquires a surface property of the mucosal region and generates an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on that surface property.
  • An image processing method according to the present invention includes a mucosal region extraction step of extracting a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body, and an image generation step of acquiring a surface property of the mucosal region and generating an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on that surface property.
  • An image processing program according to the present invention causes a computer to execute a mucosal region extraction step of extracting a mucosal region from an intraluminal image obtained by imaging the inside of the lumen of a living body, and an image generation step of acquiring a surface property of the mucosal region and generating an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on that surface property.
  • According to the present invention, an image different from the intraluminal image is generated by processing the mucosal region based on the surface properties of the mucosal region extracted from the intraluminal image, so that learning samples appropriately reflecting the state inside the lumen can be acquired.
  • FIG. 1 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart showing the operation of the calculation unit shown in FIG.
  • FIG. 3 is a schematic diagram showing a mucosal region extracted from an intraluminal image.
  • FIG. 4 is a schematic diagram showing an image in which a villi region is combined with a mucous membrane region.
  • FIG. 5 is a schematic diagram illustrating an image in which a villi region and a blood vessel region are combined with a mucous membrane region.
  • FIG. 6 is a block diagram showing a configuration of a calculation unit provided in the image processing apparatus according to Embodiment 2 of the present invention.
  • FIG. 7 is a flowchart showing the operation of the calculation unit shown in FIG.
  • FIG. 8 is a flowchart showing the creation process of the identification standard shown in FIG.
  • FIG. 9 is a block diagram illustrating a configuration of a calculation unit included in the image processing apparatus according to Embodiment 3 of the present invention.
  • FIG. 10 is a flowchart showing the operation of the calculation unit shown in FIG.
  • FIG. 11 is a block diagram illustrating a configuration of a calculation unit included in an image processing apparatus according to Embodiment 4 of the present invention.
  • FIG. 12 is a flowchart showing the operation of the calculation unit shown in FIG.
  • FIG. 13 is a flowchart showing the determination processing of the attribute of the mucous membrane region shown in FIG.
  • FIG. 14 is a block diagram illustrating a configuration of a calculation unit included in the image processing apparatus according to the fifth embodiment of the present invention.
  • FIG. 15 is a flowchart showing the operation of the calculation unit shown in FIG.
  • FIG. 16 is a flowchart showing a new image generation process shown in FIG.
  • FIG. 1 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 1 of the present invention.
  • The image processing apparatus 1 according to the first embodiment is an apparatus that extracts a mucosal region from an intraluminal image acquired by imaging the inside of the lumen of a living body with a medical observation apparatus such as an endoscope, and executes image processing for generating a new image, different from the original intraluminal image, based on the surface texture of this mucosal region.
  • the intraluminal image is usually a color image having pixel levels (pixel values) for wavelength components of R (red), G (green), and B (blue) at each pixel position.
  • the endoscope for imaging the living body may be any of a capsule endoscope, a flexible endoscope, a rigid endoscope, and the like.
  • The image processing apparatus 1 includes a control unit 10 that controls the operation of the entire image processing apparatus 1, an image acquisition unit 20 that acquires image data of intraluminal images generated by imaging the inside of the lumen with the medical observation apparatus, an input unit 30 that inputs signals corresponding to external operations to the control unit 10, a display unit 40 that displays various information and images, a storage unit 50 that stores the image data acquired by the image acquisition unit 20 and various programs, and a calculation unit 100 that executes predetermined image processing on the image data.
  • The control unit 10 is configured using a general-purpose processor such as a CPU (Central Processing Unit) or a dedicated processor such as various arithmetic circuits that execute specific functions, e.g. an ASIC (Application Specific Integrated Circuit).
  • When the control unit 10 is a general-purpose processor, it reads the various programs stored in the storage unit 50, issues instructions and transfers data to the units constituting the image processing apparatus 1, and thereby supervises and controls the overall operation of the image processing apparatus 1.
  • When the control unit 10 is a dedicated processor, the processor may execute the various processes on its own, or the processor and the storage unit 50 may cooperate to execute the various processes, using the various data stored in the storage unit 50.
  • the image acquisition unit 20 is appropriately configured according to the mode of the system including the medical observation apparatus.
  • the image acquisition unit 20 is configured by an interface that captures image data generated in the medical observation apparatus.
  • Alternatively, when image data is acquired from a server, the image acquisition unit 20 includes a communication device connected to the server and acquires the image data by performing data communication with the server.
  • The image data generated by the medical observation apparatus may also be transferred using a portable storage medium; in this case, the image acquisition unit 20 is configured as a reader device to which the portable storage medium is detachably attached and which reads out the stored image data.
  • the input unit 30 is realized by input devices such as a keyboard, a mouse, a touch panel, and various switches, for example, and outputs an input signal generated in response to an external operation on these input devices to the control unit 10.
  • the display unit 40 is realized by a display device such as an LCD (Liquid Crystal Display) or an EL (Electro-Luminescence) display, and displays various screens including intraluminal images under the control of the control unit 10.
  • The storage unit 50 is realized by various IC (integrated circuit) memories, such as ROM (Read Only Memory) and rewritable RAM (Random Access Memory) such as flash memory, an information storage device such as a built-in hard disk or a CD-ROM (Compact Disc Read Only Memory) drive connected via a data communication terminal, and a device for writing and reading information to and from the information storage device.
  • The storage unit 50 stores programs for operating the image processing apparatus 1 and causing it to execute various functions, as well as data used during the execution of those programs.
  • In particular, the storage unit 50 has a program storage unit 51 that stores an image processing program that extracts a mucosal region from the intraluminal image and generates an image different from the original intraluminal image based on the surface properties of the mucosal region.
  • the storage unit 50 stores information such as identification criteria used in the image processing.
  • the calculation unit 100 is configured using a general-purpose processor such as a CPU or a dedicated processor such as various arithmetic circuits that execute specific functions such as an ASIC.
  • When the calculation unit 100 is a general-purpose processor, it reads the image processing program stored in the program storage unit 51 and executes image processing that extracts a mucosal region from the intraluminal image and generates a new image, different from the original intraluminal image, based on the surface properties of the mucosal region.
  • When the calculation unit 100 is a dedicated processor, the processor may execute the various processes on its own, or the processor and the storage unit 50 may cooperate to execute the image processing, using the various data stored in the storage unit 50.
  • The calculation unit 100 includes a mucosal region extraction unit 110 that extracts a mucosal region from the intraluminal image, and an image generation unit 120 that acquires the surface property of the mucosal region and generates the new image by processing the mucosal region in the intraluminal image based on that surface property.
  • the image generation unit 120 includes a fine structure generation unit 121 that generates a new image representing the fine structure of the mucosal surface.
  • the fine structure generation unit 121 includes a villi generation unit 121a that generates a villi region that represents villi existing on the mucosal surface, and a blood vessel generation unit 121b that generates a blood vessel region that represents a blood vessel seen through the mucosal surface.
  • the villi generation unit 121a stores a model representing villi, and generates a new image by pasting this model on the mucosa region.
  • a model representing villi is referred to as a villi model.
  • One or more villi models are created in advance by extracting a region including one or more villi from an intraluminal image of a living body.
  • the blood vessel generation unit 121b stores a model representing a blood vessel, and generates a new image by pasting this model on the mucous membrane region.
  • a model representing a blood vessel is referred to as a blood vessel model.
  • One or more blood vessel models are created in advance by extracting a region including one or more blood vessels from an intraluminal image of a living body.
  • FIG. 2 is a flowchart showing the operation of the calculation unit 100.
  • First, the calculation unit 100 acquires an intraluminal image captured by the medical observation apparatus by reading it from the storage unit 50.
  • Next, the mucosal region extraction unit 110 extracts the mucosal region from the intraluminal image (step S11). Specifically, the mucous membrane region extraction unit 110 calculates a feature amount for each pixel, or for each section obtained by dividing the intraluminal image into a plurality of sections, based on the pixel values of the pixels constituting the intraluminal image, and extracts the mucosal region by performing threshold processing using this feature amount and a discrimination criterion created in advance.
  • the feature amount for extracting the mucous membrane region includes a color feature amount, a shape feature amount, and a texture feature amount.
  • Examples of the color feature amount include the pixel values (R, G, and B component values) of each pixel, color ratios such as the G/R value or the B/G value, hue, saturation, brightness, and color differences.
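As an illustration of such color feature amounts, the following sketch (Python with NumPy; the function name and the per-block averaging are illustrative choices, not taken from the patent) computes mean R, G, B values for a pixel block together with the G/R and B/G ratios:

```python
import numpy as np

def color_features(rgb_patch):
    """Mean color features for one pixel block: raw channel means plus
    the G/R and B/G color ratios named in the text."""
    r = float(np.mean(rgb_patch[..., 0]))
    g = float(np.mean(rgb_patch[..., 1]))
    b = float(np.mean(rgb_patch[..., 2]))
    eps = 1e-6  # avoid division by zero in very dark regions
    return {"R": r, "G": g, "B": b, "G/R": g / (r + eps), "B/G": b / (g + eps)}

# Mucosa is typically reddish, so its G/R ratio tends to be low.
patch = np.full((8, 8, 3), (180, 90, 60), dtype=np.uint8)  # uniform reddish block
feats = color_features(patch)
```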
  • Shape feature amounts include the area (number of pixels) of the region extracted from the intraluminal image, the perimeter, the Feret diameter (including the horizontal and vertical Feret diameters), the HOG (Histogram of Oriented Gradients) feature, and the SIFT (Scale Invariant Feature Transform) feature.
  • Examples of the texture feature amount include the Local Binary Pattern (LBP). LBP is a feature amount that represents the magnitude relationship between the pixel value of a pixel of interest and the pixel values of its neighbors in the surrounding eight directions as a histogram with 2^8 = 256 bins.
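The LBP computation described above can be sketched as follows; this is a minimal illustration of the basic 8-neighbour LBP with a 256-bin histogram, not code from the patent:

```python
import numpy as np

def lbp_code(patch3x3):
    """8-bit LBP code: compare the centre pixel with its 8 neighbours
    (here ordered clockwise from the top-left corner)."""
    c = patch3x3[1, 1]
    neighbours = [patch3x3[0, 0], patch3x3[0, 1], patch3x3[0, 2],
                  patch3x3[1, 2], patch3x3[2, 2], patch3x3[2, 1],
                  patch3x3[2, 0], patch3x3[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:          # set bit where the neighbour is at least as bright
            code |= 1 << bit
    return code             # 0..255, hence a 2^8 = 256-bin image histogram

def lbp_histogram(gray):
    """256-bin LBP histogram over all interior pixels of a grayscale image."""
    h, w = gray.shape
    hist = np.zeros(256, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(gray[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist
```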
  • The mucous membrane region extraction unit 110 calculates such feature amounts and, based on a discrimination criterion created in advance, classifies the intraluminal image into mucosal regions, which include villi regions and blood vessel regions, and other, non-mucosal regions, such as residue regions, bubble regions, and dark regions, thereby extracting the mucosal region.
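A toy version of threshold processing against a pre-created discrimination criterion might look like the following; the feature names and numeric ranges are invented for illustration, since the patent does not specify them and a real criterion would be learned from labeled samples:

```python
def classify_block(features, criterion):
    """Toy discrimination criterion: a block counts as 'mucosa' only when every
    listed feature falls inside its learned [lo, hi] range."""
    for name, (lo, hi) in criterion.items():
        if not (lo <= features[name] <= hi):
            return "non-mucosa"  # e.g. residue, bubble, or dark region
    return "mucosa"

# Illustrative ranges; a real criterion would come from training data.
criterion = {"G/R": (0.3, 0.7), "brightness": (40.0, 230.0)}
```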
  • a mucosal region a region surrounded by the outline of the mucous membrane region may be extracted, or a rectangular region including the mucous membrane region may be extracted.
  • the entire intraluminal image in which the mucosal area is detected may be extracted as the mucosal area.
  • FIG. 3 is a schematic diagram showing a mucosal region extracted from an intraluminal image.
  • FIG. 3 shows an example in which a rectangular region including a mucous membrane region is extracted.
  • the mucous membrane region extraction unit 110 outputs the extracted image of the mucous membrane region to the image generation unit 120 and stores it in the storage unit 50.
  • the image generation unit 120 acquires the surface property of the mucosal region extracted in step S11.
  • The surface properties of the mucosal region refer to the unevenness of the mucous membrane, the color of the mucous membrane, the presence or absence of halation regions, the presence or absence of unnecessary regions such as residues and bubbles, the presence or absence and color of villi, the presence or absence, shape, and color of blood vessels, the state of bleeding, and the like.
  • These surface properties can be acquired by calculating the color feature value, shape feature value, and texture feature value listed in step S11 for the mucosal region.
  • Subsequently, the image generation unit 120 generates a new image by processing the mucosal region in the intraluminal image based on the surface property of the mucosal region. Specifically, this processing attaches a villi model or a blood vessel model to the mucosal region, or changes the color, shape, or texture of the mucosal region. In the first embodiment, the process of attaching a villi model and a blood vessel model to the mucosal region will be described.
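The "pasting" of a model onto the mucosal region can be pictured as a simple composite; the blending weight `alpha` is an assumption added here to soften the seam, since the patent only says the model is pasted:

```python
import numpy as np

def paste_model(mucosa, model, top_left, alpha=0.75):
    """Composite a texture model (e.g. a villi or blood vessel patch) onto a
    copy of the mucosal region. `alpha` is an assumed blending weight; real use
    would first apply the luminance/colour adjustments described in the text."""
    out = mucosa.astype(np.float32).copy()
    y, x = top_left
    h, w = model.shape[:2]
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1.0 - alpha) * region + alpha * model.astype(np.float32)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# Example: paste an 8x8 bright model onto a uniform mucosa patch.
mucosa = np.full((32, 32, 3), 120, dtype=np.uint8)
model = np.full((8, 8, 3), 200, dtype=np.uint8)
new_image = paste_model(mucosa, model, (2, 2))
# pasted pixels become 0.25*120 + 0.75*200 = 180; pixels outside stay 120
```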
  • Next, the villi generation unit 121a generates a new image in which a villi region is synthesized into the mucosal region by pasting a villi model onto the mucosal region. Specifically, the villi generation unit 121a selects one of the villi models stored in advance and pastes it onto the mucosal region. As a method for selecting a villi model, the user may select an arbitrary villi model from a plurality of villi models displayed on the display unit 40 (see FIG. 1), or the villi generation unit 121a may select a villi model at random.
  • the villus generation unit 121a may appropriately select the villus model according to the surface properties of the mucosal region.
  • the selection method and adjustment method of the villus model according to the surface property of the mucous membrane region will be described.
  • First, the villi generation unit 121a acquires the positional relationship between the mucous membrane shown in the intraluminal image and the imaging element provided in the endoscope that captured the image, specifically, whether the endoscope imaged the mucous membrane from the front or from an oblique direction. This positional relationship can be determined from the shape of an edge extracted from the mucous membrane region.
  • The greater the circularity of the edge, that is, the closer the edge shape is to a perfect circle, the more nearly frontal the orientation of the imaging element with respect to the mucous membrane is determined to be; the smaller the circularity, that is, the farther the edge shape is from a perfect circle, the more oblique the orientation of the imaging element with respect to the mucous membrane is determined to be.
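Circularity is conventionally measured as 4*pi*area / perimeter^2, which equals 1.0 for a perfect circle and falls toward 0 for elongated shapes. A sketch of the frontal/oblique decision under that convention (the 0.8 threshold is an invented illustration, not a value from the patent):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, smaller for elongated edges."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def viewing_direction(area, perimeter, threshold=0.8):
    """Heuristic: a near-circular mucosal edge suggests the imaging element
    faces the mucosa head-on; a flattened edge suggests an oblique view.
    The threshold is an illustrative assumption."""
    return "frontal" if circularity(area, perimeter) >= threshold else "oblique"
```

For a circle of radius 10, the area is pi*100 and the perimeter 2*pi*10, giving circularity exactly 1.0; a long thin region such as a 40x2 rectangle scores around 0.14 and is classified as oblique.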
  • an acceleration sensor is provided in an endoscope that captures an intraluminal image and the detection value of the acceleration sensor is attached to the intraluminal image as image auxiliary information, based on the image auxiliary information The positional relationship between the mucous membrane and the endoscope may be acquired.
  • When it is determined that the endoscope imaged the mucous membrane from the front, the villi generation unit 121a selects a nearly circular villi model and pastes it onto the mucosal region. When it is determined that the mucous membrane was imaged from an oblique direction, the villi generation unit 121a selects an elongated villi model of the generally known kind.
  • Furthermore, the villi generation unit 121a determines the orientation in which the villi model is attached. Specifically, the orientation of the villi model is adjusted so that its proximal end is on the near side of the lumen and its distal end is toward the back of the lumen.
  • The back of the lumen can be determined by extracting, from the intraluminal image, a region whose luminance is lower than a predetermined value and whose area is larger than a predetermined value, and taking the position of lowest luminance in that region as the back of the lumen. The villi model is then rotated so that its distal end faces this direction and is pasted onto the mucous membrane region.
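A minimal sketch of finding the "back of the lumen" direction: threshold dark pixels, take the darkest one, and form a unit vector from the image centre toward it. The luminance threshold is illustrative, and the patent's minimum-area condition on the dark region is omitted here for brevity:

```python
import numpy as np

def lumen_direction(gray, dark_thresh=40):
    """Unit vector from the image centre toward the darkest pixel of the
    low-luminance region, taken as 'toward the back of the lumen'.
    Threshold value is an illustrative assumption."""
    mask = gray < dark_thresh
    if not mask.any():
        return None                              # no dark lumen region visible
    ys, xs = np.where(mask)
    idx = int(np.argmin(gray[ys, xs]))           # darkest pixel in the dark region
    ty, tx = float(ys[idx]), float(xs[idx])
    cy, cx = (gray.shape[0] - 1) / 2.0, (gray.shape[1] - 1) / 2.0
    v = np.array([ty - cy, tx - cx])
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The villi model would then be rotated so that its distal end points along the returned vector before pasting.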
  • When considering the depth of the mucosal region in the intraluminal image, the villi generation unit 121a first calculates the luminance of each pixel constituting the intraluminal image, determines regions with relatively low luminance values to be deep, and determines regions with relatively high luminance values to be shallow. Since villi appear small where the depth is large and large where the depth is small, the villi generation unit 121a selects a villi model of a size corresponding to the depth of the mucosal region and pastes it onto the mucosal region. Alternatively, the villi generation unit 121a may enlarge or reduce an arbitrarily selected villi model according to the depth of the mucosal region before pasting it onto the mucosal region.
  • The villi generation unit 121a may also select a villi model according to the size of the villi appearing in the mucosal region. Specifically, the villi generation unit 121a first extracts edges from the mucous membrane region; these edges represent the outlines of individual villi. The villi generation unit 121a then selects a villi model whose size is close to the interval between the edges.
  • Regarding luminance, the villi generation unit 121a first calculates the luminance from the pixel value of each pixel constituting the mucosal region, and then calculates the average luminance value over those pixels. The villi generation unit 121a then compares the average luminance values of the previously stored villi models with the average luminance value of the mucosal region, and selects a villi model whose average luminance value is equal to that of the mucosal region or within a predetermined range of it (for example, within ±several tens of percent). When no villi model satisfies this condition, the villi generation unit 121a may select the villi model whose average luminance value is closest to that of the mucosal region.
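Model selection by average luminance could be sketched as below; the tolerance is an assumed stand-in for the patent's "predetermined range", and the fallback branch implements the closest-value rule:

```python
def select_by_mean_luminance(region_mean, model_means, tolerance=0.1):
    """Pick the model whose stored mean luminance lies within +/-tolerance
    (here 10%, an illustrative value) of the region's mean; when none
    qualifies, fall back to the closest stored value."""
    candidates = [m for m in model_means
                  if abs(m - region_mean) <= tolerance * region_mean]
    pool = candidates if candidates else model_means
    return min(pool, key=lambda m: abs(m - region_mean))
```

For a region mean of 100, stored means [80, 105, 140] yield 105 (the only value within ±10%), while [50, 130] falls through to the closest value, 130.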
  • the villus generation unit 121a adjusts the luminance of the villus model so that the difference in luminance between the region where the villus model is pasted and its peripheral region does not become too large in the mucosa region.
  • That is, when pasting into a relatively high-luminance area of the mucous membrane region, the villi model is pasted after its luminance is adjusted upward; when pasting into a relatively low-luminance area of the mucosal region, the villi model is pasted after its luminance is adjusted downward.
  • Regarding color, the villi generation unit 121a extracts the R value from the pixel values of each pixel constituting the mucosal region, and calculates the average R value over those pixels.
  • The villi generation unit 121a then compares the average R values of the previously stored villi models with the average R value of the mucosal region, and selects a villi model whose average R value is equal to that of the mucosal region or within a predetermined range of it (for example, within ±several tens of percent). When no villi model satisfies this condition, the villi generation unit 121a may select the villi model whose average R value is closest to that of the mucosal region.
  • Furthermore, the villi generation unit 121a creates a histogram of the R values in the selected villi model and adjusts the R values by multiplying the histogram by a coefficient such that the median R value of the villi model equals the median R value of the mucosal region. The villi model with the adjusted R values is then attached to the mucosal region.
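The median-matching adjustment amounts to scaling the model's R channel by the ratio of the two medians; a sketch (clipping to the 8-bit range is an added practical detail not stated in the patent):

```python
import numpy as np

def match_median_r(model_r, mucosa_r):
    """Scale the model's R channel so its median matches the mucosal
    region's median R value, as a simple colour adjustment before pasting."""
    med_model = np.median(model_r)
    med_mucosa = np.median(mucosa_r)
    coeff = med_mucosa / med_model if med_model > 0 else 1.0
    return np.clip(model_r.astype(np.float32) * coeff, 0, 255).astype(np.uint8)
```

For example, model R values [100, 120, 140] (median 120) matched against a region with median 60 are scaled by 0.5 to [50, 60, 70]; the same scheme applies to the G, B, R/G, and R/B adjustments mentioned below.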
  • the villi generating unit 121a may adjust the G value, the B value, the R / G value, the R / B value, and the like in the villi model by a similar method.
  • Regarding texture, the villi generation unit 121a converts the image of the mucosal region into a frequency-space image by applying a Fourier transform to the mucous membrane region, and acquires a frequency based on the resulting frequency distribution.
  • the frequency to be acquired may be, for example, the frequency having the maximum intensity or the frequency having the minimum intensity. Alternatively, it may be the median value of the frequency distribution.
  • the villi generating unit 121a selects a villi model whose frequency is within a predetermined range with respect to the acquired frequency of the mucous membrane region. Alternatively, the villi model having the closest frequency to the frequency of the mucous membrane region may be selected.
  • the villus generation unit 121a pastes the selected villus model on the mucosa region as it is.
  • When the selected villi model contains only a single villus image, the model is duplicated, and the arrangement interval of the copies is adjusted to approximate the frequency of the mucosal region before they are pasted onto the mucosal region.
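Frequency acquisition via the Fourier transform can be sketched in one dimension; the patent operates on the 2-D mucosal region, so collapsing to a row profile here is a simplification for illustration:

```python
import numpy as np

def dominant_frequency(gray):
    """Spatial frequency (cycles/pixel along rows) with the largest FFT
    magnitude, DC excluded; a rough proxy for villi spacing in the texture."""
    row = gray.astype(np.float32).mean(axis=0)       # collapse to a 1-D profile
    spectrum = np.abs(np.fft.rfft(row - row.mean()))  # remove the DC component
    freqs = np.fft.rfftfreq(row.size)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
```

A texture whose profile repeats every 8 pixels yields a dominant frequency of 1/8 = 0.125 cycles/pixel, which can then be compared against each stored model's frequency as described above.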
  • the villi generating unit 121a may execute the above-described methods (1-1) to (1-5) alone or in appropriate combination.
  • For example, a villus model selected in consideration of the positional relationship between the mucous membrane and the endoscope may be enlarged or reduced according to the depth of the mucosal region, or the color and brightness of the villus model may be adjusted according to the color and brightness of the mucosal region.
  • FIG. 4 is a schematic diagram showing a new image generated by pasting a villi model to the mucosa region shown in FIG.
  • the blood vessel generation unit 121b generates a new image in which the blood vessel region is synthesized in the mucosal region by pasting the blood vessel model on the mucosal region. Specifically, the blood vessel generation unit 121b selects one of the blood vessel models stored in advance and pastes it on the mucosa region. Alternatively, a blood vessel model may be further attached to an image (see FIG. 4) in which the villi region is combined with the mucous membrane region.
  • As a method for selecting a blood vessel model, the user may select an arbitrary blood vessel model from a plurality of blood vessel models displayed on the display unit 40 (see FIG. 1), or the blood vessel generation unit 121b may select a blood vessel model at random.
  • the blood vessel generation unit 121b may appropriately select a blood vessel model in accordance with the surface properties of the mucous membrane region.
  • a method for selecting and adjusting a blood vessel model according to the surface properties of the mucosa region will be described.
  • As with the villi model, the blood vessel generation unit 121b first acquires the positional relationship between the mucous membrane and the imaging element provided in the endoscope.
  • The positional relationship between the mucous membrane and the imaging element of the endoscope can be determined from the thickness of a blood vessel, extracted based on the R value of each pixel constituting the mucosal region and calculated from the edges of the blood vessel region. That is, when the ratio of the thickness at one end of the blood vessel to the thickness at the other end is within a predetermined range, it is determined that the endoscope faces the mucous membrane from the front; when the thickness ratio between the two ends is outside that range, it is determined that the imaging element of the endoscope is inclined obliquely with respect to the mucous membrane.
  • an acceleration sensor is provided in an endoscope that captures an intraluminal image and the detection value of the acceleration sensor is attached to the intraluminal image as image auxiliary information, based on the image auxiliary information The positional relationship between the mucous membrane and the endoscope may be acquired.
  • When it is determined that the endoscope imaged the mucous membrane from the front, the blood vessel generation unit 121b selects a blood vessel model having a uniform thickness and pastes it onto the mucous membrane region.
  • When the view is oblique, the blood vessel generation unit 121b selects a blood vessel model whose thickness is not uniform. Furthermore, the blood vessel generation unit 121b calculates the ratio of the thickness between one end and the other end of the blood vessel from the edges of the blood vessel region extracted when determining the positional relationship between the mucous membrane and the imaging element, rotates the blood vessel model so that its thickness ratio between the two ends is similar, and then attaches it to the mucosal region. As a specific example, when the blood vessel narrows from the left to the right of the screen, the blood vessel model is likewise rotated so that it becomes thinner from left to right before being attached to the mucosal region.
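The thickness-ratio test can be sketched from per-column vessel widths measured along the vessel; the [0.8, 1.25] bounds standing in for the patent's "predetermined range" are invented for illustration:

```python
import numpy as np

def taper_ratio(widths):
    """Ratio of vessel thickness at one end to the other, from edge-to-edge
    widths sampled along the vessel."""
    widths = np.asarray(widths, dtype=np.float32)
    return float(widths[-1] / widths[0])

def viewing_from_taper(widths, lo=0.8, hi=1.25):
    """Near-uniform thickness (ratio within [lo, hi], illustrative bounds)
    suggests a frontal view of the mucosa; a strong taper suggests an
    oblique view."""
    r = taper_ratio(widths)
    return "frontal" if lo <= r <= hi else "oblique"
```

A vessel measured at [12, 10, 8, 6] pixels tapers to a ratio of 0.5 and is classified as oblique; the rotation step above would then orient the non-uniform model to taper in the same screen direction.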
  • the blood vessel generation unit 121b first determines the depth of the mucosal region in the intraluminal image in the same manner as (1-2) above. When the depth is deep, the blood vessel appears thin, and when the depth is shallow, the blood vessel appears thick. Therefore, the blood vessel generation unit 121b selects a blood vessel model having a thickness corresponding to the depth of the mucous membrane region. Alternatively, the blood vessel generation unit 121b may enlarge or reduce the arbitrarily selected blood vessel model according to the depth of the mucosal region. Specifically, the blood vessel model is reduced when the depth is deep, and the blood vessel model is enlarged when the depth is shallow.
  • the enlargement rate or reduction rate of the blood vessel model at this time is determined so that the thickness of the scaled model is close to the thickness of the blood vessels already present in the mucosal region, which is calculated from the edge of the blood vessel region extracted from the mucosal region.
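The depth-dependent scaling described above can be sketched as follows; this is a minimal illustration, where the reference depth and the nearest-neighbour resampling are assumptions rather than details given in the description:

```python
def scale_for_depth(model, depth, ref_depth=20.0):
    """Enlarge or reduce a vessel-model patch (2-D list of grey values).
    Deeper mucosa -> vessels look thinner, so the patch is reduced.
    `ref_depth` is an assumed depth at which the model is used as-is."""
    scale = ref_depth / max(depth, 1e-6)  # depth > ref_depth => scale < 1 (reduce)
    h, w = len(model), len(model[0])
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # nearest-neighbour resampling keeps the sketch dependency-free
    return [[model[y * h // nh][x * w // nw] for x in range(nw)] for y in range(nh)]
```

A patch viewed at twice the reference depth shrinks to half its side length; at half the reference depth it doubles.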
  • the blood vessel generation unit 121b first calculates the luminance from the pixel value of each pixel constituting the blood vessel region in the mucous membrane region, and then calculates the average of these luminance values (average luminance value). Then, the blood vessel generation unit 121b compares the average luminance value of each blood vessel model stored in advance with the average luminance value of the blood vessel region, and selects a blood vessel model whose average luminance value is equal to the average luminance value of the blood vessel region or within a predetermined range of it (for example, within ± several tens of percent). When no blood vessel model meeting this condition is stored, the blood vessel generation unit 121b may select the blood vessel model having the average luminance value closest to the average luminance value of the blood vessel region.
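The average-luminance model selection can be sketched as follows; the Rec.601 luma weights, the dictionary layout of the stored models, and the 30% default tolerance are assumptions standing in for the unspecified "predetermined range":

```python
def mean_luminance(rgb_pixels):
    """Rec.601 luma averaged over a list of (R, G, B) pixels."""
    n = len(rgb_pixels)
    r = sum(p[0] for p in rgb_pixels) / n
    g = sum(p[1] for p in rgb_pixels) / n
    b = sum(p[2] for p in rgb_pixels) / n
    return 0.299 * r + 0.587 * g + 0.114 * b

def select_model(models, target_luma, tolerance=0.3):
    """Pick a stored model whose mean luminance is within ±tolerance of the
    target; if none qualifies, fall back to the closest one."""
    in_range = [m for m in models if abs(m["luma"] - target_luma) <= tolerance * target_luma]
    pool = in_range or models  # fallback: closest model overall
    return min(pool, key=lambda m: abs(m["luma"] - target_luma))
```

The same selection pattern applies to the foam, residue, and treatment-tool models described later.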
  • the blood vessel generation unit 121b changes the luminance of the blood vessel model so that the difference in luminance between the region where the blood vessel model is pasted and the surrounding blood vessel region does not become too large in the mucosa region.
  • specifically, in a region where the luminance of the surrounding blood vessel region is high, the blood vessel model is pasted after its luminance is adjusted upward.
  • conversely, in a region where the luminance of the surrounding blood vessel region is low, the blood vessel model is pasted after its luminance is adjusted downward.
  • the blood vessel generation unit 121b extracts the R value of the pixel values of each pixel constituting the blood vessel region in the mucous membrane region, and calculates the R value of each pixel. An average value (average R value) is calculated.
  • then, the blood vessel generation unit 121b compares the average R value of each blood vessel model stored in advance with the average R value of the blood vessel region, and selects a blood vessel model whose average R value is equal to the average R value of the blood vessel region or within a predetermined range of it (for example, within several tens of percent). When no blood vessel model meeting this condition is stored, the blood vessel generation unit 121b may select the blood vessel model having the average R value closest to the average R value of the blood vessel region.
  • the blood vessel generation unit 121b creates a histogram of R values in the selected blood vessel model and multiplies the histogram by a coefficient so that the median R value of the blood vessel model becomes equal to the median R value of the blood vessel region, thereby adjusting the R values of the blood vessel model. The blood vessel model whose R values have been adjusted is then pasted onto the mucosal region.
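The median-matching R-value adjustment can be sketched as follows; operating on flat lists of 8-bit R values rather than an explicit histogram is an illustrative simplification:

```python
def match_median_r(model_r_values, region_r_values):
    """Scale the model's R values by one coefficient so that the model's
    median R equals the region's median R; clip to the 8-bit range."""
    def median(xs):
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

    coeff = median(region_r_values) / median(model_r_values)
    adjusted = [min(255, round(v * coeff)) for v in model_r_values]
    return adjusted, coeff
```

As noted below, the same scheme would apply to G, B, R/G, or R/B values.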
  • the blood vessel generation unit 121b may adjust the G value, the B value, the R / G value, the R / B value, and the like in the blood vessel model by a similar method.
  • the blood vessel generation unit 121b first calculates the pitch between blood vessel regions by extracting the edges of the blood vessel regions in the mucosal region. It then selects, from the plurality of blood vessel models, a blood vessel model having a pitch within a predetermined range (for example, within several tens of percent) of the pitch between the blood vessel regions. When no blood vessel model having a pitch within the predetermined range is stored, the blood vessel generation unit 121b selects, from the stored blood vessel models, the one whose pitch is closest to the pitch between the blood vessel regions.
  • the blood vessel generation unit 121b pastes the selected blood vessel model as it is on the mucous membrane region.
  • alternatively, the blood vessel model is duplicated, and the placement interval of the copies is adjusted to approximate the interval between the existing blood vessel regions before they are pasted onto the mucosal region.
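The pitch-based duplication can be sketched as follows; reducing placement to one horizontal row of copies is an illustrative simplification of the 2-D pasting described above:

```python
def tile_positions(region_width, model_width, target_pitch):
    """Left edges for copies of a model placed left-to-right so that the
    spacing between copies approximates the measured vessel pitch."""
    positions, x = [], 0
    while x + model_width <= region_width:
        positions.append(x)
        x += max(1, target_pitch)  # guard against a zero pitch
    return positions
```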
  • the blood vessel generation unit 121b may select or adjust the blood vessel model based on the shape of the blood vessel, that is, the number of branches, the thickness, or the length of the blood vessel. Specifically, the blood vessel generation unit 121b extracts edges from the blood vessel region in the mucous membrane region, and counts the intersections of the extracted edges. The number of intersections is the number of blood vessel branches. The blood vessel generation unit 121b increases the number of branches by pasting the blood vessel model to the mucosal region so that the edge in the arbitrarily selected blood vessel model and the edge extracted from the blood vessel region in the mucosal region intersect.
  • the blood vessel generation unit 121b may reduce the number of branches by filling the blood vessel region in the mucous membrane region with pixel values of surrounding pixels.
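Counting branch points on the extracted edges might look like the following sketch; using 4-connected neighbours on a binary edge map is an assumption, since the description does not fix a connectivity rule:

```python
def count_branches(edge):
    """Count branch points in a binary edge map (2-D list of 0/1): an edge
    pixel with three or more 4-connected edge neighbours marks a junction
    where vessels branch."""
    h, w = len(edge), len(edge[0])
    branches = 0
    for y in range(h):
        for x in range(w):
            if not edge[y][x]:
                continue
            n = sum(edge[y + dy][x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w)
            if n >= 3:
                branches += 1
    return branches
```

Pasting a model so its edges cross existing edges raises this count; filling a vessel region lowers it.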
  • the blood vessel generation unit 121b calculates the thickness of the blood vessel region based on the edge extracted from the blood vessel region in the mucosal region, selects a blood vessel model having a thickness different from the thickness, and pastes it on the mucosal region. Thereby, the thickness of the blood vessel region in the mucous membrane region can be changed.
  • the blood vessel generation unit 121b calculates the length of the blood vessel region based on the edge extracted from the blood vessel region in the mucosal region, selects a blood vessel model having a length different from this length, and pastes it on the mucosal region. Thereby, the length of the blood vessel region in the mucous membrane region can be changed.
  • the blood vessel generation unit 121b may execute the above methods (2-1) to (2-6) alone or in appropriate combination.
  • a blood vessel model selected in consideration of the color of the blood vessel region may be enlarged or reduced in accordance with the depth of the mucosal region.
  • by adding a smoothing process or the like after pasting the blood vessel model onto the mucosal region, it is possible to reduce the sense of incongruity at the boundary between the blood vessel model and the mucous membrane region.
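One possible form of this boundary smoothing is a feathered paste, sketched below; the linear alpha ramp and the one-pixel margin are assumptions:

```python
def paste_with_feather(mucosa, patch, top, left, margin=1):
    """Paste `patch` into `mucosa` (both 2-D lists of grey values), linearly
    blending a `margin`-pixel border so the seam is less conspicuous."""
    ph, pw = len(patch), len(patch[0])
    out = [row[:] for row in mucosa]
    for y in range(ph):
        for x in range(pw):
            # distance to the nearest patch border, capped at `margin`
            d = min(y, x, ph - 1 - y, pw - 1 - x, margin)
            alpha = (d + 1) / (margin + 1)  # border pixels blend with the base
            base = mucosa[top + y][left + x]
            out[top + y][left + x] = round((1 - alpha) * base + alpha * patch[y][x])
    return out
```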
  • FIG. 5 is a schematic diagram showing a new image generated by further attaching a blood vessel model to the image shown in FIG.
  • in step S14 following step S13, the image generation unit 120 outputs the newly generated image and stores it in the storage unit 50 as a learning sample. Thereafter, the operation of the arithmetic unit 100 ends.
  • a new image is generated by pasting the villi model or blood vessel model, selected and adjusted based on the surface properties of the mucosal region, onto the mucosal region extracted from the intraluminal image, so that a learning sample appropriately reflecting the state in the lumen can be obtained.
  • FIG. 6 is a block diagram showing a configuration of a calculation unit provided in the image processing apparatus according to Embodiment 2 of the present invention.
  • the image processing apparatus according to the second embodiment includes a calculation unit 200 illustrated in FIG. 6 instead of the calculation unit 100 illustrated in FIG.
  • the configuration and operation of each part of the image processing apparatus other than the arithmetic unit 200 are the same as those in the first embodiment.
  • the calculation unit 200 includes, in addition to the mucosal region extraction unit 110 and the image generation unit 120, an identification criterion creation unit 210 that creates, using the new image generated by the image generation unit 120, an identification criterion for identifying an area in the intraluminal image. Among these, the configurations and operations of the mucous membrane region extraction unit 110 and the image generation unit 120 are the same as those in the first embodiment.
  • the identification criterion creating unit 210 creates a criterion for identifying, from the mucosal region, an area having specific characteristics different from normal mucous membrane, such as a lesion or a lesion candidate suspected of being a lesion, as an abnormal area.
  • the identification reference creation unit 210 includes a weight setting unit 211 that sets different weights for the feature amount calculated from the mucosal region extracted from the intraluminal image and the feature amount calculated from the new image generated by the image generation unit 120.
  • FIG. 7 is a flowchart showing the operation of the calculation unit 200. Steps S10 to S14 in FIG. 7 are generally the same as those in the first embodiment. However, when the mucosal region extracted in step S11 is stored in the storage unit 50, a flag indicating that it is the original mucosal region is added, and when the image generated in step S13 is stored in the storage unit 50, a flag indicating that it is a newly generated image is added. Hereinafter, the image generated in step S13 is also referred to as a generated image.
  • in step S21 following step S14, the identification reference creation unit 210 reads the mucosal region extracted in step S11 and the generated image created in step S13 from the storage unit 50, and creates an identification criterion based on the mucosal region and the generated image.
  • FIG. 8 is a flowchart showing the identification reference creation process in step S21.
  • the identification reference creation unit 210 calculates a color feature value, a shape feature value, and a texture feature value as the feature values of the mucous membrane region and the generated image.
  • the type of each feature amount is the same as that described in step S11.
  • the weight setting unit 211 sets weights for the feature amounts calculated from the mucosal region extracted from the intraluminal image and from the generated image. Specifically, the weight setting unit 211 distinguishes the mucosal region from the generated image based on the flags added to them, and sets the weights so that the weight given to the feature amount of the mucosal region is larger than the weight given to the feature amount of the generated image.
  • this is because the mucosal region, being a copy of the actual mucosa extracted from the intraluminal image, is highly reliable, whereas the generated image, being created from models under various assumptions, is relatively less reliable than the mucosal region.
  • the identification criterion creating unit 210 creates a feature quantity distribution by multiplying each feature amount calculated in step S211, that is, the feature amount of each mucosal region and each generated image, by the weight set in step S212, and creates an identification criterion based on this feature quantity distribution using a learning device such as a support vector machine (SVM).
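The weighted learning step could be approximated as follows. A real implementation would use an SVM as stated above; here a weighted nearest-centroid classifier stands in so the sketch stays dependency-free, and the feature vectors, labels, and weight values are illustrative:

```python
def weighted_centroids(features, labels, weights):
    """Per-class centroids where each sample contributes in proportion to its
    reliability weight (real mucosa samples weigh more than generated ones)."""
    sums, wsum = {}, {}
    for f, y, w in zip(features, labels, weights):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += w * v
        wsum[y] = wsum.get(y, 0.0) + w
    return {y: [v / wsum[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, f):
    """Assign a feature vector to the class with the nearest centroid."""
    return min(centroids, key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], f)))
```

With an SVM, the same weights would typically be passed as per-sample weights during training.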
  • in step S22 following step S21, the identification reference creation unit 210 outputs the created identification criterion and stores it in the storage unit 50. Thereafter, the operation of the arithmetic unit 200 ends.
  • the identification criterion is created using, as learning samples, the mucosal region extracted from the intraluminal image and the image generated based on that mucosal region, so a highly reliable identification criterion can be created. The reliability of the identification criterion can be further improved by varying the weights given to the feature amounts of the mucosal region and the generated image.
  • FIG. 9 is a block diagram illustrating a configuration of a calculation unit included in the image processing apparatus according to Embodiment 3 of the present invention.
  • the image processing apparatus according to the third embodiment includes a calculation unit 300 illustrated in FIG. 9 instead of the calculation unit 100 illustrated in FIG.
  • the configuration and operation of each part of the image processing apparatus other than the arithmetic unit 300 are the same as those in the first embodiment.
  • the calculation unit 300 includes a mucous membrane region extraction unit 110 and an image generation unit 310. Among these, the operation of the mucous membrane region extraction unit 110 is the same as that of the first embodiment.
  • the image generation unit 310 generates a new image by processing the mucosal region extracted by the mucous membrane region extraction unit 110. More specifically, the image generation unit 310 includes a non-mucosal region generation unit 311 that generates a region representing a subject other than the biological mucous membrane, that is, a non-mucosal region.
  • the non-mucosal region generation unit 311 generates a new image by synthesizing the bubble region, the residue region, or the treatment tool region in which the treatment tool is shown with the mucous membrane region. Alternatively, a new image is generated by generating or deleting a halation region or a dark region from the mucous membrane region.
  • the non-mucosal region generation unit 311 stores one or more foam models representing bubbles, residue models representing residues, and treatment instrument models representing treatment instruments.
  • FIG. 10 is a flowchart showing the operation of the calculation unit 300. Note that steps S10 to S12 in FIG. 10 are the same as those in the first embodiment.
  • in step S31 subsequent to step S12, the image generation unit 310 generates a new image in which a non-mucosal region is combined with the mucosal region based on the surface properties of the mucosal region.
  • a new image generation process will be described for each type of non-mucosal region.
  • when a foam area is combined with a mucosal area: the non-mucosal area generation unit 311 generates a new image in which a foam area is combined with the mucosal area by pasting a foam model onto the mucosal area. Specifically, the non-mucosal region generation unit 311 selects one of the foam models stored in advance and pastes it onto the mucosal region. As a foam model selection method, the user may select an arbitrary foam model from a plurality of foam models displayed on the display unit 40 (see FIG. 1), or the non-mucosal region generation unit 311 may select one randomly.
  • the non-mucosal region generation unit 311 may appropriately select a foam model according to the surface properties of the mucosal region.
  • a method for selecting and adjusting a foam model according to the surface properties of the mucous membrane region will be described.
  • the non-mucosal region generating unit 311 first determines the depth of the mucosal region in the intraluminal image in the same manner as (1-2) in the first embodiment. When the depth is deep, the bubbles appear small, and when the depth is shallow, the bubbles appear large; therefore, the non-mucosal area generation unit 311 selects and pastes a foam model having a size corresponding to the depth of the mucosal area. Alternatively, an arbitrarily selected foam model may be pasted onto the mucosal region after being enlarged or reduced according to the depth of the mucosal region.
  • the non-mucosal region generation unit 311 first calculates the luminance from the pixel values of each pixel constituting the mucosal region, and then calculates the average of these luminance values (average luminance value). Then, the non-mucosal region generation unit 311 compares the average luminance value of each bubble model stored in advance with the average luminance value of the mucosal region, and selects a bubble model whose average luminance value is equal to the average luminance value of the mucosal region or within a predetermined range of it (for example, within ±10%). When no foam model meeting this condition is stored, the non-mucosal region generation unit 311 may select the foam model having the average luminance value closest to the average luminance value of the mucosal region.
  • the non-mucosal area generation unit 311 adjusts the brightness of the foam model so that the difference in brightness between the area where the foam model is pasted and the surrounding area in the mucosa area does not become too large.
  • specifically, in a relatively high-brightness area of the mucous membrane region, the foam model is pasted after its brightness is adjusted upward.
  • conversely, in a relatively low-brightness area of the mucous membrane region, the foam model is pasted after its brightness is adjusted downward.
  • the non-mucosal area generation unit 311 determines the pitch range of the foam model to be attached to the mucosal area.
  • the pitch range may be determined randomly within a predetermined range, or the user may be allowed to input an arbitrary pitch range. The non-mucosal region generation unit 311 then pastes copies of the arbitrarily selected bubble model onto the mucosal region, overlapping them or leaving gaps between them so that their spacing falls within the previously determined pitch range.
  • the non-mucosal region generation unit 311 may execute the above methods (3-1-1) to (3-1-3) alone, or may execute a combination of a plurality of methods as appropriate.
  • when a residue area is combined with a mucosal area: the non-mucosal area generation unit 311 generates an image in which a residue area is combined with the mucosal area by pasting a residue model onto the mucosal area. Specifically, the non-mucosal region generation unit 311 selects one of the residue models stored in advance and pastes it onto the mucosal region. The residue model may be selected by the user, or the non-mucosal region generation unit 311 may select it randomly. Alternatively, as described below, the non-mucosal region generation unit 311 may select a residue model appropriate to the surface properties of the mucosal region. Hereinafter, a method for selecting and adjusting a residue model according to the surface properties of the mucosal region will be described.
  • the non-mucosal region generation unit 311 first determines the depth of the mucosal region in the intraluminal image in the same manner as (1-2) in the first embodiment. When the depth is deep, the residue appears small, and when the depth is shallow, the residue appears large; therefore, the non-mucosal region generation unit 311 selects and pastes a residue model having a size corresponding to the depth of the mucosal region. Alternatively, an arbitrarily selected residue model may be pasted onto the mucosal region after being enlarged or reduced according to the depth of the mucosal region.
  • the non-mucosal area generation unit 311 calculates the average luminance value of the mucosal area in the same manner as (3-1-2) above. Then, the non-mucosal region generation unit 311 compares the average luminance value of each residue model stored in advance with the average luminance value of the mucosal region, and selects a residue model whose average luminance value is equal to the average luminance value of the mucosal region or within a predetermined range of it (for example, within ±10%). When no residue model meeting this condition is stored, the non-mucosal region generation unit 311 may select the residue model having the average luminance value closest to the average luminance value of the mucosal region.
  • the non-mucosal region generation unit 311 adjusts the luminance of the residue model so that the difference in luminance between the region to which the residue model is pasted and its peripheral region does not become too large in the mucosal region.
  • specifically, in a relatively high-brightness region of the mucous membrane region, the residue model is pasted after its brightness is adjusted upward.
  • conversely, in a relatively low-brightness region of the mucous membrane region, the residue model is pasted after its brightness is adjusted downward.
  • the non-mucosal region generation unit 311 determines the range of the pitch of the residue model to be attached to the mucosal region.
  • the pitch range may be determined randomly within a predetermined range, or the user may be allowed to input an arbitrary pitch range. Then, the non-mucosal region generation unit 311 pastes the arbitrarily selected residue model to the mucosal region while overlapping or leaving a gap so as to be within the previously determined pitch range.
  • the non-mucosal region generation unit 311 may execute the above methods (3-2-1) to (3-2-3) alone, or may execute a combination of a plurality of methods as appropriate.
  • the non-mucosal region generating unit 311 determines a region for generating a halation region on the mucosal region.
  • the region for generating the halation region may be determined randomly by the non-mucosal region generation unit 311 or may be displayed on the display unit 40 and designated by the user.
  • the non-mucosal region generation unit 311 turns the determined region into a halation region by raising the luminance values in that region above a predetermined threshold.
  • to erase a halation region, the non-mucosal region generating unit 311 first extracts the halation region from the mucosal region. Specifically, a luminance value is calculated from the pixel values of each pixel constituting the mucous membrane region, and a region having luminance values higher than a predetermined threshold is determined to be a halation region. Alternatively, the mucosal region may be displayed on the display unit 40 and the halation region may be designated by the user.
  • the non-mucosal region generation unit 311 erases the halation region by interpolating the halation region using the pixel values of pixels around the halation region.
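The halation extraction and interpolation-based erasure can be sketched as follows; the 8-bit threshold of 230 and the single-pass 4-neighbour mean are assumptions:

```python
HALATION_T = 230  # assumed 8-bit luminance threshold

def halation_mask(luma):
    """Mark pixels of a 2-D luminance map that exceed the threshold."""
    return [[v > HALATION_T for v in row] for row in luma]

def erase_halation(luma, mask):
    """Replace each halation pixel with the mean of its non-halation
    4-neighbours (a single-pass sketch of the interpolation step)."""
    h, w = len(luma), len(luma[0])
    out = [row[:] for row in luma]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            vals = [luma[y + dy][x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w and not mask[y + dy][x + dx]]
            if vals:
                out[y][x] = round(sum(vals) / len(vals))
    return out
```

Generating a halation region is the inverse operation: raising the luminance in a chosen region above the threshold. Dark regions work the same way with the comparison reversed.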
  • the non-mucosal region generation unit 311 may independently execute either the generation or the deletion of halation regions described in (3-3-1) and (3-3-2), or may execute both in combination.
  • the non-mucosal area generating unit 311 determines an area for generating a dark area on the mucosal area.
  • the area for generating the dark area may be determined randomly by the non-mucosal area generation unit 311 or may be displayed on the display unit 40 and designated by the user.
  • the non-mucosal area generation unit 311 turns the determined area into a dark area by lowering the luminance values in that area below a predetermined threshold.
  • to erase a dark area, the non-mucosal area generating unit 311 first extracts the dark area from the mucosal area. Specifically, a luminance value is calculated from the pixel values of each pixel constituting the mucous membrane area, and an area having luminance values lower than a predetermined threshold is determined to be a dark area. Alternatively, the mucosal area may be displayed on the display unit 40 and the dark area may be designated by the user.
  • the non-mucosal region generation unit 311 erases the dark part region by interpolating the dark part region using the pixel values of pixels around the dark part region.
  • the non-mucosal region generation unit 311 may independently execute either the combining or the erasing of dark regions described in (3-4-1) and (3-4-2), or may execute both in combination.
  • when a treatment tool region is combined with a mucosal region: the non-mucosal region generation unit 311 generates a new image in which a treatment tool region is combined with the mucosal region by pasting a treatment tool model onto the mucosal region. Specifically, the non-mucosal region generation unit 311 selects one of the treatment tool models stored in advance and pastes it onto the mucosal region. As a selection method, the user may select an arbitrary treatment tool model from a plurality of treatment tool models displayed on the display unit 40, or the non-mucosal region generation unit 311 may select one randomly.
  • the non-mucosal region generation unit 311 may appropriately select a treatment instrument model in accordance with the surface properties of the mucosal region.
  • a method for selecting and adjusting a treatment tool model according to the surface properties of the mucosa region will be described.
  • the non-mucosal region generation unit 311 first determines the depth of the mucosal region in the intraluminal image in the same manner as (1-2) in the first embodiment. When the depth is deep, the treatment tool appears small, and when the depth is shallow, the treatment tool appears large; therefore, the non-mucosal region generation unit 311 selects a treatment tool model having a size corresponding to the depth of the mucosal region and pastes it onto the mucosal region. Alternatively, an arbitrarily selected treatment tool model may be pasted onto the mucosal region after being enlarged or reduced according to the depth of the mucosal region.
  • the non-mucosal area generation unit 311 calculates the average luminance value of the mucosal area in the same manner as (3-1-2) above. Then, the non-mucosal region generation unit 311 compares the average luminance value of each treatment instrument model stored in advance with the average luminance value of the mucosal region, and selects a treatment instrument model whose average luminance value is equal to the average luminance value of the mucosal region or within a predetermined range of it (for example, within ± several tens of percent). When no treatment instrument model meeting this condition is stored, the non-mucosal region generation unit 311 may select the treatment instrument model having the average luminance value closest to the average luminance value of the mucosal region.
  • the non-mucosal region generation unit 311 adjusts the luminance of the treatment instrument model so that the difference in luminance between the region where the treatment instrument model is pasted and its peripheral region does not become too large in the mucous membrane region. Specifically, in a relatively high-luminance region of the mucosal region, the treatment tool model is pasted after its luminance is adjusted upward; conversely, in a relatively low-luminance region, it is pasted after its luminance is adjusted downward.
  • the non-mucosal region generation unit 311 may execute the above methods (3-5-1) and (3-5-2) alone, or may execute a combination of a plurality of methods as appropriate.
  • in step S32 following step S31, the image generation unit 310 outputs the newly generated image and stores it in the storage unit 50 as a learning sample. Thereafter, the operation of the arithmetic unit 300 ends.
  • since a new image is generated by pasting onto the mucosal region extracted from the intraluminal image a foam model, residue model, or treatment tool model selected and adjusted based on the surface properties of the mucosal region, or by generating or deleting a halation region or a dark region in the mucosal region, a learning sample appropriately reflecting the state in the lumen can be obtained.
  • FIG. 11 is a block diagram illustrating a configuration of a calculation unit included in an image processing apparatus according to Embodiment 4 of the present invention.
  • the image processing apparatus according to the fourth embodiment includes a calculation unit 400 shown in FIG. 11 instead of the calculation unit 100 shown in FIG.
  • the configuration and operation of each part of the image processing apparatus other than the arithmetic unit 400 are the same as those in the first embodiment.
  • in addition to the mucous membrane region extraction unit 110 and the image generation unit 120, the calculation unit 400 includes a mucosal region attribute determination unit 410 that determines the attributes of the mucosal region, and a generation method determination unit 420 that determines a method of generating a new image based on the attributes of the mucosal region. The configurations and operations of the mucous membrane region extraction unit 110 and the image generation unit 120 are the same as those in the first embodiment.
  • the attributes of the mucosal region include whether or not the mucosal region contains an abnormal region having characteristics different from normal mucous membrane, the type of any such abnormal region, the type of organ in which the mucosal region is located, and whether or not the mucosal region is an unnecessary region not suitable for observation.
  • an unnecessary region not suitable for observation is, for example, a blurred region in which the image is out of focus or a color-shift region in which color misregistration occurs in the image.
  • the mucosal region attribute determination unit 410 includes an abnormal region extraction unit 411 that extracts an abnormal region from the mucous membrane region, an abnormality type estimation unit 412 that estimates the type of the extracted abnormal region, an organ discriminating unit 413 that discriminates the type of organ in which the mucosal region is located, and an unnecessary region determination unit 414 that determines whether or not the mucosal region is an unnecessary region not suitable for observation.
  • the generation method determination unit 420 includes a generation number determination unit 421 that determines the number of images to be newly generated based on the determination result of the mucous membrane region attribute determination unit 410, that is, the attribute of the mucous membrane region.
  • FIG. 12 is a flowchart showing the operation of the calculation unit 400. Note that steps S10 and S11 in FIG. 12 are the same as those in the first embodiment.
  • in step S41 following step S11, the mucous membrane area attribute determining unit 410 determines the attributes of the mucosal area extracted in step S11.
  • FIG. 13 is a flowchart showing the mucosal region attribute determination process.
  • the abnormal area extraction unit 411 extracts an abnormal area from the mucous membrane area. Specifically, the abnormal area extraction unit 411 first calculates a color feature amount based on the pixel value of each pixel constituting the mucosal area, and performs threshold processing using an identification criterion for normal mucosal areas created in advance, thereby extracting mucosal areas having the characteristics of normal mucosa as normal areas and the remaining mucosal areas as abnormal areas.
  • the color feature amounts include the R value, G value, and B value among the pixel values, and values secondarily calculated from these, specifically color ratios such as the G/R value and B/G value, as well as hue, saturation, brightness, and color difference.
  • the identification standard is created based on the color feature amount of the abnormal area collected in advance.
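A minimal sketch of the color-feature threshold test follows; the G/R range for normal mucosa is a made-up placeholder, since the actual identification criterion is learned from collected samples:

```python
def color_features(pixel):
    """Secondary colour features from an (R, G, B) pixel: G/R and B/G ratios."""
    r, g, b = pixel
    return (g / r if r else 0.0, b / g if g else 0.0)

def is_abnormal(pixel, gr_range=(0.4, 0.8)):
    """Flag a pixel as abnormal when its G/R ratio falls outside the range
    assumed for normal mucosa (the range values here are invented)."""
    gr, _ = color_features(pixel)
    return not (gr_range[0] <= gr <= gr_range[1])
```

A strongly red pixel (low G/R, as in bleeding) falls outside the normal range and is flagged.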
  • the abnormality type estimation unit 412 estimates the type of each abnormal region extracted in step S411. Specifically, the abnormality type estimation unit 412 first calculates a color feature amount, a shape feature amount, and a texture feature amount for each abnormal region, and evaluates these feature amounts using discrimination criteria created in advance for each type of abnormal region. As the color feature amounts, those listed in step S411 are used. As the shape feature amounts, the area (number of pixels) of the abnormal region, the perimeter, the Feret diameter (horizontal or vertical), the HOG feature amount, the SIFT feature amount, and the like are used. An example of the texture feature amount is LBP.
  • the abnormal region discrimination standard is obtained by creating a probability density function from the feature amount distribution of the abnormal region collected in advance for each type.
  • the abnormality type estimation unit 412 estimates each abnormal region as one of bleeding, ulcer, tumor, and villi abnormality using a discrimination criterion created for each type.
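A hedged sketch of the probability-density discrimination described above: each abnormality type is given a one-dimensional Gaussian density fitted to pre-collected samples, and a region is assigned to the type whose density is highest at the region's feature value. The single scalar feature and the mean/standard-deviation pairs below are invented for illustration; a real criterion would use the multi-dimensional color, shape, and texture features of the patent.

```python
import math

# Hypothetical per-type criteria: (mean, std) of a 1-D feature learned
# from pre-collected samples of each abnormality type.
CRITERIA = {
    "bleeding": (0.9, 0.10),
    "ulcer":    (0.5, 0.15),
    "tumor":    (0.3, 0.10),
    "villi":    (0.1, 0.05),
}

def gaussian_pdf(x, mean, std):
    """Value of the normal probability density at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def estimate_type(feature):
    """Pick the abnormality type whose probability density is highest."""
    return max(CRITERIA, key=lambda t: gaussian_pdf(feature, *CRITERIA[t]))
```

With these made-up criteria, a feature value of 0.85 is closest to the bleeding density, while 0.08 falls under the villi-abnormality density.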
  • In step S413, the organ discriminating unit 413 discriminates the type of organ in which the mucosal region is located. Specifically, the organ discriminating unit 413 calculates the average R value, average G value, and average B value of the intraluminal image from which the mucosal region was extracted, and based on these values determines whether the organ captured in the intraluminal image is the esophagus, stomach, small intestine, or large intestine.
  • More specifically, the organ discriminating unit 413 determines which of the ranges of the R, G, and B color elements, preset for each of the esophagus, stomach, small intestine, and large intestine, contains the average R value, average G value, and average B value of the intraluminal image. For example, when the average R value, average G value, and average B value of the intraluminal image fall within the ranges of the R, G, and B color elements for the esophagus, the organ discriminating unit 413 determines that the mucosal region to be discriminated is located in the esophagus. The same applies to the stomach, small intestine, and large intestine (reference: JP-A-2006-288612).
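The range test above can be sketched as follows. The numeric R/G/B ranges are placeholders invented for illustration; actual ranges would be tuned from labelled images as in the cited reference (JP-A-2006-288612).

```python
import numpy as np

# Hypothetical preset mean-color ranges (R, G, B) for each organ.
ORGAN_RANGES = {
    "esophagus":       ((120, 200), (40, 90),   (30, 80)),
    "stomach":         ((140, 230), (60, 110),  (40, 90)),
    "small_intestine": ((150, 240), (90, 160),  (50, 120)),
    "large_intestine": ((130, 220), (100, 180), (80, 150)),
}

def discriminate_organ(image):
    """Return the first organ whose preset R/G/B ranges all contain the
    image's channel means, or None when no preset range matches."""
    means = image.reshape(-1, 3).mean(axis=0)
    for organ, ranges in ORGAN_RANGES.items():
        if all(lo <= m <= hi for m, (lo, hi) in zip(means, ranges)):
            return organ
    return None
```

Because the illustrative ranges overlap, the sketch returns the first matching organ in dictionary order; a practical implementation would use disjoint ranges or a nearest-prototype rule instead.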
  • In step S414, the unnecessary region determination unit 414 determines whether the mucosal region is an unnecessary region that is not suitable for observation; specifically, it determines whether the mucosal region is blurred and whether a color shift has occurred in the mucosal region.
  • The determination as to whether the mucosal region is blurred is performed as follows. The unnecessary region determination unit 414 extracts edges from the mucosal region by applying processing such as a Sobel filter or a Laplacian filter to the mucosal region, and when the edge strength in the mucosal region is equal to or less than a predetermined value, it determines that the mucosal region is blurred.
  • The determination as to whether a color shift has occurred in the mucosal region is performed as follows. The unnecessary region determination unit 414 calculates the differences between the R, G, and B values of a target pixel in the mucosal region and the R, G, and B values of the pixels adjacent to the target pixel, and determines whether each difference is equal to or less than a threshold value. When the difference is larger than the threshold value for at least one of R, G, and B, the unnecessary region determination unit 414 determines that a color shift has occurred at the target pixel. By performing this process for every target pixel over the entire mucosal region, the color shift in the entire mucosal region is determined. When it is determined that a color shift has occurred in the mucosal region, the mucosal region is determined to be a color-shift region. Thereafter, the process returns to the main routine.
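The per-pixel color-shift test can be sketched as below. For brevity only the horizontal neighbour is checked, and the `ratio` aggregation rule (flagging the whole region when a fraction of pixels is flagged) is an assumption, since the patent does not specify how the per-pixel decisions are combined into a region-level decision.

```python
import numpy as np

def color_shift_mask(rgb, threshold=30.0):
    """Flag a pixel when, for at least one of R, G, B, the absolute
    difference to its right-hand neighbour exceeds the threshold."""
    diff = np.abs(np.diff(rgb.astype(float), axis=1))  # shape (H, W-1, 3)
    return (diff > threshold).any(axis=2)

def has_color_shift(rgb, threshold=30.0, ratio=0.1):
    """Judge the whole region a color-shift region when the flagged
    fraction of pixel pairs exceeds `ratio` (assumed aggregation rule)."""
    return color_shift_mask(rgb, threshold).mean() > ratio
```

A uniform patch produces no flags, while a patch with an abrupt jump in one channel is flagged as a color-shift region.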
  • In step S42 following step S41, the generation number determination unit 421 determines the number of new images to be generated, based on the type of abnormal region, the type of organ, and the unnecessary-region determination result obtained in step S41. In detail, the determination is made as follows.
  • First, the generation number determination unit 421 determines the number of images to be generated according to the importance of the type of abnormal region. Specifically, since the importance of an abnormal region increases in the order of villi abnormality, tumor, ulcer, and bleeding, the number of generated images is also increased in that order.
  • The generation number determination unit 421 also determines the number of images to be generated according to the type of organ. Specifically, the number of images to be generated is increased when the type of organ to be examined in the endoscopy matches the type of organ in which the mucosal region to be discriminated is located. Note that the type of organ to be examined may be input by the user using the input unit 30, or may be stored in advance in the storage unit 50 as related information of the intraluminal image.
  • Furthermore, the generation number determination unit 421 sets the number of images to be generated smaller when the mucosal region is an unnecessary region.
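The three rules above can be combined as in the following sketch. All concrete numbers (the base counts reflecting the importance ordering villi abnormality < tumor < ulcer < bleeding, the organ-match bonus factor, and the unnecessary-region reduction) are invented for illustration; the patent specifies only the qualitative increase/decrease.

```python
# Hypothetical base counts following the stated importance ordering.
BASE_COUNT = {"villi": 2, "tumor": 4, "ulcer": 6, "bleeding": 8}

def decide_generation_count(abnormal_type, organ, target_organ, unnecessary):
    """Combine the three rules: importance of the abnormality type, a
    bonus when the organ matches the examination target, and a penalty
    for unnecessary (blurred / color-shifted) regions."""
    count = BASE_COUNT.get(abnormal_type, 1)
    if organ == target_organ:
        count *= 2                   # assumed bonus factor
    if unnecessary:
        count = max(1, count // 4)   # assumed reduction
    return count
```

For example, a bleeding region in the organ under examination gets the most generated images, while an unnecessary villi-abnormality region elsewhere gets the minimum.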
  • In step S43, the image generation unit 120 acquires the surface properties of the mucosal region.
  • The process for acquiring the surface properties of the mucosal region is the same as that in the first embodiment (see step S12 in FIG. 2).
  • In step S44, the image generation unit 120 newly generates the number of images determined in step S42, based on the surface properties of the mucosal region.
  • The individual processing for generating a new image is the same as that in the first embodiment (see step S13 in FIG. 2).
  • In step S45, the image generation unit 120 outputs the newly generated images and stores them in the storage unit 50 as learning samples. Thereafter, the operation of the calculation unit 400 ends.
  • As described above, according to the fourth embodiment of the present invention, the number of images generated from a mucosal region is changed according to the attribute of the mucosal region, so that more learning samples can be acquired from mucosal regions in which important abnormal regions have been extracted or from mucosal regions in the organ to be examined.
  • FIG. 14 is a block diagram illustrating a configuration of a calculation unit included in the image processing apparatus according to the fifth embodiment of the present invention.
  • The image processing apparatus according to the fifth embodiment includes a calculation unit 500 shown in FIG. 14 instead of the calculation unit 100 shown in FIG. 1.
  • The configuration and operation of each part of the image processing apparatus other than the calculation unit 500 are the same as those in the first embodiment.
  • The calculation unit 500 includes a mucosal region extraction unit 110, an image generation unit 510 that generates a new image based on the surface properties of the mucosal region, a mucosal region attribute determination unit 520 that determines the attribute of the mucosal region, and a generation method determination unit 530 that determines the generation method of a new image based on the attribute.
  • The operation of the mucosal region extraction unit 110 is the same as that in the first embodiment.
  • The image generation unit 510 includes a fine structure generation unit 511 that generates a fine structure by changing the color information, shape information, and texture information in the mucosal region extracted from the intraluminal image.
  • The fine structure generation unit 511 includes a color information changing unit 511a that changes the color information of the mucosal region, a shape information changing unit 511b that changes the shape information of the mucosal region, and a texture information changing unit 511c that changes the texture information of the mucosal region.
  • The mucosal region attribute determination unit 520 includes an abnormal region extraction unit 521 that extracts abnormal regions from the mucosal region, an abnormality type estimation unit 522 that estimates the type of each extracted abnormal region, and an organ discriminating unit 523 that discriminates the type of organ in which the mucosal region is located.
  • The generation method determination unit 530 determines the generation method of the fine structure in the newly generated image, based on the determination result of the mucosal region attribute determination unit 520.
  • Specifically, the generation method determination unit 530 includes a weight determination unit 531 that determines the weights given as parameters to the respective pieces of information when the fine structure generation unit 511 changes the color information, shape information, and texture information of the mucosal region.
  • FIG. 15 is a flowchart showing the operation of the calculation unit 500. Note that steps S10 and S11 in FIG. 15 are the same as those in the first embodiment (see FIG. 2).
  • In step S51 following step S11, the mucosal region attribute determination unit 520 determines the attribute of the mucosal region extracted in step S11. Specifically, the abnormal region extraction unit 521 extracts abnormal regions from the mucosal region, and the abnormality type estimation unit 522 estimates the type of each abnormal region. In addition, the organ discriminating unit 523 discriminates the type of organ in which the mucosal region is located.
  • The abnormal region extraction processing, the abnormal region type estimation processing, and the organ type discrimination processing are the same as those in the fourth embodiment (see steps S411 to S413 in FIG. 13).
  • In step S52, the generation method determination unit 530 determines the generation method of the new image based on the attribute of the mucosal region. Specifically, it sets the weight given to each piece of information when the color information, shape information, and texture information in the mucosal region are changed. Hereinafter, a method for setting the weights based on the attribute of the mucosal region is described. Note that the weights given to the color information, the shape information, and the texture information are normalized so that their sum is 1.
  • (5-1-1) The weight of the color information is made larger than the weights of the shape information and the texture information. Note that the weights of the shape information and the texture information may be approximately the same.
  • (5-1-2) The weights of the color information and the shape information are made larger than the weight of the texture information. Note that the weights of the color information and the shape information may be approximately the same.
  • (5-1-3) The weight of the shape information is made larger than the weights of the color information and the texture information. Note that the weights of the color information and the texture information may be approximately the same.
  • (5-1-4) The weight of the texture information is made larger than the weights of the color information and the shape information. Note that the weights of the color information and the shape information may be approximately the same.
  • (5-2-1) The weight of the color information is made larger than the weights of the shape information and the texture information. Note that the weights of the shape information and the texture information may be approximately the same.
  • (5-2-2) The weight of the texture information is made larger than the weights of the color information and the shape information. Note that the weights of the color information and the shape information may be approximately the same.
  • (5-2-3) The weight of the shape information is made larger than the weights of the color information and the texture information. Note that the weights of the color information and the texture information may be approximately the same.
  • Note that the weight determination unit 531 may determine the weights based on either the type of abnormal region or the type of organ, or may determine the weights based on both. In the latter case, the weights may be calculated from both the weights determined based on the type of abnormal region (see (5-1-1) to (5-1-4) above) and the weights determined based on the type of organ (see (5-2-1) to (5-2-3) above).
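The normalization and combination of weights can be sketched as follows. The element-wise summation in `combine_weights` is an assumed combination rule; the patent only states that weights determined from both the abnormality type and the organ type may be used together, and that the three weights are normalized to sum to 1.

```python
def normalize_weights(color, shape, texture):
    """Scale the three weights so that they sum to 1."""
    total = color + shape + texture
    if total <= 0:
        raise ValueError("at least one weight must be positive")
    return (color / total, shape / total, texture / total)

def combine_weights(abnormal_w, organ_w):
    """Assumed combination rule: add the two weight triples element-wise,
    then renormalize."""
    summed = tuple(a + o for a, o in zip(abnormal_w, organ_w))
    return normalize_weights(*summed)
```

For example, a bleeding-style triple (0.6, 0.2, 0.2) combined with a texture-heavy organ triple (0.2, 0.2, 0.6) yields (0.4, 0.2, 0.4) after renormalization.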
  • In step S53, the image generation unit 510 acquires the surface properties of the mucosal region extracted in step S11. Specifically, the hue of each pixel is acquired as the color information of the mucosal region, the contour of the mucosal region is acquired as the shape information, and the luminance value of each pixel is acquired as the texture information.
  • In step S54, the fine structure generation unit 511 generates a new image by the generation method determined in step S52, based on the surface properties of the mucosal region.
  • Note that the changes to the color information, the shape information, and the texture information may be executed individually, or changes to a plurality of types of information may be combined.
  • FIG. 16 is a flowchart showing the new image generation process. Below, the case where changes to multiple types of information are combined is described.
  • In step S541, the color information changing unit 511a changes the color information of the mucosal region extracted in step S11. Specifically, the color information changing unit 511a first calculates the average value (average H value) of the hue (H value) of the pixels constituting the mucosal region. Then, the H value of each pixel constituting the mucosal region is varied at predetermined intervals so that the average H value falls within a predetermined range, creating a plurality of mucosal region images having different H values. Specifically, the average H value is varied within a range obtained by multiplying the range of hues (H values) that the mucosal region of a living body can take by the weight of the color information.
  • For example, when the weight of the color information is 0.9, the range in which the average H value can be varied is 90% of the range of H values that the mucosal region of a living body can take. If the weight of the color information is zero, step S541 is omitted.
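Step S541 can be sketched as below: the hue map is copied several times, with the average H value shifted at even intervals across the weight-scaled admissible range. The mucosal hue range of 0 to 60 degrees and the number of steps are placeholder assumptions.

```python
import numpy as np

# Assumed full hue range (in degrees) that living mucosa can take.
MUCOSA_H_RANGE = (0.0, 60.0)

def hue_variants(h_values, weight, steps=5):
    """Create `steps` copies of the hue map, shifting the average H value
    at even intervals across the weight-scaled admissible range."""
    lo, hi = MUCOSA_H_RANGE
    span = (hi - lo) * weight          # e.g. weight 0.9 -> 90% of the range
    center = (lo + hi) / 2.0
    targets = np.linspace(center - span / 2, center + span / 2, steps)
    mean_h = h_values.mean()
    return [np.clip(h_values + (t - mean_h), lo, hi) for t in targets]
```

With weight 0.9 and three steps, a hue map averaging 30 degrees yields variants whose averages span 3 to 57 degrees.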
  • In step S542, the shape information changing unit 511b changes the shape information of the mucosal region whose color information was changed in step S541. Specifically, for each of the plurality of mucosal region images having different H values, a plurality of images in which the shape of the mucosal region is changed are generated by a known geometric transformation process such as affine transformation. At this time, the conversion amount in the affine transformation or the like is determined according to the weight of the shape information. Specifically, the conversion amount increases as the weight of the shape information increases. If the weight of the shape information is zero, step S542 is omitted.
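A minimal affine-warp sketch for step S542, using inverse mapping with nearest-neighbour sampling. The rule linking the shape weight to the maximum translation in `shape_variants` is hypothetical; the patent says only that the conversion amount grows with the weight.

```python
import numpy as np

def affine_warp(image, matrix):
    """Warp a 2-D image with a 2x3 affine matrix using inverse mapping and
    nearest-neighbour sampling; pixels mapped from outside become 0."""
    h, w = image.shape
    full = np.vstack([matrix, [0.0, 0.0, 1.0]])
    inv = np.linalg.inv(full)
    ys, xs = np.mgrid[0:h, 0:w]
    src = inv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    flat = np.zeros(h * w, dtype=image.dtype)
    flat[ok] = image[sy[ok], sx[ok]]
    return flat.reshape(h, w)

def shape_variants(image, weight, steps=3):
    """Assumed rule: the shape weight scales the maximum translation."""
    max_shift = 2.0 * weight * steps
    return [affine_warp(image, np.array([[1.0, 0.0, s], [0.0, 1.0, 0.0]]))
            for s in np.linspace(-max_shift, max_shift, steps)]
```

A translation matrix [[1, 0, 1], [0, 1, 0]] moves content one pixel to the right; other affine matrices (rotation, shear, scaling) work the same way.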
  • In step S543, the texture information changing unit 511c changes the texture information of the mucosal region whose shape information was changed in step S542. Specifically, a plurality of images in which the texture of the mucosal surface is changed are created by applying filter processing, such as a sharpening filter or a smoothing filter, to each mucosal region image having a different H value or shape. At this time, the parameters of the filter processing are determined according to the weight of the texture information. Specifically, the parameters are determined so that more extreme sharpening or smoothing is allowed as the weight of the texture information increases. If the weight of the texture information is zero, step S543 is omitted. Thereafter, the process returns to the main routine.
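The sharpening/smoothing sweep of step S543 can be sketched with unsharp masking: a positive amount sharpens, a negative amount blends toward a box-smoothed image. Tying the amount range directly to the texture weight is an assumption about how the "parameters of the filter processing" follow the weight.

```python
import numpy as np

def box_smooth(image):
    """3x3 box smoothing with edge replication."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / 9.0

def sharpen(image, amount):
    """Unsharp masking: positive `amount` exaggerates the difference from
    the smoothed image; negative `amount` moves toward it (smoothing)."""
    smooth = box_smooth(image)
    return image + amount * (image.astype(float) - smooth)

def texture_variants(image, weight, steps=3):
    """Assumed rule: the texture weight scales the sweep of amounts."""
    return [sharpen(image, a) for a in np.linspace(-weight, weight, steps)]
```

A flat patch is left unchanged by any amount, while an isolated bright pixel is amplified by positive amounts.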
  • In step S55 following step S54, the image generation unit 510 outputs the newly generated images and stores them in the storage unit 50 as learning samples. Thereafter, the operation of the calculation unit 500 ends.
  • As described above, according to the fifth embodiment of the present invention, a new image is generated by changing the color information, shape information, and texture information of the mucosal region according to the attribute of the mucosal region, so that learning samples appropriately reflecting the state of the mucosal region can be acquired.
  • Embodiments 1 to 5 described above can be realized by executing the image processing program stored in a storage device on a computer system such as a personal computer or a workstation. Such a computer system may also be used while connected to other computer systems, servers, or other devices via a local area network (LAN), a wide area network (WAN), or a public line such as the Internet.
  • In this case, the image processing apparatuses according to Embodiments 1 to 5 may acquire the image data of intraluminal images via these networks, output the image processing results to various output devices (such as viewers and printers) connected via these networks, or store the image processing results in a storage device (a storage medium and its reading device) connected via these networks.
  • The present invention is not limited to Embodiments 1 to 5; various inventions can be formed by appropriately combining a plurality of the constituent elements disclosed in the embodiments and modifications. For example, some constituent elements may be excluded from all the constituent elements shown in each embodiment or modification, or constituent elements shown in different embodiments or modifications may be appropriately combined.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Provided are an image-processing device and the like that make it possible to acquire training samples that properly reflect the state inside the lumen. An image-processing device 1 is equipped with a mucosal region extraction unit 110 for extracting the mucosal region from an intraluminal image taken inside the lumen of a living body, and an image-generating unit 120 that generates an image different from the intraluminal image by acquiring the surface properties of the mucosal region and processing the mucosal region in the intraluminal image on the basis of the surface properties.

Description

Image processing apparatus, image processing method, and image processing program
The present invention relates to an image processing apparatus, an image processing method, and an image processing program that perform image processing on an image obtained by imaging the inside of a lumen of a living body.
A technique is known in which a specific region, such as an abnormal region, is detected using an identification criterion from an intraluminal image obtained by imaging the inside of a lumen of a living body (inside the digestive tract) with a medical observation apparatus such as an endoscope. The identification criterion used in this case is usually created based on images of mucosal regions and abnormal regions of various variations extracted from intraluminal images as learning samples.
As a technique related to image identification, for example, Patent Document 1 discloses a technique in which a new image is generated from an image acquired as a learning sample by changing the position, orientation, or appearance of an arbitrary region of interest, or by scaling or rotating the region of interest, and an identification criterion is created by calculating feature amounts from the new image and the original image.
Patent Document 1: U.S. Pat. No. 8,903,167
However, when applying the technique disclosed in Patent Document 1 to intraluminal images, it is difficult to acquire learning samples that appropriately reflect the state inside the lumen merely by performing geometric processing such as that described above on the region of interest.
The present invention has been made in view of the above, and an object thereof is to provide an image processing apparatus, an image processing method, and an image processing program capable of acquiring learning samples that appropriately reflect the state inside a lumen.
To solve the above problems and achieve the object, an image processing apparatus according to the present invention includes: a mucosal region extraction unit that extracts a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body; and an image generation unit that acquires the surface properties of the mucosal region and generates an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on the surface properties.
An image processing method according to the present invention includes: a mucosal region extraction step of extracting a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body; and an image generation step of acquiring the surface properties of the mucosal region and generating an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on the surface properties.
An image processing program according to the present invention causes a computer to execute: a mucosal region extraction step of extracting a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body; and an image generation step of acquiring the surface properties of the mucosal region and generating an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on the surface properties.
According to the present invention, an image different from the intraluminal image is generated by processing the mucosal region extracted from the intraluminal image based on its surface properties, so that learning samples appropriately reflecting the state inside the lumen can be acquired.
FIG. 1 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 1 of the present invention. FIG. 2 is a flowchart showing the operation of the calculation unit shown in FIG. 1. FIG. 3 is a schematic diagram showing a mucosal region extracted from an intraluminal image. FIG. 4 is a schematic diagram showing an image in which a villi region is combined with a mucosal region. FIG. 5 is a schematic diagram showing an image in which a villi region and a blood vessel region are combined with a mucosal region. FIG. 6 is a block diagram showing the configuration of a calculation unit provided in an image processing apparatus according to Embodiment 2 of the present invention. FIG. 7 is a flowchart showing the operation of the calculation unit shown in FIG. 6. FIG. 8 is a flowchart showing the identification-criterion creation process shown in FIG. 7. FIG. 9 is a block diagram showing the configuration of a calculation unit provided in an image processing apparatus according to Embodiment 3 of the present invention. FIG. 10 is a flowchart showing the operation of the calculation unit shown in FIG. 9. FIG. 11 is a block diagram showing the configuration of a calculation unit provided in an image processing apparatus according to Embodiment 4 of the present invention. FIG. 12 is a flowchart showing the operation of the calculation unit shown in FIG. 11. FIG. 13 is a flowchart showing the mucosal region attribute determination process shown in FIG. 12. FIG. 14 is a block diagram showing the configuration of a calculation unit provided in an image processing apparatus according to Embodiment 5 of the present invention. FIG. 15 is a flowchart showing the operation of the calculation unit shown in FIG. 14. FIG. 16 is a flowchart showing the new image generation process shown in FIG. 15.
Hereinafter, an image processing apparatus, an image processing method, and an image processing program according to embodiments of the present invention will be described with reference to the drawings. The present invention is not limited by these embodiments. In the description of the drawings, the same parts are denoted by the same reference signs.
(Embodiment 1)
FIG. 1 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 1 of the present invention. The image processing apparatus 1 according to Embodiment 1 is an apparatus that extracts a mucosal region from an intraluminal image acquired by imaging the inside of the lumen of a living body with a medical observation apparatus such as an endoscope, and executes image processing for generating, based on the surface properties of the mucosal region, a new image different from the original intraluminal image. An intraluminal image is usually a color image having pixel levels (pixel values) for the wavelength components of R (red), G (green), and B (blue) at each pixel position. The endoscope that images the inside of the living body may be a capsule endoscope, a flexible endoscope, a rigid endoscope, or the like.
As shown in FIG. 1, the image processing apparatus 1 includes a control unit 10 that controls the operation of the entire image processing apparatus 1, an image acquisition unit 20 that acquires image data of intraluminal images generated by the medical observation apparatus imaging the inside of a lumen, an input unit 30 that inputs signals corresponding to external operations to the control unit 10, a display unit 40 that displays various information and images, a storage unit 50 that stores the image data acquired by the image acquisition unit 20 and various programs, and a calculation unit 100 that executes predetermined image processing on the image data.
The control unit 10 is configured using a general-purpose processor such as a CPU (Central Processing Unit), or a dedicated processor such as various arithmetic circuits that execute specific functions, such as an ASIC (Application Specific Integrated Circuit). When the control unit 10 is a general-purpose processor, it reads the various programs stored in the storage unit 50, issues instructions to the units constituting the image processing apparatus 1, transfers data, and so on, thereby controlling the overall operation of the image processing apparatus 1. When the control unit 10 is a dedicated processor, the processor may execute various processes by itself, or the processor and the storage unit 50 may cooperate or combine to execute various processes by using the various data stored in the storage unit 50.
The image acquisition unit 20 is configured as appropriate according to the mode of the system including the medical observation apparatus. For example, when the medical observation apparatus is connected to the image processing apparatus 1, the image acquisition unit 20 is configured by an interface that takes in the image data generated by the medical observation apparatus. When a server that stores the image data generated by the medical observation apparatus is installed, the image acquisition unit 20 is configured by a communication device or the like connected to the server, and acquires the image data by performing data communication with the server. Alternatively, the image data generated by the medical observation apparatus may be delivered using a portable storage medium; in this case, the image acquisition unit 20 is configured by a reader device to which the portable storage medium is detachably attached and which reads out the image data of the stored images.
The input unit 30 is realized by input devices such as a keyboard, a mouse, a touch panel, and various switches, and outputs input signals generated in response to external operations on these input devices to the control unit 10.
The display unit 40 is realized by a display device such as an LCD (Liquid Crystal Display) or an EL (Electro-Luminescence) display, and displays various screens including intraluminal images under the control of the control unit 10.
The storage unit 50 is realized by various IC (integrated circuit) memories such as an updatable and recordable flash memory, a ROM (Read Only Memory), and a RAM (Random Access Memory); a hard disk that is built in or connected via a data communication terminal; an information storage device such as a CD-ROM (Compact Disc Read Only Memory); and a device for writing and reading information to and from the information storage device. In addition to the image data of the intraluminal images acquired by the image acquisition unit 20, the storage unit 50 stores programs for operating the image processing apparatus 1 and for causing the image processing apparatus 1 to execute various functions, data used during the execution of these programs, and the like. Specifically, the storage unit 50 has a program storage unit 51 that stores an image processing program for extracting a mucosal region from an intraluminal image and generating, based on the surface properties of the mucosal region, a new image different from the original intraluminal image. The storage unit 50 also stores information such as the identification criteria used in the image processing.
 The arithmetic unit 100 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as one of various arithmetic circuits (e.g., an ASIC) that execute specific functions. When the arithmetic unit 100 is a general-purpose processor, it reads the image processing program stored in the program storage unit 51 and thereby executes image processing that extracts a mucosal region from an intraluminal image and generates, based on the surface properties of the mucosal region, a new image different from the original intraluminal image. When the arithmetic unit 100 is a dedicated processor, the processor may execute the various processes by itself, or the processor and the storage unit 50 may execute the image processing cooperatively or in combination, using the various data stored in the storage unit 50.
 Next, the configuration of the arithmetic unit 100 will be described. As shown in FIG. 1, the arithmetic unit 100 includes a mucosal region extraction unit 110 that extracts a mucosal region from an intraluminal image, and an image generation unit 120 that acquires the surface properties of the mucosal region and generates the above-mentioned new image by processing the mucosal region in the intraluminal image based on those surface properties.
 The image generation unit 120 includes a fine structure generation unit 121 that generates a new image representing the fine structure of the mucosal surface. Specifically, the fine structure generation unit 121 has a villus generation unit 121a that generates a villus region representing the villi present on the mucosal surface, and a blood vessel generation unit 121b that generates a blood vessel region representing the blood vessels visible through the mucosal surface.
 The villus generation unit 121a stores models representing villi and generates a new image by pasting such a model onto the mucosal region. Hereinafter, a model representing villi is referred to as a villus model. One or more villus models are created in advance by extracting regions each containing one or more villi from intraluminal images of living bodies.
 The blood vessel generation unit 121b stores models representing blood vessels and generates a new image by pasting such a model onto the mucosal region. Hereinafter, a model representing a blood vessel is referred to as a blood vessel model. One or more blood vessel models are created in advance by extracting regions each containing one or more blood vessels from intraluminal images of living bodies.
 Next, the operation of the arithmetic unit 100 will be described. FIG. 2 is a flowchart showing the operation of the arithmetic unit 100. First, in step S10, the arithmetic unit 100 acquires an intraluminal image captured by a medical observation apparatus by reading it from the storage unit 50.
 In the subsequent step S11, the mucosal region extraction unit 110 extracts the mucosal region from the intraluminal image. Specifically, based on the pixel values of the pixels constituting the intraluminal image, the mucosal region extraction unit 110 calculates a feature quantity for each pixel, or for each of a plurality of sections into which the intraluminal image has been divided, and extracts the mucosal region by performing threshold processing using this feature quantity and a discrimination criterion created in advance.
 Feature quantities for extracting the mucosal region include color features, shape features, and texture features. Examples of color features include the pixel values of each pixel (the R, G, and B component values), color ratios such as the G/R value or the B/G value, hue, saturation, brightness, and color difference. Examples of shape features include the area (number of pixels) of a region extracted from the intraluminal image, its perimeter, Feret diameters including the horizontal and vertical Feret diameters, HOG (Histogram of Oriented Gradients) features, and SIFT (Scale-Invariant Feature Transform) features. An example of a texture feature is the Local Binary Pattern (LBP). The LBP is a feature quantity that expresses the magnitude relationships between the pixel value of a pixel of interest and the pixels in the eight surrounding directions as a 256-dimensional (2^8) histogram.
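The 8-neighbor LBP histogram described above can be sketched as follows. The neighbour ordering and the plain list-of-lists image representation are illustrative choices not fixed by this description.

```python
def lbp_histogram(img):
    """256-bin LBP histogram of a grayscale image (2-D list of values).

    For each interior pixel, each of its 8 neighbours contributes one
    bit (1 when the neighbour is >= the centre), giving a pattern code
    in 0..255; the histogram counts how often each code occurs.
    """
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= img[y][x]:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

A 3x3 image has a single interior pixel, so its histogram has exactly one non-zero bin.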
 The mucosal region extraction unit 110 calculates such feature quantities and, based on a discrimination criterion created in advance, classifies the intraluminal image into a mucosal region, which includes villus regions and blood vessel regions, and the remaining non-mucosal region, which specifically includes residue regions, bubble regions, dark regions, and the like, and extracts the mucosal region. As the mucosal region, the region enclosed by the contour of the mucosa may be extracted, or a rectangular region containing the mucosa may be extracted. Alternatively, the entire intraluminal image in which a mucosal region was detected may be extracted as the mucosal region. FIG. 3 is a schematic diagram showing a mucosal region extracted from an intraluminal image; it illustrates an example in which a rectangular region containing the mucosa has been extracted.
 The mucosal region extraction unit 110 outputs the extracted mucosal region image to the image generation unit 120 and stores it in the storage unit 50.
 In the subsequent step S12, the image generation unit 120 acquires the surface properties of the mucosal region extracted in step S11. The surface properties of the mucosal region refer to states such as the unevenness of the mucosa, the color of the mucosa, the presence or absence of halation regions, the presence or absence of regions unnecessary for observation such as residue and bubbles, the presence or absence of villi, the color of the villi, the presence or absence of blood vessels, the shape and color of the blood vessels, and the presence or absence of bleeding. These surface properties can be acquired by calculating, for the mucosal region, the color, shape, and texture features listed in step S11 above.
 In the subsequent step S13, the image generation unit 120 generates a new image by processing the mucosal region in the intraluminal image based on the surface properties of the mucosal region. Specific examples include pasting a villus model or a blood vessel model onto the mucosal region, and changing the color, shape, or texture of the mucosal region. In the first embodiment, the process of pasting a villus model and a blood vessel model onto the mucosal region will be described.
 The villus generation unit 121a generates a new image in which a villus region is composited into the mucosal region by pasting a villus model onto the mucosal region. Specifically, the villus generation unit 121a selects one of the villus models stored in advance and pastes it onto the mucosal region. As for the selection method, a plurality of villus models may be displayed on the display unit 40 (see FIG. 1) so that the user can select an arbitrary villus model, or the villus generation unit 121a may select one at random. Alternatively, as described below, the villus generation unit 121a may select a villus model as appropriate according to the surface properties of the mucosal region. Methods of selecting and adjusting a villus model according to the surface properties of the mucosal region are described below.
(1-1) Considering the positional relationship between the mucosa and the image sensor of the endoscope
 The villus generation unit 121a acquires the positional relationship between the mucosa shown in the intraluminal image and the image sensor of the endoscope that captured the image, specifically whether the endoscope imaged the mucosa from the front or from an oblique direction. This positional relationship can be determined from the shape of the edges extracted from the mucosal region. Specifically, the greater the circularity of an edge, i.e., the closer its shape is to a perfect circle, the more nearly frontal the orientation of the endoscope's image sensor with respect to the mucosa is judged to be; conversely, the smaller the circularity, i.e., the further the edge shape is from a perfect circle, the more oblique the orientation is judged to be. Alternatively, when the endoscope that captured the intraluminal image is provided with an acceleration sensor and its detection values are attached to the intraluminal image as supplementary image information, the positional relationship between the mucosa and the endoscope may be acquired from this supplementary information.
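The text refers to edge circularity without fixing a formula; the sketch below uses the standard definition 4&#960;A/P&#178; (1.0 for a perfect circle, smaller for elongated shapes), and the decision threshold is an assumed tuning parameter, not a value from this description.

```python
import math

def circularity(area, perimeter):
    """Standard circularity 4*pi*A / P^2: 1.0 for a perfect circle,
    approaching 0 for elongated contours."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def facing_front(area, perimeter, threshold=0.8):
    """Judge a near-frontal view when the mucosal edge is nearly a
    circle. threshold=0.8 is an illustrative assumption."""
    return circularity(area, perimeter) >= threshold
```

For a unit circle (area &#960;, perimeter 2&#960;) the measure is exactly 1; a unit square (area 1, perimeter 4) gives &#960;/4 &#8776; 0.785.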
 When the image sensor of the endoscope faces the mucosa nearly frontally, the tips of the villi on the mucosal surface point toward the image sensor, so villi of nearly circular shape are observed in the intraluminal image. In this case, the villus generation unit 121a selects a nearly circular villus model and pastes it onto the mucosal region.
 On the other hand, when the image sensor of the endoscope faces the mucosa obliquely, the endoscope images the villi on the mucosal surface from a nearly lateral direction, so elongated villi protruding from their proximal ends toward their distal ends are observed in the intraluminal image. In this case, the villus generation unit 121a selects the commonly known elongated villus model.
 When an elongated villus model is selected, the villus generation unit 121a determines the orientation in which to paste it. Specifically, the orientation of the villus model is adjusted so that its proximal end faces the near side of the lumen and its distal end faces the far side of the lumen.
 The far and near directions of the lumen can be determined by extracting, from the intraluminal image, a region whose luminance is lower than a predetermined value and whose area is larger than a predetermined value, and taking the position of lowest luminance within this region as the far end of the lumen. The villus model is then rotated so that its distal side points toward this far end and pasted onto the mucosal region.
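A minimal sketch of the rule above. It treats all sub-threshold pixels as one dark region; a real implementation would first label connected components, so that simplification, along with the threshold parameters, is an assumption.

```python
def lumen_far_end(luma, lum_thresh, min_area):
    """Return the (row, col) of the darkest pixel inside the large
    low-luminance region, taken as the far end of the lumen, or None
    when the dark region is too small to be reliable.

    luma: 2-D list of luminance values.
    """
    # collect all pixels darker than the threshold
    dark = [(y, x) for y, row in enumerate(luma)
            for x, v in enumerate(row) if v < lum_thresh]
    if len(dark) < min_area:  # dark region too small: no direction found
        return None
    # position of minimum luminance within the dark region
    return min(dark, key=lambda p: luma[p[0]][p[1]])
```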
(1-2) Considering the depth of the mucosal region
 When considering the depth of the mucosal region in the intraluminal image, the villus generation unit 121a first calculates the luminance of each pixel of the intraluminal image, judging regions of relatively low luminance to be deep and regions of relatively high luminance to be shallow. Where the depth is large the villi appear small, and where the depth is small the villi appear large. The villus generation unit 121a therefore selects a villus model of a size corresponding to the depth of the mucosal region in the intraluminal image and pastes it onto the mucosal region. Alternatively, the villus generation unit 121a may enlarge or reduce an arbitrarily selected villus model according to the depth of the mucosal region before pasting it.
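The depth-dependent scaling can be sketched as below. The text only states the monotonic rule (brighter/shallower means larger villi); the linear mapping and the reference luminance 128 are illustrative assumptions.

```python
def villus_scale_for_depth(patch, reference_luminance=128.0):
    """Magnification factor for a villus model from the mean luminance
    of the mucosal patch it will be pasted onto: brighter (shallower)
    patches yield factors above 1, darker (deeper) patches below 1.
    The linear mapping and reference value are assumptions."""
    values = [v for row in patch for v in row]
    return (sum(values) / len(values)) / reference_luminance
```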
 Alternatively, the villus generation unit 121a may select a villus model according to the size of the villi shown in the mucosal region. Specifically, it first extracts edges from the mucosal region; each edge represents the contour of an individual villus. The villus generation unit 121a then selects a villus model whose size is close to the spacing between these edges. Alternatively, it may enlarge or reduce an arbitrarily selected villus model according to the spacing between the edges.
(1-3) Considering the brightness of the mucosal region
 The villus generation unit 121a first calculates the luminance of each pixel constituting the mucosal region from its pixel values, and then calculates the average of these luminances (the average luminance value). The villus generation unit 121a then compares the average luminance values of the stored villus models with the average luminance value of the mucosal region and selects a villus model whose average luminance value equals that of the mucosal region or lies within a predetermined range of it (for example, within ±several tens of percent). If no stored villus model satisfies this condition, the villus generation unit 121a may select the villus model whose average luminance value is closest to that of the mucosal region.
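The selection-with-fallback logic above can be sketched as follows; representing each stored model by a precomputed (id, mean-luminance) pair and using 10% for the unspecified "several tens of percent" tolerance are assumptions.

```python
def select_model_by_mean_luminance(models, target_mean, tolerance=0.10):
    """models: list of (model_id, mean_luminance) pairs.

    Prefer a model whose mean luminance lies within +/- tolerance of
    the mucosal region's mean luminance; when none qualifies, fall
    back to the closest model, as the text allows."""
    within = [m for m in models
              if abs(m[1] - target_mean) <= tolerance * target_mean]
    candidates = within if within else models
    return min(candidates, key=lambda m: abs(m[1] - target_mean))
```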
 Subsequently, the villus generation unit 121a adjusts the luminance of the villus model so that the difference in luminance between the area where the villus model is pasted and its surroundings within the mucosal region does not become too large. Specifically, for areas of the mucosal region with relatively high luminance, the luminance of the villus model is adjusted upward before pasting; conversely, for areas with relatively low luminance, the luminance of the villus model is adjusted downward before pasting.
(1-4) Considering the color of the mucosal region
 The villus generation unit 121a first extracts the R value from the pixel values of each pixel constituting the mucosal region and calculates the average of the R values (the average R value). The villus generation unit 121a compares the average R values of the stored villus models with the average R value of the mucosal region and selects a villus model whose average R value equals that of the mucosal region or lies within a predetermined range of it (for example, within several tens of percent). If no stored villus model satisfies this condition, the villus generation unit 121a may select the villus model whose average R value is closest to that of the mucosal region.
 Subsequently, the villus generation unit 121a creates a histogram of the R values of the selected villus model and multiplies the histogram by a coefficient so that the median R value of the villus model becomes equal to the median R value of the mucosal region, thereby adjusting the R values of the villus model. The villus model with the adjusted R values is then pasted onto the mucosal region.
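Multiplying a histogram by a single coefficient is equivalent to scaling every sample value by that coefficient, so the median-matching step can be sketched directly on the value list:

```python
def match_median(model_values, region_values):
    """Scale the model's channel values by one coefficient so that the
    model's median equals the region's median (the coefficient-
    multiplied histogram step). Works identically for the R, G, B,
    R/G, or R/B channels."""
    def median(xs):
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    k = median(region_values) / median(model_values)
    return [v * k for v in model_values]
```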
 The villus generation unit 121a may adjust the G value, B value, R/G value, R/B value, and the like of the villus model by the same method.
(1-5) Considering the density of the mucosal region
 The villus generation unit 121a first applies a Fourier transform to the mucosal region, converting its image into a frequency-space image, and acquires a frequency from the resulting frequency distribution. The acquired frequency may be, for example, the frequency of maximum intensity or the frequency of minimum intensity, or it may be the median of the frequency distribution. The villus generation unit 121a then selects a villus model whose frequency lies within a predetermined range of the acquired frequency of the mucosal region. Alternatively, the villus model whose frequency is closest to that of the mucosal region may be selected.
 If the selected villus model contains images of multiple villi, the villus generation unit 121a pastes it onto the mucosal region as is. If the selected villus model contains only a single villus image, the model is duplicated, and the placement interval of the copies is adjusted to approximate the frequency of the mucosal region before pasting.
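The frequency-acquisition step can be sketched with a direct DFT. The description works on the 2-D mucosal image; taking a 1-D luminance profile here is a simplification for brevity, and selecting the maximum-intensity frequency is just one of the options the text permits.

```python
import cmath

def dominant_frequency(signal):
    """Frequency index (cycles per signal length) with maximum DFT
    magnitude, ignoring the DC term. Direct O(n^2) DFT of a 1-D
    luminance profile; a real implementation would use a 2-D FFT."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        x = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        if abs(x) > best_mag:
            best_k, best_mag = k, abs(x)
    return best_k
```

A denser villus texture shows up as a higher dominant frequency, which is then matched against the stored models' frequencies.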
 The villus generation unit 121a may execute the methods (1-1) to (1-5) above individually or in appropriate combination. As one example, a villus model selected in consideration of the positional relationship between the mucosa and the endoscope may be enlarged or reduced to match the depth of the mucosal region, or its color and brightness may be adjusted to match those of the mucosal region.
 When combining multiple methods, or when pasting multiple villus models selected by different methods onto the same mucosal region, applying an additional smoothing process or the like after pasting can reduce the visual discontinuity at the boundary between the villus model and the mucosal region. FIG. 4 is a schematic diagram showing a new image generated by pasting a villus model onto the mucosal region shown in FIG. 3.
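The boundary-smoothing step can be sketched as a paste with a feathered border. The text does not specify the smoothing method; the fixed 50/50 blend over a 1-pixel border below is a minimal stand-in for, e.g., a Gaussian-weighted blend.

```python
def paste_with_feather(base, patch, top, left, border=1):
    """Paste `patch` into `base` (both 2-D lists of values), blending
    the outermost `border` rows/columns 50/50 with the background to
    soften the seam between model and mucosa. Modifies base in place."""
    ph, pw = len(patch), len(patch[0])
    for y in range(ph):
        for x in range(pw):
            on_border = (y < border or x < border or
                         y >= ph - border or x >= pw - border)
            v = patch[y][x]
            if on_border:
                v = 0.5 * v + 0.5 * base[top + y][left + x]
            base[top + y][left + x] = v
    return base
```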
 Likewise, the blood vessel generation unit 121b generates a new image in which a blood vessel region is composited into the mucosal region by pasting a blood vessel model onto the mucosal region. Specifically, the blood vessel generation unit 121b selects one of the blood vessel models stored in advance and pastes it onto the mucosal region. Alternatively, a blood vessel model may be pasted onto an image in which a villus region has already been composited into the mucosal region (see FIG. 4).
 As for the selection method, a plurality of blood vessel models may be displayed on the display unit 40 (see FIG. 1) so that the user can select an arbitrary blood vessel model, or the blood vessel generation unit 121b may select one at random. Alternatively, as described below, the blood vessel generation unit 121b may select a blood vessel model as appropriate according to the surface properties of the mucosal region. Methods of selecting and adjusting a blood vessel model according to the surface properties of the mucosal region are described below.
(2-1) Considering the positional relationship between the mucosa and the image sensor of the endoscope
 The blood vessel generation unit 121b first acquires the positional relationship between the mucosa and the image sensor of the endoscope. This positional relationship can be judged by extracting the blood vessel region from the R values of the pixels constituting the mucosal region and using the vessel thickness calculated from the edges of the blood vessel region. That is, when the ratio of the thickness at one end of a vessel to the thickness at its other end is within a predetermined range, the endoscope is judged to face the mucosa frontally; when the ratio exceeds the predetermined range, the image sensor of the endoscope is judged to face the mucosa obliquely. Alternatively, when the endoscope that captured the intraluminal image is provided with an acceleration sensor and its detection values are attached to the intraluminal image as supplementary image information, the positional relationship between the mucosa and the endoscope may be acquired from this supplementary information.
 When the image sensor of the endoscope faces the mucosa nearly frontally, vessel thickness does not vary greatly across the mucosal region, so the blood vessel generation unit 121b selects a blood vessel model of uniform thickness and pastes it onto the mucosal region.
 On the other hand, when the image sensor of the endoscope faces the mucosa obliquely, vessel thickness can vary greatly across the mucosal region, so the blood vessel generation unit 121b selects a blood vessel model of non-uniform thickness. Furthermore, the blood vessel generation unit 121b calculates the end-to-end thickness ratio of a vessel from the edges of the blood vessel region extracted when judging the positional relationship between the mucosa and the image sensor, rotates the blood vessel model so that its end-to-end thickness ratio is similar, and then pastes it onto the mucosal region. As a concrete example, if the vessels become thinner from the left of the screen toward the right, the blood vessel model is likewise rotated so that it becomes thinner from left to right before being pasted onto the mucosal region.
(2-2) Considering the depth of the mucosal region
 The blood vessel generation unit 121b first judges the depth of the mucosal region in the intraluminal image in the same manner as in (1-2) above. Where the depth is large the vessels appear thin, and where the depth is small they appear thick, so the blood vessel generation unit 121b selects a blood vessel model of a thickness corresponding to the depth of the mucosal region. Alternatively, the blood vessel generation unit 121b may enlarge or reduce an arbitrarily selected blood vessel model according to the depth of the mucosal region; specifically, the model is reduced where the depth is large and enlarged where the depth is small. The enlargement or reduction ratio is determined, based on the vessel thickness calculated from the edges of the blood vessel region extracted from the mucosal region, so that the thickness of the blood vessel model approximates the thickness of the vessels present in the mucosal region.
(2-3) Considering the brightness of the blood vessels
 The blood vessel generation unit 121b first calculates the luminance of each pixel constituting the blood vessel region within the mucosal region from its pixel values, and then calculates the average of these luminances (the average luminance value). The blood vessel generation unit 121b then compares the average luminance values of the stored blood vessel models with the average luminance value of the blood vessel region and selects a blood vessel model whose average luminance value equals that of the blood vessel region or lies within a predetermined range of it (for example, within ±several tens of percent). If no stored blood vessel model satisfies this condition, the blood vessel generation unit 121b may select the blood vessel model whose average luminance value is closest to that of the blood vessel region.
 Subsequently, the blood vessel generation unit 121b adjusts the luminance of the blood vessel model so that the difference in luminance between the area where the blood vessel model is pasted and the surrounding blood vessel regions within the mucosal region does not become too large. Specifically, where the luminance of the surrounding blood vessel regions is high, the luminance of the blood vessel model is adjusted upward before pasting; conversely, where it is low, the luminance of the blood vessel model is adjusted downward before pasting.
(2-4) Considering the color of the blood vessel region
 The blood vessel generation unit 121b first extracts the R value from the pixel values of each pixel constituting the blood vessel region within the mucosal region and calculates the average of the R values (the average R value). The blood vessel generation unit 121b compares the average R values of the stored blood vessel models with the average R value of the blood vessel region and selects a blood vessel model whose average R value equals that of the blood vessel region or lies within a predetermined range of it (for example, within several tens of percent). If no stored blood vessel model satisfies this condition, the blood vessel generation unit 121b may select the blood vessel model whose average R value is closest to that of the blood vessel region.
 Subsequently, the blood vessel generation unit 121b creates a histogram of the R values of the selected blood vessel model and multiplies the histogram by a coefficient such that the median R value of the model becomes equal to the median R value of the blood vessel region, thereby adjusting the R values of the model. The blood vessel model with the adjusted R values is then pasted onto the mucosal region.
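As a sketch of this median-matching coefficient, assuming 8-bit R channels stored as NumPy arrays (the function name is hypothetical):

```python
import numpy as np

def scale_model_r_to_region(model_r, region_r):
    """Adjust the model's R channel by a single multiplicative
    coefficient so that its median matches the median R value of
    the real blood vessel region, then clip to the 8-bit range."""
    med_model = np.median(model_r)
    if med_model == 0:
        return model_r.astype(float)  # nothing sensible to scale
    coeff = np.median(region_r) / med_model
    return np.clip(model_r * coeff, 0, 255)
```

Scaling every value by one coefficient shifts the whole histogram while preserving its shape, which is what multiplying the histogram by a coefficient amounts to.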
 The blood vessel generation unit 121b may adjust the G value, B value, R/G value, R/B value, and the like of the blood vessel model by the same method.
(2-5) When the density of the blood vessel regions is considered
 The blood vessel generation unit 121b first extracts the edges of the blood vessel regions within the mucosal region and thereby calculates the pitch between the blood vessel regions. From among the stored blood vessel models, it then selects one whose pitch falls within a predetermined range (for example, within several tens of percent) of the pitch between the blood vessel regions. When no blood vessel model having a pitch within this range is stored, the blood vessel generation unit 121b selects, from among the stored models, the one whose pitch is closest to the pitch between the blood vessel regions.
 When the selected blood vessel model contains a plurality of blood vessels, the blood vessel generation unit 121b pastes it onto the mucosal region as it is. When the selected model contains only one blood vessel, the model is duplicated, and the copies are pasted onto the mucosal region with their spacing adjusted to approximate the spacing between the blood vessel regions.
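The pitch-based selection with fallback described in (2-5) can be sketched as a small lookup. The model catalog and the 30% tolerance are assumptions for illustration; the patent only says "within several tens of percent".

```python
def select_model_by_pitch(models, region_pitch, tolerance=0.3):
    """Pick, from `models` (a dict of model name -> pitch in pixels),
    a model whose pitch is within `tolerance` (fractional) of the
    region pitch; otherwise fall back to the closest pitch."""
    within = {n: p for n, p in models.items()
              if abs(p - region_pitch) <= tolerance * region_pitch}
    pool = within if within else models
    return min(pool, key=lambda n: abs(pool[n] - region_pitch))
```

When no model qualifies, the fallback branch mirrors the patent's "select the model with the closest pitch" rule.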
(2-6) When the shape of the blood vessels is considered
 The blood vessel generation unit 121b may select or adjust the blood vessel model based on the shape of the blood vessels, that is, on their number of branches, thickness, or length. Specifically, the blood vessel generation unit 121b extracts edges from the blood vessel region within the mucosal region and counts the intersections between the extracted edges; the number of intersections corresponds to the number of blood vessel branches. The blood vessel generation unit 121b can increase the number of branches by pasting an arbitrarily selected blood vessel model onto the mucosal region so that its edges cross the edges extracted from the blood vessel region.
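One crude way to count such intersections is to treat the extracted edges as a binary skeleton and count pixels with three or more skeleton neighbours. This is a stand-in heuristic, not the patent's method, and 8-connectivity slightly inflates counts near a junction:

```python
import numpy as np

def count_branch_points(skel):
    """Count branch (intersection) points in a binary edge/skeleton
    image: skeleton pixels with three or more 8-connected skeleton
    neighbours. `skel` is a 2-D boolean array."""
    skel = np.asarray(skel, bool)
    padded = np.pad(skel, 1)  # zero border keeps np.roll wrap harmless
    nb = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))[1:-1, 1:-1]
    return int(np.count_nonzero(skel & (nb >= 3)))
```

A straight edge yields zero branch points; a cross yields several, since the pixels adjacent to the junction also gain three neighbours under 8-connectivity.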
 Alternatively, the blood vessel generation unit 121b may reduce the number of branches by filling in part of the blood vessel region within the mucosal region with the pixel values of the surrounding pixels.
 The blood vessel generation unit 121b may also calculate the thickness of the blood vessel region from the edges extracted from it, and select and paste onto the mucosal region a blood vessel model whose thickness differs from the calculated thickness. The thickness of the blood vessels depicted in the mucosal region can thereby be varied.
 Likewise, the blood vessel generation unit 121b may calculate the length of the blood vessel region from the extracted edges, and select and paste onto the mucosal region a blood vessel model whose length differs from the calculated length. The length of the blood vessels depicted in the mucosal region can thereby be varied.
 The blood vessel generation unit 121b may execute the above methods (2-1) to (2-6) individually or in appropriate combination. As an example, a blood vessel model selected in consideration of the color of the blood vessel region may be enlarged or reduced according to the depth of the mucosal region. When a plurality of methods are combined, or when a plurality of blood vessel models selected by different methods are pasted onto the same mucosal region, applying an additional smoothing process after pasting reduces the visual incongruity at the boundary between the blood vessel models and the mucosal region. FIG. 5 is a schematic diagram showing a new image generated by further pasting a blood vessel model onto the image shown in FIG. 4.
 Referring again to FIG. 2, in step S14 following step S13, the image generation unit 120 outputs the newly generated image and stores it in the storage unit 50 as a learning sample. The operation of the arithmetic unit 100 then ends.
 As described above, according to the first embodiment of the present invention, a new image is generated by pasting, onto the mucosal region extracted from an intraluminal image, a villus model or blood vessel model selected and adjusted based on the surface properties of that mucosal region. Learning samples that appropriately reflect the state inside the lumen can thereby be obtained.
(Embodiment 2)
 Next, a second embodiment of the present invention will be described. FIG. 6 is a block diagram showing the configuration of the arithmetic unit provided in an image processing apparatus according to Embodiment 2 of the present invention. The image processing apparatus according to Embodiment 2 includes the arithmetic unit 200 shown in FIG. 6 in place of the arithmetic unit 100 shown in FIG. 1. The configuration and operation of the other units of the image processing apparatus are the same as in Embodiment 1.
 In addition to the mucosal region extraction unit 110 and the image generation unit 120, the arithmetic unit 200 includes an identification criterion creation unit 210 that uses the new images generated by the image generation unit 120 to create an identification criterion for identifying regions within intraluminal images. The configurations and operations of the mucosal region extraction unit 110 and the image generation unit 120 are the same as in Embodiment 1.
 The identification criterion creation unit 210 creates a criterion for identifying, within a mucosal region, regions having specific characteristics different from normal mucosa, such as lesions or lesion candidates suspected of being lesions, as abnormal regions. The identification criterion creation unit 210 has a weight setting unit 211 that sets different weights for the feature values calculated from the mucosal regions extracted from intraluminal images and the feature values calculated from the new images generated by the image generation unit 120.
 Next, the operation of the arithmetic unit 200 will be described. FIG. 7 is a flowchart showing the operation of the arithmetic unit 200. Steps S10 to S14 in FIG. 7 are, as a whole, the same as in Embodiment 1. However, when the mucosal region extracted in step S11 is stored in the storage unit 50, a flag indicating that it is an original mucosal region is attached, and when the image generated in step S13 is stored in the storage unit 50, a flag indicating that it is a newly generated image is attached. Hereinafter, the image generated in step S13 is also referred to as a generated image.
 In step S21 following step S14, the identification criterion creation unit 210 reads the mucosal region extracted in step S11 and the generated image created in step S13 from the storage unit 50, and creates an identification criterion based on the mucosal region and the generated image.
 FIG. 8 is a flowchart showing the identification criterion creation process in step S21. First, in step S211, the identification criterion creation unit 210 calculates color feature values, shape feature values, and texture feature values as the feature values of the mucosal region and the generated image. The types of feature values are the same as those described for step S11.
 In the subsequent step S212, the weight setting unit 211 sets weights for the feature values calculated from the mucosal region extracted from the intraluminal image and from the generated image, respectively. Specifically, the weight setting unit 211 distinguishes the mucosal region from the generated image based on the attached flags, and sets the weights so that the weight given to the feature values of the mucosal region is larger than the weight given to the feature values of the generated image. This is because the mucosal region, which depicts actual mucosa extracted from an intraluminal image, is highly reliable, whereas the generated image, which is created from models on the basis of various assumptions, is relatively less reliable than the mucosal region.
 In the subsequent step S213, the identification criterion creation unit 210 creates a feature value distribution by multiplying the feature values calculated in step S211 by the weights set in step S212, and creates an identification criterion from this distribution using a learning machine such as a support vector machine (SVM). In this computation, the weights set in step S212 are applied to the respective feature values of the mucosal region and the generated image.
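The patent trains an SVM on the weighted feature distribution. As a dependency-light stand-in, the sketch below shows how per-sample weights (larger for real mucosal regions, smaller for generated images) enter a training step, using a weighted nearest-centroid classifier as a substitute for the SVM. All names are hypothetical and the learner is not the patent's.

```python
import numpy as np

def fit_weighted_centroids(features, labels, weights):
    """Compute a per-class weighted mean feature vector. Real mucosal
    samples carry larger weights than generated samples, so they pull
    the class centroid (the identification criterion here) harder."""
    centroids = {}
    for c in np.unique(labels):
        m = labels == c
        w = weights[m][:, None]
        centroids[c] = (features[m] * w).sum(0) / w.sum()
    return centroids

def classify(x, centroids):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

With an SVM the same idea appears as per-sample weights in the training objective; here it appears as a weighted average.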
 In step S22 following step S21, the identification criterion creation unit 210 outputs the created identification criterion and stores it in the storage unit 50. The operation of the arithmetic unit 200 then ends.
 As described above, according to the second embodiment of the present invention, an identification criterion is created using, as learning samples, both the mucosal regions extracted from intraluminal images and the images generated from those mucosal regions, so a highly reliable identification criterion can be created. Moreover, varying the weights given to the feature values of the mucosal regions and the generated images makes it possible to further improve the reliability of the identification criterion.
(Embodiment 3)
 Next, a third embodiment of the present invention will be described. FIG. 9 is a block diagram showing the configuration of the arithmetic unit provided in an image processing apparatus according to Embodiment 3 of the present invention. The image processing apparatus according to Embodiment 3 includes the arithmetic unit 300 shown in FIG. 9 in place of the arithmetic unit 100 shown in FIG. 1. The configuration and operation of the other units of the image processing apparatus are the same as in Embodiment 1.
 The arithmetic unit 300 includes the mucosal region extraction unit 110 and an image generation unit 310. The operation of the mucosal region extraction unit 110 is the same as in Embodiment 1.
 The image generation unit 310 generates a new image by processing the mucosal region extracted by the mucosal region extraction unit 110. More specifically, the image generation unit 310 has a non-mucosal region generation unit 311 that generates regions representing subjects other than biological mucosa, that is, non-mucosal regions.
 The non-mucosal region generation unit 311 generates a new image by combining the mucosal region with a bubble region, a residue region, or a treatment tool region in which a treatment tool appears, or by generating or erasing a halation region or a dark region in the mucosal region. The non-mucosal region generation unit 311 stores one or more of each of a bubble model representing bubbles, a residue model representing residue, and a treatment tool model representing a treatment tool.
 Next, the operation of the arithmetic unit 300 will be described. FIG. 10 is a flowchart showing the operation of the arithmetic unit 300. Steps S10 to S12 in FIG. 10 are the same as in Embodiment 1.
 In step S31 following step S12, the image generation unit 310 generates a new image in which a non-mucosal region is combined with the mucosal region, based on the surface properties of the mucosal region. The generation of a new image is described below for each type of non-mucosal region.
(3-1) When a bubble region is combined with the mucosal region
 The non-mucosal region generation unit 311 generates a new image in which a bubble region is combined with the mucosal region by pasting a bubble model onto the mucosal region. Specifically, the non-mucosal region generation unit 311 selects one of the bubble models stored in advance and pastes it onto the mucosal region. As for the selection method, a plurality of bubble models may be displayed on the display unit 40 (see FIG. 1) so that the user can select an arbitrary one, or the non-mucosal region generation unit 311 may select one at random. Alternatively, as described below, the non-mucosal region generation unit 311 may select a bubble model according to the surface properties of the mucosal region. Methods for selecting and adjusting a bubble model according to the surface properties of the mucosal region are described below.
(3-1-1) When the depth of the mucosal region is considered
 The non-mucosal region generation unit 311 first judges the depth of the mucosal region in the intraluminal image in the same manner as in (1-2) of Embodiment 1. Since bubbles appear small when the depth is large and large when the depth is small, the non-mucosal region generation unit 311 selects a bubble model of a size corresponding to the depth of the mucosal region and pastes it onto the mucosal region. Alternatively, an arbitrarily selected bubble model may be enlarged or reduced according to the depth of the mucosal region and then pasted onto the mucosal region.
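The depth-dependent enlargement or reduction can be sketched with nearest-neighbour resampling. The inverse-proportional scale factor and reference depth are assumptions for illustration:

```python
import numpy as np

def scale_for_depth(model, depth, ref_depth):
    """Resize a model patch in inverse proportion to scene depth:
    subjects farther away (larger depth) are drawn smaller.
    Nearest-neighbour resampling keeps the sketch dependency-free."""
    factor = ref_depth / depth
    h = max(1, int(round(model.shape[0] * factor)))
    w = max(1, int(round(model.shape[1] * factor)))
    rows = np.clip((np.arange(h) / factor).astype(int), 0, model.shape[0] - 1)
    cols = np.clip((np.arange(w) / factor).astype(int), 0, model.shape[1] - 1)
    return model[np.ix_(rows, cols)]
```

Doubling the depth halves the pasted patch, matching "bubbles appear small when the depth is large".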
(3-1-2) When the brightness of the mucosal region is considered
 The non-mucosal region generation unit 311 first calculates the luminance of each pixel constituting the mucosal region from its pixel value, and then calculates the average of these luminances (the average luminance value). The non-mucosal region generation unit 311 then compares the average luminance values of the bubble models stored in advance with the average luminance value of the mucosal region, and selects a bubble model whose average luminance value is equal to that of the mucosal region or falls within a predetermined range of it (for example, within ±several tens of percent). When no bubble model satisfying this condition is stored, the non-mucosal region generation unit 311 may select the bubble model whose average luminance value is closest to that of the mucosal region.
 Subsequently, the non-mucosal region generation unit 311 adjusts the luminance of the bubble model so that, within the mucosal region, the difference in luminance between the region where the model is to be pasted and its surroundings does not become too large. Specifically, for a region of the mucosal region with relatively high luminance, the bubble model is pasted after its luminance has been raised; conversely, for a region with relatively low luminance, the bubble model is pasted after its luminance has been lowered.
(3-1-3) When the density of the bubbles is considered
 The non-mucosal region generation unit 311 determines the range of pitches at which bubble models are to be pasted onto the mucosal region. This pitch range may be determined at random within a predetermined range, or the user may be allowed to input an arbitrary pitch range. The non-mucosal region generation unit 311 then pastes arbitrarily selected bubble models onto the mucosal region, overlapping them or spacing them apart so that they fall within the determined pitch range.
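Placing patches at randomly drawn pitches can be sketched in one dimension as follows; the patent does not specify the placement routine, so the function, its parameters, and the one-row simplification are all assumptions:

```python
import random

def place_bubbles(region_w, bubble_w, pitch_range, rng=None):
    """Return x-positions for bubble patches along one image row,
    drawing the spacing between consecutive bubbles from
    `pitch_range` (inclusive bounds, in pixels)."""
    rng = rng or random.Random(0)
    xs, x = [], 0
    while x + bubble_w <= region_w:
        xs.append(x)
        x += rng.randint(*pitch_range)
    return xs
```

A pitch range narrower than the bubble width produces overlapping patches; a wider range spaces them apart, matching the "overlapping them or spacing them apart" behavior above.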
 The non-mucosal region generation unit 311 may execute the above methods (3-1-1) to (3-1-3) individually or in appropriate combination. When a plurality of methods are combined, or when a plurality of bubble models differing in shape or size are used together, applying an additional smoothing process after pasting the bubble models onto the mucosal region reduces the visual incongruity at the boundary between the bubble models and the mucosal region.
(3-2) When a residue region is combined with the mucosal region
 The non-mucosal region generation unit 311 generates an image in which a residue region is combined with the mucosal region by pasting a residue model onto the mucosal region. Specifically, the non-mucosal region generation unit 311 selects one of the residue models stored in advance and pastes it onto the mucosal region. The residue model may be selected by the user, or the non-mucosal region generation unit 311 may select one at random. Alternatively, as described below, the non-mucosal region generation unit 311 may select a residue model according to the surface properties of the mucosal region. Methods for selecting and adjusting a residue model according to the surface properties of the mucosal region are described below.
(3-2-1) When the depth of the mucosal region is considered
 The non-mucosal region generation unit 311 first judges the depth of the mucosal region in the intraluminal image in the same manner as in (1-2) of Embodiment 1. Since residue appears small when the depth is large and large when the depth is small, the non-mucosal region generation unit 311 selects a residue model of a size corresponding to the depth of the mucosal region and pastes it onto the mucosal region. Alternatively, an arbitrarily selected residue model may be enlarged or reduced according to the depth of the mucosal region and then pasted onto the mucosal region.
(3-2-2) When the brightness of the mucosal region is considered
 The non-mucosal region generation unit 311 calculates the average luminance value of the mucosal region in the same manner as in (3-1-2) above. It then compares the average luminance values of the residue models stored in advance with the average luminance value of the mucosal region, and selects a residue model whose average luminance value is equal to that of the mucosal region or falls within a predetermined range of it (for example, within ±several tens of percent). When no residue model satisfying this condition is stored, the non-mucosal region generation unit 311 may select the residue model whose average luminance value is closest to that of the mucosal region.
 Subsequently, the non-mucosal region generation unit 311 adjusts the luminance of the residue model so that, within the mucosal region, the difference in luminance between the region where the model is to be pasted and its surroundings does not become too large. Specifically, for a region of the mucosal region with relatively high luminance, the residue model is pasted after its luminance has been raised; conversely, for a region with relatively low luminance, the residue model is pasted after its luminance has been lowered.
(3-2-3) When the density of the residue is considered
 The non-mucosal region generation unit 311 determines the range of pitches at which residue models are to be pasted onto the mucosal region. This pitch range may be determined at random within a predetermined range, or the user may be allowed to input an arbitrary pitch range. The non-mucosal region generation unit 311 then pastes arbitrarily selected residue models onto the mucosal region, overlapping them or spacing them apart so that they fall within the determined pitch range.
 The non-mucosal region generation unit 311 may execute the above methods (3-2-1) to (3-2-3) individually or in appropriate combination. When a plurality of methods are combined, or when a plurality of residue models differing in shape or size are used together, applying an additional smoothing process after pasting the residue models onto the mucosal region reduces the visual incongruity at the boundary between the residue models and the mucosal region.
(3-3-1) When a halation region is generated in the mucosal region
 The non-mucosal region generation unit 311 determines a region on the mucosal region in which a halation region is to be generated. This region may be determined at random by the non-mucosal region generation unit 311, or the mucosal region may be displayed on the display unit 40 so that the user can designate it. The non-mucosal region generation unit 311 turns the determined region into a halation region by raising its luminance values above a predetermined threshold.
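A minimal sketch of this step, assuming a [0, 1] luminance image and a square target region (both assumptions; the patent does not constrain the region's shape):

```python
import numpy as np

def add_halation(lum, top, left, size, threshold=0.9):
    """Turn a square patch of a luminance image (values in [0, 1])
    into a synthetic halation region by forcing every pixel in the
    patch up to at least `threshold`."""
    out = lum.copy()
    patch = out[top:top + size, left:left + size]
    np.maximum(patch, threshold, out=patch)  # write through the view
    return out
```

Pixels already above the threshold are left unchanged, so an existing bright spot inside the chosen region is preserved.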
(3-3-2) When a halation region is erased from the mucosal region
 The non-mucosal region generation unit 311 extracts a halation region from the mucosal region. Specifically, it calculates a luminance value from the pixel value of each pixel constituting the mucosal region and judges a region whose luminance values exceed a predetermined threshold to be a halation region. Alternatively, the mucosal region may be displayed on the display unit 40 so that the user can designate the halation region. The non-mucosal region generation unit 311 erases the halation region by interpolating it using the pixel values of the surrounding pixels.
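The detection-plus-interpolation step can be sketched as a crude diffusion inpainting; the patent does not specify the interpolation scheme, so the 4-neighbour averaging, iteration count, and [0, 1] scale are assumptions:

```python
import numpy as np

def erase_halation(lum, threshold=0.9, iters=50):
    """Detect pixels whose luminance exceeds `threshold` and
    repeatedly replace them with the mean of their valid (non-
    halation) 4-neighbours. Note: np.roll wraps at the borders,
    so this sketch only targets halation in the image interior."""
    out = lum.astype(float).copy()
    mask = out > threshold
    for _ in range(iters):
        if not mask.any():
            break
        sm = np.zeros_like(out)
        cnt = np.zeros_like(out)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            sm += np.roll(np.where(mask, 0.0, out), (dy, dx), (0, 1))
            cnt += np.roll((~mask).astype(float), (dy, dx), (0, 1))
        fill = mask & (cnt > 0)          # pixels with a valid neighbour
        out[fill] = sm[fill] / cnt[fill]  # neighbour-mean interpolation
        mask &= ~fill                     # shrink the halation inward
    return out
```

Each pass fills the outermost ring of the halation region from its surroundings, so larger regions simply need more iterations.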
 The non-mucosal region generation unit 311 may execute either the generation or the erasure of halation regions described in (3-3-1) and (3-3-2) individually, or may execute both in combination.
(3-4-1) When a dark region is generated in the mucosal region
 The non-mucosal region generation unit 311 determines a region on the mucosal region in which a dark region is to be generated. This region may be determined at random by the non-mucosal region generation unit 311, or the mucosal region may be displayed on the display unit 40 so that the user can designate it. The non-mucosal region generation unit 311 turns the determined region into a dark region by lowering its luminance values below a predetermined threshold.
(3-4-2) When a dark region is erased from the mucosal region
 The non-mucosal region generation unit 311 extracts a dark region from the mucosal region. Specifically, it calculates a luminance value from the pixel value of each pixel constituting the mucosal region and judges a region whose luminance values are below a predetermined threshold to be a dark region. Alternatively, the mucosal region may be displayed on the display unit 40 so that the user can designate the dark region. The non-mucosal region generation unit 311 erases the dark region by interpolating it using the pixel values of the surrounding pixels.
 The non-mucosal region generation unit 311 may execute either the generation or the erasure of dark regions described in (3-4-1) and (3-4-2) individually, or may execute both in combination.
(3-5) When a treatment tool region is combined with the mucosal region
 The non-mucosal region generation unit 311 generates a new image in which a treatment tool region is combined with the mucosal region by pasting a treatment tool model onto the mucosal region. Specifically, the non-mucosal region generation unit 311 selects one of the treatment tool models stored in advance and pastes it onto the mucosal region. As for the selection method, a plurality of treatment tool models may be displayed on the display unit 40 so that the user can select an arbitrary one, or the non-mucosal region generation unit 311 may select one at random. Alternatively, as described below, the non-mucosal region generation unit 311 may select a treatment tool model according to the surface properties of the mucosal region. Methods for selecting and adjusting a treatment tool model according to the surface properties of the mucosal region are described below.
(3-5-1) Considering the depth of the mucosal region
 The non-mucosal region generation unit 311 first determines the depth of the mucosal region in the intraluminal image in the same manner as in (1-2) of the first embodiment. Since a treatment tool appears small when the depth is large and large when the depth is small, the non-mucosal region generation unit 311 selects a treatment tool model whose size corresponds to the depth of the mucosal region and pastes it onto the mucosal region. Alternatively, an arbitrarily selected treatment tool model may be enlarged or reduced according to the depth of the mucosal region before being pasted.
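The depth-dependent scaling above can be sketched as follows. This is an illustration only: the inverse-proportional pinhole relation and nearest-neighbour resampling are assumptions, since the patent does not fix the scaling rule.

```python
import numpy as np

def scale_model_for_depth(model, depth, ref_depth=1.0):
    """Enlarge or shrink a treatment-tool model patch according to the
    estimated depth of the mucosal region: a region twice as far away
    makes the tool appear half as large (simple pinhole assumption).
    Nearest-neighbour resampling keeps the sketch dependency-free."""
    factor = ref_depth / depth          # deeper -> smaller
    h, w = model.shape[:2]
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    ys = (np.arange(nh) / factor).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / factor).astype(int).clip(0, w - 1)
    return model[np.ix_(ys, xs)]
```

Doubling the depth halves the pasted model; halving the depth doubles it.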
(3-5-2) Considering the brightness of the mucosal region
 The non-mucosal region generation unit 311 calculates the average luminance value of the mucosal region in the same manner as in (3-1-2) above. The non-mucosal region generation unit 311 then compares the average luminance values of the pre-stored treatment tool models with the average luminance value of the mucosal region, and selects a treatment tool model whose average luminance value is equal to that of the mucosal region or within a predetermined range of it (for example, within ±several tens of percent). If no treatment tool model meeting this condition is stored, the non-mucosal region generation unit 311 may select the treatment tool model whose average luminance value is closest to that of the mucosal region.
 Subsequently, the non-mucosal region generation unit 311 adjusts the luminance of the treatment tool model so that, within the mucosal region, the luminance difference between the region where the model is pasted and its surrounding region does not become too large. Specifically, when pasting onto a relatively bright part of the mucosal region, the luminance of the treatment tool model is adjusted upward before pasting; conversely, when pasting onto a relatively dark part, the luminance of the model is adjusted downward before pasting.
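The luminance-matched pasting can be sketched as follows. This is an assumption-laden illustration: matching the patch mean to the local target mean is one simple way to realise the "difference not too large" condition; the patent does not specify the adjustment formula.

```python
import numpy as np

def paste_with_luminance_match(mucosa, model, top, left):
    """Paste a tool-model patch into a grayscale mucosa image after
    shifting the patch's mean luminance to match the local mean of the
    target area, so the pasted region is neither much brighter nor
    much darker than its surroundings."""
    h, w = model.shape
    target = mucosa[top:top + h, left:left + w]
    adjusted = model.astype(float) + (target.mean() - model.mean())
    out = mucosa.astype(float).copy()
    out[top:top + h, left:left + w] = np.clip(adjusted, 0, 255)
    return out
```

Pasting a dark patch into a bright area raises its luminance so the local means coincide.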
 The non-mucosal region generation unit 311 may execute the methods (3-5-1) and (3-5-2) above individually, or may combine them as appropriate. When combining multiple methods, or when using multiple treatment tool models of different shapes and sizes together, applying an additional smoothing process after pasting the treatment tool model onto the mucosal region can reduce the unnatural appearance at the boundary between the model and the mucosal region.
 In step S32 following step S31, the image generation unit 310 outputs the newly generated image and stores it in the storage unit 50 as a learning sample. The operation of the arithmetic unit 300 then ends.
 As described above, according to the third embodiment of the present invention, a new image is generated from the mucosal region extracted from an intraluminal image by pasting a foam model, residue model, or treatment tool model selected and adjusted based on the surface properties of the mucosal region, or by generating or erasing a halation region or dark region in the mucosal region. A learning sample that appropriately reflects the state inside the lumen can thus be obtained.
(Embodiment 4)
 Next, a fourth embodiment of the present invention will be described. FIG. 11 is a block diagram illustrating the configuration of the arithmetic unit included in an image processing apparatus according to the fourth embodiment of the present invention. The image processing apparatus according to the fourth embodiment includes the arithmetic unit 400 shown in FIG. 11 instead of the arithmetic unit 100 shown in FIG. 1. The configuration and operation of each part of the image processing apparatus other than the arithmetic unit 400 are the same as in the first embodiment.
 In addition to the mucosal region extraction unit 110 and the image generation unit 120, the arithmetic unit 400 includes a mucosal region attribute determination unit 410 that determines attributes of the mucosal region, and a generation method determination unit 420 that determines a method for generating new images based on those attributes. The configurations and operations of the mucosal region extraction unit 110 and the image generation unit 120 are the same as in the first embodiment.
 Here, the attributes of the mucosal region include features and properties such as whether an abnormal region having characteristics different from normal mucosa exists within the mucosal region, the type of any such abnormal region, the type of organ in which the mucosal region is located, and whether the mucosal region is an unnecessary region unsuitable for observation. An unnecessary region unsuitable for observation is, for example, a blurred region in which the image is out of focus, or a color-shift region in which a color shift has occurred in the image.
 Specifically, the mucosal region attribute determination unit 410 includes an abnormal region extraction unit 411 that extracts abnormal regions from the mucosal region, an abnormality type estimation unit 412 that estimates the type of each extracted abnormal region, an organ determination unit 413 that determines the type of organ in which the mucosal region is located, and an unnecessary region determination unit 414 that determines whether the mucosal region is an unnecessary region unsuitable for observation.
 The generation method determination unit 420 includes a generation number determination unit 421 that determines the number of images to be newly generated based on the determination result of the mucosal region attribute determination unit 410, that is, the attributes of the mucosal region.
 Next, the operation of the arithmetic unit 400 will be described. FIG. 12 is a flowchart showing the operation of the arithmetic unit 400. Steps S10 and S11 in FIG. 12 are the same as in the first embodiment.
 In step S41 following step S11, the mucosal region attribute determination unit 410 determines the attributes of the mucosal region extracted in step S11. FIG. 13 is a flowchart showing the mucosal region attribute determination process.
 In step S411, the abnormal region extraction unit 411 extracts abnormal regions from the mucosal region. Specifically, the abnormal region extraction unit 411 first calculates color feature values based on the pixel values of the pixels constituting the mucosal region, and performs threshold processing using a pre-created identification criterion for normal mucosal regions, thereby extracting mucosal regions having the characteristics of normal mucosa as normal regions and the remaining mucosal regions as abnormal regions. The color feature values include the R, G, and B pixel values; values secondarily calculated from them, specifically color ratios such as G/R and B/G; and hue, saturation, brightness, color difference, and the like. The identification criterion is created based on the color feature values of abnormal regions collected in advance.
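The threshold processing in step S411 can be sketched as follows. The G/R interval used here is purely illustrative; the actual identification criterion is learned from previously collected samples and is not disclosed.

```python
import numpy as np

def extract_abnormal_mask(rgb, gr_low=0.3, gr_high=0.7):
    """Label a pixel normal when its G/R colour ratio lies inside an
    interval standing in for the pre-learned normal-mucosa criterion,
    and abnormal otherwise.  Returns a boolean mask (True = abnormal)."""
    img = rgb.astype(float)
    gr = img[..., 1] / np.maximum(img[..., 0], 1e-6)  # avoid /0
    return (gr < gr_low) | (gr > gr_high)
```

A reddish pixel with very low G/R (e.g. bleeding-like colour) falls outside the normal interval and is flagged abnormal.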
 In subsequent step S412, the abnormality type estimation unit 412 estimates the type of each abnormal region extracted in step S411. Specifically, the abnormality type estimation unit 412 first calculates color feature values, shape feature values, and texture feature values for each abnormal region, and classifies these feature values using discrimination criteria created in advance for each type of abnormal region. The color feature values are those listed in step S411. The shape feature values include the area (number of pixels) of the abnormal region, its perimeter, Feret diameters including the horizontal and vertical Feret diameters, HOG features, and SIFT features. The texture feature values include LBP and the like. The discrimination criterion for each type of abnormal region is obtained by creating a probability density function from the feature value distributions of abnormal regions collected in advance for that type. Using these type-specific discrimination criteria, the abnormality type estimation unit 412 estimates each abnormal region to be bleeding, an ulcer, a tumor, or a villus abnormality.
 In subsequent step S413, the organ determination unit 413 determines the type of organ in which the mucosal region is located. Specifically, the organ determination unit 413 calculates the average R, G, and B values of the intraluminal image from which the mucosal region was extracted, and determines based on these values whether the imaged organ is the esophagus, stomach, small intestine, or large intestine.
 Specifically, the organ determination unit 413 determines which of the R, G, and B color ranges preset for the esophagus, stomach, small intestine, and large intestine contains the average R, G, and B values of the intraluminal image. For example, when the average R, G, and B values of the intraluminal image fall within the R, G, and B ranges for the esophagus, the organ determination unit 413 determines that the mucosal region in question is located in the esophagus. The same applies to the stomach, small intestine, and large intestine (reference: JP 2006-288612 A).
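The range test in step S413 can be sketched as follows. The numeric ranges below are invented placeholders; the patent only states that per-organ R/G/B ranges are preset, without giving values.

```python
import numpy as np

# Illustrative (R, G, B) average-value ranges per organ; the real
# preset ranges are not disclosed in the specification.
ORGAN_RANGES = {
    "esophagus":       ((120, 180), (40, 80),  (40, 80)),
    "stomach":         ((150, 220), (60, 110), (50, 90)),
    "small intestine": ((140, 210), (90, 150), (60, 110)),
    "large intestine": ((130, 200), (80, 140), (90, 150)),
}

def discriminate_organ(image):
    """Return the first organ whose preset R/G/B ranges all contain
    the image's per-channel averages, or None if no organ matches."""
    means = image.reshape(-1, 3).mean(axis=0)   # average R, G, B
    for organ, ranges in ORGAN_RANGES.items():
        if all(lo <= m <= hi for m, (lo, hi) in zip(means, ranges)):
            return organ
    return None
```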
 In subsequent step S414, the unnecessary region determination unit 414 determines whether the mucosal region is an unnecessary region unsuitable for observation, specifically, whether the mucosal region is blurred or a color shift has occurred in the mucosal region.
 Whether the mucosal region is blurred is determined as follows. The unnecessary region determination unit 414 extracts edges from the mucosal region by applying processing such as a Sobel filter or a Laplacian filter, and determines that the mucosal region is blurred when the edge strength in the mucosal region is equal to or less than a predetermined value.
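The blur test can be sketched with a hand-rolled Laplacian; the specific kernel, the mean-absolute-response measure of edge strength, and the threshold are illustrative assumptions.

```python
import numpy as np

def is_blurred(gray, strength_threshold=5.0):
    """Apply a 4-neighbour Laplacian and call the region blurred when
    the mean absolute response (edge strength) falls below a threshold.
    np.roll wraps at the borders, which is fine for a sketch."""
    g = gray.astype(float)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return float(np.abs(lap).mean()) < strength_threshold
```

A flat (featureless) patch yields near-zero response and is classed as blurred, while a high-contrast patch is not.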
 Whether a color shift has occurred in the mucosal region is determined as follows. The unnecessary region determination unit 414 calculates the differences between the R, G, and B values of a pixel of interest in the mucosal region and the R, G, and B values of its adjacent pixels, and determines whether each difference is equal to or less than a threshold. When the difference exceeds the threshold for at least one of R, G, and B, the unnecessary region determination unit 414 determines that a color shift has occurred at the pixel of interest. By performing this processing for every pixel of interest in the mucosal region, the color shift of the entire mucosal region is determined. Specifically, when the total number of pixels with a color shift is equal to or greater than a predetermined value, or when the ratio of such pixels to the total number of pixels in the mucosal region is equal to or greater than a predetermined value, the mucosal region is determined to be a color-shift region. The process then returns to the main routine.
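A vectorised sketch of the color-shift test follows. Comparing each pixel only against its right and lower neighbours, and the two threshold values, are simplifying assumptions; the patent compares against adjacent pixels without fixing the neighbourhood or thresholds.

```python
import numpy as np

def is_color_shift_region(rgb, diff_threshold=30, ratio_threshold=0.1):
    """Flag a pixel as colour-shifted when, for at least one of R, G, B,
    its difference to the right or lower neighbour exceeds the
    threshold; the region is a colour-shift region when the fraction
    of such pixels reaches `ratio_threshold`."""
    img = rgb.astype(float)
    d_right = np.abs(img[:, 1:] - img[:, :-1]).max(axis=2) > diff_threshold
    d_down = np.abs(img[1:, :] - img[:-1, :]).max(axis=2) > diff_threshold
    shifted = np.zeros(img.shape[:2], dtype=bool)
    shifted[:, :-1] |= d_right
    shifted[1:, :] |= d_down
    return bool(shifted.mean() >= ratio_threshold)
```

A uniform patch has no shifted pixels; a patch with an abrupt colour edge down its middle exceeds the ratio and is flagged.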
 In step S42 following step S41, the generation number determination unit 421 determines the number of new images to be generated based on the type of abnormal region, the type of organ, and the unnecessary region determination result obtained in step S41. Specifically, the number is determined as follows.
 The generation number determination unit 421 determines the number of images to be generated according to the importance of the type of abnormal region. Specifically, since the importance of abnormal regions increases in the order of villus abnormality, tumor, ulcer, and bleeding, the number of generated images is also increased in that order.
 The generation number determination unit 421 also determines the number of images to be generated according to the type of organ. Specifically, when the type of organ that is the target of the endoscopic examination matches the type of organ in which the mucosal region is located, a larger number of images is generated. The type of organ to be examined may be input by the user via the input unit 30, or may be stored in advance in the storage unit 50 as information related to the intraluminal image.
 When the mucosal region is an unnecessary region, the generation number determination unit 421 sets the number of images to be generated to a small value.
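The three rules above can be combined as in the following sketch. All concrete counts and multipliers are invented for illustration; the patent only specifies the ordering (villus abnormality < tumor < ulcer < bleeding), the boost for a matching organ, and the reduction for unnecessary regions.

```python
# Illustrative base counts; only their ordering reflects the patent.
ABNORMALITY_COUNTS = {"villus abnormality": 10, "tumor": 20,
                      "ulcer": 30, "bleeding": 40}

def decide_generation_count(abnormality, organ, target_organ,
                            is_unnecessary):
    """More images for more important abnormality types, more when the
    organ matches the examination target, few when the region is
    unsuitable for observation."""
    count = ABNORMALITY_COUNTS.get(abnormality, 10)
    if organ == target_organ:
        count *= 2                     # organ under examination
    if is_unnecessary:
        count = max(1, count // 10)    # blurred / colour-shifted region
    return count
```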
 In subsequent step S43, the image generation unit 120 acquires the surface properties of the mucosal region. This acquisition process is the same as in the first embodiment (see step S12 in FIG. 2).
 In subsequent step S44, the image generation unit 120 newly generates the number of images determined in step S42 based on the surface properties of the mucosal region. The individual process for generating each new image is the same as in the first embodiment (see step S13 in FIG. 2).
 In subsequent step S45, the image generation unit 120 outputs the newly generated images and stores them in the storage unit 50 as learning samples. The operation of the arithmetic unit 400 then ends.
 As described above, according to the fourth embodiment of the present invention, the number of images generated from a mucosal region is varied according to the attributes of that region, so more learning samples can be acquired from mucosal regions in which important abnormal regions have been extracted and from mucosal regions within the organ under examination.
(Embodiment 5)
 Next, a fifth embodiment of the present invention will be described. FIG. 14 is a block diagram illustrating the configuration of the arithmetic unit included in an image processing apparatus according to the fifth embodiment of the present invention. The image processing apparatus according to the fifth embodiment includes the arithmetic unit 500 shown in FIG. 14 instead of the arithmetic unit 100 shown in FIG. 1. The configuration and operation of each part of the image processing apparatus other than the arithmetic unit 500 are the same as in the first embodiment.
 The arithmetic unit 500 includes the mucosal region extraction unit 110, an image generation unit 510 that generates new images based on the surface properties of the mucosal region, a mucosal region attribute determination unit 520 that determines attributes of the mucosal region, and a generation method determination unit 530 that determines a method for generating new images based on those attributes. The operation of the mucosal region extraction unit 110 is the same as in the first embodiment.
 The image generation unit 510 includes a fine structure generation unit 511 that generates fine structure in the mucosal region extracted from the intraluminal image by changing the color information, shape information, and texture information of that region. Specifically, the fine structure generation unit 511 includes a color information change unit 511a that changes the color information of the mucosal region, a shape information change unit 511b that changes its shape information, and a texture information change unit 511c that changes its texture information.
 The mucosal region attribute determination unit 520 includes an abnormal region extraction unit 521 that extracts abnormal regions from the mucosal region, an abnormality type estimation unit 522 that estimates the type of each extracted abnormal region, and an organ determination unit 523 that determines the type of organ in which the mucosal region is located.
 The generation method determination unit 530 determines the method for generating fine structure in newly generated images based on the determination result of the mucosal region attribute determination unit 520. The generation method determination unit 530 includes a weight determination unit 531 that determines the weights given as parameters to the color information, shape information, and texture information when the fine structure generation unit 511 changes each of them.
 Next, the operation of the arithmetic unit 500 will be described. FIG. 15 is a flowchart showing the operation of the arithmetic unit 500. Steps S10 and S11 in FIG. 15 are the same as in the first embodiment (see FIG. 2).
 In step S51 following step S11, the mucosal region attribute determination unit 520 determines the attributes of the mucosal region extracted in step S11. Specifically, the abnormal region extraction unit 521 extracts abnormal regions from the mucosal region, and the abnormality type estimation unit 522 estimates the type of each abnormal region. The organ determination unit 523 also determines the type of organ in which the mucosal region is located. The abnormal region extraction, abnormality type estimation, and organ determination processes are the same as in the fourth embodiment (see steps S411 to S413 in FIG. 13).
 In subsequent step S52, the generation method determination unit 530 determines the method for generating new images based on the attributes of the mucosal region. Specifically, it sets the weights given to the color information, shape information, and texture information when each is changed. The method of setting the weights based on the attributes of the mucosal region is described below. The weights given to the color information, shape information, and texture information are normalized so that their sum is 1.
(5-1-1) When the abnormal region is bleeding
 In this case, in order to preferentially change the color information of the fine structure of the mucosal surface, the weight of the color information is made larger than the weights of the shape information and texture information. The weights of the shape information and texture information may be approximately equal.
(5-1-2) When the abnormal region is an ulcer
 In this case, in order to preferentially change the color information and shape information of the fine structure of the mucosal surface, the weights of the color information and shape information are made larger than the weight of the texture information. The weights of the color information and shape information may be approximately equal.
(5-1-3) When the abnormal region is a tumor
 In this case, in order to preferentially change the shape information of the fine structure of the mucosal surface, the weight of the shape information is made larger than the weights of the color information and texture information. The weights of the color information and texture information may be approximately equal.
(5-1-4) When the abnormal region is a villus abnormality
 In this case, in order to preferentially change the texture information of the fine structure of the mucosal surface, the weight of the texture information is made larger than the weights of the color information and shape information. The weights of the color information and shape information may be approximately equal.
(5-2-1) When the organ is the stomach
 In this case, in order to preferentially change the color information of the fine structure of the mucosal surface, the weight of the color information is made larger than the weights of the shape information and texture information. The weights of the shape information and texture information may be approximately equal.
(5-2-2) When the organ is the small intestine
 In this case, in order to preferentially change the texture information of the fine structure of the mucosal surface, the weight of the texture information is made larger than the weights of the color information and shape information. The weights of the color information and shape information may be approximately equal.
(5-2-3) When the organ is the large intestine
 In this case, in order to preferentially change the shape information of the fine structure of the mucosal surface, the weight of the shape information is made larger than the weights of the color information and texture information. The weights of the color information and texture information may be approximately equal.
 The weight determination unit 531 may determine the weights based on either the type of abnormal region or the type of organ, or based on both. In the latter case, the average of the weights determined based on the type of abnormal region (see (5-1-1) to (5-1-4) above) and the weights determined based on the type of organ (see (5-2-1) to (5-2-3) above) may be calculated.
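The averaging of the two weight sets can be sketched as follows; the concrete weight triples in the usage example are illustrative, since the patent only specifies which weight dominates in each case and that the three weights sum to 1.

```python
def combine_weights(abnormality_weights, organ_weights):
    """Average the (colour, shape, texture) weights decided from the
    abnormality type with those decided from the organ type, then
    renormalise so the three weights again sum to 1."""
    avg = [(a + o) / 2 for a, o in zip(abnormality_weights, organ_weights)]
    total = sum(avg)
    return [w / total for w in avg]
```

For example, combining a bleeding-style colour-heavy triple with a small-intestine-style texture-heavy triple yields a balanced weighting that still sums to 1.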
 In step S53 following step S52, the image generation unit 510 acquires the surface properties of the mucosal region extracted in step S11. Specifically, it acquires the hue of each pixel as the color information of the mucosal region, the contour of the mucosal region as the shape information, and the luminance value of each pixel as the texture information.
 In subsequent step S54, the fine structure generation unit 511 generates new images from the surface properties of the mucosal region using the generation method determined in step S52. The changes to the color information, shape information, and texture information may be executed individually, or changes to multiple types of information may be combined. FIG. 16 is a flowchart showing the new image generation process. The following describes the case where changes to multiple types of information are combined.
 First, in step S541, the color information change unit 511a changes the color information of the mucosal region extracted in step S11. Specifically, the color information change unit 511a first calculates the average hue (average H value) of the pixels constituting the mucosal region. It then varies the H value of each pixel in the mucosal region at predetermined intervals so that the average H value stays within a predetermined range, creating multiple mucosal region images with different H values. Specifically, the average H value is kept within the range obtained by multiplying the range of hues (H values) that the mucosa of a living body can take by the weight of the color information. For example, when the weight of the color information is 0.9, the average H value can vary within 90% of the range of H values that living mucosa can take. When the weight of the color information is zero, step S541 is omitted.
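The weighted hue variation of step S541 can be sketched as follows. The `full_range` value standing in for the hue interval of living mucosa, and the evenly spaced shifts, are assumptions for illustration.

```python
import colorsys

def hue_variants(pixels, weight, full_range=(0.0, 0.1), steps=5):
    """Generate `steps` copies of an RGB pixel list (values in 0..1)
    whose hue is shifted at regular intervals inside the allowed hue
    range scaled by the colour-information weight.  `full_range`
    stands in for the hue interval living mucosa can take."""
    lo, hi = full_range
    span = (hi - lo) * weight           # weight 0.9 -> 90% of range
    variants = []
    for i in range(steps):
        shift = lo + span * i / (steps - 1)
        out = []
        for r, g, b in pixels:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            out.append(colorsys.hsv_to_rgb((h + shift) % 1.0, s, v))
        variants.append(out)
    return variants
```

The first variant uses a zero shift and so reproduces the original pixels; later variants drift progressively through the allowed hue span.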
 In subsequent step S542, the shape information change unit 511b changes the shape information of the mucosal regions whose color information was changed in step S541. Specifically, for each of the mucosal region images with different H values, multiple images with changed mucosal region shapes are created by known geometric transformation processes such as affine transformation. The amount of transformation in the affine transformation or the like is determined according to the weight of the shape information: the larger the weight, the larger the transformation amount. When the weight of the shape information is zero, step S542 is omitted.
 In subsequent step S543, the texture information changing unit 511c changes the texture information of the mucosal region whose shape information was changed in step S542. Specifically, for each of the mucosal-region images having different H values and shapes, a plurality of images in which the texture of the mucosal surface is changed are created by applying filter processing such as a sharpening filter or a smoothing filter to the mucosal region. The filter parameters are determined according to the texture-information weight: the larger the weight, the more extreme the sharpening or smoothing that is allowed. When the texture-information weight is zero, step S543 is skipped. Thereafter, the process returns to the main routine.
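One way to realize weight-controlled smoothing and sharpening is to blend the image with a blurred copy, where the blend strength is the texture weight (a minimal NumPy sketch assuming a 3x3 box blur and unsharp masking; the specific filters and function name are not taken from the patent):

```python
import numpy as np

def vary_texture(image, texture_weight):
    """Return (smoothed, sharpened) variants of a grayscale image.

    A 3x3 box blur supplies the low-frequency version; the blend
    strength grows with texture_weight, so a larger weight permits
    more extreme smoothing / sharpening, as described in the text.
    """
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    blur = sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    smoothed = (1 - texture_weight) * image + texture_weight * blur
    # Unsharp masking: add the weighted high-frequency residual back.
    sharpened = image + texture_weight * (image - blur)
    return smoothed, sharpened
```

A weight of zero returns the input unchanged in both variants, matching the rule that step S543 is skipped when the texture-information weight is zero.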
 In step S55 following step S54, the image generation unit 510 outputs the newly generated images and stores them in the storage unit 50 as learning samples. Thereafter, the operation of the arithmetic unit 500 ends.
 As described above, according to the fifth embodiment of the present invention, a new image is generated by changing the color information, shape information, and texture information of the mucosal region according to the attributes of the mucosal region, so that learning samples appropriately reflecting the state of the mucosal region can be obtained.
 The first to fifth embodiments described above can be realized by executing an image processing program stored in a storage device on a computer system such as a personal computer or a workstation. Such a computer system may also be used connected to other devices such as computer systems and servers via a local area network (LAN), a wide area network (WAN), or a public line such as the Internet. In this case, the image processing apparatuses according to the first to fifth embodiments may acquire image data of intraluminal images via these networks, output image processing results to various output devices (viewers, printers, and the like) connected via these networks, or store image processing results in storage devices (storage media and their readers, and the like) connected via these networks.
 The present invention is not limited to the first to fifth embodiments; various inventions can be formed by appropriately combining the plurality of components disclosed in the embodiments and modifications. For example, an invention may be formed by excluding some components from all the components shown in each embodiment or modification, or by appropriately combining components shown in different embodiments or modifications.
 1 Image processing apparatus
 10 Control unit
 20 Image acquisition unit
 30 Input unit
 40 Display unit
 50 Storage unit
 51 Program storage unit
 100, 200, 300, 400, 500 Arithmetic unit
 110 Mucosal region extraction unit
 120, 310, 510 Image generation unit
 121, 511 Fine structure generation unit
 121a Villus generation unit
 121b Blood vessel generation unit
 210 Identification criterion creation unit
 211 Weight setting unit
 311 Non-mucosal region generation unit
 410, 520 Mucosal region attribute determination unit
 411, 521 Abnormal region extraction unit
 412, 522 Abnormality type estimation unit
 413, 523 Organ determination unit
 414 Unnecessary region determination unit
 420, 530 Generation method determination unit
 421 Generation number determination unit
 511a Color information changing unit
 511b Shape information changing unit
 511c Texture information changing unit
 531 Weight determination unit

Claims (20)

  1.  An image processing apparatus comprising:
     a mucosal region extraction unit that extracts a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body; and
     an image generation unit that acquires a surface property of the mucosal region and generates an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on the surface property.
  2.  The image processing apparatus according to claim 1, further comprising an identification criterion creation unit that calculates a feature amount of each pixel constituting the mucosal region extracted by the mucosal region extraction unit and the image generated by the image generation unit, and creates, based on the feature amount, an identification criterion for identifying whether or not the mucosal region has a specific feature.
  3.  The image processing apparatus according to claim 2, wherein the identification criterion creation unit includes a weight setting unit that sets weights for the feature amounts calculated from the mucosal region and from the image, and creates the identification criterion based on the feature amounts given the weights set by the weight setting unit.
  4.  The image processing apparatus according to claim 1, wherein the image generation unit generates the image by synthesizing, with the mucosal region, a region representing a fine structure of a living body's mucosa.
  5.  The image processing apparatus according to claim 4, wherein the region representing the fine structure is at least one of a villus region and a blood vessel region.
  6.  The image processing apparatus according to claim 1, wherein the image generation unit generates the image by synthesizing, with the mucosal region, a region representing a subject other than a living body's mucosa.
  7.  The image processing apparatus according to claim 1, further comprising:
     a mucosal region attribute determination unit that determines an attribute of the mucosal region; and
     a generation method determination unit that determines, according to the attribute of the mucosal region determined by the mucosal region attribute determination unit, the image generation method to be executed by the image generation unit.
  8.  The image processing apparatus according to claim 7, wherein the mucosal region attribute determination unit includes:
     an abnormal region extraction unit that extracts a region having a specific feature from the mucosal region as a normal region, and extracts the mucosal region other than the normal region as an abnormal region; and
     an abnormal region estimation unit that estimates the type of the abnormal region.
  9.  The image processing apparatus according to claim 7, wherein the mucosal region attribute determination unit includes an organ determination unit that determines the type of organ in which the mucosal region is located.
  10.  The image processing apparatus according to claim 7, wherein the mucosal region attribute determination unit includes an unnecessary region determination unit that determines whether or not the mucosal region is an unnecessary region unsuitable for observation.
  11.  The image processing apparatus according to claim 7, wherein the generation method determination unit determines the number of images to be generated by the image generation unit according to the attribute of the mucosal region.
  12.  The image processing apparatus according to claim 11, wherein the mucosal region attribute determination unit includes:
     an abnormal region extraction unit that extracts a region having a specific feature from the mucosal region as a normal region, and extracts the mucosal region other than the normal region as an abnormal region; and
     an abnormal region estimation unit that estimates the type of the abnormal region,
     and wherein the generation method determination unit determines the number of images according to the type of the abnormal feature.
  13.  The image processing apparatus according to claim 11, wherein the mucosal region attribute determination unit includes an organ determination unit that determines the type of organ in which the mucosal region is located, and the generation method determination unit determines the number of images according to the type of the organ.
  14.  The image processing apparatus according to claim 11, wherein the mucosal region attribute determination unit includes an unnecessary region determination unit that determines whether or not the mucosal region is an unnecessary region unsuitable for observation, and the generation method determination unit determines the number of images according to whether or not the mucosal region is the unnecessary region.
  15.  The image processing apparatus according to claim 7, wherein the image generation unit generates the image by changing at least one of color information, shape information, and texture information in the mucosal region, and the generation method determination unit determines, according to the attribute of the mucosal region determined by the mucosal region attribute determination unit, how to change at least one of the color information, the shape information, and the texture information.
  16.  The image processing apparatus according to claim 15, wherein the generation method determination unit determines, according to the attribute of the mucosal region, the weights respectively given to the color information, the shape information, and the texture information when changing them.
  17.  The image processing apparatus according to claim 16, wherein the mucosal region attribute determination unit includes:
     an abnormal region extraction unit that extracts, from the mucosal region, a region having a specific feature as an abnormal region; and
     an abnormal region estimation unit that estimates the type of the specific feature,
     and wherein the generation method determination unit determines the weights according to the type of the specific feature.
  18.  The image processing apparatus according to claim 16, wherein the mucosal region attribute determination unit includes an organ determination unit that determines the type of organ in which the mucosal region is located, and the generation method determination unit determines the weights according to the type of the organ.
  19.  An image processing method comprising:
     a mucosal region extraction step of extracting a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body; and
     an image generation step of acquiring a surface property of the mucosal region and generating an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on the surface property.
  20.  An image processing program for causing a computer to execute:
     a mucosal region extraction step of extracting a mucosal region from an intraluminal image obtained by imaging the inside of a lumen of a living body; and
     an image generation step of acquiring a surface property of the mucosal region and generating an image different from the intraluminal image by processing the mucosal region in the intraluminal image based on the surface property.
PCT/JP2015/068264 2015-06-24 2015-06-24 Image-processing device, image-processing method, and image-processing program WO2016208016A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2015/068264 WO2016208016A1 (en) 2015-06-24 2015-06-24 Image-processing device, image-processing method, and image-processing program
JP2017524509A JPWO2016208016A1 (en) 2015-06-24 2015-06-24 Image processing apparatus, image processing method, and image processing program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/068264 WO2016208016A1 (en) 2015-06-24 2015-06-24 Image-processing device, image-processing method, and image-processing program

Publications (1)

Publication Number Publication Date
WO2016208016A1 true WO2016208016A1 (en) 2016-12-29

Family

ID=57586285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/068264 WO2016208016A1 (en) 2015-06-24 2015-06-24 Image-processing device, image-processing method, and image-processing program

Country Status (2)

Country Link
JP (1) JPWO2016208016A1 (en)
WO (1) WO2016208016A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006166939A (en) * 2004-12-10 2006-06-29 Olympus Corp Image processing method
JP2012143340A (en) * 2011-01-11 2012-08-02 Olympus Corp Image processing device, image processing method, and image processing program
JP2013099509A (en) * 2011-10-12 2013-05-23 Fujifilm Corp Endoscope system and image generation method
JP2013240701A (en) * 2013-08-05 2013-12-05 Olympus Corp Image processor, method for operating the same, and image processing program
JP2014166298A (en) * 2013-01-31 2014-09-11 Olympus Corp Image processor for endoscope, endoscope device, image processing method and image processing program

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019012623A1 (en) * 2017-07-12 2019-01-17 オリンパス株式会社 Image processing device, optical scanning-type observation system, and image processing method
WO2020017211A1 (en) * 2018-07-20 2020-01-23 富士フイルム株式会社 Medical image learning device, medical image learning method, and program
JPWO2020017211A1 (en) * 2018-07-20 2021-08-02 富士フイルム株式会社 Medical image learning device, medical image learning method, and program
JPWO2021033216A1 (en) * 2019-08-16 2021-02-25
WO2021033215A1 (en) * 2019-08-16 2021-02-25 Hoya株式会社 Processor for endoscope, endoscope system, information processing device, program, and information processing method
WO2021033216A1 (en) * 2019-08-16 2021-02-25 Hoya株式会社 Processor for endoscope, endoscope system, information processing device, program, and information processing method
JP7116849B2 (en) 2019-08-16 2022-08-10 Hoya株式会社 Endoscope processor, endoscope system, information processing device, program and information processing method

Also Published As

Publication number Publication date
JPWO2016208016A1 (en) 2018-04-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15896338

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017524509

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15896338

Country of ref document: EP

Kind code of ref document: A1