WO2016056408A1 - Image processing device, image processing method, and image processing program - Google Patents


Info

Publication number
WO2016056408A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
similarity
target
images
image processing
Prior art date
Application number
PCT/JP2015/077207
Other languages
French (fr)
Japanese (ja)
Inventor
Satomi Kamata
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation
Priority to JP2016504817A
Publication of WO2016056408A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances

Definitions

  • The present invention relates to an image processing apparatus, an image processing method, and an image processing program.
  • In a known image summarization process, an image group including a plurality of images acquired in time series is acquired, and some of the images are extracted from the image group to produce a summarized image group containing fewer images than the original. An image processing device that performs such processing is known (see, for example, Patent Document 1).
  • In such processing, images at positions where the scene changes are selected from the image group as representative images, and the image group is summarized into a predetermined number of representative images. The user can then grasp the contents of the entire original image group in a short time by observing the predetermined number of representative images included in the summarized image group.
  • The present invention has been made in view of the above, and aims to provide an image processing apparatus, an image processing method, and an image processing program capable of selecting, as representative images, more images that contain many effective regions useful for observation.
  • In order to solve the problems described above and achieve the object, an image processing apparatus according to the present invention includes: an area detection unit that detects, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation in the image; a similarity calculation unit that sets, from the images included in the image group, a plurality of target images for which similarity is to be calculated, and calculates, for each of the plurality of target images, a similarity between the target image and an adjacent image that is adjacent to the target image in time series among the images included in the image group; and an image selection unit that selects representative images from the plurality of target images based on the similarity for each of the plurality of target images. The similarity calculation unit calculates the similarity between the effective area excluding the invalid area in one of the target image and the adjacent image, and the region in the other of the target image and the adjacent image that corresponds to that effective area.
  • According to another aspect of the present invention, the image processing apparatus includes: an area detection unit that detects, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation in the image; a similarity calculation unit that sets, from the images included in the image group, a plurality of target images for which similarity is to be calculated, and calculates, for each of the plurality of target images, a similarity between the target image and an adjacent image that is adjacent to the target image in time series; and an image selection unit that selects representative images from the plurality of target images based on the similarity for each of the plurality of target images. When the target image and the adjacent image are superimposed, the similarity calculation unit calculates the similarity between the regions where the effective area excluding the invalid area in the target image and the effective area excluding the invalid area in the adjacent image overlap each other.
  • In the image processing apparatus according to the above aspect, the similarity calculation unit sets, as the target images, images that include an effective area excluding the invalid area among the images included in the image group.
  • The image selection unit selects a predetermined number of the representative images from the plurality of target images in ascending order of similarity.
  • An image processing method according to the present invention is an image processing method performed by an image processing apparatus, and includes: an area detection step of detecting, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation in the image; a similarity calculation step of setting, from the images included in the image group, a plurality of target images for which similarity is to be calculated, and calculating, for each of the plurality of target images, a similarity between the target image and an adjacent image that is adjacent to the target image in time series among the images included in the image group; and an image selection step of selecting representative images from the plurality of target images based on the similarity for each of the plurality of target images.
  • In one aspect of the similarity calculation step, when the target image and the adjacent image are superimposed, the similarity is calculated between the regions where the effective area excluding the invalid area in the target image and the effective area excluding the invalid area in the adjacent image overlap each other.
  • An image processing program according to the present invention causes an image processing apparatus to execute the above-described image processing method.
  • With this configuration, the image processing apparatus sets, from the images included in an image group acquired in time series, a plurality of target images for which similarity is to be calculated, and, for each of the plurality of target images, calculates the similarity between the effective area excluding the invalid area in one of the target image and the adjacent image and the region in the other image that corresponds to that effective area.
  • In other words, the image processing apparatus sets the effective area in the target image or the adjacent image as the calculation area for calculating the similarity, and calculates the similarity between the same calculation areas in the target image and the adjacent image.
  • The image processing apparatus then selects, from the plurality of target images, target images with relatively low similarity values as representative images.
  • With the configuration according to the second aspect, the image processing apparatus sets a plurality of target images from the images included in an image group acquired in time series and, for each of the plurality of target images, calculates the similarity between the regions where the effective areas overlap when the target image and the adjacent image are superimposed. The image processing apparatus then selects, from the plurality of target images, target images with relatively low similarity values as representative images. Because the similarity is calculated between effective regions, the contribution of the invalid region to the similarity calculation is eliminated. Therefore, like the image processing device according to the first aspect, the image processing device according to the second aspect can select, as representative images, more images that contain many effective regions useful for observation.
  • Since the image processing method according to the present invention is a method performed by the above-described image processing apparatus, the same effects as those of the image processing apparatus can be obtained.
  • Since the image processing program according to the present invention is a program executed by the above-described image processing apparatus, the same effects as those of the image processing apparatus can be obtained.
  • FIG. 1 is a schematic diagram showing an endoscope system according to Embodiment 1 of the present invention.
  • FIG. 2 is a block diagram showing the image processing apparatus shown in FIG.
  • FIG. 3 is a flowchart showing the operation (image processing method) of the image processing apparatus shown in FIG.
  • FIG. 4 is a diagram for explaining the image processing method shown in FIG.
  • FIG. 5A is a diagram for explaining step S5 shown in FIG.
  • FIG. 5B is a diagram for explaining step S5 shown in FIG.
  • FIG. 6A is a diagram for explaining step S5 according to Embodiment 2 of the present invention.
  • FIG. 6B is a diagram for explaining step S5 according to Embodiment 2 of the present invention.
  • FIG. 7A is a diagram for explaining step S5 according to Embodiment 3 of the present invention.
  • FIG. 7B is a diagram for explaining step S5 according to Embodiment 3 of the present invention.
  • FIG. 8A is a diagram for explaining step S5 according to Embodiment 4 of the present invention.
  • FIG. 8B is a diagram for explaining step S5 according to Embodiment 4 of the present invention.
  • FIG. 1 is a schematic diagram showing an endoscope system according to Embodiment 1 of the present invention.
  • The endoscope system 1 is a system that acquires in-vivo images of the interior of a subject 100 using a swallowable capsule endoscope 2 and allows a doctor or the like to observe the in-vivo images.
  • the endoscope system 1 includes a receiving device 3, an image processing device 4, and a portable recording medium 5 in addition to the capsule endoscope 2.
  • The recording medium 5 is a portable recording medium for transferring data between the receiving device 3 and the image processing device 4, and is configured to be detachable from both the receiving device 3 and the image processing device 4.
  • The capsule endoscope 2 is a capsule endoscope device formed in a size that can be introduced into the organs of the subject 100. It is introduced into the organs of the subject 100 by oral ingestion or the like, and sequentially captures in-vivo images while moving inside the organs by peristalsis or the like. The capsule endoscope 2 then sequentially transmits the image data generated by imaging.
  • the receiving device 3 includes a plurality of receiving antennas 3a to 3h, and receives image data from the capsule endoscope 2 inside the subject 100 via at least one of the plurality of receiving antennas 3a to 3h. Then, the receiving device 3 stores the received image data in the recording medium 5 inserted in the receiving device 3.
  • the receiving antennas 3a to 3h may be arranged on the body surface of the subject 100 as shown in FIG. 1, or may be arranged on a jacket worn by the subject 100. Further, the number of receiving antennas provided in the receiving device 3 may be one or more, and is not particularly limited to eight.
  • FIG. 2 is a block diagram showing the image processing apparatus 4.
  • the image processing apparatus 4 is configured as a workstation that acquires image data in the subject 100 and displays an image corresponding to the acquired image data.
  • the image processing apparatus 4 includes a reader / writer 41, a memory unit 42, an input unit 43, a display unit 44, and a control unit 45.
  • The reader/writer 41 functions as an image acquisition unit that acquires image data to be processed from the outside. Specifically, when the recording medium 5 is inserted into the reader/writer 41, the reader/writer 41, under the control of the control unit 45, takes in the image data stored in the recording medium 5 (an in-vivo image group including a plurality of in-vivo images captured by the capsule endoscope 2 in time series). Further, the reader/writer 41 transfers the captured in-vivo image group to the control unit 45, and the in-vivo image group transferred to the control unit 45 is stored in the memory unit 42.
  • the memory unit 42 stores the in-vivo image group transferred from the control unit 45.
  • the memory unit 42 stores various programs (including an image processing program) executed by the control unit 45, information necessary for processing of the control unit 45, and the like.
  • the input unit 43 is configured using a keyboard, a mouse, and the like, and accepts user operations.
  • The display unit 44 is configured using a liquid crystal display or the like and, under the control of the control unit 45, displays display screens including in-vivo images (for example, a display screen including the predetermined number of representative images selected by the image summarization processing described later).
  • the control unit 45 is configured using a CPU (Central Processing Unit) or the like, reads a program (including an image processing program) stored in the memory unit 42, and controls the operation of the entire image processing apparatus 4 according to the program.
  • Among the functions of the control unit 45, the function of executing the "image summarization processing" that is a main feature of the present invention will be mainly described below.
  • the control unit 45 includes an area detection unit 451, a similarity calculation unit 452, and an image selection unit 453.
  • The area detection unit 451 detects, for each in-vivo image included in the in-vivo image group stored in the memory unit 42, an invalid area other than the effective area useful for observation in the in-vivo image. Specifically, the region detection unit 451 compares feature values indicating color information, frequency information, shape information, and the like acquired from the in-vivo image with a second threshold value, and detects, based on the comparison result, the invalid area other than the effective area useful for observation.
  • the effective area means an area where mucous membranes, blood vessels, and blood on the surface of the living body are reflected.
  • The invalid region is a region other than the effective region, such as a region where residues or bubbles are reflected, a region where the deep part of the lumen is reflected (dark portion), a halation region (bright portion) caused by specular reflection from the surface of the subject, or a region that becomes noise due to a poor communication state between the capsule endoscope 2 and the receiving device 3.
  • Various known methods can be employed as the method for detecting the invalid area as described above (for example, JP 2007-313119 A, JP 2011-234931 A, JP 2010-115413 A, JP, 16-16454, etc.).
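As a concrete illustration of the invalid-region detection above, the following Python sketch flags dark-portion and halation pixels by simple intensity thresholds. The function name and the thresholds are our own assumptions for illustration only; the detectors in the cited publications operate on richer color, frequency, and shape features.

```python
import numpy as np

def detect_invalid_mask(image, dark_thresh=30, bright_thresh=230):
    """Toy sketch of step S2: mark dark-lumen pixels and halation
    (specular highlight) pixels as invalid by intensity thresholds.
    Returns a boolean array that is True where a pixel is invalid."""
    image = np.asarray(image)
    return (image < dark_thresh) | (image > bright_thresh)
```

The effective area is simply the complement of this mask (`~detect_invalid_mask(img)`).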
  • The similarity calculation unit 452 sets, from the in-vivo images included in the in-vivo image group stored in the memory unit 42, a plurality of target images for which similarity is to be calculated, and calculates, for each of the plurality of target images, the similarity between the target image and the in-vivo image immediately preceding it in time series (hereinafter, the immediately preceding adjacent image).
  • When calculating the similarity between the target image and the immediately preceding adjacent image, the similarity calculation unit 452 calculates the similarity between the effective region of the target image, excluding the invalid region detected by the region detection unit 451, and the region of the immediately preceding adjacent image corresponding to that effective region (that is, the region having the same positional relationship as the effective region of the target image). In other words, the similarity calculation unit 452 sets the effective area in the target image as the calculation area for calculating the similarity, and calculates the similarity between the same calculation areas in the target image and the immediately preceding adjacent image. In the first embodiment, the similarity calculation unit 452 calculates a normalized cross-correlation value as the similarity between the calculation areas.
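The similarity computation over a calculation area can be sketched as a masked normalized cross-correlation. This is a minimal illustration assuming grayscale images as NumPy arrays and a boolean mask marking the calculation area; the function name is ours, not the patent's.

```python
import numpy as np

def masked_ncc(target, adjacent, mask):
    """Normalized cross-correlation between two images, restricted to
    the calculation area (pixels where mask is True)."""
    a = target[mask].astype(float)
    b = adjacent[mask].astype(float)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

Identical calculation areas yield a value of 1.0, and the value drops as the two images diverge, which is what step S6 later compares against the first threshold.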
  • the image selection unit 453 selects a predetermined number of representative images from the target images based on the similarity for each target image calculated by the similarity calculation unit 452.
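A minimal sketch of this selection, assuming one similarity value per target image and returning the indices of the lowest-similarity images (the function name and interface are our own illustration):

```python
def select_representatives(similarities, num_representatives):
    """Return the indices of the target images with the lowest
    similarity, up to the requested count (ascending-similarity order)."""
    order = sorted(range(len(similarities)), key=similarities.__getitem__)
    return order[:num_representatives]
```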
  • FIG. 3 is a flowchart showing the operation (image processing method) of the image processing apparatus 4.
  • FIG. 4 is a diagram for explaining the image processing method shown in FIG. 3. Specifically, FIG. 4(a) shows a state in which the in-vivo images included in the in-vivo image group to be processed (in FIG. 4(a), only the in-vivo images F1 to F14 are shown) are virtually arranged in time series.
  • FIGS. 4(b) and 4(c) show examples of invalid regions in the in-vivo image F12 (represented by hatching in FIG. 4(a)) included in the in-vivo image group. In FIGS. 4(b) and 4(c), the invalid area is expressed in white (FIG. 4(c) illustrates a state in which there is no valid area).
  • FIG. 4(d) is a diagram schematically showing that an in-vivo image set as a target image for calculating similarity is set as a candidate image, that is, a representative-image candidate, when a predetermined condition is satisfied.
  • FIG. 4(e) is a diagram schematically showing that an in-vivo image set as a target image that does not satisfy the predetermined condition, or an in-vivo image not set as a target image, is not set as a candidate image.
  • FIG. 4(f) is a diagram schematically showing that a candidate image is selected as a representative image when a predetermined condition is satisfied.
  • Assume that the recording medium 5 has been inserted into the reader/writer 41, that the in-vivo image group stored in the recording medium 5 has been taken in via the reader/writer 41, and that the in-vivo image group is already stored in the memory unit 42.
  • the control unit 45 reads all in-vivo images included in the in-vivo image group stored in the memory unit 42 one by one in time-series order (frame number order) (step S1).
  • the region detection unit 451 detects an invalid region in the in-vivo image read in step S1 (step S2: region detection step).
  • Next, the similarity calculation unit 452 refers to the detection result of step S2 and determines whether or not an effective area is included in the in-vivo image (that is, whether or not the entire image was detected as an invalid area in step S2) (step S3).
  • When it is determined that an effective area is included in the in-vivo image (step S3: Yes), the similarity calculation unit 452 sets the in-vivo image as a target image for which the similarity is to be calculated (step S4).
  • For example, if the in-vivo image read in step S1 is the in-vivo image F12 (FIG. 4(a)) and the effective area is included in the in-vivo image F12 as shown in FIG. 4(b), the in-vivo image F12 is set as a target image.
  • When it is determined in step S3 that no effective region is included in the in-vivo image (step S3: No), the control unit 45 returns to step S1 and executes the above-described processing for the next in-vivo image (the in-vivo image F13 when step S3 has been executed for the in-vivo image F12).
  • For example, if the in-vivo image read in step S1 is the in-vivo image F12 (FIG. 4(a)) and the entire in-vivo image F12 is detected as an invalid area as shown in FIG. 4(c), the in-vivo image F12 becomes a non-target image for which the similarity is not calculated.
  • After step S4, the similarity calculation unit 452 calculates the similarity between the in-vivo image (target image) read in step S1 and the immediately preceding adjacent image (the in-vivo image F11 when the target image is the in-vivo image F12) (step S5). The similarity calculation unit 452 then stores the calculated similarity in the memory unit 42 in association with the target image. Specifically, in step S5, the similarity calculation unit 452 calculates the similarity between the target image and the immediately preceding adjacent image as described below.
  • FIGS. 5A and 5B are diagrams for explaining step S5.
  • FIG. 5A is a diagram schematically showing the invalid area detected in step S2 and the valid area excluding the invalid area in the target image.
  • FIG. 5B is a diagram schematically showing the invalid area detected in step S2 and the valid area excluding the invalid area in the immediately adjacent image.
  • In FIGS. 5A and 5B, the invalid area is expressed in white, and the calculation area for calculating the similarity is represented by a thick frame.
  • As shown in FIG. 5A, the similarity calculation unit 452 sets, as the calculation area in the target image, the effective area excluding the invalid area detected in step S2. Then, as shown in FIGS. 5A and 5B, the similarity calculation unit 452 calculates the similarity (normalized cross-correlation value) between the same calculation areas in the target image and the immediately preceding adjacent image. Steps S4 and S5 described above correspond to the similarity calculation step according to the present invention.
  • After step S5, the image selection unit 453 determines whether or not the similarity (normalized cross-correlation value) calculated in step S5 is less than a first threshold (step S6). In other words, in step S6 the image selection unit 453 determines whether or not the scene has changed in the transition from the immediately preceding adjacent image to the target image.
  • When it is determined that the similarity is less than the first threshold (step S6: Yes), the image selection unit 453 attaches to the target image a flag indicating that it is a candidate image, that is, a candidate for a representative image (step S7).
  • For example, if the in-vivo image read in step S1 is the in-vivo image F12 (FIG. 4(a)) and the similarity between the in-vivo image (target image) F12 and the in-vivo image (immediately preceding adjacent image) F11 is less than the first threshold, the in-vivo image F12 is set as a candidate image as shown in FIG. 4(d).
  • On the other hand, when it is determined in step S6 that the similarity between the target image and the immediately preceding adjacent image is greater than or equal to the first threshold (step S6: No), the control unit 45 returns to step S1 and executes the above-described processing for the next in-vivo image (the in-vivo image F13 when step S6 has been executed with the in-vivo image F12 as the target image).
  • For example, if the in-vivo image read in step S1 is the in-vivo image F12 (FIG. 4(a)) and the similarity between the in-vivo image (target image) F12 and the in-vivo image (immediately preceding adjacent image) F11 is greater than or equal to the first threshold, the in-vivo image F12 becomes a non-candidate image that is not a candidate for a representative image, as shown in FIG. 4(e). Note that an in-vivo image whose entire image is detected as an invalid area (FIG. 4(c)) is also a non-candidate image.
  • After step S7, the control unit 45 determines whether or not steps S1 to S7 have been performed for all in-vivo images included in the in-vivo image group stored in the memory unit 42 (step S8). If it is determined that the processing has not yet been performed for all the in-vivo images (step S8: No), the control unit 45 returns to step S1 and performs the above-described processing on the remaining in-vivo images. On the other hand, when it is determined that the processing has been performed for all the in-vivo images (step S8: Yes), the image selection unit 453 selects, from the candidate images included in the in-vivo image group stored in the memory unit 42, a predetermined number (for example, 2000) of representative images in ascending order of the similarity associated with each candidate image (step S9: image selection step). For example, when 2000 or more candidate images exist in the in-vivo image group stored in the memory unit 42, candidate images are selected as representative images in ascending order of similarity, as shown in FIG. 4(f).
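The flow of steps S1 through S9 can be sketched end to end as follows. The toy intensity-threshold rule for step S2 and the default first threshold of 0.9 are our assumptions for illustration; the count of 2000 representative images follows the example in the text.

```python
import numpy as np

def summarize(images, first_threshold=0.9, num_representatives=2000,
              dark=30, bright=230):
    """Sketch of steps S1-S9 for grayscale NumPy images of equal shape."""
    candidates = []                                # (similarity, index)
    for i, img in enumerate(images):               # S1: read in time series
        img = np.asarray(img, dtype=float)
        valid = (img >= dark) & (img <= bright)    # S2: toy invalid-area rule
        if not valid.any():                        # S3: whole image invalid
            continue                               #     -> non-target image
        if i == 0:
            continue  # no immediately preceding adjacent image to compare
        # S4-S5: normalized cross-correlation over the same calculation
        # area (Embodiment 1: the target image's effective area)
        a = img[valid]
        b = np.asarray(images[i - 1], dtype=float)[valid]
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        sim = float((a * b).sum() / denom) if denom else 0.0
        if sim < first_threshold:                  # S6-S7: flag as candidate
            candidates.append((sim, i))
    candidates.sort()                              # S9: ascending similarity
    return [i for _, i in candidates[:num_representatives]]
```

A near-duplicate frame scores close to 1.0 and is skipped, while a frame that differs strongly from its predecessor (a scene change) scores low and survives into the representative set.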
  • As described above, the image processing apparatus 4 according to the first embodiment sets the effective area in the target image as the calculation area for calculating the similarity, and calculates the similarity between the same calculation areas in the target image and the immediately preceding adjacent image. The image processing apparatus 4 then selects, from the plurality of target images, target images with relatively low similarity values as representative images.
  • Because the contribution of the invalid region to the similarity calculation is reduced, more images containing many effective regions useful for observation can be selected as representative images.
  • Further, the image processing device 4 does not set an in-vivo image whose entire image is detected as an invalid area as a target image for which the similarity is calculated. By excluding such in-vivo images from the target images in advance, the possibility that they are selected as representative images is eliminated, and the processing load can be reduced.
  • Further, the image processing device 4 narrows down the target images using the first threshold on the similarity (setting them as candidate images), and selects a predetermined number of representative images from the plurality of candidate images in ascending order of similarity. This makes it possible to select the planned number of representative images through simple processing.
  • Embodiment 2 Next, a second embodiment of the present invention will be described.
  • the same reference numerals are given to the same configurations and steps as those in the above-described first embodiment, and the detailed description thereof is omitted or simplified.
  • In the first embodiment described above, the calculation area for calculating the similarity is the effective area in the target image. In contrast, in the second embodiment, the calculation area for calculating the similarity is the effective area in the immediately preceding adjacent image.
  • the configuration of the image processing apparatus according to the second embodiment is the same as that of the image processing apparatus 4 described in the first embodiment.
  • the image processing method according to the second embodiment is the same as the image processing method described in the first embodiment described above, except for step S5. Only step S5 according to the second embodiment will be described below.
  • FIGS. 6A and 6B are diagrams for explaining step S5 according to Embodiment 2 of the present invention.
  • FIG. 6A corresponds to FIG. 5A and is a diagram schematically showing an invalid area and an effective area in the target image.
  • FIG. 6B corresponds to FIG. 5B and is a diagram schematically showing an invalid area and an effective area in the immediately preceding adjacent image.
  • As shown in FIG. 6B, the similarity calculation unit 452 sets, as the calculation area, the effective area in the immediately preceding adjacent image excluding the invalid area detected in step S2.
  • the similarity calculation unit 452 calculates a similarity (normalized cross-correlation value) between the same calculation areas in the target image and the immediately preceding adjacent image.
  • FIGS. 7A and 7B are diagrams for explaining step S5 according to Embodiment 3 of the present invention.
  • FIG. 7A corresponds to FIG. 5A and is a diagram schematically showing an invalid area and an effective area in the target image.
  • FIG. 7B is a diagram schematically showing the invalid area detected in step S2 and the valid area excluding the invalid area in the immediately adjacent image.
  • In FIGS. 7A and 7B, the invalid area is expressed in white, and the calculation area for calculating the similarity is represented by a thick frame.
  • the similarity calculation unit 452 sets the effective area excluding the invalid area detected in step S2 in the immediately adjacent image as the calculation area.
  • the similarity calculation unit 452 calculates a similarity (normalized cross-correlation value) between the same calculation areas in the target image and the immediately adjacent image.
  • (Modification of Embodiment 3)
  • In Embodiment 3 described above, the effective area in the immediately preceding adjacent image is used as the calculation area. However, the present invention is not limited to this, and the effective area in the target image may be used as the calculation area, as in Embodiment 1 described above.
  • In the first embodiment described above, the calculation area for calculating the similarity is the effective area in the target image. In contrast, in the fourth embodiment, when the target image and the immediately preceding adjacent image are superimposed, the area where the effective area of the target image and the effective area of the immediately preceding adjacent image overlap is used as the calculation area.
  • the configuration of the image processing apparatus according to the fourth embodiment is the same as that of the image processing apparatus 4 described in the first embodiment.
  • the image processing method according to the fourth embodiment is the same as the image processing method described in the first embodiment described above, except for step S5. Only step S5 according to the fourth embodiment will be described below.
  • [Step S5] FIGS. 8A and 8B are diagrams for explaining step S5 according to Embodiment 4 of the present invention.
  • FIG. 8A corresponds to FIG. 5A and schematically shows an invalid area and an effective area in the target image.
  • FIG. 8B corresponds to FIG. 5B and is a diagram schematically showing an invalid area and an effective area in the immediately adjacent image.
  • When the target image is superimposed on the immediately preceding adjacent image, the similarity calculation unit 452 defines as the calculation area the region where the effective area of the target image (excluding the invalid area detected in step S2) overlaps the effective area of the immediately preceding adjacent image (excluding the invalid area detected in step S2).
  • In other words, the calculation area is the region, occupying the same position in each image, that lies within both the effective area of the target image and the effective area of the immediately preceding adjacent image. Then, as shown in FIGS. 8A and 8B, the similarity calculation unit 452 calculates the similarity (normalized cross-correlation value) between the same calculation areas in the target image and the immediately preceding adjacent image.
  • In this way, the calculation area for calculating the similarity is the region where the effective areas overlap when the target image and the immediately preceding adjacent image are superimposed, so the calculation area is the same for both images.
  • The similarity is thus calculated between identical calculation areas.
  • A target image with a comparatively low similarity value is then selected as a representative image from the plurality of target images.
  • Since the similarity is calculated between effective areas only, the contribution of the invalid areas to the similarity calculation can be eliminated. Therefore, the fourth embodiment, like the embodiments described above, has the effect that more images containing many effective regions useful for observation can be selected as representative images.
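The overlap-based calculation area described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the patent; the function names and the toy masks are hypothetical.

```python
import numpy as np

def overlap_calculation_area(target_invalid, adjacent_invalid):
    # Embodiment 4 sketch: with the two images superimposed, the calculation
    # area is where both effective areas (the complements of the invalid-area
    # masks) overlap, so it occupies the same positions in both images.
    return ~target_invalid & ~adjacent_invalid

# Toy 3x3 invalid-area masks (True = invalid pixel).
target_invalid = np.array([[True,  False, False],
                           [False, False, False],
                           [False, False, True]])
adjacent_invalid = np.array([[False, False, True],
                             [False, False, False],
                             [True,  False, True]])

area = overlap_calculation_area(target_invalid, adjacent_invalid)
# The similarity (e.g. a normalized cross-correlation value) is then
# evaluated only over `area`, so invalid pixels contribute nothing.
print(int(area.sum()))  # → 5 pixels in the calculation area
```

Because the mask is the intersection of the two effective areas, a pixel that is invalid in either image is automatically excluded from the similarity calculation.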
  • (Modification of Embodiment 4) In Embodiment 4 described above, the similarity between the target image and the immediately preceding adjacent image is calculated.
  • The present invention is not limited to this; as in Embodiment 3 described above, the similarity may be calculated between the target image and the immediately adjacent image.
  • In Embodiments 1 to 4 described above, the image summarization process is performed on the in-vivo image group captured by the capsule endoscope 2, but the present invention is not limited to this; the image summarization process may be executed on any other image group acquired in time series.
  • In Embodiments 1 to 4 described above, the image processing apparatus 4 acquires the in-vivo image group captured in time series by the capsule endoscope 2 using the recording medium 5 and the reader/writer 41, but the present invention is not limited to this.
  • For example, the in-vivo image group may be stored in advance on a separately installed server, and the image processing apparatus 4 may be provided with a communication unit that communicates with the server.
  • The image processing apparatus 4 may then acquire the in-vivo image group by communicating with the server using the communication unit. In this case, the communication unit functions as an image acquisition unit that acquires image data to be processed from the outside.
  • In Embodiments 1 to 4 described above, the similarity is compared with the first threshold, and candidate images are set from the target images based on the comparison result.
  • The present invention is not limited to this; a predetermined number of representative images may instead be selected from all the target images in ascending order of similarity. That is, steps S6 and S7 may be omitted.
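This threshold-free variation amounts to a simple sort. The sketch below is an illustrative reading of it, with hypothetical names, under the assumption that a lower similarity to the adjacent image marks a more distinctive image.

```python
def select_representatives(similarities, num):
    # Variation sketch: choose `num` representative images directly from all
    # target images in ascending order of similarity (least similar to their
    # adjacent image first), skipping the candidate-image steps S6 and S7.
    order = sorted(range(len(similarities)), key=lambda i: similarities[i])
    return sorted(order[:num])  # report the chosen indices in time-series order

# Similarities of five target images to their adjacent images.
print(select_representatives([0.98, 0.42, 0.91, 0.10, 0.77], 2))  # → [1, 3]
```

The two least similar images (indices 3 and 1) are chosen and reported in their original time-series order.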
  • In Embodiments 1 to 4 described above, an in-vivo image whose entire area is detected as an invalid area is excluded from the targets for calculating the similarity, but the present invention is not limited to this: all in-vivo images may be set as target images, and a high similarity value ("1" in the case of a normalized cross-correlation value) may be set as the similarity of such in-vivo images.
  • The processing flow is not limited to the order shown in the flowcharts described in Embodiments 1 to 4 and may be changed within a range that remains consistent.
  • The processing algorithms described using the flowcharts in this specification can be written as a program.
  • Such a program may be recorded in a recording unit inside a computer or on a computer-readable recording medium. The program may be recorded in the recording unit or on the recording medium when the computer or the recording medium is shipped as a product, or may be downloaded via a communication network.

Abstract

An image processing device (4) is provided with the following: a region detection unit (451) that, for each image contained in an image group acquired in time series, detects invalid regions other than the valid regions that are useful for observation in the image; a similarity calculation unit (452) that sets, from the images contained in the image group, a plurality of target images for which similarity is to be calculated, and that calculates, for each of the plurality of target images, the similarity between the target image and an adjacent image, i.e., an image from among those contained in the image group that is adjacent to the target image in time series; and an image selection unit (453) that selects a representative image from the plurality of target images on the basis of the similarity of each of the plurality of target images. The similarity calculation unit (452) calculates the similarity between a valid region, from which an invalid region has been removed, within one of the target image and the adjacent image, and the region corresponding to that valid region within the other of the target image and the adjacent image.

Description

Image processing apparatus, image processing method, and image processing program

The present invention relates to an image processing apparatus, an image processing method, and an image processing program.
Conventionally, there is known an image extraction apparatus (image processing apparatus) that executes an image summarization process in which an image group including a plurality of images acquired in time series is acquired, and some of the images are extracted from the image group and summarized into an image group containing fewer images than the original (see, for example, Patent Document 1).
In the image processing apparatus described in Patent Document 1, images at positions where the scene changes are selected from the image group as representative images, and the image group is summarized into a predetermined number of representative images.
The user can then grasp the contents of the entire original image group in a short time by observing the predetermined number of representative images included in the image group after the image summarization process.
JP 2009-5020 A
However, in the image summarization process described in Patent Document 1, because the condition for selecting representative images is a scene change, there is a problem that the selected predetermined number of representative images may include images containing many invalid areas other than the effective areas useful for observation.
For example, when this image summarization process is executed on an in-vivo image group captured by a capsule endoscope that is introduced into a subject and images the inside of the subject, the selected predetermined number of representative images may include in-vivo images containing many invalid areas unnecessary for observation, such as bubbles and residues, rather than effective areas useful for observation of the mucous membrane and the like inside the subject.
The present invention has been made in view of the above, and an object thereof is to provide an image processing apparatus, an image processing method, and an image processing program capable of selecting more images containing many effective regions useful for observation as representative images.
In order to solve the above-described problems and achieve the object, an image processing apparatus according to the present invention includes: an area detection unit that detects, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation in the image; a similarity calculation unit that sets, from the images included in the image group, a plurality of target images for which a similarity is to be calculated and calculates, for each of the plurality of target images, the similarity between the target image and an adjacent image that is adjacent to the target image in time series among the images included in the image group; and an image selection unit that selects a representative image from the plurality of target images based on the similarity of each of the plurality of target images. The similarity calculation unit calculates the similarity between the effective area, excluding the invalid area, in one of the target image and the adjacent image and the area corresponding to that effective area in the other of the target image and the adjacent image.
An image processing apparatus according to the present invention also includes: an area detection unit that detects, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation in the image; a similarity calculation unit that sets, from the images included in the image group, a plurality of target images for which a similarity is to be calculated and calculates, for each of the plurality of target images, the similarity between the target image and an adjacent image that is adjacent to the target image in time series among the images included in the image group; and an image selection unit that selects a representative image from the plurality of target images based on the similarity of each of the plurality of target images. When the target image and the adjacent image are superimposed, the similarity calculation unit calculates the similarity between the regions where the effective area, excluding the invalid area, in the target image and the effective area, excluding the invalid area, in the adjacent image overlap each other.
In the image processing apparatus according to the present invention as set forth in the invention described above, the similarity calculation unit sets, as the target images, those images included in the image group that contain an effective area excluding the invalid area.
In the image processing apparatus according to the present invention as set forth in the invention described above, the image selection unit selects a predetermined number of the representative images from the plurality of target images in ascending order of the similarity.
An image processing method according to the present invention is an image processing method performed by an image processing apparatus, including: an area detection step of detecting, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation in the image; a similarity calculation step of setting, from the images included in the image group, a plurality of target images for which a similarity is to be calculated and calculating, for each of the plurality of target images, the similarity between the target image and an adjacent image that is adjacent to the target image in time series among the images included in the image group; and an image selection step of selecting a representative image from the plurality of target images based on the similarity of each of the plurality of target images. In the similarity calculation step, the similarity is calculated between the effective area, excluding the invalid area, in one of the target image and the adjacent image and the area corresponding to that effective area in the other of the target image and the adjacent image.
An image processing method according to the present invention is also an image processing method performed by an image processing apparatus, including: an area detection step of detecting, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation in the image; a similarity calculation step of setting, from the images included in the image group, a plurality of target images for which a similarity is to be calculated and calculating, for each of the plurality of target images, the similarity between the target image and an adjacent image that is adjacent to the target image in time series among the images included in the image group; and an image selection step of selecting a representative image from the plurality of target images based on the similarity of each of the plurality of target images. In the similarity calculation step, when the target image and the adjacent image are superimposed, the similarity is calculated between the regions where the effective area, excluding the invalid area, in the target image and the effective area, excluding the invalid area, in the adjacent image overlap each other.
An image processing program according to the present invention causes an image processing apparatus to execute the above-described image processing method.
The image processing apparatus according to the first aspect of the present invention sets, from the images included in an image group acquired in time series, a plurality of target images for which a similarity is to be calculated, and, for each of the plurality of target images, calculates the similarity between the effective area, excluding the invalid area, in one of the target image and the adjacent image and the area corresponding to that effective area in the other of the target image and the adjacent image. In other words, the image processing apparatus sets the calculation area for calculating the similarity to the effective area of the target image or the adjacent image, and calculates the similarity between the same calculation areas in the target image and the adjacent image. The image processing apparatus then selects, from the plurality of target images, for example a target image with a comparatively low similarity value as a representative image.
As described above, the image processing apparatus according to the first aspect of the present invention reduces the contribution of the invalid areas to the similarity calculation process, and thereby has the effect that more images containing many effective regions useful for observation can be selected as representative images.
The image processing apparatus according to the second aspect of the present invention sets, from the images included in an image group acquired in time series, a plurality of target images for which a similarity is to be calculated, and, for each of the plurality of target images, calculates the similarity between the regions where the effective areas overlap each other when the target image and the adjacent image are superimposed. The image processing apparatus then selects, from the plurality of target images, for example a target image with a comparatively low similarity value as a representative image.
As described above, in the image processing apparatus according to the second aspect of the present invention, the similarity is calculated between effective areas only, so the contribution of the invalid areas to the similarity calculation process can be eliminated. Therefore, like the image processing apparatus according to the first aspect of the present invention described above, the image processing apparatus according to the second aspect has the effect that more images containing many effective regions useful for observation can be selected as representative images.
Since the image processing method according to the present invention is a method performed by the above-described image processing apparatus, it provides the same effects as the above-described image processing apparatus.
Since the image processing program according to the present invention is a program executed by the above-described image processing apparatus, it provides the same effects as the above-described image processing apparatus.
FIG. 1 is a schematic diagram showing an endoscope system according to Embodiment 1 of the present invention.
FIG. 2 is a block diagram showing the image processing apparatus shown in FIG. 1.
FIG. 3 is a flowchart showing the operation (image processing method) of the image processing apparatus shown in FIG. 2.
FIG. 4 is a diagram for explaining the image processing method shown in FIG. 3.
FIG. 5A is a diagram for explaining step S5 shown in FIG. 3.
FIG. 5B is a diagram for explaining step S5 shown in FIG. 3.
FIG. 6A is a diagram for explaining step S5 according to Embodiment 2 of the present invention.
FIG. 6B is a diagram for explaining step S5 according to Embodiment 2 of the present invention.
FIG. 7A is a diagram for explaining step S5 according to Embodiment 3 of the present invention.
FIG. 7B is a diagram for explaining step S5 according to Embodiment 3 of the present invention.
FIG. 8A is a diagram for explaining step S5 according to Embodiment 4 of the present invention.
FIG. 8B is a diagram for explaining step S5 according to Embodiment 4 of the present invention.
Hereinafter, preferred embodiments of an image processing apparatus, an image processing method, and an image processing program according to the present invention will be described in detail with reference to the drawings. Note that the present invention is not limited to these embodiments.
(Embodiment 1)
[Schematic configuration of endoscope system]
FIG. 1 is a schematic diagram showing an endoscope system according to Embodiment 1 of the present invention.
The endoscope system 1 is a system that acquires in-vivo images of the inside of a subject 100 using a swallowable capsule endoscope 2 and allows a doctor or the like to observe the in-vivo images.
As shown in FIG. 1, the endoscope system 1 includes, in addition to the capsule endoscope 2, a receiving device 3, an image processing apparatus 4, and a portable recording medium 5.
The recording medium 5 is a portable recording medium for transferring data between the receiving device 3 and the image processing apparatus 4, and is detachable from each of the receiving device 3 and the image processing apparatus 4.
The capsule endoscope 2 is a capsule endoscope apparatus formed in a size that allows it to be introduced into an organ of the subject 100. It is introduced into the organ of the subject 100 by oral ingestion or the like, and sequentially captures in-vivo images while moving through the organ by peristalsis or the like. The capsule endoscope 2 then sequentially transmits the image data generated by imaging.
The receiving device 3 includes a plurality of receiving antennas 3a to 3h and receives image data from the capsule endoscope 2 inside the subject 100 via at least one of the plurality of receiving antennas 3a to 3h. The receiving device 3 then stores the received image data in the recording medium 5 inserted into it.
The receiving antennas 3a to 3h may be arranged on the body surface of the subject 100 as shown in FIG. 1, or may be arranged on a jacket worn by the subject 100. The number of receiving antennas provided in the receiving device 3 may be any number equal to or greater than one, and is not limited to eight.
[Configuration of image processing apparatus]
FIG. 2 is a block diagram showing the image processing apparatus 4.
The image processing apparatus 4 is configured as a workstation that acquires the image data captured inside the subject 100 and displays images corresponding to the acquired image data.
As shown in FIG. 2, the image processing apparatus 4 includes a reader/writer 41, a memory unit 42, an input unit 43, a display unit 44, and a control unit 45.
The reader/writer 41 functions as an image acquisition unit that acquires image data to be processed from the outside.
Specifically, when the recording medium 5 is inserted into the reader/writer 41, the reader/writer 41, under the control of the control unit 45, reads the image data stored on the recording medium 5 (an in-vivo image group including a plurality of in-vivo images captured (acquired) in time series by the capsule endoscope 2). The reader/writer 41 then transfers the read in-vivo image group to the control unit 45, and the in-vivo image group transferred to the control unit 45 is stored in the memory unit 42.
The memory unit 42 stores the in-vivo image group transferred from the control unit 45. The memory unit 42 also stores various programs executed by the control unit 45 (including the image processing program) and information necessary for the processing of the control unit 45.
The input unit 43 is configured using a keyboard, a mouse, and the like, and accepts user operations.
The display unit 44 is configured using a liquid crystal display or the like and, under the control of the control unit 45, displays display screens including in-vivo images (for example, a display screen including a predetermined number of representative images selected by the image summarization process described later).
The control unit 45 is configured using a CPU (Central Processing Unit) or the like, reads the programs (including the image processing program) stored in the memory unit 42, and controls the operation of the entire image processing apparatus 4 according to those programs.
In the following, among the functions of the control unit 45, the function of executing the "image summarization process", which is the main part of the present invention, is mainly described.
As illustrated in FIG. 2, the control unit 45 includes an area detection unit 451, a similarity calculation unit 452, and an image selection unit 453.
The area detection unit 451 detects, for each in-vivo image included in the in-vivo image group stored in the memory unit 42, an invalid area other than the effective area useful for observation in the in-vivo image.
Specifically, the area detection unit 451 compares feature values obtainable from the in-vivo image, indicating color information, frequency information, shape information, and the like, with a second threshold value, and detects the invalid area other than the effective area useful for observation based on the comparison result.
Here, the effective area means an area in which the mucous membrane, blood vessels, and blood on the surface of the living body are shown. The invalid area, by contrast, is any area other than the effective area: an area showing residues or bubbles, an area showing the deep part of the lumen (dark area), a halation area caused by specular reflection from the surface of the subject (bright area), an area that has become noise due to a poor communication state between the capsule endoscope 2 and the receiving device 3, and the like.
Various known methods can be employed to detect such invalid areas (for example, JP 2007-313119 A, JP 2011-234931 A, JP 2010-115413 A, and JP 2012-16454 A).
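As a highly simplified illustration of this kind of detection (the cited publications use richer color, frequency, and shape features), dark areas and halation can be flagged by thresholding normalized luminance. The thresholds and names below are invented for the sketch and are not from the patent.

```python
import numpy as np

def detect_invalid_area(gray, dark_thresh=0.1, bright_thresh=0.9):
    # Simplified sketch: flag deep-lumen dark areas and specular halation
    # as invalid by comparing normalized luminance against fixed thresholds.
    # Real detectors also use color/frequency/shape features.
    return (gray < dark_thresh) | (gray > bright_thresh)

# Toy luminance image in [0, 1]: one dark pixel and one halation pixel.
gray = np.array([[0.05, 0.50, 0.95],
                 [0.40, 0.60, 0.50]])
mask = detect_invalid_area(gray)
print(int(mask.sum()))  # → 2 pixels flagged as invalid
```

The resulting boolean mask plays the role of the invalid area; its complement is the effective area used by the similarity calculation unit.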
The similarity calculation unit 452 sets, from the in-vivo images included in the in-vivo image group stored in the memory unit 42, a plurality of target images for which the similarity is to be calculated, and calculates, for each of the plurality of target images, the similarity between the target image and the in-vivo image immediately preceding it in time series (hereinafter referred to as the immediately preceding adjacent image).
Here, when calculating the similarity between the target image and the immediately preceding adjacent image, the similarity calculation unit 452 calculates the similarity between the effective area of the target image, excluding the invalid area detected by the area detection unit 451, and the area of the immediately preceding adjacent image corresponding to that effective area (i.e., having the same positional relationship as the effective area of the target image). In other words, the similarity calculation unit 452 sets the calculation area for calculating the similarity to the effective area of the target image, and calculates the similarity between the same calculation areas in the target image and the immediately preceding adjacent image.
In the first embodiment, the similarity calculation unit 452 calculates a normalized cross-correlation value as the similarity between the calculation areas of the target image and the immediately preceding adjacent image. Various known methods can be employed to calculate the normalized cross-correlation value (for example, International Publication No. 2012/117816).
Note that the similarity between the calculation areas of the target image and the immediately preceding adjacent image is not limited to the normalized cross-correlation value described above; for example, a motion-vector change amount or a pixel-value (luminance value or G-component value) change amount may be adopted as the similarity.
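A masked normalized cross-correlation of this kind can be sketched as follows. This is an illustrative reconstruction under the assumptions above (the calculation area is the target image's effective area, taken at identical positions in both images); the function names are hypothetical, and the zero-variance fallback is a choice made for the sketch.

```python
import numpy as np

def similarity_ncc(target, adjacent, calc_area):
    # Sketch: normalized cross-correlation computed only over the calculation
    # area (a boolean mask), at the same pixel positions in both images.
    a = target[calc_area].astype(float)
    b = adjacent[calc_area].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 1.0

rng = np.random.default_rng(0)
target = rng.random((8, 8))
calc_area = np.ones((8, 8), dtype=bool)
calc_area[:2, :2] = False  # treat the top-left corner as an invalid area
print(round(similarity_ncc(target, target, calc_area), 3))  # identical images → 1.0
```

Because only the pixels inside `calc_area` enter the computation, invalid pixels have no influence on the similarity value.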
The image selection unit 453 selects a predetermined number of representative images from the target images, based on the similarity calculated for each target image by the similarity calculation unit 452.
[Operation of the image processing apparatus]
Next, the operation (image processing method) of the image processing apparatus 4 described above will be described.
FIG. 3 is a flowchart showing the operation (image processing method) of the image processing apparatus 4. FIG. 4 is a diagram for explaining the image processing method shown in FIG. 3.
Specifically, FIG. 4(a) shows a state in which the in-vivo images included in the in-vivo image group to be processed (only in-vivo images F1 to F14 are shown in FIG. 4(a)) are virtually arranged in time-series order. FIGS. 4(b) and 4(c) each show an example of an invalid region in the in-vivo image F12 (hatched in FIG. 4(a)) included in the in-vivo image group. In FIGS. 4(b) and 4(c), the invalid region is shown in white (FIG. 4(c) illustrates a state in which there is no effective region). FIG. 4(d) schematically shows that an in-vivo image set as a target image for similarity calculation is set as a candidate image (a candidate for a representative image) when it satisfies a predetermined condition. FIG. 4(e) schematically shows that an in-vivo image that was set as a target image but did not satisfy the predetermined condition, or that was not set as a target image, is not set as a candidate image. FIG. 4(f) schematically shows that a candidate image is selected as a representative image when it satisfies a predetermined condition.
In the following, it is assumed that the recording medium 5 has been inserted into the reader/writer 41, that the in-vivo image group stored in the recording medium 5 has been taken in via the reader/writer 41, and that the in-vivo image group is already stored in the memory unit 42.
First, the control unit 45 reads out all the in-vivo images included in the in-vivo image group stored in the memory unit 42, one by one, in time-series order (frame-number order) (step S1).
Next, the region detection unit 451 detects the invalid region in the in-vivo image read out in step S1 (step S2: region detection step).
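The criteria used by the region detection unit 451 in step S2 are not detailed in this section. As one hedged possibility, a mask of unobservable pixels could be built from simple luminance thresholds; the thresholds and criteria below are illustrative assumptions only:

```python
import numpy as np

def detect_invalid_region(image: np.ndarray,
                          dark_thresh: float = 10.0,
                          bright_thresh: float = 245.0) -> np.ndarray:
    """Return a boolean mask that is True where a pixel is 'invalid'
    (not useful for observation).

    The criteria used here (too-dark / overexposed luminance) and the
    threshold values are assumptions for illustration; the actual
    detection performed by the region detection unit 451 is not
    specified in this section."""
    luminance = image.astype(np.float64)
    if luminance.ndim == 3:           # RGB input: simple channel average
        luminance = luminance.mean(axis=2)
    return (luminance <= dark_thresh) | (luminance >= bright_thresh)
```

The complement of this mask is the effective region used in the subsequent steps.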
Next, the similarity calculation unit 452 refers to the detection result of step S2 and determines whether the in-vivo image contains an effective region, i.e., whether the entire image was not detected as an invalid region in step S2 (step S3).
When it determines that the in-vivo image contains an effective region (step S3: Yes), the similarity calculation unit 452 sets that in-vivo image as a target image for which the similarity is to be calculated (step S4).
For example, when the in-vivo image read out in step S1 is the in-vivo image F12 (FIG. 4(a)) and, as shown in FIG. 4(b), the in-vivo image F12 contains an effective region, the in-vivo image F12 is set as a target image.
On the other hand, when it is determined that the in-vivo image contains no effective region (step S3: No), the control unit 45 returns to step S1 and executes the above-described processing again for the next in-vivo image (the in-vivo image F13 when step S3 has been executed for the in-vivo image F12).
For example, when the in-vivo image read out in step S1 is the in-vivo image F12 (FIG. 4(a)) and, as shown in FIG. 4(c), the entire in-vivo image F12 is detected as an invalid region, the in-vivo image F12 becomes a non-target image for which no similarity is calculated.
After step S4, the similarity calculation unit 452 calculates the similarity between the in-vivo image (target image) read out in step S1 and the immediately preceding adjacent image in the time series (when the target image is the in-vivo image F12, the immediately preceding adjacent image is the in-vivo image F11) (step S5). The similarity calculation unit 452 then stores the calculated similarity in the memory unit 42 in association with the target image.
Specifically, in step S5, the similarity calculation unit 452 calculates the similarity between the target image and the immediately preceding adjacent image as described below.
FIGS. 5A and 5B are diagrams for explaining step S5.
Specifically, FIG. 5A schematically shows, in the target image, the invalid region detected in step S2 and the effective region excluding that invalid region. FIG. 5B schematically shows the same for the immediately preceding adjacent image. In FIGS. 5A and 5B, the invalid region is shown in white, and the calculation region used for the similarity calculation is indicated by a thick frame.
First, as shown in FIG. 5A, the similarity calculation unit 452 sets the effective region of the target image, excluding the invalid region detected in step S2, as the calculation region.
Then, as shown in FIGS. 5A and 5B, the similarity calculation unit 452 calculates the similarity (normalized cross-correlation value) between the same calculation regions in the target image and the immediately preceding adjacent image.
Steps S4 and S5 described above correspond to the similarity calculation step according to the present invention.
After step S5, the image selection unit 453 determines whether the similarity (normalized cross-correlation value) calculated in step S5 is less than a first threshold (step S6). In other words, in step S6, the image selection unit 453 determines whether the scene has changed in the transition from the immediately preceding adjacent image to the target image.
When it determines that the similarity between the target image and the immediately preceding adjacent image is less than the first threshold (step S6: Yes), the image selection unit 453 stores, in association with the target image stored in the memory unit 42, a flag indicating that the image is a candidate image (a candidate for a representative image), thereby setting the target image as a candidate image (step S7).
For example, when the in-vivo image read out in step S1 is the in-vivo image F12 (FIG. 4(a)) and the similarity between the in-vivo image (target image) F12 and the in-vivo image (immediately preceding adjacent image) F11 is less than the first threshold (i.e., the similarity is low), the in-vivo image F12 is set as a candidate image, as shown in FIG. 4(d).
On the other hand, when it is determined that the similarity between the target image and the immediately preceding adjacent image is equal to or greater than the first threshold (step S6: No), the control unit 45 returns to step S1 and executes the above-described processing again for the next in-vivo image (the in-vivo image F13 when step S6 has been executed with the in-vivo image F12 as the target image).
For example, when the in-vivo image read out in step S1 is the in-vivo image F12 (FIG. 4(a)) and the similarity between the in-vivo image (target image) F12 and the in-vivo image (immediately preceding adjacent image) F11 is equal to or greater than the first threshold (i.e., the similarity is high), the in-vivo image F12 becomes a non-candidate image that is not a candidate for a representative image, as shown in FIG. 4(e). An in-vivo image whose entire area is detected as an invalid region (FIG. 4(c)) likewise becomes a non-candidate image.
After step S7, the control unit 45 determines whether steps S1 to S7 have been performed for all the in-vivo images included in the in-vivo image group stored in the memory unit 42 (step S8).
When it determines that they have not been performed for all the in-vivo images (step S8: No), the control unit 45 returns to step S1 and executes the above-described processing for the remaining in-vivo images.
On the other hand, when it is determined that they have been performed for all the in-vivo images (step S8: Yes), the image selection unit 453 selects a predetermined number (for example, 2000) of representative images from the candidate images included in the in-vivo image group stored in the memory unit 42, in ascending order of the similarity associated with each candidate image (step S9: image selection step).
For example, when 2000 or more candidate images exist in the in-vivo image group stored in the memory unit 42, candidate images are selected as representative images in order from the lowest similarity, as shown in FIG. 4(f).
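Steps S6 to S9 amount to thresholding followed by a sort. A minimal sketch, assuming each image's similarity to its immediately preceding adjacent image has already been computed (with `None` standing in for non-target images; the data representation is an assumption of this sketch):

```python
from typing import List, Optional

def select_representatives(similarities: List[Optional[float]],
                           first_threshold: float,
                           max_count: int) -> List[int]:
    """Sketch of steps S6-S9.  `similarities[i]` is the similarity of
    image i to its immediately preceding adjacent image, or None for a
    non-target image (e.g. an entirely invalid one).  Images whose
    similarity is below the first threshold become candidate images
    (steps S6-S7); up to `max_count` representatives are then taken in
    ascending order of similarity (step S9)."""
    candidates = sorted((s, i) for i, s in enumerate(similarities)
                        if s is not None and s < first_threshold)
    return [i for _, i in candidates[:max_count]]
```

Selecting in ascending order of similarity favors frames where the scene has changed the most relative to the preceding frame.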
The image processing apparatus 4 according to the first embodiment described above sets the effective region of the target image as the calculation region for the similarity, and calculates the similarity between the same calculation regions in the target image and the immediately preceding adjacent image. The image processing apparatus 4 then selects, from the plurality of target images, target images with relatively low similarity values as representative images.
As described above, the image processing apparatus 4 according to the first embodiment reduces the contribution of the invalid region to the similarity calculation processing, and therefore has the effect that more images containing a large effective region useful for observation can be selected as representative images.
Furthermore, the image processing apparatus 4 according to the first embodiment does not set an in-vivo image whose entire area is detected as an invalid region as a target image for similarity calculation.
By excluding such entirely invalid in-vivo images from the target images in advance, these images are never selected as representative images, and the processing load of the similarity calculation is also reduced.
In addition, the image processing apparatus 4 according to the first embodiment narrows down the number of target images by comparing the similarity with the first threshold (setting them as candidate images), and selects the predetermined number of representative images from the candidate images in ascending order of similarity.
Therefore, the planned number of representative images can be selected by simple processing.
(Embodiment 2)
Next, a second embodiment of the present invention will be described.
In the following description, the same reference numerals are assigned to the same configurations and steps as in the first embodiment described above, and their detailed description is omitted or simplified.
In the first embodiment described above, the calculation region for the similarity was the effective region of the target image.
In contrast, in the second embodiment, the calculation region for the similarity is the effective region of the immediately preceding adjacent image.
The configuration of the image processing apparatus according to the second embodiment is the same as that of the image processing apparatus 4 described in the first embodiment. The image processing method according to the second embodiment is also the same as that described in the first embodiment, except for step S5.
Only step S5 according to the second embodiment is therefore described below.
[Step S5]
FIGS. 6A and 6B are diagrams for explaining step S5 according to the second embodiment of the present invention.
Specifically, FIG. 6A corresponds to FIG. 5A and schematically shows the invalid region and the effective region in the target image. FIG. 6B corresponds to FIG. 5B and schematically shows the invalid region and the effective region in the immediately preceding adjacent image.
First, as shown in FIG. 6B, the similarity calculation unit 452 sets the effective region of the immediately preceding adjacent image, excluding the invalid region detected in step S2, as the calculation region.
Then, as shown in FIGS. 6A and 6B, the similarity calculation unit 452 calculates the similarity (normalized cross-correlation value) between the same calculation regions in the target image and the immediately preceding adjacent image.
Even when the effective region of the immediately preceding adjacent image is used as the calculation region, as in the second embodiment described above, the same effects as in the first embodiment are obtained.
(Embodiment 3)
Next, a third embodiment of the present invention will be described.
In the following description, the same reference numerals are assigned to the same configurations and steps as in the first embodiment described above, and their detailed description is omitted or simplified.
In the first embodiment described above, the similarity between the target image and the immediately preceding adjacent image was calculated.
In contrast, in the third embodiment, the similarity between the target image and the in-vivo image immediately following it in the time series (hereinafter, the immediately following adjacent image) is calculated.
The configuration of the image processing apparatus according to the third embodiment is the same as that of the image processing apparatus 4 described in the first embodiment. The image processing method according to the third embodiment is also the same as that described in the first embodiment, except for step S5.
Only step S5 according to the third embodiment is therefore described below.
[Step S5]
FIGS. 7A and 7B are diagrams for explaining step S5 according to the third embodiment of the present invention.
Specifically, FIG. 7A corresponds to FIG. 5A and schematically shows the invalid region and the effective region in the target image. FIG. 7B schematically shows, in the immediately following adjacent image, the invalid region detected in step S2 and the effective region excluding that invalid region. In FIG. 7B, as in FIG. 7A, the invalid region is shown in white, and the calculation region used for the similarity calculation is indicated by a thick frame.
First, as shown in FIG. 7B, the similarity calculation unit 452 sets the effective region of the immediately following adjacent image, excluding the invalid region detected in step S2, as the calculation region.
Then, as shown in FIGS. 7A and 7B, the similarity calculation unit 452 calculates the similarity (normalized cross-correlation value) between the same calculation regions in the target image and the immediately following adjacent image.
Even when, as in the third embodiment described above, the effective region of the immediately following adjacent image is used as the calculation region and the similarity of the target image is calculated with respect to that immediately following adjacent image, the same effects as in the first embodiment are obtained.
(Modification of Embodiment 3)
In the third embodiment described above, the effective region of the immediately following adjacent image was used as the calculation region. However, the present invention is not limited to this; as in the first embodiment, the effective region of the target image may be used as the calculation region.
(Embodiment 4)
Next, a fourth embodiment of the present invention will be described.
In the following description, the same reference numerals are assigned to the same configurations and steps as in the first embodiment described above, and their detailed description is omitted or simplified.
In the first embodiment described above, the calculation region for the similarity was the effective region of the target image.
In contrast, in the fourth embodiment, when the target image and the immediately preceding adjacent image are superimposed, the region where the effective region of the target image and the effective region of the immediately preceding adjacent image overlap is used as the calculation region for the similarity.
The configuration of the image processing apparatus according to the fourth embodiment is the same as that of the image processing apparatus 4 described in the first embodiment. The image processing method according to the fourth embodiment is also the same as that described in the first embodiment, except for step S5.
Only step S5 according to the fourth embodiment is therefore described below.
[Step S5]
FIGS. 8A and 8B are diagrams for explaining step S5 according to the fourth embodiment of the present invention.
Specifically, FIG. 8A corresponds to FIG. 5A and schematically shows the invalid region and the effective region in the target image. FIG. 8B corresponds to FIG. 5B and schematically shows the invalid region and the effective region in the immediately preceding adjacent image.
First, as shown in FIGS. 8A and 8B, when the target image and the immediately preceding adjacent image are superimposed, the similarity calculation unit 452 sets as the calculation region the region where the effective region of the target image (excluding the invalid region detected in step S2) and the effective region of the immediately preceding adjacent image (excluding the invalid region detected in step S2) overlap. In other words, the similarity calculation unit 452 sets as the calculation region the portion of the two effective regions that occupies the same position within each of the target image and the immediately preceding adjacent image.
Then, as shown in FIGS. 8A and 8B, the similarity calculation unit 452 calculates the similarity (normalized cross-correlation value) between the same calculation regions in the target image and the immediately preceding adjacent image.
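This calculation region is simply the intersection of the two effective-region masks. A minimal sketch, assuming same-sized images and boolean masks with True marking effective pixels (the behavior when the overlap is empty is not specified in this section and is an assumption here):

```python
import numpy as np
from typing import Optional

def ncc_over_overlap(target: np.ndarray, adjacent: np.ndarray,
                     valid_target: np.ndarray,
                     valid_adjacent: np.ndarray) -> Optional[float]:
    """Similarity per Embodiment 4: the calculation region is the set of
    pixels that are effective in BOTH images when they are superimposed."""
    region = valid_target & valid_adjacent   # overlap of effective regions
    if not region.any():
        return None   # assumption: no overlap -> no similarity value
    a = target[region].astype(np.float64)
    b = adjacent[region].astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    # Treating a flat (zero-variance) overlap as identical is also an
    # assumption of this sketch.
    return 1.0 if denom == 0.0 else float((a * b).sum() / denom)
```

Because both masks are applied, no invalid pixel from either image can enter the correlation.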
The fourth embodiment described above provides the following effect in addition to the same effects as the first embodiment.
In the fourth embodiment, the calculation region for the similarity is the region where the effective regions of the target image and the immediately preceding adjacent image overlap when the two images are superimposed, and the similarity is calculated between the same calculation regions in the two images. Then, from the plurality of target images, target images with relatively low similarity values are selected as representative images.
As described above, according to the fourth embodiment, the similarity is calculated between effective regions only, so the contribution of the invalid regions to the similarity calculation processing can be eliminated entirely. Therefore, as in the embodiments described above, more images containing a large effective region useful for observation can be selected as representative images.
(Modification of Embodiment 4)
In the fourth embodiment described above, the similarity between the target image and the immediately preceding adjacent image was calculated. However, the present invention is not limited to this; as in the third embodiment, the similarity between the target image and the immediately following adjacent image may be calculated.
(Other embodiments)
The embodiments for carrying out the present invention have been described above, but the present invention should not be limited only to the first to fourth embodiments described above.
In the first to fourth embodiments described above, the image summarization processing was performed on a group of in-vivo images captured by the capsule endoscope 2. However, the present invention is not limited to this, and the image summarization processing may be performed on any other image group acquired in time series.
In the first to fourth embodiments described above, the image processing apparatus 4 acquired the in-vivo image group captured in time series by the capsule endoscope 2 using the recording medium 5 and the reader/writer 41, but the present invention is not limited to this.
For example, the in-vivo image group may be stored in advance on a separately installed server, and the image processing apparatus 4 may be provided with a communication unit that communicates with the server, so that the image processing apparatus 4 acquires the in-vivo image group by communicating with the server via the communication unit.
In this case, the communication unit functions as an image acquisition unit that acquires image data to be processed from the outside.
In the first to fourth embodiments described above, the similarity was compared with the first threshold, and candidate images were set from the target images based on the comparison result. However, the present invention is not limited to this; the predetermined number of representative images may be selected from all the target images in ascending order of similarity. That is, steps S6 and S7 may be omitted.
In the first to fourth embodiments described above, an in-vivo image whose entire area is detected as an invalid region was excluded from the images for which the similarity is calculated. However, the present invention is not limited to this; such an in-vivo image may also be set as a target image (i.e., all in-vivo images may be set as target images), and a value indicating high similarity ("1" in the case of a normalized cross-correlation value) may be set as the similarity of that image.
The processing flow is not limited to the order of processing in the flowcharts described in the first to fourth embodiments, and may be changed as long as no contradiction arises.
Furthermore, the processing algorithms described in this specification with reference to the flowcharts can be written as a program. Such a program may be recorded in a recording unit inside a computer or on a computer-readable recording medium. The program may be recorded in the recording unit or on the recording medium when the computer or the recording medium is shipped as a product, or may be recorded by downloading via a communication network.
1 Endoscope system
2 Capsule endoscope
3 Receiving device
3a-3h Receiving antennas
4 Image processing apparatus
5 Recording medium
41 Reader/writer
42 Memory unit
43 Input unit
44 Display unit
45 Control unit
100 Subject
451 Region detection unit
452 Similarity calculation unit
453 Image selection unit
F1-F14 In-vivo images

Claims (7)

1. An image processing apparatus comprising:
a region detection unit that detects, for each image included in an image group acquired in time series, an invalid region in the image other than an effective region useful for observation;
a similarity calculation unit that sets, from the images included in the image group, a plurality of target images for which a similarity is to be calculated and, for each of the plurality of target images, calculates a similarity between the target image and an adjacent image, among the images included in the image group, that is adjacent to the target image in the time series; and
an image selection unit that selects a representative image from the plurality of target images based on the similarity calculated for each of the plurality of target images,
wherein the similarity calculation unit calculates the similarity between the effective region, excluding the invalid region, in one of the target image and the adjacent image, and a region corresponding to that effective region in the other of the target image and the adjacent image.
  2.  An image processing apparatus comprising:
    an area detection unit that detects, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation within the image;
    a similarity calculation unit that sets, from the images included in the image group, a plurality of target images for which a similarity is to be calculated, and calculates, for each of the plurality of target images, the similarity between the target image and an adjacent image, the adjacent image being an image, among the images included in the image group, that is adjacent to the target image in the time series; and
    an image selection unit that selects a representative image from the plurality of target images based on the similarity of each of the plurality of target images,
    wherein, when the target image and the adjacent image are superimposed, the similarity calculation unit calculates the similarity between areas where the effective area excluding the invalid area in the target image and the effective area excluding the invalid area in the adjacent image overlap each other.
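Claim 2 differs from claim 1 only in restricting the comparison to the intersection of both images' effective areas when the images are superimposed. A minimal sketch, again with an assumed inverse mean-absolute-difference measure and illustrative names (the patent fixes neither):

```python
import numpy as np

def similarity_overlap(target, adjacent, target_mask, adjacent_mask):
    """Illustrative similarity over the region where the effective areas
    of both images overlap (claim 2 style).

    target_mask, adjacent_mask: boolean arrays, True inside each image's
    effective area (invalid areas already excluded).
    """
    overlap = target_mask & adjacent_mask  # both effective areas overlap here
    a = target[overlap].astype(float)
    b = adjacent[overlap].astype(float)
    mad = np.mean(np.abs(a - b))
    return 1.0 / (1.0 + mad)
```

Using the intersection means a pixel contributes only when it is valid in both frames, which avoids comparing observable tissue in one frame against an invalid region in the other.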
  3.  The image processing apparatus according to claim 1 or 2, wherein the similarity calculation unit sets, as the target images, images including the effective area excluding the invalid area among the images included in the image group.
  4.  The image processing apparatus according to any one of claims 1 to 3, wherein the image selection unit selects a predetermined number of the representative images from the plurality of target images in order from the lowest similarity.
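The selection rule of claim 4 (pick the images least similar to their time-series neighbours, since low similarity suggests a scene change) can be sketched as follows; the function name and list-based interface are assumptions for illustration:

```python
def select_representatives(similarities, k):
    """Return the indices of the k target images with the lowest
    similarity to their adjacent images, in ascending similarity order
    (claim 4 style selection)."""
    order = sorted(range(len(similarities)), key=lambda i: similarities[i])
    return order[:k]
```

For example, with per-target similarities `[0.9, 0.2, 0.5, 0.8]` and `k = 2`, the targets at indices 1 and 2 would be selected as representatives.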
  5.  An image processing method performed by an image processing apparatus, the method comprising:
    an area detection step of detecting, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation within the image;
    a similarity calculation step of setting, from the images included in the image group, a plurality of target images for which a similarity is to be calculated, and calculating, for each of the plurality of target images, the similarity between the target image and an adjacent image, the adjacent image being an image, among the images included in the image group, that is adjacent to the target image in the time series; and
    an image selection step of selecting a representative image from the plurality of target images based on the similarity of each of the plurality of target images,
    wherein, in the similarity calculation step, the similarity is calculated between the effective area excluding the invalid area in one of the target image and the adjacent image and an area corresponding to the effective area in the other of the target image and the adjacent image.
  6.  An image processing method performed by an image processing apparatus, the method comprising:
    an area detection step of detecting, for each image included in an image group acquired in time series, an invalid area other than an effective area useful for observation within the image;
    a similarity calculation step of setting, from the images included in the image group, a plurality of target images for which a similarity is to be calculated, and calculating, for each of the plurality of target images, the similarity between the target image and an adjacent image, the adjacent image being an image, among the images included in the image group, that is adjacent to the target image in the time series; and
    an image selection step of selecting a representative image from the plurality of target images based on the similarity of each of the plurality of target images,
    wherein, in the similarity calculation step, when the target image and the adjacent image are superimposed, the similarity is calculated between areas where the effective area excluding the invalid area in the target image and the effective area excluding the invalid area in the adjacent image overlap each other.
  7.  An image processing program that causes an image processing apparatus to execute the image processing method according to claim 5 or 6.
PCT/JP2015/077207 2014-10-10 2015-09-25 Image processing device, image processing method, and image processing program WO2016056408A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016504817A JPWO2016056408A1 (en) 2014-10-10 2015-09-25 Image processing apparatus, image processing method, and image processing program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-209343 2014-10-10
JP2014209343 2014-10-10

Publications (1)

Publication Number Publication Date
WO2016056408A1 true WO2016056408A1 (en) 2016-04-14

Family

ID=55653025

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/077207 WO2016056408A1 (en) 2014-10-10 2015-09-25 Image processing device, image processing method, and image processing program

Country Status (2)

Country Link
JP (1) JPWO2016056408A1 (en)
WO (1) WO2016056408A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019187206A1 (en) * 2018-03-27 2019-10-03 オリンパス株式会社 Image processing device, capsule-type endoscope system, operation method of image processing device, and operation program of image processing device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008155974A1 (en) * 2007-06-20 2008-12-24 Olympus Corporation Image extraction apparatus, image extraction program, and image extraction method
WO2014050638A1 (en) * 2012-09-27 2014-04-03 オリンパス株式会社 Image processing device, program, and image processing method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011175599A (en) * 2010-02-25 2011-09-08 Canon Inc Image processor, and processing method and program thereof
EP2823754A4 (en) * 2012-03-07 2015-12-30 Olympus Corp Image processing device, program, and image processing method
CN104203065B (en) * 2012-03-08 2017-04-12 奥林巴斯株式会社 Image processing device and image processing method
EP2839770A4 (en) * 2012-04-18 2015-12-30 Olympus Corp Image processing device, program, and image processing method



Also Published As

Publication number Publication date
JPWO2016056408A1 (en) 2017-04-27

Similar Documents

Publication Publication Date Title
JP4418400B2 (en) Image display device
CN110049709B (en) Image processing apparatus
US8811698B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
US8837821B2 (en) Image processing apparatus, image processing method, and computer readable recording medium
KR102344585B1 (en) Polyp diagnostic method, device and computer program from endoscopy image using deep learning
EP2305091A1 (en) Image processing apparatus, image processing program, and image processing method
JP5526044B2 (en) Image processing apparatus, image processing method, and image processing program
JP5085370B2 (en) Image processing apparatus and image processing program
US8457376B2 (en) Image processing apparatus, image processing method, and computer-readable recording medium
JP6807869B2 (en) Image processing equipment, image processing methods and programs
JP6956853B2 (en) Diagnostic support device, diagnostic support program, and diagnostic support method
JP2011024628A (en) Image processor, image processing program, and image processing method
JP2016137007A (en) Image display device and image display method
WO2016056408A1 (en) Image processing device, image processing method, and image processing program
JP2010099139A (en) Image display device, image display method, and image display program
JP2007075157A (en) Image display device
JP7265805B2 (en) Image analysis method, image analysis device, image analysis system, control program, recording medium
JP5573674B2 (en) Medical image processing apparatus and program
JP2013075244A (en) Image display device, image display method, and image display program
JP5937286B1 (en) Image processing apparatus, image processing method, and image processing program
WO2024024022A1 (en) Endoscopic examination assistance device, endoscopic examination assistance method, and recording medium
JP7315033B2 (en) Treatment support device, treatment support method, and program
JP5343973B2 (en) Medical image processing apparatus and program
WO2023187886A1 (en) Image processing device, image processing method, and storage medium
JP2010075334A (en) Medical image processor and program

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2016504817

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15848700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15848700

Country of ref document: EP

Kind code of ref document: A1