WO2018186279A1 - Image processing device, image processing program, and recording medium - Google Patents

Image processing device, image processing program, and recording medium

Info

Publication number
WO2018186279A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
attention
image processing
complementary
unit
Prior art date
Application number
PCT/JP2018/013260
Other languages
English (en)
Japanese (ja)
Inventor
恭平 池田
山本 智幸
Original Assignee
シャープ株式会社
Priority date
Filing date
Publication date
Application filed by シャープ株式会社
Publication of WO2018186279A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals

Definitions

  • Patent Document 1: Japanese Patent Laid-Open No. 2006-301932 (published November 7, 2006)
  • Patent Document 2: Japanese Unexamined Patent Publication No. 2015-135540 (published May 6, 2015)
  • A non-attention object is an object that hides the attention area and does not need to be observed. Which objects are treated as non-attention objects can vary with the situation. For example, consider a case in which a surgical operation is imaged using a camera placed behind the surgeon. If the viewer wants to observe the surgical field from the surgeon's viewpoint, the surgical field is the attention area and the surgeon is a non-attention object. On the other hand, if the viewer wants to observe the surgeon's movements from an assistant's perspective, the surgeon becomes part of the attention area rather than a non-attention object. There may also be objects (for example, the cameras used for imaging) that should always be treated as non-attention objects regardless of the situation.
  • the image to be generated can change depending on which object in the image is set as the non-target object.
  • Regarding image generation processing: for example, if an image in which the non-attention object is made transparent is generated from scratch every time a new non-attention object is selected, the generation takes time and the display of the image is delayed.
  • Conversely, if an image in which each non-attention object is made transparent is generated in advance and stored for every one of the plurality of non-attention objects and for all combinations of them, the amount of image data becomes enormous. In addition, the processing settings (for example, the transparency of the non-attention object) cannot be changed later.
  • An object of one aspect of the present invention is to realize a technique that enables high-speed image generation while suppressing the increase in the amount of data to be stored in a process for generating an image in which a non-attention object is made transparent.
  • In order to achieve this, an image processing apparatus according to one aspect of the present invention includes: a non-attention object selection unit that refers to attention image data including a complementary difference image generated in advance and determines whether or not an object in a reference image is a non-attention object; a complementary difference image processing unit that performs image processing on the complementary difference image included in the attention image data based on display conditions; and an attention image generation unit that combines the complementary difference image and the reference image to obtain an attention image in which the transparency of the non-attention object in the reference image is adjusted.
  • Another aspect of the present invention is an image processing system including a first image processing device and a second image processing device. The first image processing device includes an attention image data generation unit that generates, for each of a plurality of reference images, a complementary difference image used to complement a non-attention object included in that reference image, and that generates attention image data including the generated complementary difference images. The second image processing device includes: a non-attention object selection unit that refers to the attention image data generated by the attention image data generation unit and selects whether or not an object in the reference image is a non-attention object; a complementary difference image processing unit that performs image processing on the complementary difference image included in the attention image data based on display conditions; and an attention image generation unit that combines the complementary difference image and the reference image to obtain an attention image in which the transparency of the non-attention object in the reference image is adjusted.
  • FIG. 1 is a schematic block diagram of an image processing apparatus according to Embodiment 1 of the present invention. Another drawing is a schematic block diagram of the attention image data generation unit of the image processing apparatus according to Embodiment 1 of the present invention.
  • FIG. 6 illustrates the capturing of an input image according to one embodiment of the present invention. Further drawings explain the complementary difference image according to one aspect of the present invention.
  • Another drawing is a schematic block diagram of the attention image synthesis unit of the image processing apparatus according to Embodiment 1 of the present invention, and another explains selection of a non-attention object by the non-attention object selection unit of the image processing apparatus according to Embodiment 1 of the present invention.
  • Embodiment 1: An embodiment of the present invention will be described below with reference to the drawings.
  • the image processing system 1 includes an attention image data generation unit 30, an attention image synthesis unit 50, and a storage unit 20.
  • the attention image data generation unit 30 generates attention image data.
  • the attention image data includes a reference image described later, accompanying information of the reference image, a complementary difference image corresponding to the reference image, accompanying information of the complementary difference image, and three-dimensional information of the reference image.
  • the attention image data generation unit 30 outputs the generated attention image data to the storage unit 20.
  • the storage unit 20 stores the acquired attention image data.
  • the attention image synthesis unit 50 synthesizes the attention image using the attention image data acquired from the storage unit 20.
  • The attention image data generation unit 30, the attention image synthesis unit 50, and the storage unit 20 may be realized within a single integrated image processing apparatus, or may be realized in separate image processing apparatuses.
  • For example, a first image processing device including the attention image data generation unit 30 and the storage unit 20 may be disposed at a transmitting station, and a second image processing device including the attention image synthesis unit 50 may be disposed in the user's home.
  • In this case, each image processing apparatus includes a data transmission/reception unit, and the image processing apparatuses transmit and receive image data to and from each other via these data transmission/reception units. The same applies to each embodiment described later.
  • the attention image data generation unit 30 includes an acquisition unit 32, a reference image selection unit 34, an attention region complement data generation unit 36, a three-dimensional information acquisition unit 38, and a multiplexing unit 40.
  • the acquisition unit 32 acquires an input image group including a plurality of input images.
  • the input image is an image obtained by capturing an image of a certain scene or imaging target from various positions and angles.
  • the input image is not necessarily limited to an image captured by a general camera.
  • an image captured by a 360 degree camera may be included.
  • an image captured by an actual camera is not necessarily used as the input image.
  • the input image may be a virtual camera image.
  • the virtual camera image is, for example, an image synthesized from an image captured by an actual camera.
  • the virtual camera image may be an image drawn by computer graphics.
  • the acquisition unit 32 acquires camera parameters corresponding to each of the plurality of input images.
  • the camera parameters include at least information indicating the direction of the camera, information indicating the position of the camera, information indicating the angle of view of the camera, and information indicating the resolution of the image.
  • The acquired camera parameters are attached to the corresponding input image. In other words, every input image always has corresponding camera parameters.
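As a concrete illustration of how an input image and its attached camera parameters might be represented, the following is a minimal sketch in Python/NumPy. The class and field names (CameraParams, InputImage, and so on) are illustrative assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraParams:
    """Camera parameters attached to every input image (illustrative field names)."""
    direction: np.ndarray      # unit vector indicating the direction of the camera
    position: np.ndarray       # 3-D position of the camera (the reference viewpoint
                               # position when the image is selected as a reference image)
    field_of_view_deg: float   # angle of view of the camera
    resolution: tuple          # (width, height) of the image

@dataclass
class InputImage:
    pixels: np.ndarray         # H x W x 3 color image
    params: CameraParams       # every input image always carries its camera parameters

# Example: an input image captured by a camera placed two units behind the attention area
img = InputImage(
    pixels=np.zeros((480, 640, 3), dtype=np.uint8),
    params=CameraParams(direction=np.array([0.0, 0.0, 1.0]),
                        position=np.array([0.0, 0.0, -2.0]),
                        field_of_view_deg=60.0,
                        resolution=(640, 480)),
)
```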
  • FIG. 3 shows an example of a situation where an input image is captured.
  • the attention area 500 is captured by the actual camera 100 and the actual camera 200.
  • the virtual camera 300 image is synthesized from the captured image of the camera 100 and the captured image of the camera 200.
  • the input image group can include an image of the attention area 500 captured by the camera 100, an image of the attention area 500 captured by the camera 200, and an image of the virtual camera 300.
  • the images of the camera 100, the camera 200, and the camera 300 include the worker 400 as an object that can be a non-target object.
  • the reference image selection unit 34 selects an image to be subjected to a complementing process related to a non-target object from the input image group acquired by the acquisition unit 32 and sets it as a reference image.
  • The reference image selection unit 34 can select a plurality of reference images. The reference images may be selected, for example, by the user, or all input images included in the input image group may be selected as reference images.
  • Alternatively, the attention area complement data generation unit 36 described later may determine whether attention area complement data can be generated, and an input image for which attention area complement data can be generated may be selected as a reference image.
  • the reference image selection unit 34 outputs the reference image to the attention area complement data generation unit 36 and the multiplexing unit 40.
  • the reference image selection unit 34 attaches camera parameters corresponding to the input image selected as the reference image to the reference image.
  • the camera position included in the camera parameters is referred to as a reference viewpoint position.
  • the reference viewpoint position is the position of the camera at the time of imaging if the reference image is an image captured by an actual camera.
  • the reference viewpoint position is a virtual viewpoint position if the reference image is an image synthesized from an image captured by an actual camera.
  • the attention area complement data generation unit 36 generates attention area complement data.
  • the attention area complement data is data used to complement a non-attention object shown in the reference image.
  • the term “complement” refers to processing for generating an image in which a non-target object is made transparent or translucent. In other words, complement refers to a process of generating an image in which the transparency of a non-target object is adjusted.
  • When there are a plurality of reference images, the attention area complement data generation unit 36 generates corresponding attention area complement data for each reference image. In the present embodiment, a complementary difference image is created as the attention area complement data.
  • The attention area complement data generation unit 36 first sets non-attention object candidates, that is, objects that can be selected as non-attention objects by the non-attention object selection unit 54.
  • The attention area complement data generation unit 36 then generates a complementary image, that is, an image in which the non-attention object candidate is made transparent; in other words, an image in which the transparency of the non-attention object candidate is adjusted.
  • the method for synthesizing the complementary image is not limited.
  • a complementary image may be generated by applying a viewpoint conversion process to an input image captured from a position different from the reference viewpoint position.
  • the selection of a non-target object candidate may be performed by the user's selection, or all objects that can generate a complementary image may be selected as a non-target object candidate. Further, the correlation between a plurality of input images may be calculated, and an object existing in a region having a low correlation may be selected as a non-target object candidate.
  • the attention area complementary data generation unit 36 generates a complementary difference image by taking the difference between the complementary image and the reference image.
  • the attention area complement data generation unit 36 generates a complementary difference image by extracting an area having a pixel value equal to or less than a threshold value from the difference image between the reference image and the corresponding complement image.
  • the threshold value may be set arbitrarily.
  • The attention area complement data generation unit 36 extracts the complementary difference image from the complementary image so as to surround the region where the non-attention object candidate exists.
  • a complementary difference image is generated separately for each non-target object candidate.
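To make the generation step concrete, the following is a minimal sketch (Python/NumPy, illustrative function name) of one way a complementary difference image could be produced: take the per-pixel difference between the complementary image and the reference image, keep the region where the two images differ noticeably (the region occluded by the non-attention object candidate), and cut out a patch surrounding that region together with a per-pixel alpha channel. The thresholding and bounding-box details are assumptions for illustration, not the patent's exact procedure.

```python
import numpy as np

def make_complementary_difference_image(reference: np.ndarray,
                                        complement: np.ndarray,
                                        threshold: int = 10):
    """Sketch of complementary difference image generation.

    reference, complement: H x W x 3 uint8 images from the same reference viewpoint;
    the complement image shows the scene with the non-attention object candidate
    made transparent.  Returns an RGBA patch cut out of the complement image and
    the (x, y) complementary image coordinates of the patch on the reference image.
    """
    diff = np.abs(complement.astype(np.int16) - reference.astype(np.int16)).max(axis=2)
    mask = diff > threshold                       # pixels occluded by the candidate
    if not mask.any():
        return None, None                         # nothing to complement
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = complement[y0:y1, x0:x1]              # content hidden behind the candidate
    alpha = (mask[y0:y1, x0:x1] * 255).astype(np.uint8)  # per-pixel alpha channel
    return np.dstack([patch, alpha]), (x0, y0)
```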
  • Hereinafter, both the “non-attention object candidate” and the “non-attention object” may be simply described as the “non-attention object”.
  • (a) of FIG. 4 is an example of a reference image and is referred to as “reference image A”.
  • FIG. 4C is an example of a reference image and is represented as “reference image B”.
  • The reference image A and the reference image B are images obtained by capturing the hatched area from different reference viewpoint positions.
  • The reference image A includes a white cloud, which is a non-attention object, in an area whose start point is the coordinates (x1, y1). Here, the upper-left point of such an area is referred to as the “start point”.
  • An example of the complementary difference image corresponding to the reference image A is shown in (b) of FIG. 4 and is referred to as “complementary difference image 1”.
  • The complementary difference image 1 shown in (b) of FIG. 4 includes the above-described part of the gray region that is shielded by the white cloud in the reference image A. The gray region drawn in (b) of FIG. 4 is an image of that part as it appears when it is not shielded by the white cloud, which is the non-attention object.
  • The complementary difference image 1 includes information indicating that the corresponding reference image is “reference image A”, complementary image coordinate information indicating that the corresponding position on the reference image A is (x1, y1), and information indicating that the name of the non-attention object is “white cloud”.
  • The reference image B shown in (c) of FIG. 4 includes a gray cloud, which is a non-attention object, in a region starting from the coordinates (x2, y2).
  • An example of the complementary difference image corresponding to the reference image B is shown in FIG. 4D and is represented as “complementary difference image 2”.
  • In the reference image B, the lower-left part of the hatched area is shielded by the gray cloud, which is a non-attention object.
  • The complementary difference image 2 shown in (d) of FIG. 4 includes the part of the hatched area shielded by the gray cloud in (c) of FIG. 4.
  • The hatched area drawn in (d) of FIG. 4 is an image of that part as it appears when it is not shielded by the gray cloud, which is the non-attention object.
  • The complementary difference image 2 includes information indicating that the corresponding reference image is “reference image B”, complementary image coordinate information indicating that the corresponding position on the reference image B is (x2, y2), and information indicating that the name of the non-attention object is “gray cloud”.
  • the complementary difference image has alpha channel information composed of alpha values representing the opacity of each pixel.
  • the complementary difference image is smaller than the reference image. Therefore, even when there are a plurality of complementary difference images, the generation process can be performed with a relatively light process, and the amount of data to be stored can be suppressed.
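The complementary difference image and the information attached to it (corresponding reference image, complementary image coordinate information, and the name of the non-attention object) could be bundled as in the minimal sketch below; the class and field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ComplementaryDifferenceImage:
    """A complementary difference image together with its accompanying information."""
    pixels: np.ndarray         # small RGBA patch; the alpha channel holds per-pixel opacity
    reference_image_id: str    # e.g. "reference image A"
    coordinates: tuple         # complementary image coordinate information (x, y)
    non_attention_object: str  # e.g. "white cloud"

# Example corresponding to complementary difference image 1 in the text
diff1 = ComplementaryDifferenceImage(
    pixels=np.zeros((32, 48, 4), dtype=np.uint8),
    reference_image_id="reference image A",
    coordinates=(120, 80),                  # stands in for (x1, y1)
    non_attention_object="white cloud",
)
```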
  • FIG. 5 shows a specific example of the complementary difference image generation process.
  • (A) of FIG. 5 is an example of a reference image, and includes a book end 600a selected as a non-target object and a medicine bottle 600b.
  • The complementary image corresponding to this reference image is an image in which a non-attention object is made transparent, as shown in (b) of FIG. 5.
  • From the reference image shown in (a) of FIG. 5 and the complementary image shown in (b) of FIG. 5, the attention area complement data generation unit 36 generates the complementary difference image 1 corresponding to the medicine bottle 600b selected as a non-attention object, and the complementary difference image 2, shown in (d) of FIG. 5, corresponding to the book end 600a selected as a non-attention object.
  • the 3D information acquisition unit 38 calculates 3D information of the captured situation.
  • a typical example of the three-dimensional information is a depth map.
  • the depth map is information indicating the depth value of each region of the reference image.
  • Each region of the reference image is, for example, a pixel unit of the reference image.
  • the depth value included in the depth map is based on the reference viewpoint position (origin). Further, it is assumed that the depth direction of each region of the depth map is parallel and not radial.
  • the depth map may be calculated by stereo matching. That is, a depth map of the reference image may be calculated from a plurality of input images including the reference image and camera parameters of the input image. Note that the depth map may be calculated using three or more input images. Further, the depth map may be measured using a distance sensor such as an infrared sensor, for example.
  • Two depth maps are generated: a depth map for the case where all non-attention objects are made transparent, and a depth map for the case where all non-attention objects are shown. Hereinafter, the former is referred to as a “depth map that does not include a non-attention object” and the latter as a “depth map that includes a non-attention object”.
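As one hedged illustration of how such a depth map could be obtained by stereo matching, the sketch below uses OpenCV's block matcher on a rectified stereo pair. This is a generic stereo pipeline, not the patent's specific method, and the focal length and baseline are placeholders supplied by the caller; the same routine would be run twice to obtain the two depth maps described above (with and without the non-attention objects).

```python
import cv2
import numpy as np

def depth_map_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray,
                          focal_px: float, baseline_m: float) -> np.ndarray:
    """Estimate a per-pixel depth map (in metres) for the left (reference) image
    from a rectified 8-bit stereo pair.  Depth is measured from the reference
    viewpoint along parallel (non-radial) depth directions."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan           # mark invalid matches
    return focal_px * baseline_m / disparity     # depth = f * B / d
```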
  • the multiplexing unit 40 multiplexes the reference image, the complementary difference image, and the three-dimensional information.
  • the reference image and the complementary difference image are multiplexed together with information attached to each.
  • The attention image synthesis unit 50 includes a demultiplexing unit 52, a non-attention object selection unit 54, a complementary difference image processing unit 56, a display method acquisition unit 58, and an attention image generation unit 60.
  • the demultiplexing unit 52 acquires the target image data from the storage unit 20, and demultiplexes the acquired target image data.
  • the demultiplexing unit 52 outputs the reference image included in the target image data and the accompanying information to the target image generating unit 60. Further, the three-dimensional information and the attention area complement data included in the attention image data are output to the non-attention object selection unit 54.
  • the non-target object selection unit 54 selects at least one of the objects shown in the reference image as the non-target object based on the information accompanying the complementary difference image or the three-dimensional information. A specific example of non-target object selection will be described later.
  • the non-attention object selection unit 54 outputs only the complementary difference image determined to contain a non-attention object to the complementary difference image processing unit 56 as the selected complementary difference image.
  • the non-target object selection unit 54 refers to information indicating the reference viewpoint position, a depth map that does not include the non-target object in the reference image, and a depth map of a complementary region described later in the reference image to determine whether the object is a non-target object. Can be determined.
  • When the reference viewpoint position is taken as the origin and the direction toward the attention area is positive, the depth values of the depth map in the area on the attention area side of the reference viewpoint position are positive values. Therefore, if the depth values of the depth map of the complementary region are all negative values, it can be said that the corresponding object exists behind the reference viewpoint.
  • The non-attention object selection unit 54 determines that an object satisfying this condition, that is, an object whose depth values in the depth map are all negative values, is a non-attention object. In other words, the non-attention object selection unit 54 determines that an object existing within the transmission range illustrated in (a) of FIG. 7 is a non-attention object. Then, the non-attention object selection unit 54 outputs a complementary difference image that complements the non-attention object to the complementary difference image processing unit 56.
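A minimal sketch of this check follows, assuming the depth values of the complementary region are given with the reference viewpoint as the origin and the direction toward the attention area as positive (the function name is illustrative).

```python
import numpy as np

def is_behind_reference_viewpoint(complement_region_depth: np.ndarray) -> bool:
    """True if the non-attention object candidate lies entirely behind the
    reference viewpoint, i.e. every valid depth value of the complementary
    region's depth map is negative."""
    valid = complement_region_depth[~np.isnan(complement_region_depth)]
    return valid.size > 0 and bool(np.all(valid < 0))
```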
  • the image at the reference viewpoint position is generated from each image captured by two cameras (camera A and camera B) located behind the reference viewpoint position.
  • The image captured by the camera A, which is located behind the reference viewpoint position and on the extension of the line from the attention area toward the non-attention object, includes the non-attention object. In contrast, the image captured by the camera B, which is located behind the reference viewpoint position but not on that extension line, does not include the non-attention object.
  • the image of the reference viewpoint position can be generated by complementing the image captured by the camera A with the image captured by the camera B.
  • An example of an object that is selected as a non-target object by the non-target object selection unit 54 is an object that overlaps the peripheral area of the reference viewpoint position.
  • the non-target object selection unit 54 refers to information indicating the reference viewpoint position, a depth map that does not include the non-target object in the reference image, and a depth map of the complementary region in the reference image to determine whether the object is a non-target object. Can be determined.
  • Taking the reference viewpoint position as the origin, the non-attention object selection unit 54 determines that an object is a non-attention object if the depth map of its complementary region includes depth values whose magnitude is equal to or less than a certain ratio of the maximum depth value, for example within ±10% of the maximum depth value.
  • the non-target object selection unit 54 determines that an object that exists within the transmission range illustrated in FIG. 7B is a non-target object. Then, the non-target object selection unit 54 outputs a complementary difference image that complements the non-target object to the complementary difference image processing unit 56.
  • The non-attention object selection unit 54 may also make the determination based on a constant value instead of a ratio. For example, if the absolute values of the depth values of the complementary region's depth map, measured from the reference viewpoint position, are equal to or less than a certain value, the non-attention object selection unit 54 can judge the non-attention object candidate corresponding to that complementary region to be a non-attention object. Likewise, if the depths of the complementary region's depth map fall inside a sphere of a certain radius centered on the reference viewpoint position, the candidate can be judged to be a non-attention object.
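The ratio-based and constant-value criteria described above could be checked as in the sketch below (illustrative names; the ±10% ratio and the fixed limit are parameters chosen by the caller).

```python
import numpy as np

def overlaps_viewpoint(complement_region_depth: np.ndarray,
                       max_depth: float,
                       ratio: float = 0.10,
                       fixed_limit: float = None) -> bool:
    """True if the candidate overlaps the neighbourhood of the viewpoint: some
    depth value of the complementary region lies within +/- (ratio * max_depth)
    of the viewpoint taken as the origin, or, when fixed_limit is given, within
    that absolute distance instead."""
    valid = complement_region_depth[~np.isnan(complement_region_depth)]
    if valid.size == 0:
        return False
    limit = fixed_limit if fixed_limit is not None else ratio * max_depth
    return bool(np.any(np.abs(valid) <= limit))
```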
  • the non-attention object selection unit 54 determines whether or not the object is a non-attention object based on information accompanying the complementary difference image.
  • In this case, the non-attention object selection unit 54 outputs a complementary difference image that complements the non-attention object to the complementary difference image processing unit 56.
  • examples of objects selected as non-target objects by the non-target object selection unit 54 include all objects that are in front of the target area.
  • The non-attention object selection unit 54 can determine whether or not an object is a non-attention object by referring to the information indicating the reference viewpoint position, the depth map of the region not including the non-attention object in the reference image, and the depth map of the complementary region in the reference image. As shown in (c) of FIG. 7, the direction toward the attention area is positive, with the reference viewpoint position as the origin. In this case, the non-attention object selection unit 54 determines that a non-attention object is included in the complementary difference image if all the depth values of the complementary region's depth map are smaller than the corresponding values of the depth map without the non-attention object.
  • the non-attention object selection unit 54 determines that an object existing within the transmission range illustrated in (c) of FIG. 7 is a non-attention object. Then, the non-target object selection unit 54 outputs a complementary difference image that complements the non-target object to the complementary difference image processing unit 56.
  • The non-attention object selection unit 54 may also select a candidate as a non-attention object when the proportion of the image at the reference viewpoint position occupied by the non-attention object candidate is equal to or greater than a predetermined ratio.
  • The predetermined ratio is an arbitrary value. For example, suppose the predetermined ratio is set to 50%. In the example shown in FIG. 7A, even if the non-attention object candidate occupies only 20% of the image captured by the rear camera, when it occupies 50% of the image at the reference viewpoint position, the candidate is determined to be a non-attention object, and a complementary difference image that complements it is output to the complementary difference image processing unit 56.
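The two criteria in this example, "the candidate lies in front of the attention area" and "the candidate occupies at least a predetermined proportion of the image", could be checked as in the following sketch (illustrative names and thresholds).

```python
import numpy as np

def occludes_attention_area(complement_region_depth: np.ndarray,
                            depth_without_object: np.ndarray) -> bool:
    """True if every valid depth value in the complementary region is smaller
    than the corresponding value of the depth map without the non-attention
    object, i.e. the candidate sits in front of the attention area."""
    valid = ~np.isnan(complement_region_depth) & ~np.isnan(depth_without_object)
    return valid.any() and bool(
        np.all(complement_region_depth[valid] < depth_without_object[valid]))

def area_ratio_exceeds(candidate_mask: np.ndarray, min_ratio: float = 0.5) -> bool:
    """True if the candidate occupies at least min_ratio of the image at the
    reference (or virtual) viewpoint position; candidate_mask is a boolean map."""
    return candidate_mask.mean() >= min_ratio
```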
  • a region where a non-target object candidate exists in the reference image is called a complementary region.
  • the complementary region refers to the inner region of the reference image that is complemented by the complementary image.
  • the complementary region is calculated from the complementary difference image corresponding to the reference image and the complementary image coordinate information attached to the complementary difference image.
  • the depth map of the complementary region is information obtained from the complementary region and the depth map with the non-target object, and is information indicating the depth value of the non-target object corresponding to the complementary difference image. That is, it refers to information obtained by cutting out a depth map corresponding to a complementary region from a depth map having a non-target object corresponding to the reference image.
  • the display method acquisition unit 58 acquires the display method selected and adjusted by the user. Further, the complementary difference image processing unit 56 acquires information indicating a display method related to complementation from the display method acquiring unit 58. The complementary difference image processing unit 56 performs processing according to the display condition on the complementary difference image, and outputs the processed complementary difference image to the attention image generation unit 60 as a processed complementary difference image.
  • The display method according to the present embodiment is a method of displaying the complementary region, and it is selected and adjusted by the user. Examples of the display method according to the present embodiment are described concretely below.
  • Display method example 1: As a first example, there is a display method that switches whether or not a specific object is treated as a non-attention object, that is, whether or not an object shown in the reference image is complemented.
  • When complementing, the complementary difference image processing unit 56 makes the entire corresponding complementary difference image opaque; in other words, it sets the alpha value of the entire corresponding complementary difference image to 1. Similarly, when not complementing, it sets the alpha value of the entire corresponding complementary difference image to 0.
  • Display method example 2: As a second example, there is a display method that complements only a designated region in the displayed image, for example, only a part of the non-attention objects shown in the reference image.
  • the complementary difference image processing unit 56 can complement only the designated region by setting only the alpha value of the corresponding region of the complementary difference image corresponding to the designated region to 1.
  • As another example, the transparency of the non-attention object may be set; for example, the non-attention object shown in the reference image is made translucent.
  • the complementary difference image processing unit 56 can set the transparency of the non-attention area by changing the overall alpha value of the complementary difference image according to the set transparency.
  • As a further example, the image processing system 1 may be provided with a configuration for identifying the user's field of view. When the user observes part of an image captured by a 360 degree camera, a display method can be used in which only the non-attention objects within the field of view are complemented.
  • In this case, the complementary difference image processing unit 56 sets the alpha value of the entire complementary difference image corresponding to a non-attention object within the field of view to 1, and sets the alpha value of the entire complementary difference image corresponding to any other non-attention object to 0. In this way, only the non-attention objects within the field of view are complemented.
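Taken together, the display method examples above all reduce to adjusting the alpha channel of the complementary difference image, as in the sketch below (the method names and keyword arguments are illustrative, not from the patent).

```python
import numpy as np

def apply_display_method(diff_rgba: np.ndarray, method: str, **kw) -> np.ndarray:
    """Adjust the alpha channel of a complementary difference image (RGBA patch)
    according to the selected display method."""
    out = diff_rgba.copy()
    if method == "no_complement":      # example 1, not complementing: keep the object visible
        out[..., 3] = 0
    elif method == "complement":       # example 1, complementing: object fully transparent
        out[..., 3] = 255
    elif method == "region":           # example 2: complement only a designated region
        mask = kw["region_mask"]       # boolean H x W mask over the patch
        out[..., 3] = np.where(mask, 255, 0)
    elif method == "translucent":      # transparency setting: make the object translucent
        out[..., 3] = int(round(kw.get("transparency", 0.5) * 255))
    return out
```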
  • the complementary difference image processing unit 56 outputs the complementary difference image for which the above processing has been completed to the attention image generating unit 60.
  • The attention image generation unit 60 synthesizes the reference image supplied from the demultiplexing unit 52 and the processed complementary difference image supplied from the complementary difference image processing unit 56, and generates an attention image in which the transparency of the non-attention object is adjusted.
  • the attention image generation unit 60 synthesizes the complementary difference image with the reference image at a position based on the complementary image coordinate information of the reference image. That is, the complementary difference image is synthesized with respect to the reference image at a position where the complementary difference image is extracted in the complementary image corresponding to the reference image.
  • Since the attention image generation unit 60 synthesizes the complementary difference image at the position determined by the complementary image coordinate information, which is coordinate information in the reference image, the synthesis process can be performed suitably even though the complementary difference image is smaller than the reference image.
  • the attention image generation unit 60 alpha blends the reference image and the complementary difference image.
  • the attention image generation unit 60 sets the alpha value of the reference image according to the alpha value of the complementary difference image. Specifically, the attention image generation unit 60 sets the alpha value of each pixel of the reference image to a value obtained by subtracting from 1 the alpha value of each corresponding pixel of the complementary difference image.
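The blending rule described here, where each pixel of the reference image is weighted by one minus the alpha value of the corresponding pixel of the complementary difference image, can be sketched as follows (Python/NumPy; for simplicity it is assumed that the patch lies entirely inside the reference image).

```python
import numpy as np

def compose_attention_image(reference: np.ndarray,
                            diff_rgba: np.ndarray,
                            position: tuple) -> np.ndarray:
    """Alpha-blend a processed complementary difference image into the reference
    image at the position given by its complementary image coordinate information."""
    x, y = position
    h, w = diff_rgba.shape[:2]
    out = reference.astype(np.float32).copy()
    alpha = diff_rgba[..., 3:4].astype(np.float32) / 255.0   # opacity of the diff patch
    region = out[y:y + h, x:x + w]                            # corresponding reference pixels
    out[y:y + h, x:x + w] = alpha * diff_rgba[..., :3] + (1.0 - alpha) * region
    return out.astype(np.uint8)
```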
  • In step S10, the non-attention object selection unit 54 acquires one of the complementary difference images from the demultiplexing unit 52.
  • In step S12, the non-attention object selection unit 54 acquires, from the three-dimensional information of the reference image supplied by the demultiplexing unit 52, the depth map of the complementary region corresponding to the complementary difference image acquired in step S10.
  • In step S14, the non-attention object selection unit 54 refers to the three-dimensional information and determines whether or not the object included in the complementary difference image is a non-attention object. If it is a non-attention object (YES in S14), the process proceeds to step S16; if it is not (NO in S14), the process proceeds to step S20.
  • the non-target object selection unit 54 determines that the medicine bottle is a non-target object for the complementary difference image 1 because the distance between the camera and the medicine bottle is short.
  • For the complementary difference image 2, the non-attention object selection unit 54 determines that the book end is not a non-attention object because the distance between the camera and the book end is long.
  • step S16 the display method acquisition unit 58 acquires information indicating the display method desired by the user.
  • step S18 the complementary difference image processing unit 56 changes the alpha value of the complementary difference image based on the display method acquired in S16.
  • the complementary difference image processing unit 56 changes the alpha value of all the pixels of the complementary difference image 1 to 0.5 based on the display conditions.
  • In step S20, the attention image synthesis unit 50 determines whether or not processing has been completed for all the complementary difference images included in the attention image data. If it is determined that processing has been completed (YES in S20), the process proceeds to step S22; if not (NO in S20), the process returns to step S10.
  • step S22 the attention image generation unit 60 generates an attention image from the reference image and the complementary difference image corresponding to the non-attention object.
  • the attention image generation unit 60 synthesizes the reference image and the complementary difference image 1 in which the alpha value is changed to 0.5. Since the alpha value of the complementary difference image 1 is 0.5, the non-target medicine bottle is translucent in the attention image, so that an object shielded by the medicine bottle can be seen through.
  • step S24 the attention image generation unit 60 outputs the generated attention image.
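The overall flow of steps S10 to S24 can be summarised by the loop below. It reuses the illustrative helpers sketched earlier (is_behind_reference_viewpoint, overlaps_viewpoint, apply_display_method, compose_attention_image) and the ComplementaryDifferenceImage fields; the depth_info accessor and the exact decision rule applied in S14 are assumptions for illustration, not definitions from the patent.

```python
def synthesize_attention_image(reference, diff_images, depth_info, display_method):
    """Sketch of the S10-S24 flow: for each complementary difference image, decide
    whether its object is a non-attention object (S10-S14), adjust its alpha
    according to the display method (S16-S18), and blend every selected patch
    into the reference image (S22-S24)."""
    attention = reference
    for diff in diff_images:                                        # S10 / S20 loop
        region_depth = depth_info.complement_region_depth(diff)     # S12 (assumed accessor)
        if not (is_behind_reference_viewpoint(region_depth) or
                overlaps_viewpoint(region_depth, depth_info.max_depth)):   # S14
            continue                                                # not a non-attention object
        processed = apply_display_method(diff.pixels, display_method,
                                         transparency=0.5)          # S16-S18
        attention = compose_attention_image(attention, processed,
                                            diff.coordinates)       # S22
    return attention                                                # S24
```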
  • a plurality of complementary images are generated in advance according to the non-target object selection pattern, and only a region where a difference between the reference image and the complementary image is generated is stored as a complementary difference image.
  • a desired image that does not include a non-attention object can be generated with an arbitrary process setting and a relatively light process.
  • the amount of data to be stored can be suppressed.
  • processing settings can be changed flexibly.
  • In the present embodiment, a point in space designated by the user (referred to as a “virtual viewpoint”) is used. The position of the virtual viewpoint can be indicated by coordinates in that space, and information indicating the coordinates of the virtual viewpoint is referred to as virtual viewpoint position information. From this position information, an image in which the attention area is observed from the virtual viewpoint (referred to as a “virtual viewpoint image”) is further generated.
  • The image processing system 1a includes an attention image data generation unit 30, a virtual viewpoint image synthesis unit 70, and a storage unit 20.
  • the virtual viewpoint image composition unit 70 acquires information indicating the position of the virtual viewpoint.
  • the storage unit 20 and the attention image data generation unit 30 correspond to the storage unit 20 and the attention image data generation unit 30 of the image processing system 1 according to the first embodiment, respectively.
  • The virtual viewpoint image synthesis unit 70 includes a demultiplexing unit 52, a non-attention object selection unit 54a, a complementary difference image processing unit 56, a display method acquisition unit 58, an attention image generation unit 60, and a virtual viewpoint image generation unit 72. Note that the demultiplexing unit 52, the non-attention object selection unit 54a, the complementary difference image processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60 correspond, respectively, to the demultiplexing unit 52, the non-attention object selection unit 54, the complementary difference image processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60 of the image processing system 1 according to Embodiment 1.
  • Whereas the non-attention object selection unit 54 of Embodiment 1 selects non-attention objects with reference to the reference viewpoint position, the non-attention object selection unit 54a of the present embodiment selects non-attention objects with reference to the position of the virtual viewpoint instead.
  • As an example of an object selected as a non-attention object by the non-attention object selection unit 54a, an object positioned behind the virtual viewpoint position can be cited.
  • the non-target object selection unit 54 refers to information indicating the virtual viewpoint position, a depth map that does not include the non-target object in the reference image, and a depth map of a complementary region described later in the reference image to determine whether the object is a non-target object. Can be determined.
  • When the virtual viewpoint position is taken as the origin and the direction toward the attention area is positive, the depth values of the depth map in the area closer to the attention area than the virtual viewpoint position are positive values.
  • The non-attention object selection unit 54 determines that an object satisfying this condition, that is, an object whose depth values in the depth map are all negative values, is a non-attention object, and outputs a complementary difference image that complements the non-attention object to the complementary difference image processing unit 56.
  • the non-target object selection unit 54 refers to the information indicating the virtual viewpoint position, the depth map that does not include the non-target object in the reference image, and the depth map of the complementary region in the reference image to determine whether or not the object is a non-target object. Can be determined.
  • Taking the virtual viewpoint position as the origin, the non-attention object selection unit 54 determines that an object is a non-attention object if the depth map of its complementary region includes depth values whose magnitude is a certain percentage or less of the maximum depth value, for example within ±10% of the maximum depth value, and outputs the complementary difference image that complements the non-attention object to the complementary difference image processing unit 56.
  • Alternatively, the non-attention object selection unit 54 may make the determination based on a constant value instead of a ratio. For example, if the depths of the complementary region's depth map fall within ±10 cm in the depth direction starting from the virtual viewpoint position, the non-attention object selection unit 54 can judge the non-attention object candidate corresponding to that complementary region to be a non-attention object. Alternatively, the non-attention object selection unit 54 may determine that the candidate is a non-attention object on the condition that the absolute values of the depth values are equal to or less than a certain value.
  • examples of objects selected as non-target objects by the non-target object selection unit 54 include all objects that are in front of the target area.
  • the non-target object selection unit 54 refers to the information indicating the virtual viewpoint position, the depth map of the region not including the non-target object in the reference image, and the depth map of the complementary region in the reference image to determine whether the object is a non-target object. It can be determined whether or not.
  • the direction toward the region of interest is positive with the virtual viewpoint position as the origin.
  • In this case, the non-attention object selection unit 54 determines that a non-attention object is included in the complementary difference image if all the depth values of the complementary region's depth map are smaller than the corresponding values of the depth map without the non-attention object, and outputs the complementary difference image to the complementary difference image processing unit 56.
  • The non-attention object selection unit 54 may also select a candidate as a non-attention object when the area occupied by the non-attention object candidate in the image at the virtual viewpoint position is equal to or greater than a predetermined ratio.
  • the predetermined ratio is an arbitrary value. For example, assuming that the predetermined ratio is set to 50%, even if the ratio of the non-target object candidate area in the image captured by the rear camera is 20%, the non-target object candidate is displayed in the virtual viewpoint position image. When the proportion of the area is 50%, it is determined that the non-target object candidate is a non-target object, and a complementary difference image that complements the non-target object is output to the complementary difference image processing unit 56.
  • The virtual viewpoint image generation unit 72 generates a virtual viewpoint image using the attention image generated by the attention image generation unit 60. That is, the virtual viewpoint image generation unit 72 generates, for example, an image in which the non-attention object candidates determined to be non-attention objects are made transparent and the attention area is observed from the virtual viewpoint position.
  • This embodiment is different from the above-described embodiment in that complementary difference information is generated and used instead of the complementary difference image.
  • the image processing system 1b includes an attention image data generation unit 30b, an attention image synthesis unit 50b, and a storage unit 20.
  • the storage unit 20 corresponds to the storage unit 20 of the image processing system 1 according to the first embodiment.
  • The attention image data generation unit 30b includes an acquisition unit 32, a reference image selection unit 34, an attention region complement data generation unit 36b, a three-dimensional information acquisition unit 38, and a multiplexing unit 40.
  • the acquisition unit 32, the reference image selection unit 34, the 3D information acquisition unit 38, and the multiplexing unit 40 are respectively the acquisition unit 32, the reference image selection unit 34, and the 3D information acquisition of the image processing system 1 according to the first embodiment. This corresponds to the unit 38 and the multiplexing unit 40.
  • Whereas the attention area complement data generation unit 36 of Embodiment 1 generated attention image data including a complementary difference image, the attention area complement data generation unit 36b of the present embodiment generates complementary difference information instead of the complementary difference image, and then generates attention image data including the complementary difference information.
  • an attention image that complements the region of the non-attention object included in the reference image is generated.
  • When a reference image is complemented, a partial region of one of the other reference images is used.
  • Information indicating which region of another reference image should be used for which region of a certain reference image is referred to as complementary difference information.
  • An area hidden by a non-attention object in one reference image (reference image A shown in FIG. 14A) is complemented using one of the other reference images (reference image B shown in FIG. 14B).
  • the reference image A includes a white cloud that is a non-target object in an area starting from the coordinates (x1, y1) and having a width of wa and a height of ha.
  • This region is a complement region to be complemented in the reference image A.
  • the complementary region in the third embodiment is equivalent to the complementary region shown in the description of the depth map of the complementary region in the first embodiment.
  • the complementary region here is not calculated from the complementary difference image corresponding to the reference image and the complementary image coordinate information attached to the complementary difference image, but is information included in the complementary difference information.
  • On the other hand, in the reference image B, there is no non-attention object in the area that starts from the coordinates (x2, y2), corresponding to the coordinates (x1, y1) of the reference image A, and has a width of wb and a height of hb (the reference region of the reference image B). Therefore, the reference region of the reference image B can be used to complement the complementary region of the reference image A.
  • the size and shape of the complementary region and the reference region are not necessarily the same.
  • the non-target object selection unit 54b described later deforms the partial image extracted from the reference region to fit the complementary region.
  • In this case, the complementary difference information 1, which is the complementary difference information of the reference image A, includes: information indicating that the corresponding reference image is “reference image A”; information indicating the coordinates of the start point (x1, y1) of the complementary region; information indicating the width wa of the complementary region; information indicating the height ha of the complementary region; information indicating that the image used for complementing is “reference image B”; information indicating the coordinates of the start point (x2, y2) of the reference region; information indicating the width wb of the reference region; information indicating the height hb of the reference region; and information indicating that the non-attention object is a “white cloud”.
  • the reference image B includes a gray cloud that is a non-attention object in a region starting from the coordinates (xb1, yb1) and ending at the coordinates (xb2, yb2).
  • This region is a complementary region to be complemented in the reference image B. Here, the lower-right point of such a region is referred to as the “end point”.
  • On the other hand, in the reference image A, there is no non-attention object in the area (the reference region of the reference image A) whose start point is the coordinates (xa1, ya1), corresponding to the coordinates (xb1, yb1) of the reference image B, and whose end point is the coordinates (xa2, ya2), corresponding to the coordinates (xb2, yb2) of the reference image B. Therefore, the reference region of the reference image A can be used to complement the complementary region of the reference image B.
  • In this case, the complementary difference information 2, which is the complementary difference information of the reference image B, includes: information indicating that the corresponding reference image is “reference image B”; information indicating the coordinates of the start point (xb1, yb1) of the complementary region; information indicating the coordinates of the end point (xb2, yb2) of the complementary region; information indicating that the image used for complementing is “reference image A”; information indicating the coordinates of the start point (xa1, ya1) of the reference region; information indicating the coordinates of the end point (xa2, ya2) of the reference region; and information indicating that the non-attention object is a “gray cloud”.
  • The complementary region and the reference region may be expressed in any manner as long as each region is uniquely determined. For example, a region may be represented by the coordinates of its four corners. Further, a region need not be rectangular and may be, for example, a circle.
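As an illustration of how a complementary difference image could be rebuilt from complementary difference information at synthesis time (the processing attributed below to the non-attention object selection unit 54b), the following sketch cuts the reference region out of the other reference image and deforms it to fit the complementary region. The dictionary field names are illustrative assumptions, not the patent's data format.

```python
import cv2

def diff_image_from_complementary_info(reference_images: dict, info: dict):
    """Build a complementary difference image from complementary difference
    information: extract the reference region of the image used for complementing
    and resize it to fit the complementary region of the image being complemented."""
    src = reference_images[info["source_image"]]     # e.g. "reference image B"
    x2, y2, wb, hb = info["reference_region"]        # region used for complementing
    x1, y1, wa, ha = info["complement_region"]       # region hidden by the non-attention object
    patch = src[y2:y2 + hb, x2:x2 + wb]
    patch = cv2.resize(patch, (wa, ha))              # deform to fit the complementary region
    return patch, (x1, y1)                           # composited at (x1, y1) as in Embodiment 1
```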
  • The attention image synthesis unit 50b includes a demultiplexing unit 52, a non-attention object selection unit 54b, a complementary difference image processing unit 56, a display method acquisition unit 58, and an attention image generation unit 60.
  • the demultiplexing unit 52, the complementary difference image processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60 are respectively the demultiplexing unit 52 and the complementary difference image of the image processing system 1 according to the first embodiment. This corresponds to the processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60.
  • Based on the complementary difference information corresponding to the reference image, the non-attention object selection unit 54b first extracts the pixels of the reference region of the other reference image and generates the complementary difference image corresponding to the selected non-attention object. At this time, the information on the reference image used for complementing and the information other than the reference region included in the complementary difference information are handled as information accompanying the complementary difference image. Next, the same processing as that of the non-attention object selection unit 54 of Embodiment 1 is performed. That is, a non-attention object is selected based on the information accompanying the complementary difference image or the three-dimensional information, and the complementary difference image determined to contain a non-attention object is output to the complementary difference image processing unit 56.
  • As described above, the image processing apparatus according to the present embodiment includes: the non-attention object selection unit 54b, which refers to attention image data including complementary difference information generated in advance and determines whether or not an object in the reference image is a non-attention object; the complementary difference image processing unit 56, which performs image processing, based on display conditions, on the complementary difference image specified by the complementary difference information included in the attention image data; and the attention image generation unit 60, which combines the complementary difference image and the reference image to obtain an attention image in which the transparency of the non-attention object in the reference image is adjusted.
  • a desired image that does not include a non-attention object can be generated with an arbitrary process setting and a relatively light process.
  • the amount of data to be stored can be suppressed.
  • This embodiment is different from the above-described embodiment in that three-dimensional information is not used.
  • the image processing system 1c includes an attention image data generation unit 30c, an attention image synthesis unit 50c, and a storage unit 20.
  • the storage unit 20 corresponds to the storage unit 20 of the image processing system 1 according to the first embodiment.
  • the attention image data generation unit 30c is different from the attention image data generation unit 30 of the first embodiment in that it does not have a function of a three-dimensional information acquisition unit.
  • the acquisition unit 32, the reference image selection unit 34, the attention area complement data generation unit 36, and the multiplexing unit 40c are respectively the acquisition unit 32, the reference image selection unit 34, and the attention area complement of the image processing system 1 according to the first embodiment. This corresponds to the data generation unit 36 and the multiplexing unit 40.
  • the attention image data since the three-dimensional information is not acquired, the attention image data includes a reference image and a complementary difference image.
  • The attention image synthesis unit 50c includes a demultiplexing unit 52, a non-attention object selection unit 54c, a complementary difference image processing unit 56, a display method acquisition unit 58, and an attention image generation unit 60.
  • the demultiplexing unit 52, the complementary difference image processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60 are respectively the demultiplexing unit 52 and the complementary difference image of the image processing system 1 according to the first embodiment. This corresponds to the processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60.
  • The non-attention object selection unit 54c selects a non-attention object without using three-dimensional information. That is, the non-attention object selection unit 54c selects a non-attention object based on the information about the non-attention object that accompanies the complementary difference image; in other words, it identifies a specific object and selects it as a non-attention object.
  • This embodiment is different from Embodiment 1 in that the non-attention object selection unit 54d uses user attention point information to select whether or not it is a non-attention object.
  • The image processing system 1d includes an attention image data generation unit 30, an attention image synthesis unit 50d, and a storage unit 20.
  • the attention image composition unit 50d in the present embodiment acquires information indicating the user's line-of-sight direction.
  • the storage unit 20 and the attention image data generation unit 30 correspond to the storage unit 20 and the attention image data generation unit 30 of the image processing system 1 according to the first embodiment, respectively.
  • the attention image data generation unit 30d includes an acquisition unit 32, a reference image selection unit 34, an attention region complement data generation unit 36, a three-dimensional information acquisition unit 38d, and a multiplexing unit 40.
  • the acquisition unit 32, the reference image selection unit 34, the attention area complement data generation unit 36, and the multiplexing unit 40 are respectively the acquisition unit 32, the reference image selection unit 34, and the attention area complementation of the image processing system 1 according to the first embodiment. This corresponds to the data generation unit 36 and the multiplexing unit 40.
  • the 3D information acquisition unit 38d acquires line-of-sight direction information in addition to the 3D information of the reference image.
  • the line-of-sight direction information includes information indicating the position of the eyes of a person included in the input image and information indicating a point that the person is interested in.
  • The line-of-sight direction information may be acquired using, for example, an eye tracking device, or by performing image processing on an input image in which the person's eyes appear.
  • The line-of-sight direction information is output to the multiplexing unit 40 in association with the reference image. However, when no person is included in the reference image, the line-of-sight direction information need not be associated with the reference image.
  • The three-dimensional information acquisition unit 38d acquires the eye position information included in the line-of-sight direction information as information indicating a position relative to the depth map.
  • The position of the eye can be represented by three-dimensional coordinates with reference to the origin of the depth map.
  • The information indicating the attention point included in the line-of-sight direction information indicates which coordinates in the reference image the person is observing. For example, when the person observes the center of a reference image having width w and height h, the coordinates of the attention point are (w/2, h/2).
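A minimal sketch of how this line-of-sight direction information could be held, assuming hypothetical field names (eye_position relative to the depth-map origin, attention_point in reference-image pixel coordinates), is given below:
```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LineOfSightInfo:
    """Line-of-sight direction information associated with a reference image."""
    eye_position: Tuple[float, float, float]  # 3-D coordinates relative to the depth-map origin
    attention_point: Tuple[float, float]      # pixel coordinates (x, y) in the reference image

def center_attention_point(width: int, height: int) -> Tuple[float, float]:
    """Attention point of a person observing the center of a w x h reference image."""
    return (width / 2.0, height / 2.0)

# A person whose eyes sit 1.5 m above and 0.3 m in front of the depth-map origin,
# looking at the center of a 1920 x 1080 reference image.
info = LineOfSightInfo(eye_position=(0.0, 1.5, 0.3),
                       attention_point=center_attention_point(1920, 1080))
```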
  • The attention image synthesis unit 50d includes a demultiplexing unit 52, a non-attention object selection unit 54d, a complementary difference image processing unit 56, a display method acquisition unit 58, and an attention image generation unit 60.
  • The demultiplexing unit 52, the complementary difference image processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60 correspond to the demultiplexing unit 52, the complementary difference image processing unit 56, the display method acquisition unit 58, and the attention image generation unit 60 of the image processing system 1 according to the first embodiment, respectively.
  • The user attention point information represents the coordinates in the attention image that the user is observing. For example, when the user observes the center of an attention image having width w and height h, the user attention point information indicates (w/2, h/2). When the user observes the attention image using a head-mounted display (HMD), the center point of the area displayed on the HMD can be used as the attention point; similarly, when the user observes the attention image on a flat panel display, the center point of the area displayed on the display can be used. Alternatively, the attention point may be measured using an eye tracking device.
  • The non-attention object selection unit 54d first calculates the line-of-sight angle of the person in the reference image from the line-of-sight direction information corresponding to the reference image. Next, it calculates the angle of the user's line of sight from the reference viewpoint position and the user attention point information. The non-attention object selection unit 54d then determines whether an object is a non-attention object from these two calculation results.
  • In other words, the non-attention object selection unit 54d makes this determination from the reference viewpoint position, the angle of the user's line of sight, the eye position of the person in the reference image, and that person's line-of-sight angle.
  • For example, when the reference viewpoint position is close to the eye position of the person in the reference image and the user's line-of-sight angle is close to that person's line-of-sight angle, the non-attention object selection unit 54d can set that person as a non-attention object.
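To make the two-step comparison concrete, the following sketch marks the person in the reference image as a non-attention object when the reference viewpoint is close to that person's eye position and the two lines of sight are nearly parallel; the thresholds, the vector representation of a line of sight, and the function names are assumptions for illustration only:
```python
import numpy as np

def gaze_angle_between(dir_a: np.ndarray, dir_b: np.ndarray) -> float:
    """Angle in radians between two line-of-sight direction vectors."""
    a = dir_a / np.linalg.norm(dir_a)
    b = dir_b / np.linalg.norm(dir_b)
    return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def is_non_attention_person(ref_viewpoint: np.ndarray,    # position the reference image was taken from
                            user_gaze_dir: np.ndarray,    # user's line-of-sight direction (3-D vector)
                            person_eye_pos: np.ndarray,   # eye position of the person in the reference image
                            person_gaze_dir: np.ndarray,  # that person's line-of-sight direction
                            max_distance: float = 0.5,    # metres (assumed threshold)
                            max_angle_deg: float = 10.0   # degrees (assumed threshold)
                            ) -> bool:
    """True when the user is effectively looking from the person's own viewpoint,
    so that person can be treated as a non-attention object."""
    close_in_position = np.linalg.norm(ref_viewpoint - person_eye_pos) < max_distance
    close_in_angle = np.degrees(gaze_angle_between(user_gaze_dir, person_gaze_dir)) < max_angle_deg
    return close_in_position and close_in_angle
```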
  • The user's line-of-sight direction when viewing a 360-degree camera image may also be recorded and used as the user attention point information.
  • The control blocks of the image processing systems 1, 1a, 1b, 1c, and 1d may be realized by a logic circuit (hardware) formed on an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
  • In the latter case, the image processing systems 1, 1a, 1b, 1c, and 1d each include a CPU that executes the instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (referred to as a recording medium) in which the program and various data are recorded so as to be readable by a computer (or CPU), and a RAM (Random Access Memory) into which the program is loaded.
  • The object of one aspect of the present invention is achieved when the computer (or CPU) reads the program from the recording medium and executes it.
  • As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
  • The program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program.
  • One embodiment of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • An image processing apparatus according to aspect 1 of the present invention includes: a non-attention object selection unit (54, 54a, 54b, 54c, 54d) that refers to attention image data including a complementary difference image generated in advance and determines whether an object in a reference image is a non-attention object; a complementary difference image processing unit (56) that performs image processing on the complementary difference image included in the attention image data based on display conditions; and an attention image generation unit (60) that obtains an attention image in which the transparency of the non-attention object in the reference image is adjusted by synthesizing the complementary difference image and the reference image.
  • According to the above configuration, a desired image in which the non-attention object is not visible can be generated with arbitrary processing settings and relatively light processing.
  • In addition, an increase in the amount of data to be stored can be suppressed.
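A minimal sketch of the synthesis described in this aspect, under the assumption that the complementary difference image has the same size as the reference image and that a binary mask of the non-attention object region is available (array names and the transparency parameter are illustrative, not the disclosed implementation):
```python
import numpy as np

def synthesize_attention_image(reference: np.ndarray,   # H x W x 3, uint8
                               complement: np.ndarray,  # H x W x 3, pixels behind the non-attention object
                               mask: np.ndarray,        # H x W bool, True where the non-attention object is
                               transparency: float) -> np.ndarray:
    """Blend the complementary difference image into the reference image.

    transparency = 0.0 keeps the non-attention object as captured,
    transparency = 1.0 replaces it completely with the complemented background."""
    out = reference.astype(np.float32)
    comp = complement.astype(np.float32)
    m = mask[..., None]  # broadcast the mask over the colour channels
    out = np.where(m, (1.0 - transparency) * out + transparency * comp, out)
    return out.clip(0, 255).astype(np.uint8)
```
A transparency value between 0 and 1 would correspond to a partially transparent display of the non-attention object, which is one example of a display condition handled by the complementary difference image processing unit.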
  • An image processing apparatus according to aspect 2 of the present invention is the image processing apparatus according to aspect 1, wherein the complementary difference image is smaller than the reference image, the attention image data includes coordinate information, and the attention image generation unit (60) synthesizes the complementary difference image at a position in the reference image determined by the coordinate information.
  • According to the above configuration, since the complementary difference image is smaller than the reference image and is synthesized at the position determined by the coordinate information, the generation process remains light even when there are a plurality of complementary difference images, and the amount of data to be stored can be further suppressed. In addition, the complementary difference image can be suitably combined with the reference image.
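For the smaller complementary difference image of this aspect, the coordinate information determines where the patch is placed; the sketch below assumes top-left pixel coordinates and hypothetical names, and simply blends the patch into the reference image at that position:
```python
import numpy as np

def paste_complement_patch(reference: np.ndarray,  # H x W x 3 reference image
                           patch: np.ndarray,      # h x w x 3 complementary difference image (smaller)
                           top_left: tuple,        # (x, y) coordinate information from the attention image data
                           transparency: float = 1.0) -> np.ndarray:
    """Synthesize a small complementary difference image at the position
    given by the coordinate information (assumes the patch lies fully
    inside the reference image)."""
    x, y = top_left
    h, w = patch.shape[:2]
    out = reference.astype(np.float32).copy()
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1.0 - transparency) * region + transparency * patch.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)
```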
  • In the image processing apparatus according to aspect 3 of the present invention, the non-attention object selection unit (54, 54a, 54b, 54c, 54d) may refer to a depth map of the reference image to determine whether an object in the reference image is a non-attention object.
  • According to the above configuration, whether the object in the reference image is a non-attention object can be determined using the depth map.
  • In the image processing apparatus according to aspect 4 of the present invention, the depth map of the reference image may include a depth map that does not include the non-attention object and a depth map that includes the non-attention object.
  • According to the above configuration, whether the object in the reference image is a non-attention object can be determined by comparing the two depth maps.
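One possible way to realize the determination of these aspects, assuming both depth maps are aligned with the reference image and share the same scale, is to mark as the non-attention object region every pixel where the depth map containing the object is noticeably nearer than the depth map without it; the tolerance and pixel-count threshold below are assumptions:
```python
import numpy as np

def non_attention_mask(depth_with_object: np.ndarray,     # depth map that includes the non-attention object
                       depth_without_object: np.ndarray,  # depth map of the same scene without it
                       tolerance: float = 0.05) -> np.ndarray:
    """Boolean mask of pixels occupied by the non-attention object.

    Where the object is present, the measured depth is smaller (closer to
    the camera) than the background depth by more than the tolerance."""
    return (depth_without_object - depth_with_object) > tolerance

def contains_non_attention_object(depth_with_object: np.ndarray,
                                  depth_without_object: np.ndarray,
                                  tolerance: float = 0.05,
                                  min_pixels: int = 100) -> bool:
    """Decide whether the reference image contains a non-attention object at all."""
    mask = non_attention_mask(depth_with_object, depth_without_object, tolerance)
    return int(mask.sum()) >= min_pixels
```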
  • An image processing apparatus according to a further aspect of the present invention is the image processing apparatus according to any one of aspects 1 to 4, and may further include a virtual viewpoint image generation unit (72) that synthesizes, based on virtual viewpoint position information indicating coordinates in space and on the attention image, an image viewed from the position of those coordinates.
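The disclosure does not fix a particular synthesis method for the virtual viewpoint image generation unit (72); purely as an illustration, the sketch below performs a naive depth-image-based forward warp of the attention image toward a virtual viewpoint shifted horizontally by baseline, assuming a pinhole camera with focal length focal in pixels:
```python
import numpy as np

def warp_to_virtual_viewpoint(attention_image: np.ndarray,  # H x W x 3
                              depth: np.ndarray,            # H x W, metres, strictly positive
                              focal: float,                 # focal length in pixels (assumed known)
                              baseline: float) -> np.ndarray:
    """Naive forward warp: shift each pixel by its disparity toward a virtual
    camera displaced horizontally by `baseline`; nearer pixels win conflicts."""
    h, w = depth.shape
    out = np.zeros_like(attention_image)
    zbuf = np.full((h, w), np.inf)
    disparity = focal * baseline / depth      # pixel shift per point
    xs = np.arange(w)
    for y in range(h):
        new_x = np.round(xs - disparity[y]).astype(int)
        valid = (new_x >= 0) & (new_x < w)
        for x_src, x_dst in zip(xs[valid], new_x[valid]):
            if depth[y, x_src] < zbuf[y, x_dst]:   # keep the closest surface
                zbuf[y, x_dst] = depth[y, x_src]
                out[y, x_dst] = attention_image[y, x_src]
    return out  # holes (never-written pixels) remain black and would need inpainting
```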
  • In the image processing apparatus according to a further aspect of the present invention, the non-attention object selection unit may select a specific object as the non-attention object.
  • According to the above configuration, the attention area is not hidden by an object that does not particularly require attention.
  • In the image processing apparatus according to a further aspect of the present invention, the non-attention object selection unit may refer to the user's line-of-sight direction to determine whether an object in the reference image is a non-attention object.
  • According to the above configuration, whether the object in the reference image is a non-attention object can be determined according to the user's line-of-sight direction.
  • An image processing apparatus according to a further aspect of the present invention includes: a non-attention object selection unit that refers to attention image data including complementary difference information generated in advance and determines whether an object in a reference image is a non-attention object; a complementary difference image processing unit that performs image processing, based on display conditions, on a complementary difference image specified by the complementary difference information included in the attention image data; and an attention image generation unit that obtains an attention image in which the transparency of the non-attention object in the reference image is adjusted by synthesizing the complementary difference image and the reference image.
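In this aspect the attention image data carries complementary difference information rather than the image itself; one plausible reading, assumed here only for illustration, is that the information identifies a complementary difference image held elsewhere (for example in the storage unit 20):
```python
from dataclasses import dataclass
from typing import Dict, Tuple
import numpy as np

@dataclass
class ComplementaryDifferenceInfo:
    """Information that specifies, rather than contains, a complementary
    difference image (hypothetical structure)."""
    image_key: str                  # key of the stored complementary difference image
    top_left: Tuple[int, int]       # (x, y) position in the reference image
    object_label: str               # accompanying information about the hidden object

def resolve_difference_image(info: ComplementaryDifferenceInfo,
                             storage: Dict[str, np.ndarray]) -> np.ndarray:
    """Fetch the complementary difference image specified by the information,
    so the complementary difference image processing unit can operate on it."""
    return storage[info.image_key]
```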
  • An image processing system according to one aspect of the present invention includes a first image processing apparatus and a second image processing apparatus. The first image processing apparatus includes an attention image data generation unit that generates, from a plurality of reference images, a complementary difference image used to complement a non-attention object included in a reference image, and that generates attention image data including the generated complementary difference image. The second image processing apparatus includes: a non-attention object selection unit that refers to the attention image data generated by the attention image data generation unit and selects whether an object in the reference image is a non-attention object; a complementary difference image processing unit that performs image processing on the complementary difference image included in the attention image data; and an attention image generation unit that obtains an attention image in which the transparency of the non-attention object in the reference image is adjusted by synthesizing the complementary difference image and the reference image.
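On the generation side, the role of the attention image data generation unit can be pictured with the following sketch, which assumes three inputs that are not spelled out in this summary: the reference image, a complementary image of the same scene in which the non-attention object does not appear, and a mask of that object. It cuts the occluded region out of the complementary image, crops it to its bounding box, and returns it together with its coordinate information:
```python
import numpy as np

def make_complementary_difference_image(reference: np.ndarray,   # H x W x 3, object visible
                                        complement: np.ndarray,  # H x W x 3, same scene, object absent
                                        mask: np.ndarray):       # H x W bool, non-attention object region
    """Return (patch, (x, y)): the background pixels hidden by the non-attention
    object, cropped to their bounding box, plus the coordinate information."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, None                  # nothing to complement
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    patch = complement[y0:y1, x0:x1].copy()
    # Outside the object mask (but inside the bounding box) keep the reference
    # pixels so the patch blends seamlessly when pasted back at (x0, y0).
    local_mask = mask[y0:y1, x0:x1]
    patch[~local_mask] = reference[y0:y1, x0:x1][~local_mask]
    return patch, (int(x0), int(y0))
```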
  • The image processing apparatus according to each aspect of the present invention may be realized by a computer. In this case, an image processing program that realizes the image processing apparatus by the computer by causing the computer to operate as each unit (software element) included in the image processing apparatus, and a computer-readable recording medium on which the image processing program is recorded, also fall within the scope of the present invention.
  • Reference signs: 1, 1a, 1b, 1c, 1d image processing system; 54, 54a, 54b, 54c, 54d non-attention object selection unit; 56 complementary difference image processing unit; 60 attention image generation unit; 36, 36b attention region complement data generation unit; 72 virtual viewpoint image generation unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

With regard to the process of generating an image in which a non-attention object is made transparent, the present invention makes it possible to generate the image at high speed while suppressing an increase in the amount of data to be stored. Provided is an image processing device including a non-attention object selection unit (54) for determining whether an object in a reference image is a non-attention object, a complementary difference image processing unit (56) for performing image processing on a complementary difference image, and an attention image generation unit (60) for synthesizing the complementary difference image and the reference image to obtain an attention image in which the degree of transparency of the non-attention object in the reference image is adjusted.
PCT/JP2018/013260 2017-04-04 2018-03-29 Dispositif de traitement d'image, programme de traitement d'image, et support d'enregistrement WO2018186279A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017074533 2017-04-04
JP2017-074533 2017-04-04

Publications (1)

Publication Number Publication Date
WO2018186279A1 true WO2018186279A1 (fr) 2018-10-11

Family

ID=63712139

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/013260 WO2018186279A1 (fr) 2017-04-04 2018-03-29 Dispositif de traitement d'image, programme de traitement d'image, et support d'enregistrement

Country Status (1)

Country Link
WO (1) WO2018186279A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004114224A1 (fr) * 2003-06-20 2004-12-29 Nippon Telegraph And Telephone Corporation Method for creating a virtual visual point image, and method and device for displaying 3D images
JP2005198007A (ja) * 2004-01-07 2005-07-21 Ricoh Co Ltd Image processing apparatus, image processing method, program, and information recording medium
JP2008117305A (ja) * 2006-11-07 2008-05-22 Olympus Corp Image processing apparatus
WO2015011985A1 (fr) * 2013-07-25 2015-01-29 Sony Corporation Information processing device, method, and program
JP2015126326A (ja) * 2013-12-26 2015-07-06 Toshiba Corporation Electronic apparatus and image processing method

Similar Documents

Publication Publication Date Title
WO2011033673A1 (fr) Image processing apparatus
US9779539B2 (en) Image processing apparatus and image processing method
US9990738B2 (en) Image processing method and apparatus for determining depth within an image
JP6147464B2 (ja) Image processing system, terminal device, and method
JP7459051B2 (ja) Method and apparatus for corner detection
CN105611267B (zh) Merging of real-world and virtual-world images based on depth and chrominance information
US20160180514A1 (en) Image processing method and electronic device thereof
Kytö et al. Improving relative depth judgments in augmented reality with auxiliary augmentations
JP2016225811A (ja) Image processing apparatus, image processing method, and program
CN113253845A (zh) View display method, apparatus, medium, and electronic device based on human eye tracking
JP2017158153A (ja) Image processing apparatus and image processing method
KR101212223B1 (ko) Imaging apparatus and method for generating an image including depth information
US11043019B2 (en) Method of displaying a wide-format augmented reality object
KR20200003597A (ko) Apparatus and method for correcting augmented reality images
WO2018186279A1 (fr) Dispositif de traitement d'image, programme de traitement d'image, et support d'enregistrement
US20120249533A1 (en) Stereoscopic display apparatus
JP2019075784A (ja) Light field display device and method for displaying light field images thereof
JP6004354B2 (ja) Image data processing apparatus and image data processing method
JP2012065851A (ja) Multi-viewpoint autostereoscopic endoscope system
JP2018129026A (ja) Determination device, image processing device, determination method, and determination program
JP5868055B2 (ja) Image processing apparatus and image processing method
KR20190072742A (ko) Method and system for real-time super multi-view intermediate-view synthesis based on calibrated multi-camera input
JP5281720B1 (ja) Stereoscopic video processing apparatus and stereoscopic video processing method
JP6085943B2 (ja) Parallax control device and parallax control program
KR20230067307A (ko) Method of moving in a three-dimensional modeling space using an avatar

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18781911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18781911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP