WO2024024357A1 - Image display device and image display method - Google Patents

Image display device and image display method Download PDF

Info

Publication number
WO2024024357A1
WO2024024357A1 (application PCT/JP2023/023519)
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional object
texture
pasted
viewpoint
omnidirectional image
Prior art date
Application number
PCT/JP2023/023519
Other languages
French (fr)
Japanese (ja)
Inventor
真 宇佐美
真之介 宇佐美
廉 宇佐美
Original Assignee
ガラクーダ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ガラクーダ株式会社
Publication of WO2024024357A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T19/00: Manipulating 3D models or images for computer graphics

Definitions

  • This invention relates to image display technology.
  • An interface has been developed that allows users to virtually walk around indoors and outdoors using omnidirectional images taken with a 360-degree camera. By changing the viewpoint or moving, the user can explore the virtual space represented by omnidirectional images.
  • The present invention has been made in view of these problems, and its purpose is to provide an image display technology that allows easy movement through a lightweight three-dimensional space.
  • To solve the above problems, an image display device according to one aspect of the present invention includes a drawing processing unit that pastes a first omnidirectional image on the front surface of a first three-dimensional object as a semi-transparent texture and pastes the first omnidirectional image on the back surface of the first three-dimensional object as an opaque texture after rotating and flipping it for alignment with the semi-transparent texture pasted on the front surface, or, conversely, pastes the first omnidirectional image on the back surface as a semi-transparent texture and pastes it on the front surface as an opaque texture after rotating and flipping it for alignment with the semi-transparent texture pasted on the back surface.
  • The device also includes a viewpoint moving unit that, when the viewpoint is outside the first three-dimensional object, moves the viewpoint inside the first three-dimensional object upon selection of the first three-dimensional object.
  • The drawing processing unit renders the first three-dimensional object observed from the viewpoint.
  • Another aspect of the present invention is an image display method. This method includes: pasting a first omnidirectional image on the front surface of a first three-dimensional object as a semi-transparent texture and pasting the first omnidirectional image on the back surface of the first three-dimensional object as an opaque texture after rotating and flipping it for alignment with the semi-transparent texture pasted on the front surface, or pasting the first omnidirectional image on the back surface as a semi-transparent texture and pasting it on the front surface as an opaque texture after rotating and flipping it for alignment with the semi-transparent texture pasted on the back surface; when the viewpoint is outside the first three-dimensional object, moving the viewpoint inside the first three-dimensional object upon selection of the first three-dimensional object; and rendering the first three-dimensional object as viewed from the viewpoint.
  • FIG. 1 is a configuration diagram of an image display device according to the present embodiment.
  • FIG. 2 is a diagram illustrating a method for pasting textures on the front and back surfaces of a 3D object.
  • FIG. 3 is a diagram illustrating a method of pasting a moving image on the front and back sides of a 3D object.
  • FIG. 4 is a flowchart illustrating rendering processing of a translucent 3D object.
  • FIG. 5 is a diagram illustrating a nested structure of translucent 3D objects.
  • FIGS. 6A and 6B are diagrams illustrating an example in which an opaque 3D object is placed inside a translucent 3D object.
  • FIGS. 7(a) to 7(c) are diagrams illustrating a method for setting entrances and exits in a translucent 3D object using semantic segmentation.
  • FIGS. 8A and 8B are diagrams illustrating the shape of a translucent 3D object that matches the scene of the texture pasted on the translucent 3D object.
  • FIG. 9 is a diagram illustrating the movement of a translucent 3D object when a device displaying the translucent 3D object is tilted.
  • FIGS. 10(a) and 10(b) are diagrams illustrating the hierarchical structure of semitransparent 3D objects.
  • FIGS. 11(a) and 11(b) are diagrams illustrating animation patterns of semitransparent 3D objects.
  • FIG. 1 is a configuration diagram of an image display device 100 according to the present embodiment.
  • The image display device 100 includes a drawing processing unit 10, a viewpoint moving unit 20, a display control unit 30, an entrance/exit setting unit 40, a hierarchy setting unit 50, a 3D object storage unit 60, a texture storage unit 70, and a hierarchical structure storage unit 80.
  • multiple three-dimensional objects are arranged in a nested structure.
  • the three-dimensional object can be observed from outside the three-dimensional object, or the three-dimensional object can be observed from the inside by moving the viewpoint inside the three-dimensional object.
  • A plurality of nested three-dimensional objects may be placed in a virtual space, and the nested three-dimensional objects may be accessed via a browser on a network such as the Web.
  • Any user interface that can access a plurality of nested three-dimensional objects may be used and is not limited to the one exemplified here.
  • the same texture is pasted on the front and back surfaces of the three-dimensional object, and the front surface of the three-dimensional object is translucent, but the back surface is opaque. Therefore, when a three-dimensional object is viewed from the outside, the inside of the three-dimensional object can be seen through, but when the viewpoint is moved inside the three-dimensional object, the outside of the three-dimensional object cannot be seen.
  • The drawing processing unit 10 reads data of a three-dimensional object from the 3D object storage unit 60, reads an omnidirectional image (also referred to as a "360-degree image") from the texture storage unit 70, pastes the omnidirectional image on the front surface of the three-dimensional object as a translucent texture, and then pastes the same omnidirectional image on the back surface of the three-dimensional object as an opaque texture after rotating and flipping it for alignment with the translucent texture pasted on the front surface. In this case, a normally oriented omnidirectional image is pasted on the front surface of the three-dimensional object, and a mirrored omnidirectional image is pasted on the back surface. For example, when a map is drawn on the omnidirectional image, it is desirable to paste the normally oriented image on the front surface of the three-dimensional object so that the map does not appear reversed when viewed from the outside.
  • Alternatively, the drawing processing unit 10 may paste the omnidirectional image on the back surface of the three-dimensional object as a semi-transparent texture, and paste the same omnidirectional image on the front surface as an opaque texture after rotating and flipping it for alignment with the semi-transparent texture pasted on the back surface. In this case, a mirrored omnidirectional image is pasted on the front surface of the three-dimensional object, and a normally oriented omnidirectional image is pasted on the back surface. For example, when characters are drawn on the omnidirectional image, it is desirable to paste the normally oriented image on the back surface so that the characters do not appear reversed when viewed from inside the three-dimensional object.
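As a sketch of the rotation/flip alignment described above: for an equirectangular texture, viewing the image from the opposite side mirrors it, so the back-surface texture can be derived from the front-surface texture by a horizontal flip. This is an illustrative assumption about the concrete operation; the texture here is a plain row-major 2D list and the function name is hypothetical.

```python
def make_back_texture(front_texture):
    """Mirror an equirectangular texture horizontally so that each subject
    in the back (inward-facing) texture lines up with the same subject in
    the front (outward-facing) texture.

    front_texture: 2D list of pixels, as rows of columns (row-major).
    """
    return [row[::-1] for row in front_texture]
```

Applying the operation twice returns the original texture, which matches the idea that the front and back surfaces show the same image in opposite orientations.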
  • The resolution of the translucent texture pasted on the front surface of the three-dimensional object may be lower than the resolution of the opaque texture pasted on the back surface. Since the translucent texture only needs to let the interior atmosphere show through from the outside, its resolution can be made very low. Specifically, a resolution difference of about a factor of 10 or more is provided.
  • A separate low-resolution omnidirectional image is prepared in advance by thinning pixels from the high-resolution omnidirectional image; the high-resolution image is used as the opaque texture pasted on the back surface of the 3D object, and the low-resolution image is used as the translucent texture pasted on the front surface. As a result, the amount of computation and memory can be reduced, drawing can be made faster, and power consumption can be suppressed.
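The pixel-thinning step can be sketched minimally as simple decimation, assuming textures are row-major 2D lists and using a factor of 10 to match the resolution ratio mentioned above (names are illustrative):

```python
def thin_pixels(texture, factor=10):
    """Build a low-resolution copy of a texture by keeping only every
    `factor`-th pixel in both directions (plain decimation, no filtering)."""
    return [row[::factor] for row in texture[::factor]]
```

A production implementation would more likely use filtered downsampling (e.g. mipmap generation), but decimation shows the memory saving: a factor of 10 in each direction keeps only 1% of the pixels.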
  • the viewpoint moving unit 20 moves the viewpoint inside the three-dimensional object by selecting the three-dimensional object.
  • the drawing processing unit 10 renders the three-dimensional object observed from the viewpoint.
  • the display control unit 30 displays the rendering result on the display.
  • Three-dimensional objects with a translucent texture pasted on the front surface and an opaque texture pasted on the back surface may be arranged in a nested structure.
  • the viewpoint moving unit 20 selects the second three-dimensional object when the viewpoint is inside the first three-dimensional object.
  • the viewpoint is moved inside the second three-dimensional object.
  • The drawing processing unit 10 renders at least one of the first three-dimensional object and the second three-dimensional object in the nested structure as observed from the viewpoint. If the viewpoint is inside the first 3D object and outside the second 3D object, the back surface of the first 3D object and the front surface of the second 3D object observed from the viewpoint are rendered. If the viewpoint is inside the second three-dimensional object, the back surface of the second three-dimensional object observed from the viewpoint is rendered.
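The visibility rules for a two-level nest can be sketched as a small decision function. This is a simplification assuming exactly one object nested in another; the surface labels are illustrative.

```python
def faces_to_render(inside_outer, inside_inner):
    """Return which textured surfaces are visible from the viewpoint in a
    two-level nested structure (inner object contained in the outer one)."""
    if inside_inner:                          # inside the inner object:
        return ["inner_back"]                 # only its opaque back surface
    if inside_outer:                          # inside outer, outside inner:
        return ["outer_back", "inner_front"]
    return ["outer_front"]                    # outside both: translucent front
```

Note that `inside_inner` implies `inside_outer` in a nested structure, which is why the checks are ordered innermost first.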
  • the drawing processing unit 10 sets the surface texture of the terminal three-dimensional object to be opaque.
  • the viewpoint moving unit 20 does not move the viewpoint inside the terminal three-dimensional object even if the user selects the terminal three-dimensional object.
  • The hierarchy setting unit 50 determines a hierarchical structure based on the position information or meta information of the omnidirectional images pasted on the three-dimensional objects, sets the nested structure of the three-dimensional objects based on the hierarchical structure, and stores it in the hierarchical structure storage unit 80.
  • The drawing processing unit 10 refers to the hierarchical structure stored in the hierarchical structure storage unit 80 and arranges and renders the three-dimensional objects in a nested structure. Since the nested structure of the three-dimensional objects is determined based on the meta information of the omnidirectional images, it is intuitively easy to understand, and the user can move naturally between the three-dimensional objects arranged in the nested structure.
  • The entrance/exit setting unit 40 sets a specific location in the omnidirectional image as an entrance/exit and makes the entrance/exit completely transparent, based on semantic information in the omnidirectional image pasted on the three-dimensional object.
  • When the viewpoint is outside the three-dimensional object, the viewpoint moving unit 20 moves the viewpoint inside the three-dimensional object upon selection of an entrance/exit of the three-dimensional object; when the viewpoint is inside the three-dimensional object, it moves the viewpoint outside upon selection of an entrance/exit.
  • the drawing processing unit 10 uses a cube or a rectangular parallelepiped as the three-dimensional object when the omnidirectional image is an image of a building, and uses a sphere as the three-dimensional object when the omnidirectional image is an image of an outdoor space. Or an ellipsoid can be used.
  • Various simple shapes such as cubes, cylinders, and spheres can be used as three-dimensional objects, but it is more intuitive for users when cubes are used for buildings and spheres for outdoor spaces. As a result, it is possible to provide an intuitively understandable interface using a simple three-dimensional model that matches the user's intuition.
  • the drawing processing unit 10 can also reflect the influence of a light source inside a three-dimensional object on the surface or inside of another three-dimensional object in a nested structure.
  • For example, a light source in a higher-level 3D object may illuminate or cast a shadow on the surface of another 3D object in a lower hierarchy within it, or a light source in a lower-level 3D object may illuminate the interior of the higher-level 3D object that contains it.
  • light from a light source of one three-dimensional object or its reflected light may affect the surface or interior of the adjacent three-dimensional object.
  • An attribute setting unit for setting attributes of the three-dimensional object stored in the 3D object storage unit 60 may be further provided.
  • FIG. 2 is a diagram illustrating a method for pasting textures on the front and back surfaces of a 3D object.
  • the same textures 300 and 310 pasted on the front and back surfaces of the 3D object 200 are, for example, omnidirectional images in an equirectangular format generated by an equirectangular projection.
  • the omnidirectional image can be taken with a 360-degree camera, or a panoramic image obtained by stitching multiple images may be used.
  • The texture 300 pasted on the front surface of the 3D object 200 has a transparency of about 20 to 50%, while the texture 310 pasted on the back surface of the 3D object 200 is opaque.
  • When the 3D object 200 is viewed from the outside, there is a visual effect in which the inside can be seen through, but when viewed from inside the 3D object 200, the outside is not visible.
  • By providing an interface that allows the inside of the 3D object 200 to be seen from the outside but does not allow the outside to be seen from the inside, the user is invited to enter the 3D object 200, which provides motivation to move the user's viewpoint inside the 3D object 200.
  • the 3D object 200 is, for example, a simple three-dimensional object such as a sphere, a cube, or a cylinder.
  • a simple three-dimensional shape as the 3D object 200, it is possible to provide an interface that allows an easy bird's-eye view of the entire image while keeping the amount of calculation and data for rendering processing of omnidirectional images low. This makes it possible to display a three-dimensional space that is realistic yet lightweight.
  • The texture 310 pasted on the back surface of the 3D object 200 is obtained by rotating and flipping the texture 300 pasted on the front surface so that the same subject appears at the same position on the front and back surfaces of the 3D object 200.
  • Hereinafter, a 3D object with a simple shape to which textures of different transparency are pasted on the front and back surfaces will be referred to as a "semi-transparent 3D object".
  • FIG. 3 is a diagram illustrating a method for pasting moving images on the front and back sides of a 3D object.
  • the moving image texture 320 is frame data of a translucent texture pasted on the surface of the 3D object 200.
  • the moving image texture 330 is frame data of an opaque texture pasted on the back side of the translucent 3D object 200.
  • The pasting process can be achieved by the same means as when pasting a still-image texture to the 3D object 200, but in the case of the moving-image textures 320 and 330, frames are switched according to the display time.
  • The frame rate is not limited; display is possible from 30 fps up to high-frame-rate slow motion (240 fps).
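Switching frames by display time can be sketched as mapping elapsed playback time to a frame index; the looping behavior and names are illustrative assumptions.

```python
def frame_index(elapsed_seconds, fps, frame_count, loop=True):
    """Map playback time to the frame of a moving-image texture to show.
    Works for any frame rate, e.g. 30 fps or 240 fps slow-motion sources."""
    idx = int(elapsed_seconds * fps)
    if loop:
        return idx % frame_count
    return min(idx, frame_count - 1)  # hold the last frame when not looping
```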
  • FIG. 4 is a flowchart showing the process of drawing a translucent 3D object.
  • a 3D object with a simple mesh structure is generated by combining polygons (S10).
  • Textures captured and saved in 360 degrees come in a variety of formats, such as equirectangular format or two fisheye images placed side by side, and undergo geometric transformation so that they can be pasted onto a simply shaped 3D object.
  • the user can move the viewpoint by operating the mouse, touching the touch panel, or the like.
  • the user can move the viewpoint from the outside of the translucent 3D object to the inside of the translucent 3D object by clicking the translucent 3D object to which the texture is pasted with a mouse or touching it on the touch panel.
  • the user can move the viewpoint from inside the translucent 3D object to outside the translucent 3D object by right-clicking the mouse or the like.
  • Rendering processing of the three-dimensional space seen from the user's camera viewpoint is performed (S40).
  • the frame rate of the rendering process may be changed depending on the update frequency of texture information and the frequency of user operations.
  • When the user's viewpoint is inside the translucent 3D object, an image of the translucent 3D object viewed from the interior viewpoint is rendered.
  • Since the back surface of the translucent 3D object is opaque, the texture pasted on the back surface is visible, but the outside is not.
  • FIG. 5 is a diagram illustrating the nested structure of translucent 3D objects.
  • Two other translucent 3D objects 210 are superimposed inside the translucent 3D object 200.
  • the types and number of translucent 3D objects to be superimposed are free.
  • a cubic translucent 3D object may be superimposed on a spherical translucent 3D object, and the number of cubic translucent 3D objects is not limited to one, but a plurality may be arranged.
  • By arranging a translucent 3D object inside another translucent 3D object, it is possible to realize an interface that allows the user to enter continuously deeper even when multiple 3D objects overlap. Note that an opaque 3D object can also be placed inside a translucent 3D object, but in that case it is not possible to enter the opaque 3D object.
  • the user can move to a translucent 3D object in a higher or lower hierarchy.
  • To move to a lower hierarchy, an operation such as clicking the destination translucent 3D object is performed.
  • To move to a higher hierarchy, a "back" operation such as right-clicking the mouse is performed.
  • In addition, a warp 3D object that represents movement to another translucent 3D object in the same hierarchy may be prepared; by selecting the warp 3D object, the user can move to another translucent 3D object in the same hierarchy.
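The three navigation operations (enter a lower level, go back up, warp to a sibling) can be sketched as a small state machine over a nesting tree. The tree layout, object names, and method names below are illustrative assumptions.

```python
class Navigator:
    """Tracks the chain of translucent 3D objects the viewpoint is inside,
    outermost first. `tree` maps an object name to its nested children."""

    def __init__(self, tree, root):
        self.tree = tree
        self.path = [root]

    @property
    def current(self):
        return self.path[-1]

    def enter(self, child):
        # Click a nested object: move one hierarchy level down.
        if child in self.tree.get(self.current, []):
            self.path.append(child)

    def back(self):
        # "Back" operation (e.g. right-click): move one level up.
        if len(self.path) > 1:
            self.path.pop()

    def warp(self, sibling):
        # Warp object: jump to a sibling in the same hierarchy level.
        if len(self.path) > 1 and sibling in self.tree.get(self.path[-2], []):
            self.path[-1] = sibling
```

For example, with `tree = {"mall": ["store_a", "store_b"]}`, entering `store_a` and then warping to `store_b` keeps the viewpoint one level inside the mall object.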
  • FIGS. 6(a) and 6(b) are diagrams illustrating an example in which an opaque 3D object is placed inside a semitransparent 3D object.
  • Since a translucent texture is pasted on the surface of the translucent 3D object 500, the inside can be seen, and it can be seen that an opaque 3D object 510 exists inside the translucent 3D object 500.
  • FIG. 6(b) shows a three-dimensional space seen from a viewpoint inside the translucent 3D object 500 after entering the interior of the translucent 3D object 500. From a viewpoint inside the translucent 3D object 500, a 360-degree image with an opaque texture pasted on the back surface of the translucent 3D object 500 is visible. Since the opaque 3D object 510 exists inside the translucent 3D object 500, the opaque 3D object 510 is visible in the field of view.
  • The opaque 3D object 510 is the end point of the nesting in the sense that the viewpoint cannot go any further inside.
  • the opaque 3D object that is the end point can be a 3D object linked to the scene.
  • the top level of the nested structure is a translucent 3D object to which an omnidirectional image of a department store is pasted, there are translucent 3D objects of many stores inside the translucent 3D object of the department store.
  • If the lowest level of the nested structure is a translucent 3D object for a restaurant, for example, a menu object or a food object may be placed inside the restaurant's translucent 3D object as an opaque 3D object at the end point. This provides an intuitive interface in which the user navigates by walking through a department store rendered as a three-dimensional space, entering a particular store, and ultimately looking at a menu or viewing a particular dish.
  • FIGS. 7(a) to 7(c) are diagrams illustrating a method for setting entrances and exits in a translucent 3D object using semantic segmentation.
  • Semantic segmentation, which associates a label or category with each pixel in an image, may be used to automate the generation of entrances and exits.
  • FIG. 7(a) is a diagram illustrating an example of an image labeled by semantic segmentation. Each pixel is labeled as "sky," "building," "window," "door," "car," "paved road," or "vegetation."
  • An entrance/exit can be automatically generated by performing semantic segmentation on the texture pasted on a translucent 3D object and, for example, "punching out" the pixels determined to be "door." Since the entrance/exit in the texture pasted on the translucent 3D object is set to be completely transparent, the inside can be seen from outside the translucent 3D object through it, and the outside can be seen from inside.
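Assuming the segmentation result is a per-pixel label map, "punching out" door pixels can be sketched as building a per-pixel alpha map. The label names and the base transparency value are illustrative.

```python
def doorway_alpha(labels, door_label="door", base_alpha=0.7):
    """Build a per-pixel alpha map for the texture: pixels labelled as a
    door become fully transparent (alpha 0.0, forming the entrance/exit);
    all other pixels keep the object's translucent alpha."""
    return [[0.0 if label == door_label else base_alpha for label in row]
            for row in labels]
```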
  • an entrance/exit 532 is set in the translucent 3D object 530.
  • FIG. 7(b) shows the image seen by the user when the user is inside the translucent 3D object 520.
  • a user inside the translucent 3D object 520 sees the translucent 3D object 530 against the background of the opaque texture on the back side of the translucent 3D object 520. Since the doorway 532 of the semitransparent 3D object 530 is completely transparent, the interior of the semitransparent 3D object 530 can be seen through the doorway 532. The user can select the doorway 532 of the translucent 3D object 530 and enter the interior of the translucent 3D object 530 through the doorway 532.
  • FIG. 7(c) shows the image seen by the user when the user is inside the semi-transparent 3D object 530.
  • a user inside the translucent 3D object 530 can see the opaque texture on the back side of the translucent 3D object 530. Since the doorway 532 of the translucent 3D object 530 is completely transparent, the opaque texture of the outside, that is, the back surface of the semitransparent 3D object 520, is visible through the doorway 532. The user can select the doorway 532 of the translucent 3D object 530 and exit from the doorway 532 to the outside of the translucent 3D object 530.
  • FIGS. 8(a) and 8(b) are diagrams illustrating the shape of a translucent 3D object that matches the scene of the texture pasted on the translucent 3D object.
  • the shape of the semi-transparent 3D object can be selected according to the scene of the 360-degree image.
  • In the case of an omnidirectional image of a building, a cubic translucent 3D object 540 can be used because buildings are often close to cubes in shape.
  • In the case of an outdoor scene, a spherical translucent 3D object 550 can be used to represent an open space extending to infinity.
  • FIG. 9 is a diagram illustrating the movement of a translucent 3D object when the device displaying the translucent 3D object is tilted.
  • a spherical translucent 3D object and a cubic translucent 3D object are displayed on a display device such as a smartphone.
  • the tilt of the display device can be calculated based on information from the acceleration and gyro sensor mounted on the display device.
  • The displayed translucent 3D object can be made to move differently depending on the tilt of the display device. For example, when the display device is tilted, a spherical translucent 3D object is animated so that it rolls, since a sphere has a small coefficient of friction, while a cubic translucent 3D object is animated so that it does not move easily, since a cube has a large coefficient of friction.
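A toy version of this shape-dependent tilt response might look as follows; the tilt thresholds are invented for illustration and are not from the original disclosure.

```python
def tilt_response(shape, tilt_deg):
    """Decide how a translucent 3D object reacts to device tilt:
    spheres (low friction) roll at a slight tilt, while cubes
    (high friction) stay put unless the tilt is large."""
    if shape == "sphere":
        return "roll" if tilt_deg > 5 else "rest"
    if shape == "cube":
        return "slide" if tilt_deg > 40 else "rest"
    return "rest"
```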
  • FIGS. 10(a) and 10(b) are diagrams illustrating the hierarchical structure of semitransparent 3D objects.
  • The way translucent 3D objects are superimposed and the textures of translucent 3D objects placed on the same layer are arbitrary, but meta information can be extracted from a texture by applying object detection technology or the like to the texture in advance.
  • the hierarchical structure of the translucent 3D object can be determined using the meta information of the texture pasted to the translucent 3D object.
  • FIG. 10(a) shows the hierarchical structure of meta information.
  • a texture 560b including meta information "Earth” is classified in the folder 560a at the highest level.
  • Textures 562b and 564b that include meta information such as "water" and "ground" are classified into folders 562a and 564a in the second hierarchy.
  • Textures 566b and 568b including meta information "sea” are classified in a folder 566a at the lowest level below a folder 562a called "water” at the second level.
  • FIG. 10(b) shows a hierarchical structure of semi-transparent 3D objects corresponding to the hierarchical structure of meta information shown in FIG. 10(a).
  • a texture 560b classified into a folder 560a named "Earth" at the highest level is pasted to the translucent 3D object 560c at the highest level.
  • a texture 562b classified into a folder 562a named "Water” in the second hierarchy is pasted to the translucent 3D object 562c in the second hierarchy.
  • a texture 564b classified into a folder 564a called "ground” in the second hierarchy is pasted to the translucent 3D object 564c in the second hierarchy.
  • Textures 566b and 568b classified into a folder 566a called “sea" in the lowest hierarchy are pasted to the semitransparent 3D objects 566c and 568c in the lowest hierarchy.
  • Texture meta information is not limited to information extracted by image processing.
  • GPS information or the like representing the point where the texture was imaged may be used as the meta information.
  • For example, a hierarchical structure of meta information such as "Tokyo" under "Japan" and "Shinjuku" under "Tokyo" is created based on the GPS information of the textures, and the textures are classified accordingly.
  • A semi-transparent 3D object corresponding to "Tokyo" is placed inside a semi-transparent 3D object corresponding to "Japan," and a semi-transparent 3D object corresponding to "Shinjuku" is placed inside the "Tokyo" object.
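Deriving the nesting from hierarchical meta information (such as place names obtained from a texture's GPS information) can be sketched as follows; the path format, outermost level first, is an assumption for illustration.

```python
def build_nesting(paths):
    """Turn hierarchical meta-information paths (outermost first, e.g.
    ["Japan", "Tokyo", "Shinjuku"]) into a parent -> children map that
    says which translucent 3D objects to place inside which."""
    nesting = {}
    for path in paths:
        for parent, child in zip(path, path[1:]):
            nesting.setdefault(parent, set()).add(child)
    return nesting
```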
  • FIGS. 11(a) and 11(b) are diagrams illustrating animation patterns of semi-transparent 3D objects.
  • the animation of the translucent 3D object can be changed depending on the texture attached to the translucent 3D object or the status of the 3D object inside the translucent 3D object.
  • a shop signboard, logo mark, etc. may be provided above the translucent 3D object.
  • The signboards and logo marks are placed perpendicular to the line of sight, and their orientation remains the same even when the translucent 3D object rotates or bounces.
  • For example, these signboards and logos can be made to rotate when the store is open, stop when the store is closed, and bounce during sale periods.
  • An animation or other effect showing an image of the product offered by the shop may be added near the translucent 3D object (for example, above it). For example, for a ramen restaurant, a steam animation can be added to the restaurant object.
  • The texture to be pasted may be switched depending not only on the status of the translucent 3D object but also on the time of day and user attribute information. For example, a blue-sky texture may be displayed during the day and a starry-sky texture at night. Depending on user attributes such as age, gender, and hobbies, a texture suitable for children or adults, a texture suitable for men or women, or a texture matching the user's hobbies may be selected.
  • The textures may be switched by setting the transparency of the unselected texture to 100% (that is, completely transparent). For example, two textures, one for daytime and one for nighttime, are rendered in advance and overlapped; during the daytime, the transparency of the daytime texture is set to 0% and that of the nighttime texture to 100%, and switching to the night texture is done by setting the transparency of the daytime texture to 100% and that of the nighttime texture to 0% during the nighttime period.
  • Similarly, textures for spring, summer, autumn, and winter can be switched depending on the season, or textures can be switched depending on age and gender. Because textures are switched simply by changing transparency, the time required to load textures into memory and render them is eliminated, making it possible to switch textures quickly.
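Switching pre-loaded texture layers by transparency alone can be sketched as follows; the layer names are illustrative.

```python
def switch_texture(layer_names, active):
    """Return per-layer transparency (%) so that only the active layer is
    visible: 0% transparent for the active texture, 100% for the rest.
    Because every layer stays loaded and overlapped, switching needs no
    reloading or re-rendering of texture data."""
    return {name: (0 if name == active else 100) for name in layer_names}
```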
  • The influence of lighting may be reflected on textures between semitransparent 3D objects. For example, if an object emits light inside a translucent 3D object, the light emitted by that object may illuminate or cast a shadow on the texture of another translucent 3D object inside it.
  • the present invention relates to image display technology.
  • 10 drawing processing unit, 20 viewpoint moving unit, 30 display control unit, 40 entrance/exit setting unit, 50 hierarchy setting unit, 60 3D object storage unit, 70 texture storage unit, 80 hierarchical structure storage unit, 100 image display device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

A drawing processing unit 10 affixes an omnidirectional image to the obverse surface of a three-dimensional object as a translucent texture, and affixes the omnidirectional image to the reverse surface of the three-dimensional object as an opaque texture after rotation and reversal for the purpose of positional alignment with the translucent texture affixed to the obverse surface. If a viewpoint lies outside of the three-dimensional object, a viewpoint movement unit 20 moves the viewpoint to the inside of the three-dimensional object upon selection of the three-dimensional object. The drawing processing unit 10 renders the three-dimensional object that is observed from the viewpoint.

Description

画像表示装置および画像表示方法Image display device and image display method
 この発明は、画像表示技術に関する。 This invention relates to image display technology.
 360度カメラで撮影した全方位画像を用いてバーチャルに屋内や屋外を歩き回ることのできるインタフェースが開発されている。ユーザは視点を変えたり、移動することにより、全方位画像で表現された仮想空間内を探索することができる。 An interface has been developed that allows users to virtually walk around indoors and outdoors using omnidirectional images taken with a 360-degree camera. By changing the viewpoint or moving, the user can explore the virtual space represented by omnidirectional images.
To realize virtual movement in a virtual space built from 360-degree images, the outdoors and indoors must be photographed exhaustively in 360 degrees, which requires a huge amount of data and makes the rendering processing time-consuming.
The present invention has been made in view of these problems, and its purpose is to provide a lightweight image display technology that allows easy movement through a three-dimensional space.
To solve the above problems, an image display device according to one aspect of the present invention includes: a drawing processing unit that pastes a first omnidirectional image onto the front surface of a first three-dimensional object as a translucent texture and pastes the same first omnidirectional image onto the back surface of the first three-dimensional object as an opaque texture, after rotating and reversing it to align it with the translucent texture pasted on the front surface (or, conversely, pastes the first omnidirectional image onto the back surface as a translucent texture and onto the front surface as an opaque texture, after rotating and reversing it to align it with the translucent texture pasted on the back surface); and a viewpoint moving unit that, when the viewpoint is outside the first three-dimensional object, moves the viewpoint inside the first three-dimensional object upon selection of the first three-dimensional object. The drawing processing unit renders the first three-dimensional object as observed from the viewpoint.
Another aspect of the present invention is an image display method. The method includes: pasting a first omnidirectional image onto the front surface of a first three-dimensional object as a translucent texture and pasting the same first omnidirectional image onto the back surface of the first three-dimensional object as an opaque texture, after rotating and reversing it to align it with the translucent texture pasted on the front surface (or, conversely, pasting the first omnidirectional image onto the back surface as a translucent texture and onto the front surface as an opaque texture, after rotating and reversing it to align it with the translucent texture pasted on the back surface); when the viewpoint is outside the first three-dimensional object, moving the viewpoint inside the first three-dimensional object upon selection of the first three-dimensional object; and rendering the first three-dimensional object as observed from the viewpoint.
Note that any combination of the above components, and any conversion of the expression of the present invention between methods, devices, systems, computer programs, data structures, recording media, and the like, are also effective as aspects of the present invention.
According to the present invention, it is possible to provide a lightweight image display technology that allows easy movement through a three-dimensional space.
FIG. 1 is a configuration diagram of an image display device according to the present embodiment. FIG. 2 is a diagram illustrating a method for pasting textures onto the front and back surfaces of a 3D object. FIG. 3 is a diagram illustrating a method for pasting moving images onto the front and back surfaces of a 3D object. FIG. 4 is a flowchart showing the drawing process for a translucent 3D object. FIG. 5 is a diagram illustrating a nested structure of translucent 3D objects. FIGS. 6(a) and 6(b) are diagrams illustrating an example in which an opaque 3D object is placed inside a translucent 3D object. FIGS. 7(a) to 7(c) are diagrams illustrating a method for setting a doorway in a translucent 3D object using semantic segmentation. FIGS. 8(a) and 8(b) are diagrams illustrating shapes of a translucent 3D object matched to the scene of the texture pasted onto it. FIG. 9 is a diagram illustrating the movement of a translucent 3D object when the device displaying it is tilted. FIGS. 10(a) and 10(b) are diagrams illustrating a hierarchical structure of translucent 3D objects. FIGS. 11(a) and 11(b) are diagrams illustrating animation patterns of translucent 3D objects.
FIG. 1 is a configuration diagram of an image display device 100 according to the present embodiment. The image display device 100 includes a drawing processing unit 10, a viewpoint moving unit 20, a display control unit 30, an entrance/exit setting unit 40, a hierarchy setting unit 50, a 3D object storage unit 60, a texture storage unit 70, and a hierarchical structure storage unit 80.
In this embodiment, a plurality of three-dimensional objects are arranged in a nested structure. A three-dimensional object can be observed from the outside, or the viewpoint can be moved inside it so that it is observed from within. As a user interface, the plurality of nested three-dimensional objects may be placed in a virtual space, or they may be made accessible via a browser over a network such as the Web. Any user interface that can access the plurality of nested three-dimensional objects may be used; it is not limited to those exemplified here.
The same texture is pasted on the front and back surfaces of a three-dimensional object; the front surface is translucent while the back surface is opaque. Therefore, when the three-dimensional object is viewed from the outside, its interior can be seen through the surface, but when the viewpoint is moved inside the object, the outside cannot be seen.
The drawing processing unit 10 reads three-dimensional object data from the 3D object storage unit 60, reads an omnidirectional image (also called a "360-degree image") from the texture storage unit 70, pastes the omnidirectional image onto the front surface of the three-dimensional object as a translucent texture, and pastes the same omnidirectional image onto the back surface of the three-dimensional object as an opaque texture, after rotating and reversing it to align it with the translucent texture pasted on the front surface. In this case, the non-reversed omnidirectional image is pasted on the front surface and the reversed omnidirectional image on the back surface. As an example, when the three-dimensional object is treated like a globe with a map displayed on its surface, it is desirable to paste the non-reversed omnidirectional image on the front surface so that the map does not appear mirrored.
Alternatively, the drawing processing unit 10 may paste the omnidirectional image onto the back surface of the three-dimensional object as a translucent texture and paste the same omnidirectional image onto the front surface as an opaque texture, after rotating and reversing it to align it with the translucent texture pasted on the back surface. In this case, the reversed omnidirectional image is pasted on the front surface and the non-reversed omnidirectional image on the back surface. As an example, when text is drawn in the omnidirectional image, it is desirable to paste the non-reversed omnidirectional image on the back surface so that the text does not appear mirrored when viewed from inside the three-dimensional object.
Thus, whether the front or back surface receives the non-reversed omnidirectional image can be decided according to the type of omnidirectional image, the intended use of the three-dimensional object, and so on.
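As an illustrative sketch (not part of the claimed embodiment): for an equirectangular texture, the copy applied to the inward-facing side appears mirrored, so a horizontal flip restores the alignment with the texture on the opposite side. The function name and the list-of-rows image representation are assumptions for illustration.

```python
def flip_horizontal(image):
    """Mirror an equirectangular texture left-right.  Applied to the
    copy pasted on the back (inward-facing) surface so that each pixel
    coincides with the same pixel of the front-surface texture.
    `image` is a row-major list of rows of pixel values."""
    return [list(reversed(row)) for row in image]

front_texture = [[1, 2, 3],
                 [4, 5, 6]]
back_texture = flip_horizontal(front_texture)
```

Flipping twice restores the original image, which is why the same transform can be used regardless of which side carries the non-reversed copy.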
The resolution of the translucent texture pasted on the front surface of the three-dimensional object may be made lower than the resolution of the opaque texture pasted on the back surface. The resolution of the translucent texture, which lets the interior show through from the outside, should preferably be reduced drastically, to the point where the atmosphere of the interior is just sufficiently visible. Specifically, a resolution difference of about a factor of ten or more is used. A second, low-resolution omnidirectional image is prepared in advance by thinning out pixels from the high-resolution omnidirectional image; the high-resolution image is used as the opaque texture pasted on the back surface, and the low-resolution image as the translucent texture pasted on the front surface. This saves computation and memory, speeds up drawing, and reduces power consumption.
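The pixel-thinning step can be sketched as follows; a factor of 10 per axis gives the roughly tenfold resolution difference mentioned above. The nested-list image representation is an illustrative assumption.

```python
def decimate(image, factor):
    """Low-resolution copy made by keeping every `factor`-th pixel in
    both directions (pure thinning, no filtering), for use as the
    translucent front-surface texture."""
    return [row[::factor] for row in image[::factor]]

# 20x40 synthetic image whose pixel value encodes its position
high_res = [[x + 10 * y for x in range(40)] for y in range(20)]
low_res = decimate(high_res, 10)   # 2 rows of 4 pixels
```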
When the viewpoint is outside a three-dimensional object, the viewpoint moving unit 20 moves the viewpoint inside the three-dimensional object when the object is selected.
The drawing processing unit 10 renders the three-dimensional object as observed from the viewpoint. The display control unit 30 displays the rendering result on a display.
Three-dimensional objects with a translucent texture pasted on the front surface and an opaque texture pasted on the back surface may be arranged in a nested structure.
In a nested structure in which a second three-dimensional object is placed inside a first three-dimensional object, the viewpoint moving unit 20 moves the viewpoint inside the second three-dimensional object when the viewpoint is inside the first three-dimensional object and the second three-dimensional object is selected. Nesting the three-dimensional objects realizes a lightweight system in which the user can instantly move through three-dimensional space. By moving through the nested three-dimensional objects, an interface that allows instantaneous movement to another location can be provided.
The drawing processing unit 10 renders at least one of the first and second three-dimensional objects in the nested structure as observed from the viewpoint. When the viewpoint is inside the first three-dimensional object but outside the second, the back surface of the first three-dimensional object and the front surface of the second, as observed from the viewpoint, are rendered. When the viewpoint is inside the second three-dimensional object, the back surface of the second three-dimensional object, as observed from the viewpoint, is rendered.
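The face-selection rule just described can be sketched as a small function. The dictionary encoding of the nesting (parent name to child names, with None for the outermost level) is an assumption for illustration, not part of the embodiment.

```python
def visible_surfaces(children, viewpoint_at):
    """children: mapping from an object (or None for the top level) to
    the objects nested directly inside it.  viewpoint_at: the innermost
    object containing the viewpoint, or None if outside everything.
    Returns the surfaces to render: the opaque back face of the object
    the viewpoint is inside, plus the translucent front faces of the
    objects one level further in."""
    surfaces = []
    if viewpoint_at is not None:
        surfaces.append((viewpoint_at, "back"))
    surfaces.extend((c, "front") for c in children.get(viewpoint_at, ()))
    return surfaces

nesting = {None: ["first"], "first": ["second"], "second": []}
```

With this nesting, a viewpoint inside "first" yields the back of "first" plus the front of "second", matching the rule in the paragraph above.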
When a three-dimensional object in the nested structure is a terminal object into which the viewpoint cannot be moved, the drawing processing unit 10 sets the texture on its front surface to be opaque. Even if the user selects a terminal three-dimensional object, the viewpoint moving unit 20 does not move the viewpoint inside it. Making the front surface of a terminal object opaque provides an interface in which the end of the nested structure is intuitively obvious.
The hierarchy setting unit 50 determines a hierarchical structure based on the position information or meta information of the omnidirectional images pasted onto the three-dimensional objects, sets the nested structure of the three-dimensional objects based on that hierarchy, and stores it in the hierarchical structure storage unit 80. The drawing processing unit 10 refers to the hierarchical structure stored in the hierarchical structure storage unit 80, arranges the three-dimensional objects in the nested structure, and renders them. Because the nesting is determined from the meta information of the omnidirectional images, it is intuitively understandable, and the user can move naturally between the nested three-dimensional objects.
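One way the hierarchy setting unit 50 could derive the nesting is sketched below, assuming each omnidirectional image carries a parent-location tag in its meta information; the tag names and data layout are hypothetical.

```python
def build_nesting(meta):
    """meta: mapping from image id to the id of the enclosing location
    (None for the outermost level), e.g. taken from location tags in
    each omnidirectional image's meta information.  Returns a mapping
    from parent to the list of images nested directly inside it."""
    tree = {}
    for image_id, parent in sorted(meta.items()):
        tree.setdefault(parent, []).append(image_id)
    return tree

meta = {"department_store": None,
        "restaurant": "department_store",
        "bookstore": "department_store"}
```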
The doorway setting unit 40 sets a specific location in an omnidirectional image as a doorway, based on semantic information in the omnidirectional image pasted onto the three-dimensional object, and makes the doorway completely transparent. When the viewpoint is outside the three-dimensional object, the viewpoint moving unit 20 moves the viewpoint inside the object when its doorway is selected; when the viewpoint is inside the object, the viewpoint moving unit 20 moves the viewpoint outside the object when the doorway is selected.
By extracting semantic information from the image and using it to make a specific part of the three-dimensional object completely transparent, an exit or entrance can be represented, encouraging the user to move. In the nested structure, the entrances into and exits from the three-dimensional objects become intuitively obvious, providing an interface in which moving between nested three-dimensional objects is easy.
The semantic information in the image can be used not only to make a specific part of a three-dimensional object completely transparent but also to replace it with another image. For example, instead of simply being made completely transparent, the ceiling of a building can be replaced with an image of a blue sky or a night sky.
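A minimal sketch of the doorway "punch-out", assuming the semantic segmentation step has already reported a rectangular door region; the per-pixel opacity grid and names are illustrative.

```python
def punch_doorway(alpha, region):
    """alpha: row-major grid of per-pixel opacity (0.0 = fully
    transparent, 1.0 = opaque) for the translucent front texture.
    region: (top, left, bottom, right) rectangle that semantic
    segmentation labeled as a door.  The region is made completely
    transparent so that it reads as an opening."""
    top, left, bottom, right = region
    for y in range(top, bottom):
        for x in range(left, right):
            alpha[y][x] = 0.0
    return alpha

mask = [[0.3] * 6 for _ in range(4)]     # 30% opacity everywhere
mask = punch_doorway(mask, (1, 2, 3, 4))  # door found by segmentation
```

Replacing the region with another texture (e.g. a sky image for a ceiling) would follow the same pattern, writing pixels instead of opacities.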
When the omnidirectional image is a photograph of a building, the drawing processing unit 10 can use a cube or rectangular parallelepiped as the three-dimensional object; when the omnidirectional image is a photograph of an outdoor space, it can use a sphere or ellipsoid. There is no restriction on the shape of the three-dimensional object (a cube, a cylinder, and so on), but using cubes for buildings and spheres for outdoor spaces better matches the user's intuition. A simple three-dimensional model shaped to match the user's intuition thus provides an intuitively understandable interface.
In the nested structure, the drawing processing unit 10 can also reflect the influence of a light source inside one three-dimensional object on the surface or interior of another three-dimensional object. For example, a light source inside a higher-level three-dimensional object may illuminate, or cast a shadow on, the surface of a lower-level three-dimensional object contained within it, or light up the interior of that lower-level object. Also, between adjacent three-dimensional objects at the same level, light from a light source of one object, or its reflection, may affect the surface or interior of the neighboring object.
An attribute setting unit that sets the attributes of the three-dimensional objects stored in the 3D object storage unit 60 may further be provided.
FIG. 2 is a diagram illustrating a method for pasting textures onto the front and back surfaces of a 3D object.
The identical textures 300 and 310 pasted on the front and back surfaces of the 3D object 200 are, as an example, omnidirectional images in equirectangular format, generated by equirectangular projection. An omnidirectional image can be captured with a 360-degree camera, or a panoramic image obtained by stitching multiple images may be used.
When pasting the textures 300 and 310 onto the 3D object 200, the texture 300 pasted on the front surface is given a transparency of about 20 to 50%, while the texture 310 pasted on the back surface is opaque. As a result, when the 3D object 200 is viewed from the outside, there is a visual effect in which its interior shows through, but when viewed from inside the 3D object 200, the outside is not visible. Providing an interface through which the interior of the 3D object 200 can be glimpsed from outside, while the outside cannot be seen from within, suggests to users that they can enter the 3D object 200 and motivates them to move their viewpoint inside.
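The see-through effect is ordinary "over" alpha compositing; a sketch with a front-texture opacity in the range described above (pixel tuples and the function name are illustrative):

```python
def over(front_rgb, behind_rgb, opacity):
    """Blend one pixel of the front-surface texture, drawn with the
    given opacity (about 0.2-0.5 here), over whatever is visible
    inside the 3D object behind it."""
    return tuple(opacity * f + (1.0 - opacity) * b
                 for f, b in zip(front_rgb, behind_rgb))

# grey front texture at 50% opacity over a dark blue interior pixel
seen = over((200, 200, 200), (0, 0, 100), 0.5)
```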
The 3D object 200 is, as an example, an object with a simple three-dimensional shape such as a sphere, cube, or cylinder. Using a simple three-dimensional shape as the 3D object 200 keeps the computation and data volume of the omnidirectional-image rendering low while providing an interface that allows the whole to be surveyed easily. This enables a realistic yet lightweight display of three-dimensional space.
So that the same subject overlaps at the same position on the front and back surfaces of the 3D object 200, the texture 310 pasted on the back surface is produced by applying rotation and reversal processing to the texture 300 pasted on the front surface.
Hereinafter, a simple-shaped 3D object in which the transparency of the textures pasted on the front and back surfaces differs in this way is called a "translucent 3D object".
FIG. 3 is a diagram illustrating a method for pasting moving images onto the front and back surfaces of a 3D object.
As shown in FIG. 3, moving-image textures 320 and 330 can also be pasted onto the 3D object 200. The moving-image texture 320 is the frame data of the translucent texture pasted on the front surface of the 3D object 200, and the moving-image texture 330 is the frame data of the opaque texture pasted on the back surface. The pasting process can be realized by the same means as pasting a still-image texture onto the 3D object 200, but with the moving-image textures 320 and 330, frames switch according to the display time. The frame rate is not limited; anything from 30 fps up to slow motion (240 fps) is possible.
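The frame switching by display time can be sketched as a simple index computation; the looping behavior and the function name are assumptions for illustration.

```python
def frame_for(display_time_s, fps, frame_count):
    """Index of the video-texture frame to show at a given display
    time, looping when the clip ends.  Works for 30 fps playback as
    well as high-frame-rate (240 fps) slow-motion sources."""
    return int(display_time_s * fps) % frame_count

frame_30 = frame_for(2.5, 30, 300)    # 30 fps clip, 2.5 s in
frame_240 = frame_for(2.5, 240, 300)  # 240 fps clip wraps around
```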
FIG. 4 is a flowchart showing the drawing process for a translucent 3D object.
Polygons are combined to generate a 3D object with a simple mesh structure (S10).
Next, textures captured over 360 degrees are pasted onto the front and back surfaces of the generated 3D object (S20). Textures captured and saved over 360 degrees come in various formats, such as equirectangular format or two side-by-side fisheye images, and a geometric transformation is applied to map them onto the simple-shaped 3D object.
The user's viewpoint is moved (S30). The user can move the viewpoint by mouse operations, touch operations on a touch panel, and so on. By clicking the textured translucent 3D object with the mouse or touching it on the touch panel, the user can move the viewpoint from outside the translucent 3D object to inside it. The user can also move the viewpoint from inside the translucent 3D object back outside by an operation such as right-clicking the mouse.
The three-dimensional space seen from the user's camera viewpoint is rendered (S40). The frame rate of the rendering process may be changed according to how often the texture information is updated and how often the user operates.
When the user's viewpoint is outside the translucent 3D object, an image of the translucent 3D object seen from the external viewpoint is rendered. In this case, since the front surface of the translucent 3D object is translucent, objects inside it show through the texture pasted on its front surface.
When the user's viewpoint is inside the translucent 3D object, an image of the translucent 3D object seen from the internal viewpoint is rendered. In this case, since the back surface of the translucent 3D object is opaque, the texture pasted on the back surface is visible, but the outside is not.
FIG. 5 is a diagram illustrating the nested structure of translucent 3D objects. By nesting and superimposing multiple translucent 3D objects of different sizes, the user can easily move between them.
As shown in FIG. 5, two further translucent 3D objects 210 are superimposed inside the translucent 3D object 200. The types and number of translucent 3D objects to superimpose are unrestricted. For example, a cubic translucent 3D object may be placed inside a spherical one, and the number of cubic translucent 3D objects is not limited to one; multiple objects may be arranged.
By placing further translucent 3D objects inside a translucent 3D object, an interface can be realized in which, even when multiple 3D objects overlap, the user can continuously enter into the interior of one 3D object after another. Note that an opaque 3D object can also be placed inside a translucent 3D object, but in that case its interior cannot be entered.
In the nested structure of translucent 3D objects, the user can move to a translucent 3D object at a higher or lower level. To move to a translucent 3D object at a lower level of the nesting, the user performs an operation such as clicking the destination translucent 3D object. To move to a translucent 3D object at a higher level, the user performs a "back" operation such as right-clicking the mouse.
It is also possible to move to a translucent 3D object at the same level. Moving up or down the hierarchy is possible with a simple mouse operation or a pinch operation on the touch panel. To move to another translucent 3D object at the same level, however, a warp 3D object, like a wormhole or an "anywhere door," representing movement to another translucent 3D object at the same level is prepared; by selecting the warp 3D object, the user can move to another translucent 3D object at the same level.
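The three movements described above (descend by selecting a child, ascend with a "back" operation, and warp to a sibling at the same level) can be sketched as operations on a path from the outermost object down to the current one; the list representation is an illustrative assumption.

```python
def descend(path, child):
    """Click/tap on a nested translucent 3D object: enter it."""
    return path + [child]

def ascend(path):
    """'Back' operation (e.g. right click): return to the parent."""
    return path[:-1]

def warp(path, sibling):
    """Selecting a warp 3D object: jump to a sibling at the same level."""
    return path[:-1] + [sibling]

path = descend([], "department_store")
path = descend(path, "restaurant")
```

A warp is thus equivalent to an ascend immediately followed by a descend, which is why it can skip the intermediate view.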
FIGS. 6(a) and 6(b) are diagrams illustrating an example in which an opaque 3D object is placed inside a translucent 3D object.
As shown in FIG. 6(a), the front surface of the translucent 3D object 500 has a translucent texture pasted on it, so the interior is visible. It can be seen that an opaque 3D object 510 exists inside the translucent 3D object 500.
FIG. 6(b) shows the three-dimensional space seen from a viewpoint inside the translucent 3D object 500 after entering it. From a viewpoint inside, the 360-degree image formed by the opaque texture pasted on the back surface of the translucent 3D object 500 is visible. Since the opaque 3D object 510 exists inside the translucent 3D object 500, it appears in the field of view.
 半透明3Dオブジェクトと違って、不透明3Dオブジェクト510の内部には進むことができない。不透明3Dオブジェクト510は、これ以上、内部に進むことができないという意味で入れ子構造の終点である。 Unlike semi-transparent 3D objects, it is not possible to go inside an opaque 3D object 510. Opaque 3D object 510 is the end of the nesting in the sense that it cannot go any further inside.
 終点である不透明3Dオブジェクトは、シーンと連動した3Dオブジェクトにすることができる。例えば、入れ子構造の最上位が百貨店の全方位画像を貼り付けた半透明3Dオブジェクトである場合、百貨店の半透明3Dオブジェクトの内部には、多数の店舗の半透明3Dオブジェクトが存在する。入れ子構造の最下位が飲食店の半透明3Dオブジェクトである場合、飲食店の半透明3Dオブジェクトの内部には、終点の不透明3Dオブジェクトとして、たとえば、メニューのオブジェクトや料理のオブジェクトを配置してもよい。これにより、ユーザが3次元空間として描画された百貨店内を歩きながら、特定の店舗に入り、最終的にメニューを見たり、特定の料理を見るように、ユーザをナビゲートすることができ、直感的なインタフェースを提供することができる。 The opaque 3D object that is the end point can be a 3D object linked to the scene. For example, if the top level of the nested structure is a translucent 3D object to which an omnidirectional image of a department store is pasted, there are translucent 3D objects of many stores inside the translucent 3D object of the department store. If the lowest level of the nested structure is a translucent 3D object for a restaurant, for example, a menu object or a food object may be placed inside the translucent 3D restaurant object as an opaque 3D object at the end point. good. This allows the user to navigate while walking through a department store rendered as a three-dimensional space, entering a particular store, and ultimately looking at a menu or viewing a particular dish. can provide a standard interface.
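The nested structure described above, with translucent objects that can be entered and opaque objects as terminal end points, can be sketched as follows. This is a minimal illustration; the class and method names (`SceneObject`, `is_enterable`) are assumptions for explanation, not part of the embodiment.

```python
# Sketch of the nesting described above: translucent objects can be entered,
# opaque objects are terminal end points of the nested structure.
class SceneObject:
    def __init__(self, name, opaque=False, children=None):
        self.name = name
        self.opaque = opaque          # opaque objects are terminal: no entry
        self.children = children or []

    def is_enterable(self):
        # Only translucent objects allow the viewpoint to move inside
        return not self.opaque

# Department store -> restaurant -> menu (terminal, opaque)
menu = SceneObject("menu", opaque=True)
restaurant = SceneObject("restaurant", children=[menu])
store = SceneObject("department_store", children=[restaurant])
```

Selecting `store` or `restaurant` would move the viewpoint inside, while selecting `menu` would not, mirroring the department-store navigation example.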
 FIGS. 7(a) to 7(c) are diagrams illustrating a method for setting doorways in a translucent 3D object using semantic segmentation.
 By making a specific location of a translucent 3D object completely transparent, that location is "punched out," explicitly representing a "doorway." The user recognizes the completely transparent location as a doorway and can move between nested translucent 3D objects through it.
 To automate doorway generation, semantic segmentation, which associates a label or category with each pixel in an image, may be used.
 FIG. 7(a) is a diagram illustrating an example of an image labeled by semantic segmentation. Each pixel is labeled "sky," "building," "window," "door," "car," "paved road," or "vegetation."
 A doorway can be generated automatically by running semantic segmentation on the texture pasted on a translucent 3D object and "punching out," for example, the pixels classified as "door." Since the doorway in the texture pasted on the translucent 3D object is set to be transparent, the interior is visible from outside the translucent 3D object, and the exterior is visible from inside it.
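The "punching out" step above can be sketched as an alpha-channel operation on the texture. This assumes a per-pixel label map has already been produced by some segmentation model; the label id for "door" and the function name are illustrative assumptions.

```python
# Illustrative sketch: given a per-pixel label map from semantic segmentation,
# set the texture's alpha to 0 (completely transparent) wherever the label is
# "door", producing an automatic doorway. The label id 3 is a made-up value.
DOOR = 3  # hypothetical label id for "door"

def punch_doorways(rgba, labels, door_label=DOOR):
    """rgba: H x W pixels as [r, g, b, a] lists; labels: H x W label ids."""
    for y, row in enumerate(labels):
        for x, label in enumerate(row):
            if label == door_label:
                rgba[y][x][3] = 0  # alpha 0 = fully transparent "hole"
    return rgba

# 2 x 3 translucent texture (alpha 128) with a door column in the middle
texture = [[[255, 255, 255, 128] for _ in range(3)] for _ in range(2)]
labels = [[0, 3, 0], [0, 3, 0]]
punch_doorways(texture, labels)
```

After the call, the middle column is alpha 0, so the renderer shows the interior through that region from outside, and the exterior from inside.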
 With reference to FIGS. 7(b) and 7(c), an example is described in which, in a nested structure where another translucent 3D object 530 is placed inside a translucent 3D object 520, a doorway 532 is set in the translucent 3D object 530.
 FIG. 7(b) shows the image seen by the user when the user is inside the translucent 3D object 520. The user inside the translucent 3D object 520 sees the translucent 3D object 530 against the background of the opaque texture on the back surface of the translucent 3D object 520. Since the doorway 532 of the translucent 3D object 530 is completely transparent, the interior of the translucent 3D object 530 is visible through the doorway 532. The user can select the doorway 532 of the translucent 3D object 530 and enter the translucent 3D object 530 through it.
 FIG. 7(c) shows the image seen by the user when the user is inside the translucent 3D object 530. The user inside the translucent 3D object 530 sees the opaque texture on the back surface of the translucent 3D object 530. Since the doorway 532 of the translucent 3D object 530 is completely transparent, the outside, that is, the opaque texture on the back surface of the translucent 3D object 520, is visible through the doorway 532. The user can select the doorway 532 of the translucent 3D object 530 and exit the translucent 3D object 530 through it.
 FIGS. 8(a) and 8(b) are diagrams illustrating shapes of a translucent 3D object chosen to match the scene of the texture pasted on it. The shape of the translucent 3D object can be selected according to the scene of the 360-degree image.
 As shown in FIG. 8(a), when a 360-degree image taken indoors is pasted on the translucent 3D object 540 as a texture, a cubic translucent 3D object 540 can be used, because buildings are often close to cubic.
 As shown in FIG. 8(b), when a 360-degree image taken outdoors is pasted on the translucent 3D object 550, a spherical translucent 3D object 550 can be used to represent open space extending to infinity.
 By using a translucent 3D object whose shape matches the scene in this way, the user can intuitively understand whether the interior of the translucent 3D object is indoors or outdoors, which makes it easier to decide whether to enter the translucent 3D object.
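The scene-to-shape mapping above reduces to a simple selection rule. The function name and scene-type strings below are illustrative assumptions:

```python
# Sketch of the shape-selection rule described above:
# indoor scenes use a cube (buildings are roughly cubic),
# outdoor scenes use a sphere (open space extending to infinity).
def choose_shape(scene_type):
    return "cube" if scene_type == "indoor" else "sphere"
```

In practice the scene type could come from image classification or from metadata attached to the 360-degree image.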
 FIG. 9 is a diagram illustrating the movement of translucent 3D objects when the device displaying them is tilted.
 Assume that a spherical translucent 3D object and a cubic translucent 3D object are displayed on a display device such as a smartphone. The tilt of the display device can be calculated from the information of the accelerometer and gyro sensor mounted on it. The displayed translucent 3D objects can be made to move differently depending on the tilt of the display device. For example, when the display device is tilted, the spherical translucent 3D object is given a small coefficient of friction and is animated so that it rolls away, while the cubic translucent 3D object is given a large coefficient of friction and is animated so that it barely moves.
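One way to realize the behavior above is to derive a tilt angle from the accelerometer's gravity vector and move each object only when the slope overcomes its friction coefficient. This is a sketch under stated assumptions: the friction values, gain, and function names are invented for illustration, and a real implementation would use the platform's sensor API.

```python
import math

# Sketch: tilt from the gravity vector (ax, ay, az); an object moves only
# when sin(tilt) exceeds its friction coefficient. All constants are made up.
FRICTION = {"sphere": 0.05, "cube": 0.6}  # sphere rolls easily, cube resists

def tilt_angle(ax, ay, az):
    # angle between the gravity vector and the device normal, in radians
    return math.atan2(math.hypot(ax, ay), abs(az))

def displacement(shape, ax, ay, az, gain=10.0):
    drive = math.sin(tilt_angle(ax, ay, az)) - FRICTION[shape]
    return max(0.0, drive) * gain  # the cube stays put until tilted steeply
```

At a 30-degree tilt, `sin` is 0.5: the sphere's drive (0.45) produces motion, while the cube's drive is negative and it does not move, matching the rolling-versus-static animation contrast in the text.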
 FIGS. 10(a) and 10(b) are diagrams illustrating the hierarchical structure of translucent 3D objects. Translucent 3D objects may be superimposed in any way, and the textures of translucent 3D objects placed on the same level are arbitrary; however, by applying object detection or similar techniques to a texture in advance, meta information can be extracted from the texture. The hierarchical structure of the translucent 3D objects can then be determined using the meta information of the textures pasted on them.
 FIG. 10(a) shows a hierarchical structure of meta information. A texture 560b containing the meta information "Earth" is classified into the folder 560a at the top level. Textures 562b and 564b containing the meta information "water" and "land" are classified into the folders 562a and 564a at the second level. Textures 566b and 568b containing the meta information "sea" are classified into the folder 566a at the lowest level, below the second-level folder 562a named "water."
 FIG. 10(b) shows the hierarchical structure of translucent 3D objects corresponding to the hierarchical structure of meta information in FIG. 10(a).
 The texture 560b classified into the top-level folder 560a named "Earth" is pasted on the top-level translucent 3D object 560c.
 The texture 562b classified into the second-level folder 562a named "water" is pasted on the second-level translucent 3D object 562c.
 The texture 564b classified into the second-level folder 564a named "land" is pasted on the second-level translucent 3D object 564c.
 The textures 566b and 568b classified into the lowest-level folder 566a named "sea" are pasted on the lowest-level translucent 3D objects 566c and 568c.
 Under the folder "Earth" there are two folders, "water" and "land." This is because classifying textures whose meta information includes "water" and textures whose meta information includes "land" into separate folders, and representing them as separate translucent 3D objects on the same level, can be expected to improve the user experience. Under the folder "water" there is a folder "sea." Since "sea" is a subordinate concept of "water," placing the translucent 3D object corresponding to "sea" inside the translucent 3D object corresponding to "water" makes the structure intuitively easier to understand.
 The meta information of a texture is not limited to information extracted by image processing. GPS information or the like representing the point where the texture was captured may be used as the meta information. For example, a hierarchical structure of meta information such as "Tokyo" under "Japan" and "Shinjuku" under "Tokyo" is created based on the GPS information of the textures, and the textures are classified accordingly. Corresponding to this hierarchical structure, the translucent 3D object for "Tokyo" is placed inside the translucent 3D object for "Japan," and the translucent 3D object for "Shinjuku" is placed inside the translucent 3D object for "Tokyo."
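The classification step above, from per-texture meta information to a nested folder structure that then drives object nesting, can be sketched with plain nested dictionaries. The file names and place path are illustrative; how the path is derived (image labels, GPS reverse-geocoding) is outside this sketch.

```python
# Sketch: group textures into a nested hierarchy keyed by their meta-info
# path, e.g. Japan -> Tokyo -> Shinjuku. Each level would correspond to one
# translucent 3D object nested inside its parent's object.
def build_hierarchy(textures):
    """textures: list of (texture_name, [level0, level1, ...]) tuples."""
    root = {}
    for name, path in textures:
        node = root
        for level in path:
            node = node.setdefault(level, {})
        node.setdefault("_textures", []).append(name)
    return root

h = build_hierarchy([
    ("shinjuku_street.jpg", ["Japan", "Tokyo", "Shinjuku"]),
    ("tokyo_station.jpg", ["Japan", "Tokyo"]),
])
```

Walking this tree top-down, one would create the "Japan" object, place the "Tokyo" object inside it, and the "Shinjuku" object inside that, pasting each level's textures onto the corresponding object.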
 FIGS. 11(a) and 11(b) are diagrams illustrating animation patterns of a translucent 3D object. The animation of a translucent 3D object can be changed according to the texture pasted on it or the status of the 3D objects inside it.
 For example, assume that a 360-degree image of a store is pasted on the translucent 3D object 550 and that a 3D object of a character exists inside it. When the store is holding a sale and the character is selling products, the translucent 3D object 550 can be rotated as shown in FIG. 11(a) or bounced as shown in FIG. 11(b).
 To indicate the attributes of a translucent 3D object, a shop signboard, logo mark, or the like may be provided above it. The signboard or logo mark is basically placed perpendicular to the line of sight, and its orientation does not change even when the translucent 3D object rotates or bounces. The signboard or logo mark may also indicate the business status of the shop: it rotates while the shop is open, stops while the shop is closed, and bounces during a sale.
 To indicate the attributes of a translucent 3D object, an effect such as an animation showing an image of the products offered by the shop may be added near the translucent 3D object (for example, above it). For a ramen restaurant, for instance, an animation of rising steam may be added.
 The texture to be pasted may be switched not only according to the status of the translucent 3D object but also according to the time of day or the user's attribute information. For example, a blue-sky texture may be displayed during the day and a starry-sky texture at night. According to attributes such as the user's age, gender, and interests, a texture for children or for adults, a texture for men or for women, or a type of texture matching the user's interests may be selected.
 As a method of switching between multiple textures pasted on a translucent 3D object, the textures may be loaded into memory, rendered, and overlapped in advance, and then switched by setting the transparency of the selected texture to 0% (that is, opaque) and the transparency of the unselected textures to 100% (that is, completely transparent). For example, two images, a daytime texture and a nighttime texture, are rendered and overlapped in advance; during the daytime, the daytime texture is switched in by setting its transparency to 0% and the nighttime texture's transparency to 100%, and at night, the nighttime texture is switched in by setting the daytime texture's transparency to 100% and the nighttime texture's transparency to 0%. Similarly, spring, summer, autumn, and winter textures can be switched according to the season, and textures can be switched according to age or gender. Since textures can be switched merely by setting transparency, the time required to load a texture into memory and render it is eliminated, and fast texture switching is achieved.
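The transparency-only switching scheme above can be sketched as follows. The class name and the opacity convention (1.0 = opaque, i.e. 0% transparency) are assumptions for illustration; in a real engine the loop would set a material's alpha instead of a dictionary entry.

```python
# Sketch: all texture layers are loaded, rendered, and overlapped up front;
# switching only toggles per-layer opacity, so no reload/re-render is needed.
class LayeredTexture:
    def __init__(self, layers):
        # opacity 1.0 = opaque (0% transparency); 0.0 = completely transparent
        self.opacity = {name: 0.0 for name in layers}

    def select(self, name):
        for key in self.opacity:
            self.opacity[key] = 1.0 if key == name else 0.0

tex = LayeredTexture(["day", "night"])
tex.select("day")    # daytime: day layer opaque, night layer transparent
tex.select("night")  # nightfall: instant switch by opacity alone
```

The same pattern extends to four seasonal layers or per-user-attribute layers: one `select` call per switch, with no texture loading on the switching path.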
 The influence of lighting between translucent 3D objects may also be reflected in the textures. For example, if an object that emits light exists inside a translucent 3D object, the light emitted by that object may illuminate, or cast a shadow on, the texture of another translucent 3D object inside the same translucent 3D object.
 The present invention has been described above based on the embodiments. Those skilled in the art will understand that the embodiments are merely illustrative, that various modifications can be made to the combinations of their components and processing processes, and that such modifications are also within the scope of the present invention.
 The present invention relates to image display technology.
 10 drawing processing unit, 20 viewpoint moving unit, 30 display control unit, 40 doorway setting unit, 50 hierarchy setting unit, 60 3D object storage unit, 70 texture storage unit, 80 hierarchical structure storage unit, 100 image display device.

Claims (9)

  1.  An image display device comprising:
     a drawing processing unit that pastes a first omnidirectional image on the front surface of a first three-dimensional object as a translucent texture and pastes the first omnidirectional image on the back surface of the first three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the front surface, or pastes the first omnidirectional image on the back surface of the first three-dimensional object as a translucent texture and pastes the first omnidirectional image on the front surface of the first three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the back surface; and
     a viewpoint moving unit that, when a viewpoint is outside the first three-dimensional object, moves the viewpoint inside the first three-dimensional object in response to selection of the first three-dimensional object,
     wherein the drawing processing unit renders the first three-dimensional object observed from the viewpoint.
  2.  The image display device according to claim 1, wherein, in a nested structure in which a second three-dimensional object is placed inside the first three-dimensional object,
     the drawing processing unit pastes a second omnidirectional image on the front surface of the second three-dimensional object as a translucent texture and pastes the second omnidirectional image on the back surface of the second three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the front surface, or pastes the second omnidirectional image on the back surface of the second three-dimensional object as a translucent texture and pastes the second omnidirectional image on the front surface of the second three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the back surface,
     the viewpoint moving unit, when the viewpoint is inside the first three-dimensional object, moves the viewpoint inside the second three-dimensional object in response to selection of the second three-dimensional object, and
     the drawing processing unit renders at least one of the first three-dimensional object and the second three-dimensional object in the nested structure observed from the viewpoint.
  3.  The image display device according to claim 2, wherein, when a three-dimensional object in the nested structure is a terminal three-dimensional object into whose interior the viewpoint cannot be moved, the drawing processing unit sets the surface texture of the terminal three-dimensional object to be opaque, and
     the viewpoint moving unit does not move the viewpoint inside the terminal three-dimensional object even when the terminal three-dimensional object is selected.
  4.  The image display device according to claim 2, further comprising a hierarchical structure setting unit that determines a hierarchical structure based on position information or meta information of the omnidirectional images pasted on the three-dimensional objects, and sets the nested structure of the three-dimensional objects based on the hierarchical structure.
  5.  The image display device according to claim 2, further comprising a doorway setting unit that sets a specific location of the omnidirectional image pasted on a three-dimensional object as a doorway based on semantic information in the omnidirectional image, and sets the doorway to be completely transparent,
     wherein the viewpoint moving unit, when the viewpoint is outside the three-dimensional object, moves the viewpoint inside the three-dimensional object in response to selection of the doorway of the three-dimensional object, and, when the viewpoint is inside the three-dimensional object, moves the viewpoint outside the three-dimensional object in response to selection of the doorway of the three-dimensional object.
  6.  The image display device according to any one of claims 1 to 5, wherein the drawing processing unit uses a cube or a rectangular parallelepiped as the three-dimensional object when the omnidirectional image is an image of a building, and uses a sphere or an ellipsoid as the three-dimensional object when the omnidirectional image is an image of an outdoor space.
  7.  The image display device according to claim 2, wherein the drawing processing unit reflects, in the nested structure, the influence of a light source inside a three-dimensional object on the surface or interior of another three-dimensional object.
  8.  An image display method comprising:
     pasting a first omnidirectional image on the front surface of a first three-dimensional object as a translucent texture and pasting the first omnidirectional image on the back surface of the first three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the front surface, or pasting the first omnidirectional image on the back surface of the first three-dimensional object as a translucent texture and pasting the first omnidirectional image on the front surface of the first three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the back surface;
     moving a viewpoint inside the first three-dimensional object, when the viewpoint is outside the first three-dimensional object, in response to selection of the first three-dimensional object; and
     rendering the first three-dimensional object observed from the viewpoint.
  9.  An image display program causing a computer to execute the steps of:
     pasting a first omnidirectional image on the front surface of a first three-dimensional object as a translucent texture and pasting the first omnidirectional image on the back surface of the first three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the front surface, or pasting the first omnidirectional image on the back surface of the first three-dimensional object as a translucent texture and pasting the first omnidirectional image on the front surface of the first three-dimensional object as an opaque texture after rotating and flipping it to align with the translucent texture pasted on the back surface;
     moving a viewpoint inside the first three-dimensional object, when the viewpoint is outside the first three-dimensional object, in response to selection of the first three-dimensional object; and
     rendering the first three-dimensional object observed from the viewpoint.
PCT/JP2023/023519 2022-07-29 2023-06-26 Image display device and image display method WO2024024357A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022121670 2022-07-29
JP2022-121670 2022-07-29

Publications (1)

Publication Number Publication Date
WO2024024357A1 true WO2024024357A1 (en) 2024-02-01

Family

ID=89706071

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/023519 WO2024024357A1 (en) 2022-07-29 2023-06-26 Image display device and image display method

Country Status (1)

Country Link
WO (1) WO2024024357A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002301234A (en) * 2001-04-06 2002-10-15 Sanyo Product Co Ltd Game machine
JP2008033601A (en) * 2006-07-28 2008-02-14 Konami Digital Entertainment:Kk Image processing unit, image processing method, and program
JP2017182681A (en) * 2016-03-31 2017-10-05 株式会社リコー Image processing system, information processing device, and program
JP2018139096A (en) * 2016-11-30 2018-09-06 株式会社リコー Information processing device and program
US10242488B1 (en) * 2015-03-02 2019-03-26 Kentucky Imaging Technologies, LLC One-sided transparency: a novel visualization for tubular objects
US20190156578A1 (en) * 2017-11-22 2019-05-23 Google Llc Interaction between a viewer and an object in an augmented reality environment
JP2021517309A (en) * 2018-05-22 2021-07-15 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Image processing methods, devices, computer programs and computer devices


Similar Documents

Publication Publication Date Title
US9317962B2 (en) 3D space content visualization system
Nebiker et al. Rich point clouds in virtual globes–A new paradigm in city modeling?
US7554539B2 (en) System for viewing a collection of oblique imagery in a three or four dimensional virtual scene
US8493380B2 (en) Method and system for constructing virtual space
CN101147174B (en) System and method for managing communication and/or storage of image data
US20080076556A1 (en) Simulated 3D View of 2D Background Images and Game Objects
Shepherd Travails in the third dimension: A critical evaluation of three-dimensional geographical visualization
Zara Virtual reality and cultural heritage on the web
CN116051713B (en) Rendering method, electronic device, and computer-readable storage medium
Jian et al. Augmented virtual environment: fusion of real-time video and 3D models in the digital earth system
Pierdicca et al. 3D visualization tools to explore ancient architectures in South America
Schmohl et al. Stuttgart city walk: A case study on visualizing textured dsm meshes for the general public using virtual reality
Brivio et al. PhotoCloud: Interactive remote exploration of joint 2D and 3D datasets
Trapp et al. Colonia 3D communication of virtual 3D reconstructions in public spaces
JP2000505219A (en) 3D 3D browser suitable for the Internet
WO2007129065A1 (en) Virtual display method and apparatus
Trapp et al. Strategies for visualising 3D points-of-interest on mobile devices
WO2024024357A1 (en) Image display device and image display method
Döllner Geovisualization and real-time 3D computer graphics
Brogni et al. An interaction system for the presentation of a virtual egyptian flute in a real museum
Liarokapis et al. Design experiences of multimodal mixed reality interfaces
Trapp et al. Communication of digital cultural heritage in public spaces by the example of roman cologne
Gupta Quantum space time travel with the implementation of augmented reality and artificial intelligence
Malhotra Issues involved in real-time rendering of virtual environments
CN117011492B (en) Image rendering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23846091

Country of ref document: EP

Kind code of ref document: A1