WO2020071114A1 - Image processing apparatus and method - Google Patents
- Publication number
- WO2020071114A1 (PCT/JP2019/036469)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- unit
- vector
- data
- image processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/40—Tree coding, e.g. quadtree, octree
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/08—Bandwidth reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/12—Bounding box
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/21—Collision detection, intersection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
Definitions
- the present disclosure relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method capable of suppressing an increase in load when generating a point cloud from a mesh.
- the present disclosure has been made in view of such a situation, and is intended to suppress an increase in load when a point cloud is generated from a mesh.
- An image processing device includes a point cloud generation unit that generates point cloud data by arranging points at the intersections between a surface of a mesh and vectors whose starting points are position coordinates corresponding to a specified resolution.
- the image processing method is an image processing method that generates point cloud data by arranging points at intersections between a surface of a mesh and a vector whose starting point is a position coordinate corresponding to a specified resolution.
- point cloud data is generated by arranging points at intersections between a surface of a mesh and a vector whose starting point is a position coordinate corresponding to a designated resolution.
- FIG. 9 is a diagram illustrating a process for generating a point cloud from a mesh.
- A diagram illustrating an example of how an intersection is calculated.
- FIG. 2 is a block diagram illustrating a main configuration example of a point cloud generation device.
- A flowchart illustrating an example of the flow of a point cloud generation process.
- Diagrams illustrating examples of how an intersection is derived.
- A block diagram showing a main configuration example of a decoding device.
- FIG. 35 is a block diagram illustrating a main configuration example of an encoding device.
- A flowchart illustrating an example of the flow of an encoding process.
- A diagram illustrating an example of the scalability of Triangle Trisoup.
- FIG. 4 is a diagram illustrating an example of how points are generated.
- FIG. 18 is a block diagram illustrating a main configuration example of a computer.
- Non-patent document 1 (described above)
- Non-patent document 2 (described above)
- Non-Patent Document 3 TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (International Telecommunication Union), "Advanced video coding for generic audiovisual services", H.264, 04/2017
- Non-Patent Document 4 TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (International Telecommunication Union), "High efficiency video coding", H.265, 12/2016
- Non-Patent Document 5 Jianle Chen, Maria Alshina, Gary J.
- For example, the Quad-Tree Block Structure described in Non-Patent Document 4 and the QTBT (Quad Tree plus Binary Tree) Block Structure described in Non-Patent Document 5 are within the disclosure range of the present technology and satisfy the support requirements of the claims even when they are not directly described in the embodiments. Likewise, technical terms such as parsing, syntax, and semantics are within the disclosure range of the present technology and satisfy the support requirements of the claims even when there is no direct description in the embodiments.
- In a point cloud, a three-dimensional structure (three-dimensional object) is represented as a set of many points (a point group). That is, point cloud data is configured by the position information and attribute information (for example, color) of each point. The data structure is therefore relatively simple, and an arbitrary three-dimensional structure can be expressed with sufficient accuracy by using sufficiently many points.
- a voxel is a three-dimensional area for quantizing position information to be encoded.
- the three-dimensional area including the point cloud is divided into small three-dimensional areas called voxels, and each voxel indicates whether or not a point is included.
- The position of each point is quantized in voxel units. Therefore, by converting point cloud data into such voxel data, it is possible to suppress an increase in the amount of information (typically, to reduce the amount of information).
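The quantization step above can be sketched as follows. This is a minimal Python illustration under assumed conventions; the function name `voxelize` and the grid layout are hypothetical, not from the patent:

```python
def voxelize(points, origin, d):
    """Quantize point positions to a voxel grid of quantization size d.

    Returns the set of occupied voxel indices; keeping one entry per
    occupied voxel (instead of every raw point) is what suppresses the
    increase in the amount of information.
    """
    occupied = set()
    for (x, y, z) in points:
        ix = int((x - origin[0]) // d)
        iy = int((y - origin[1]) // d)
        iz = int((z - origin[2]) // d)
        occupied.add((ix, iy, iz))
    return occupied

# Two nearby points fall into the same voxel at quantization size 1.0.
voxels = voxelize([(0.2, 0.3, 0.1), (0.4, 0.6, 0.9), (1.5, 0.0, 0.0)],
                  origin=(0.0, 0.0, 0.0), d=1.0)
```

Three input points collapse to two occupied voxels here, illustrating the typical reduction.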
- Octree is a tree structure of voxel data.
- the value of each bit of the lowest node of this Octree indicates whether or not each voxel has a point. For example, a value “1” indicates a voxel including a point, and a value “0” indicates a voxel not including a point.
- one node corresponds to eight voxels. That is, each node of the Octree is composed of 8-bit data, and the 8 bits indicate the presence or absence of points of eight voxels.
- An upper node of the Octree indicates whether or not there is a point in the area obtained by combining the eight voxels corresponding to the lower nodes belonging to that node. That is, an upper node is generated by collecting the information of the voxels of its lower nodes. A node whose value is "0", that is, a node for which none of the eight corresponding voxels includes a point, is deleted.
- a tree structure including nodes whose values are not “0” is constructed. That is, the Octree can indicate the presence or absence of voxel points at each resolution. Therefore, by converting the voxel data into an Octree and encoding, voxel data having various resolutions can be more easily restored at the time of decoding. That is, voxel scalability can be realized more easily.
- In this way, the resolution of the voxels in areas where no points exist can be reduced, so that an increase in the amount of information can be further suppressed (typically, the amount of information can be reduced).
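One level of the bottom-up Octree construction described above can be sketched as follows. This is a hedged Python illustration; the function name and the bit ordering of the 8-bit occupancy byte are assumptions, not from the patent:

```python
def build_octree_level(occupied):
    """Collapse occupied voxel indices one level up: for each parent
    node, compute the 8-bit occupancy byte of its eight child voxels.

    Parents whose eight children contain no point simply never appear
    in the result, which is how all-zero nodes are pruned from the tree.
    """
    parents = {}
    for (ix, iy, iz) in occupied:
        parent = (ix >> 1, iy >> 1, iz >> 1)
        # Assumed bit layout: bit index = (x_lsb << 2) | (y_lsb << 1) | z_lsb.
        bit = ((ix & 1) << 2) | ((iy & 1) << 1) | (iz & 1)
        parents[parent] = parents.get(parent, 0) | (1 << bit)
    return parents

# Two occupied voxels under the same parent set two bits of its byte.
level = build_octree_level({(0, 0, 0), (1, 0, 0)})
```

Applying the function repeatedly until one node remains yields the whole tree, with the presence or absence of points readable at each resolution.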
- <Combination of Octree and Mesh> In Non-Patent Document 2, it has been proposed to convert a target 3D object into voxels and then perform encoding by combining Octree coding and mesh coding (Triangle soup).
- Octree data is decoded to generate Voxel data.
- voxel 11-1, voxel 11-2, and voxel 11-3 are generated.
- a mesh (Mesh) shape (that is, a mesh surface) is restored from the voxel data.
- the mesh surface 12 is restored based on the voxel 11-1, voxel 11-2, and voxel 11-3.
- points 13 are arranged on the surface 12 of the mesh at a resolution of 1 / (2 * blockwidth).
- the blockwidth indicates the longest side of a bounding box (Bounding box) including a mesh.
- the point 13 is voxelized again at the designated resolution d.
- Then, the mesh data (the surface 12 and the like) is discarded. That is, when generating point cloud data having a desired resolution from the mesh data, resampling is performed so as to reduce the resolution (the number of points) of the points 13 that were once sampled at a high resolution.
- In this method, sampling must be performed twice, and the processing is redundant.
- In addition, the data amount temporarily increases. Therefore, the load of generating the point cloud from the mesh may increase; as a result, the processing time and the amount of resources used, such as memory, may increase.
- <Point cloud resolution control> Therefore, utilizing the fact that the resolution of the output point cloud is the same as the resolution at which the input point cloud was converted into voxels, the point cloud is generated at high speed by limiting the number of voxel determinations.
- point cloud data is generated by arranging points at the intersection of a mesh surface and a vector whose starting point is the position coordinates corresponding to the designated resolution.
- the image processing apparatus is provided with a point cloud generation unit that generates point cloud data by arranging points at intersections between a mesh surface and a vector whose starting point is the position coordinates corresponding to the designated resolution.
- a vector Vi having the same direction and the same length as a side of a bounding box including data to be encoded is generated at an interval k * d.
- a vector Vi as indicated by an arrow 23 is set for a surface 22 of the mesh existing in the bounding box 21.
- d is the quantization size when the bounding box is voxelized.
- Here, k is an arbitrary natural number. That is, the vectors Vi are set with the position coordinates corresponding to the specified voxel resolution as their starting points.
- Vectors can be set in both the positive and negative directions for each of the mutually perpendicular x, y, and z directions (the directions parallel to the sides of the bounding box). That is, the intersection determination may be performed for the vectors Vi in all six directions. By performing the intersection determination in more directions in this way, intersections can be detected more reliably.
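The layout of the vectors Vi can be sketched as follows. This is a hypothetical Python illustration that enumerates ray start points at interval k*d on the bounding box faces, for both the positive and negative direction of each axis; the function name and the exact face placement are assumptions:

```python
def axis_rays(bbox_min, bbox_max, d, k=1):
    """Enumerate intersection-test rays: for each of the six axis
    directions, start points are laid out on the facing side of the
    bounding box at interval k*d (d = voxel quantization size)."""
    step = k * d
    rays = []
    axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    for a, direction in enumerate(axes):
        u, v = (a + 1) % 3, (a + 2) % 3
        nu = int((bbox_max[u] - bbox_min[u]) / step)
        nv = int((bbox_max[v] - bbox_min[v]) / step)
        for i in range(nu + 1):
            for j in range(nv + 1):
                p = [0.0, 0.0, 0.0]
                p[u] = bbox_min[u] + i * step
                p[v] = bbox_min[v] + j * step
                p[a] = bbox_min[a]           # positive direction: min face
                rays.append((tuple(p), direction))
                q = list(p)
                q[a] = bbox_max[a]           # negative direction: max face
                rays.append((tuple(q), tuple(-c for c in direction)))
    return rays

# A unit bounding box at d = 0.5 yields a 3x3 grid of start points per
# face, in six directions.
rays = axis_rays((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), d=0.5)
```

Each entry pairs a start coordinate with a direction vector, ready for the intersection determination against the mesh surfaces.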
- Furthermore, the starting points of the vectors Vi may be limited to the range spanned by the three vertices of each triangle of the mesh. By doing so, the number of vectors Vi to be processed can be reduced, so that an increase in load can be suppressed (for example, the processing can be further speeded up).
- When intersections derived from different vectors or different meshes have overlapping coordinate values, all but one of those points may be deleted.
- an increase in unnecessary processing can be suppressed, and an increase in load can be suppressed (for example, the processing can be further speeded up).
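The deletion of overlapping intersections can be sketched as follows. This is a minimal Python illustration; quantizing coordinates to the size d is an assumed way of detecting the overlap, not a detail from the patent:

```python
def dedup_points(points, d):
    """Keep one point per quantized coordinate: intersections produced
    by different vectors or different mesh faces that land on the same
    coordinate (within quantization size d) are reduced to one."""
    seen = {}
    for p in points:
        key = tuple(round(c / d) for c in p)
        seen.setdefault(key, p)          # first point wins; later duplicates dropped
    return list(seen.values())

# The first two points coincide at quantization size 0.1 and collapse to one.
pts = dedup_points([(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 0.0, 0.0)], d=0.1)
```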
- When an intersection falls outside the bounding box, its position may be clipped (moved) into the bounding box by clip processing. Alternatively, such an intersection may be deleted.
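The clip processing can be sketched as a per-component clamp; a minimal Python illustration (the function name is hypothetical):

```python
def clip_to_bbox(p, bbox_min, bbox_max):
    """Clip (move) an out-of-box intersection into the bounding box.

    The alternative policy mentioned in the text would be to delete
    such intersections instead of clipping them.
    """
    return tuple(min(max(c, lo), hi)
                 for c, lo, hi in zip(p, bbox_min, bbox_max))

# x exceeds the box and y undershoots it; both are clamped to the faces.
clipped = clip_to_bbox((1.2, -0.1, 0.5), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```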
- a point having the obtained coordinate value is output as a decoding result. That is, a point is arranged at the obtained coordinate value.
- <Intersection determination and coordinate value calculation> The method of determining the intersection and calculating its coordinate value is arbitrary. For example, as shown in FIG. 3, it may be determined using Cramer's rule.
- Here, P is the intersection coordinate, origin is the start coordinate of the ray (vector Vi), ray is its direction vector, and t is a scalar value, so that P = origin + ray * t.
- v0, v1, and v2 are the vertex coordinates of the triangle; edge1 is the vector obtained by subtracting v0 from v1, and edge2 is the vector obtained by subtracting v0 from v2 in the same manner.
- The point P is expressed by a scalar value u along the vector edge1 and a scalar value v along the vector edge2 from v0, so the intersection on the triangle is represented by the edge vectors as follows:
- edge1 * u + edge2 * v - ray * t = origin - v0
- This linear system is solved for (u, v, t), for example by Cramer's rule.
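The determination can be sketched in code. The following Python illustration solves the system edge1*u + edge2*v - ray*t = origin - v0 in the common Möller–Trumbore arrangement of Cramer's rule; the epsilon threshold and function names are assumptions, not from the patent:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle_intersect(origin, ray, v0, v1, v2, eps=1e-9):
    """Solve edge1*u + edge2*v - ray*t = origin - v0 for (u, v, t) and
    return P = origin + ray*t when the hit lies on the triangle."""
    edge1 = sub(v1, v0)
    edge2 = sub(v2, v0)
    pvec = cross(ray, edge2)
    det = dot(edge1, pvec)           # determinant of the 3x3 system
    if abs(det) < eps:
        return None                  # ray parallel to the triangle plane
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det    # Cramer's rule, first unknown
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(ray, qvec) * inv_det     # second unknown
    if v < 0.0 or u + v > 1.0:
        return None                  # outside the triangle
    t = dot(edge2, qvec) * inv_det   # distance along the ray
    if t < 0.0:
        return None
    return (origin[0] + ray[0] * t,
            origin[1] + ray[1] * t,
            origin[2] + ray[2] * t)

# A downward z-axis ray hits the unit triangle at (0.25, 0.25, 0).
hit = ray_triangle_intersect((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                             (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Returning None for misses keeps the caller's loop over vectors and triangles simple.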
- FIG. 4 is a block diagram illustrating an example of a configuration of a point cloud generation device that is an aspect of an image processing device to which the present technology is applied.
- Note that FIG. 4 shows the main elements such as processing units and data flows, and not necessarily all of them. That is, the point cloud generation device 100 may include processing units not shown as blocks in FIG. 4, and there may be processes or data flows not shown as arrows or the like in FIG. 4.
- the point cloud generation device 100 includes a vector setting unit 111, an intersection determination unit 112, an auxiliary processing unit 113, and an output unit 114.
- The vector setting unit 111 sets (generates) the intersection determination vectors Vi as described above in, for example, <Derivation of point cloud>. As described above, each vector Vi has the same direction and the same length as a side of the bounding box including the data to be encoded.
- the vector setting unit 111 supplies vector information indicating the set vector Vi to the intersection determination unit 112.
- The intersection determination unit 112 obtains the mesh data input to the point cloud generation device 100, and also obtains the vector information supplied from the vector setting unit 111. As described above in, for example, <Derivation of point cloud> and <Intersection determination and coordinate value calculation>, the intersection determination unit 112 performs the intersection determination between the mesh surface indicated by the acquired mesh data and the vectors Vi indicated by the vector information. When an intersection is detected, the intersection determination unit 112 calculates the coordinate value of the intersection and supplies the calculated coordinate values (intersection coordinates) to the auxiliary processing unit 113.
- The auxiliary processing unit 113 acquires the intersection coordinates supplied from the intersection determination unit 112, and performs auxiliary processing on the intersections as described above in, for example, <Derivation of point cloud>.
- the auxiliary processing unit 113 supplies the intersection coordinates subjected to the auxiliary processing to the output unit 114 as necessary.
- the output unit 114 outputs the intersection coordinates supplied from the auxiliary processing unit 113 to the outside of the point cloud generation device 100 as (point information of) the point cloud data. That is, point cloud data in which points are arranged at the derived intersection coordinates is generated and output.
- each processing unit may be configured by a logic circuit that realizes the above-described processing.
- Each processing unit may also include, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory), and realize the above-described processing by executing a program using them.
- each processing unit may have both configurations, and a part of the above processing may be realized by a logic circuit, and the other may be realized by executing a program.
- The configurations of the processing units may be independent of each other. For example, some processing units may realize the above-described processing by a logic circuit, other processing units may realize it by executing a program, and still other processing units may realize it by both a logic circuit and the execution of a program.
- With the above configuration, the point cloud generation device 100 can obtain the effects described in <1. Generation of point cloud>. For example, voxel data equivalent to the input resolution can be generated from the mesh in a single process, so an increase in the load of generating the point cloud data can be suppressed. Therefore, for example, point cloud data can be generated at higher speed, and the manufacturing cost of the point cloud generation device 100 can be reduced.
- the intersection determination unit 112 acquires mesh data in step S101.
- In step S102, the vector setting unit 111 sets the vectors Vi (having the same direction and the same length as a side of the bounding box including the data to be encoded) whose starting points are the position coordinates corresponding to the designated voxel resolution.
- step S103 the intersection determining unit 112 performs an intersection determination between the vector Vi set in step S102 and the mesh surface (triangle) indicated by the mesh data acquired in step S101.
- step S104 the intersection determining unit 112 calculates the coordinates of the intersection detected in step S103.
- In step S105, the auxiliary processing unit 113 deletes overlapping intersections, leaving one of each.
- step S106 the auxiliary processing unit 113 processes (for example, clips or deletes) an intersection outside the bounding box.
- step S107 the output unit 114 outputs the coordinates of the intersection obtained as described above as point cloud data (position information).
- When the processing in step S107 ends, the point cloud generation processing ends.
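The flow of steps S101 through S107 can be sketched end to end as follows. This is a simplified, hypothetical Python illustration that handles only rays in the +z direction; a full implementation would repeat it for all six directions and could clip, rather than drop, out-of-box intersections:

```python
def generate_points(triangles, bbox_min, bbox_max, d):
    """End-to-end sketch of the point cloud generation flow: cast
    z-axis rays from a grid at interval d (S102), place a point at
    each ray/triangle intersection (S103-S104), keep one point per
    coordinate (S105), and discard points outside the box (S106)."""
    points = {}
    nx = int((bbox_max[0] - bbox_min[0]) / d)
    ny = int((bbox_max[1] - bbox_min[1]) / d)
    for v0, v1, v2 in triangles:
        for i in range(nx + 1):
            for j in range(ny + 1):
                x = bbox_min[0] + i * d
                y = bbox_min[1] + j * d
                # Barycentric test in the xy projection (valid for a
                # z-axis ray when the triangle is not parallel to z).
                det = ((v1[0] - v0[0]) * (v2[1] - v0[1])
                       - (v2[0] - v0[0]) * (v1[1] - v0[1]))
                if abs(det) < 1e-12:
                    continue
                u = ((x - v0[0]) * (v2[1] - v0[1])
                     - (v2[0] - v0[0]) * (y - v0[1])) / det
                v = ((v1[0] - v0[0]) * (y - v0[1])
                     - (x - v0[0]) * (v1[1] - v0[1])) / det
                if u < 0 or v < 0 or u + v > 1:
                    continue
                z = v0[2] + u * (v1[2] - v0[2]) + v * (v2[2] - v0[2])
                if bbox_min[2] <= z <= bbox_max[2]:   # S106: drop out-of-box hits
                    key = (i, j, round(z / d))        # S105: one point per coordinate
                    points.setdefault(key, (x, y, z))
    return list(points.values())

# A triangle in the z = 0 plane sampled at d = 0.25.
pts = generate_points([((0, 0, 0), (1, 0, 0), (0, 1, 0))],
                      (0, 0, 0), (1, 1, 1), d=0.25)
```

Sampling happens only once, at the output resolution, which is the point of the method compared with the sample-then-resample flow of the prior approach.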
- Each of the above processes is performed in the same manner as in the examples described in <1. Generation of point cloud>. Therefore, by executing each of the above processes, the point cloud generation device 100 can, for example, generate voxel data equivalent to the input resolution from the mesh in a single process. An increase in the load of generating the point cloud data can thus be suppressed, so that, for example, point cloud data can be generated at higher speed and the manufacturing cost of the point cloud generation device 100 can be reduced.
- For example, the intersection determination for the interior of a surface may be performed using vectors Vi that are sparser than those used for the intersection determination near the ends of the surface.
- For example, as shown in FIG. 6, the intersection determination may be performed on the surface 201 using vectors Vi 202-1 to 202-8. In this case, the intervals between the vectors Vi 202-1 to 202-3 and between the vectors Vi 202-6 to 202-8, which are used for the intersection determination near the ends of the surface 201, are set narrow (dense), and the intervals between the vectors Vi 202-3 to 202-6, which are used for the intersection determination in the interior of the surface, are set wide (sparse).
- In this way, by intentionally performing the intersection determination with sparsely spaced vectors Vi inside the triangle (coarsening the spacing of the starting points), the number of points generated in the interior can be reduced. Therefore, an increase in the encoding bit rate of the attribute information (color information and the like) of the point cloud can be suppressed.
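The dense-at-the-ends, sparse-in-the-interior spacing of starting points can be sketched as follows; a hypothetical Python illustration in which the step sizes and margin are illustrative values, not from the patent:

```python
def edge_biased_offsets(width, dense_step, sparse_step, margin):
    """Starting-point offsets along one side of a surface's extent:
    dense (interval dense_step) within `margin` of either end, sparse
    (interval sparse_step) in the interior.  Fewer interior points
    means fewer attribute values to encode."""
    xs = []
    x = 0.0
    while x <= width:
        xs.append(x)
        step = dense_step if (x < margin or x > width - margin) else sparse_step
        x += step
    return xs

# Dense 0.125 spacing near both ends, sparse 0.25 spacing in the middle.
offsets = edge_biased_offsets(1.0, dense_step=0.125, sparse_step=0.25, margin=0.25)
```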
- In addition, coordinates for which the intersection has already been determined once need not be calculated again. For example, as in the example of FIG. 7, when a plurality of mesh surfaces (surfaces 212 and 213) exist for one vector Vi 211, the intersection determination for the vector Vi 211 is performed for all of them at once, so the processing can be further speeded up.
- intersection determination may be performed in parallel with a plurality of processes.
- For example, the intersection determinations of a plurality of vectors with respect to one surface of the mesh may be performed in parallel with each other. That is, the processing may be performed independently for each vector. By doing so, the intersection determination can be performed at higher speed.
- Also, the intersection determinations of a plurality of surfaces with respect to one vector may be performed in parallel with each other. That is, the processing may be performed independently for each surface of the mesh. By doing so, the intersection determination can be performed at higher speed.
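The per-vector parallelism can be sketched as follows. This Python illustration uses a thread pool and a deliberately trivial placeholder test in place of the real intersection determination; for CPU-bound work a process pool or native code would be the usual choice:

```python
from concurrent.futures import ThreadPoolExecutor

def intersect(ray, triangle):
    """Placeholder for the per-ray determination (e.g. the Cramer's-rule
    solve): here it only reports whether the ray's xy start lies under
    the triangle's bounding rectangle."""
    (ox, oy, _), _direction = ray
    xs = [v[0] for v in triangle]
    ys = [v[1] for v in triangle]
    return min(xs) <= ox <= max(xs) and min(ys) <= oy <= max(ys)

def parallel_hits(rays, triangle, workers=4):
    # Each ray is independent of the others, so the determinations can
    # run concurrently without any shared state.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda r: intersect(r, triangle), rays))

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
rays = [((0.5, 0.5, 1.0), (0, 0, -1)), ((2.0, 2.0, 1.0), (0, 0, -1))]
hits = parallel_hits(rays, tri)
```

The symmetric case (many surfaces against one vector) parallelizes the same way, mapping over the triangles instead of the rays.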
- FIG. 9 is a block diagram illustrating an example of a configuration of a decoding device that is an aspect of an image processing device to which the present technology is applied.
- The decoding device 300 illustrated in FIG. 9 is a decoding device corresponding to the encoding device 500 in FIG. 11 described below; for example, it decodes a bit stream generated by the encoding device 500 and restores the point cloud data.
- Note that FIG. 9 shows the main elements such as processing units and data flows, and not necessarily all of them. That is, the decoding device 300 may include processing units not shown as blocks in FIG. 9, and there may be processes or data flows not shown as arrows or the like in FIG. 9.
- The decoding device 300 includes a lossless decoding unit 311, an Octree decoding unit 312, a Mesh shape restoration unit 313, a Point cloud generation unit 314, and an Attribute decoding unit 315.
- the lossless decoding unit 311 acquires a bit stream input to the decoding device 300, decodes the bit stream, and generates Octree data.
- the lossless decoding unit 311 supplies the Octree data to the Octree decoding unit 312.
- The Octree decoding unit 312 acquires the Octree data supplied from the lossless decoding unit 311, constructs an Octree from the Octree data, and generates voxel data from the Octree.
- the Octree decoding unit 312 supplies the generated voxel data to the Mesh shape restoration unit 313.
- the Mesh shape restoration unit 313 restores a mesh shape using the voxel data supplied from the Octree decoding unit 312.
- The Mesh shape restoration unit 313 supplies the generated mesh data to the Point cloud generation unit 314.
- The Point cloud generation unit 314 generates point cloud data from the mesh data supplied from the Mesh shape restoration unit 313, and supplies the generated point cloud data to the Attribute decoding unit 315.
- The Point cloud generation unit 314 has the same configuration as the point cloud generation device 100 (FIG. 4) and performs the same processing. That is, the Point cloud generation unit 314 generates point cloud data from the mesh data by the methods described above in <1. Generation of point cloud> and <2. First embodiment>.
- Therefore, the Point cloud generation unit 314 can obtain the same effects as the point cloud generation device 100.
- For example, the Point cloud generation unit 314 can generate voxel data equivalent to the input resolution from the mesh in a single process. Therefore, the Point cloud generation unit 314 can suppress an increase in the load of generating the point cloud data, and can, for example, generate point cloud data at higher speed. Further, for example, the manufacturing cost of the Point cloud generation unit 314 can be reduced.
- The Attribute decoding unit 315 performs processing related to decoding of the attribute information. For example, the Attribute decoding unit 315 decodes the attribute information corresponding to the point cloud data supplied from the Point cloud generation unit 314. Then, the Attribute decoding unit 315 includes the decoded attribute information in the point cloud data supplied from the Point cloud generation unit 314, and outputs it to the outside of the decoding device 300.
- each processing unit may be configured by a logic circuit that realizes the above-described processing.
- each processing unit may include, for example, a CPU, a ROM, a RAM, and the like, and may execute the program by using the CPU, the ROM, the RAM, and the like, thereby realizing the above-described processing.
- each processing unit may have both configurations, and a part of the above processing may be realized by a logic circuit, and the other may be realized by executing a program.
- The configurations of the processing units may be independent of each other. For example, some processing units may realize the above-described processing by a logic circuit, other processing units may realize it by executing a program, and still other processing units may realize it by both a logic circuit and the execution of a program.
- With the above configuration, the decoding device 300 can obtain the effects described in <1. Generation of point cloud> and <2. First embodiment>. For example, voxel data equivalent to the input resolution can be generated from the mesh in a single process, so the decoding device 300 can suppress an increase in the load of generating point cloud data. Therefore, for example, the decoding device 300 can generate point cloud data at higher speed, and its manufacturing cost can be reduced.
- the lossless decoding unit 311 acquires a bit stream in step S301.
- step S302 the lossless decoding unit 311 losslessly decodes the bit stream obtained in step S301.
- step S303 the Octree decoding unit 312 constructs an Octree and restores voxel data.
- step S304 the mesh shape restoring unit 313 restores the mesh shape from the voxel data restored in step S303.
- In step S305, the Point cloud generation unit 314 executes the point cloud generation process (FIG. 5), and generates a point cloud from the mesh shape restored in step S304 by the methods described above in <1. Generation of point cloud> and <2. First embodiment>.
- step S306 the Attribute decoding unit 315 decodes attribute information (Attribute).
- step S307 the Attribute decoding unit 315 outputs the attribute information decoded in step S306, including the attribute information in the point cloud data.
- When the processing in step S307 ends, the decoding processing ends.
- By executing each process as described above, the decoding device 300 can obtain the effects described in <1. Generation of point cloud> and <2. First embodiment>.
- FIG. 11 is a block diagram illustrating an example of a configuration of an encoding device that is an aspect of an image processing device to which the present technology is applied.
- The encoding device 500 illustrated in FIG. 11 is a device that encodes 3D data such as a point cloud using voxels and an Octree.
- Note that FIG. 11 shows the main elements such as processing units and data flows, and not necessarily all of them. That is, the encoding device 500 may include processing units not shown as blocks in FIG. 11, and there may be processes or data flows not shown as arrows or the like in FIG. 11. The same applies to the other drawings explaining the processing units and the like of the encoding device 500.
- The encoding device 500 includes a Voxel generation unit 511, a Geometry encoding unit 512, a Geometry decoding unit 513, an Attribute encoding unit 514, and a bit stream generation unit 515.
- The Voxel generation unit 511 acquires the point cloud data (Point cloud) input to the encoding device 500, sets a bounding box for an area including the acquired point cloud data, further divides the bounding box to set voxels, and quantizes the position information of the point cloud data.
- the Voxel generation unit 511 supplies the voxel (Voxel) data generated in this way to the Geometry encoding unit 512.
- The Geometry encoding unit 512 encodes the voxel data supplied from the Voxel generation unit 511, thereby encoding the position information of the point cloud.
- the geometry encoding unit 512 supplies the encoded data of the generated position information of the point cloud to the bitstream generation unit 515.
- the Geometry encoding unit 512 supplies the Octree data generated when encoding the position information of the point cloud to the Geometry decoding unit 513.
- the Geometry decoding unit 513 decodes the Octree data and generates position information of the point cloud.
- the geometry decoding unit 513 supplies the generated point cloud data (position information) to the attribute encoding unit 514.
- the Attribute encoding unit 514 encodes attribute information corresponding to point cloud data (position information) based on the input encoding parameter (encode parameter).
- the attribute encoding unit 514 supplies the encoded data of the generated attribute information to the bitstream generation unit 515.
- the bit stream generation unit 515 generates a bit stream including the encoded data of the position information supplied from the Geometry encoding unit 512 and the encoded data of the attribute information supplied from the Attribute encoding unit 514, and outputs it to the outside of the encoding device 500.
- the geometry encoding unit 512 includes an Octree generation unit 521, a Mesh generation unit 522, and a lossless encoding unit 523.
- the Octree generating unit 521 constructs an Octree using the voxel data supplied from the Voxel generating unit 511, and generates Octree data.
- the Octree generation unit 521 supplies the generated Octree data to the Mesh generation unit 522.
- the Mesh generation unit 522 generates mesh data using the Octree data supplied from the Octree generation unit 521, and supplies the mesh data to the lossless encoding unit 523. Further, the Mesh generation unit 522 supplies the Octree data to the Geometry decoding unit 513.
- the lossless encoding unit 523 acquires the Mesh data supplied from the Mesh generation unit 522. In addition, the lossless encoding unit 523 acquires an encoding parameter input from outside the encoding device 500.
- the encoding parameter is information for specifying the type of encoding to be applied, and is input, for example, by a user operation, or supplied from an external device or the like.
- the lossless encoding unit 523 encodes the mesh data with the type specified by the encoding parameter, and generates encoded data of position information.
- the lossless encoding unit 523 supplies the encoded data of the position information to the bit stream generation unit 515.
- the Geometry decoding unit 513 includes an Octree decoding unit 531, a Mesh shape restoration unit 532, and a Point cloud generation unit 533.
- the Octree decoding unit 531 decodes the Octree data supplied from the Geometry encoding unit 512 and generates voxel data.
- the Octree decoding unit 531 supplies the generated voxel data to the Mesh shape restoration unit 532.
- the Mesh shape restoration unit 532 restores a mesh shape using the voxel data supplied from the Octree decoding unit 531, and supplies the mesh data to the Point cloud generation unit 533.
- the Point cloud generation unit 533 generates point cloud data from the mesh data supplied from the Mesh shape restoration unit 532, and supplies the generated point cloud data to the Attribute encoding unit 514.
- This Point cloud generation unit 533 has the same configuration as the point cloud generation device 100 (FIG. 4) and performs the same processing. That is, the Point cloud generation unit 533 generates point cloud data from mesh data by the methods described in <1. Generation of point cloud> and <2. First embodiment>.
- the Point cloud generation unit 533 can obtain the same effects as the point cloud generation device 100.
- the Point cloud generation unit 533 can generate voxel data equivalent to the input resolution from the mesh in one pass. Therefore, the Point cloud generation unit 533 can suppress an increase in the load of generating the point cloud data, and can thus, for example, generate point cloud data faster. Furthermore, for example, the manufacturing cost of the Point cloud generation unit 533 can be reduced.
- each processing unit may be configured by a logic circuit that realizes the above-described processing.
- each processing unit may include, for example, a CPU, a ROM, a RAM, and the like, and realize the above-described processing by executing a program using them.
- each processing unit may of course have both configurations, realizing a part of the above processing by a logic circuit and the rest by executing a program.
- the configurations of the processing units may be independent of one another.
- for example, some processing units may realize a part of the above processing by logic circuits, other processing units may realize it by executing programs, and still other processing units may realize it by both logic circuits and program execution.
- the encoding device 500 can obtain the effects described in <1. Generation of point cloud> and <2. First embodiment>. For example, since voxel data equivalent to the input resolution can be generated from the mesh in one pass, the encoding device 500 can suppress an increase in the load of generating point cloud data. Therefore, for example, the encoding device 500 can generate a bit stream faster. Furthermore, for example, the manufacturing cost of the encoding device 500 can be reduced.
- In step S501, the Voxel generation unit 511 acquires point cloud data.
- In step S502, the Voxel generation unit 511 generates voxel data using the point cloud data.
- In step S503, the Octree generation unit 521 constructs an Octree using the voxel data and generates Octree data.
- In step S504, the Mesh generation unit 522 generates mesh data based on the Octree data.
- In step S505, the lossless encoding unit 523 losslessly encodes the mesh data to generate encoded data of the position information of the point cloud.
- In step S506, the Octree decoding unit 531 restores voxel data using the Octree data generated in step S503.
- In step S507, the Mesh shape restoration unit 532 restores a mesh shape from the voxel data.
- In step S508, the Point cloud generation unit 533 executes the point cloud generation processing (FIG. 5) and generates point cloud data from the mesh shape by the methods described in <1. Generation of point cloud> and <2. First embodiment>.
- In step S509, the Attribute encoding unit 514 encodes attribute information using the point cloud data.
- In step S510, the bit stream generation unit 515 generates a bit stream including the encoded data of the position information generated in step S505 and the encoded data of the attribute information generated in step S509.
- In step S511, the bit stream generation unit 515 outputs the bit stream to the outside of the encoding device 500.
- When step S511 ends, the encoding processing ends.
- By executing each process as described above, the encoding device 500 can obtain the effects described in <1. Generation of point cloud> and <2. First embodiment>.
- In FIG. 13, only one arrow is labeled with a reference numeral, but all arrows shown in the voxel 601 (including its edges) are vectors Vi 603.
- Then, points 604 located at the intersections of the vectors Vi 603 and the surface 602 of the mesh are derived.
- In FIG. 13, only one point is labeled with a reference numeral, but all points shown in the voxel 601 (including its edges) are points 604.
- In this way, point cloud data of the final resolution can be obtained.
- Here, the final resolution refers to a predetermined highest resolution. For example, in the case of encoding/decoding, it refers to the resolution of the point cloud data before being encoded using Octree, mesh, or the like.
- By controlling the interval d of the vectors Vi, points can be derived at an arbitrary resolution. Therefore, resolution scalability of Triangle soup can be realized.
- An arbitrary value can be set for the interval d of the vector Vi.
- the interval d of the vector Vi may be set to a power of two.
- By doing so, resolution scalability for each layer of the Octree is realized. That is, point cloud data having a resolution corresponding to each layer of the Octree can be derived.
- For example, the interval d may be set to d = 2^L, where L (a non-negative integer) is the difference between the lowest layer of the Octree (the layer of the final resolution) and the desired layer. This makes it possible to derive point cloud data having a resolution corresponding to the desired layer.
- Note that L may also be set to a negative value. By setting L to a negative value, point cloud data having a resolution higher than the final resolution can be derived.
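As a small numeric illustration of this power-of-two interval (a hedged sketch, not part of the specification; the helper names are assumptions), the interval d = 2^L and the resulting number of start origins per axis can be computed as follows:

```python
# Illustrative helpers (assumed names): relate the Octree layer difference L
# to the vector interval d = 2**L and to the per-axis sample count.

def interval_for_layer(L):
    """Interval d of the vectors Vi for a layer L above the lowest
    (final-resolution) layer; L may be negative for super-resolution."""
    return 2.0 ** L

def samples_per_axis(extent, L):
    """Number of vector start origins along one axis of a bounding box
    whose extent is given in final-resolution voxel units."""
    return int(extent / interval_for_layer(L)) + 1
```

For an extent of 8 voxels, L = 0 gives 9 origins per axis, L = 1 gives 5, and L = -1 gives 17, that is, a resolution above the final resolution.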
- the value of the interval d of the vector Vi may be other than a power of 2.
- the interval d of the vector Vi may be an integer or a decimal number as long as it is a positive value.
- For example, in the case of FIG. 14, the vectors Vi 603 with identification numbers 1, 3, 5, and 7 shown in the figure are thinned out at the next higher layer.
- Which vectors Vi 603 are adopted at the higher layer (in other words, which vectors Vi 603 are thinned out) may be set independently for each direction of the vectors Vi 603 (that is, for each of the three mutually perpendicular axis directions (x, y, z)). In other words, the positions of the start origins of the vectors Vi 603 may be independent of one another among the mutually perpendicular three axis directions (x, y, z).
- For example, as the vertical vectors Vi 603 in the figure, the vectors Vi 603 with identification numbers 1, 3, 5, and 7 shown in the figure may be adopted at the higher layer,
- while as the horizontal vectors Vi 603 in the figure, the vectors Vi 603 with identification numbers 0, 2, 4, 6, and 8 shown in the figure may be adopted at the higher layer.
- In that case, in the vertical direction, the vectors Vi 603 with identification numbers 0, 2, 4, 6, and 8 (shown by dotted lines) are thinned out,
- and in the horizontal direction, the vectors Vi 603 with identification numbers 1, 3, 5, and 7 (shown by dotted lines) are thinned out.
- <Independent start-origin intervals> For example, in the cases of FIG. 14 and FIG. 15, at the higher layer, half of the vertical vectors Vi 603 in the figure and half of the horizontal vectors Vi 603 in the figure are thinned out. That is, the interval d of the vectors Vi is the same in the vertical and horizontal directions in the figure.
- the number of vectors Vi 603 (in other words, thinned vectors Vi 603) employed in the upper layer is determined by the direction of the vector Vi 603 (that is, the three axial directions (x, y, z directions) perpendicular to each other). Each time) may be set independently. In other words, the intervals in the three-axis directions (x, y, z directions) perpendicular to each other at the starting origin of each vector Vi 603 may be independent of each other.
- For example, as shown in the figure, all of the vertical vectors Vi 603 with identification numbers 0 to 8 may be adopted, whereas only the vectors Vi 603 with identification numbers 0, 2, 4, 6, and 8 are adopted as the horizontal vectors Vi 603. In that case, the interval of the vertical vectors Vi 603 and the interval of the horizontal vectors Vi 603 differ from each other. That is, the resolution of the point cloud data differs between the vertical direction and the horizontal direction in the figure.
- This makes it possible to set the resolution of the point cloud data independently for each direction of the vectors Vi 603 (that is, for each of the three mutually perpendicular axis directions (x, y, z)).
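A minimal sketch of such direction-independent intervals (illustrative Python; the function and its parameters are assumptions, not the patent's implementation):

```python
# Start origins on one face of a cube, with independent pitches dx and dy:
# a finer pitch in x than in y yields direction-dependent resolution.

def grid_origins(extent, dx, dy):
    """Start origins of z-parallel vectors Vi on the z-min face of a cube
    of the given extent, with different intervals in x and y."""
    xs = [i * dx for i in range(int(extent // dx) + 1)]
    ys = [j * dy for j in range(int(extent // dy) + 1)]
    return [(x, y, 0.0) for x in xs for y in ys]

dense_x = grid_origins(8, dx=1, dy=2)  # 9 columns in x, only 5 rows in y
```

The resulting point cloud is twice as fine along x as along y, which is exactly the per-direction resolution control described above.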
- <Point generation at part of the intersections> Note that points may be generated at only a part of the intersections between the vectors Vi and the mesh surfaces. In other words, a point need not be generated at every intersection. That is, by reducing the number of intersections at which points are generated, a lower resolution of the point cloud may be realized (that is, resolution scalability may be realized).
- The method of selecting the intersections at which points are (or are not) generated is arbitrary. For example, as shown in FIG. 17, points may be generated in a staggered manner (at every other intersection in each of the three axis directions).
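One conceivable realization of such a staggered selection (an assumption for illustration; the patent does not fix this rule) is a parity test on the integer grid indices of each intersection:

```python
# 3D checkerboard: keep only intersections whose grid-index sum is even,
# halving the point density while spreading the kept points evenly.

def keep_staggered(ix, iy, iz):
    """True for every other intersection along each axis direction."""
    return (ix + iy + iz) % 2 == 0

kept = [(i, j, k)
        for i in range(2) for j in range(2) for k in range(2)
        if keep_staggered(i, j, k)]
```

Of the 8 corner indices of a unit cell, exactly 4 survive, for example (0, 0, 0) and (1, 1, 0).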
- Furthermore, points not located at intersections between the vectors Vi and the mesh surfaces may be generated and included in the point cloud data.
- For example, as shown in FIG. 18, a point 611 may be generated on a vector Vi at a position approximating each side of the mesh surface 602 (triangle), even if it is not an intersection, and included in the point cloud data.
- In FIG. 18, only one point is labeled with a reference numeral, but all points indicated by white circles are points 611 generated as described above.
- The method of determining the positions at which such points are generated (in the example of FIG. 18, the method of determining the points approximating each side) is arbitrary.
- In this way, points can be added regardless of the positions of the intersections, so the resolution of a desired portion can be improved more easily.
- For example, the resolution around each side of the surface 602 can be improved compared with other portions.
- Thereby, the shape of each side of the surface 602 can be represented more accurately in the point cloud data. Therefore, the three-dimensional structure represented by the mesh can be represented more accurately in the point cloud data.
- Note that a desired method (or a combination of methods) may be selected from some or all of the methods described above in this specification and applied.
- In that case, the selection method is arbitrary. For example, all applicable patterns may be evaluated and the best one selected. In this way, point cloud data can be generated by the method most suitable for the three-dimensional structure and the like.
- In step S601, the intersection determination unit 112 acquires mesh data.
- In step S602, the vector setting unit 111 sets vectors Vi perpendicular to each face of the voxels corresponding to the resolution specified by the user or the like (parallel to each edge of the voxel), with the position coordinates on each face as start origins.
- In step S603, the intersection determination unit 112 performs intersection determination between the vectors Vi set in step S602 and the mesh surfaces (triangles) indicated by the mesh data acquired in step S601.
- The processes of steps S604 to S607 are executed in the same manner as the processes of steps S104 to S107.
- When step S607 ends, the point cloud generation processing ends.
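The flow of steps S601 to S603 can be sketched end to end as follows (a deliberately minimal, assumed implementation: only z-parallel vectors, a 2D point-in-triangle test in the xy plane, and an ad hoc mesh format):

```python
# Cast z-parallel rays on a d-pitched grid over the bounding square (S602),
# test each against every triangle (S603), and collect the hit points;
# a set removes duplicate coordinates.

def generate_points(triangles, extent, d):
    points = set()
    n = int(extent / d) + 1
    for gx in range(n):
        for gy in range(n):
            x, y = gx * d, gy * d
            for (ax, ay, az), (bx, by, bz), (cx, cy, cz) in triangles:
                den = (by - ay) * (cx - ax) - (bx - ax) * (cy - ay)
                if den == 0:
                    continue  # triangle degenerate in the xy projection
                u = ((y - ay) * (cx - ax) - (x - ax) * (cy - ay)) / den
                v = ((by - ay) * (x - ax) - (bx - ax) * (y - ay)) / den
                if u >= 0 and v >= 0 and u + v <= 1:  # inside the triangle
                    z = az + u * (bz - az) + v * (cz - az)
                    points.add((x, y, z))
    return points

tri = [((0, 0, 0), (4, 0, 0), (0, 4, 0))]  # one flat triangle at z = 0
pts = generate_points(tri, extent=4, d=1)  # 15 grid points with x + y <= 4
```

Here u and v are the barycentric coordinates of the grid point in the triangle's xy projection, so exactly the rays that pierce the triangle produce points.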
- By executing each process as described above, the point cloud generation device 100 can obtain, for example, the effects described in the present embodiment.
- For example, voxel data of an arbitrary resolution can be generated from a mesh in one pass. That is, resolution scalability of the point cloud data can be realized.
- Also, an increase in the load of point cloud data generation can be suppressed. Therefore, for example, point cloud data can be generated faster. Furthermore, for example, the manufacturing cost of the point cloud generation device 100 can be reduced.
- the Point cloud generation unit 314 has a configuration similar to that of the point cloud generation device 100 described above in the present embodiment, and generates point cloud data from mesh data as described above in the present embodiment.
- Therefore, the Point cloud generation unit 314 can obtain the same effects as the point cloud generation device 100 of the present embodiment.
- For example, the Point cloud generation unit 314 can generate voxel data of an arbitrary resolution from the mesh in one pass. That is, resolution scalability of the point cloud data can be realized.
- Also, the Point cloud generation unit 314 can suppress an increase in the load of point cloud data generation. Therefore, the Point cloud generation unit 314 can, for example, generate point cloud data faster. Furthermore, for example, the manufacturing cost of the Point cloud generation unit 314 can be reduced.
- the attribute decoding unit 315 may decode the attribute information in a scalable manner. That is, the scalability of the resolution may be realized for the attribute information.
- By doing the above, the decoding device 300 can obtain the effects described above in the present embodiment (for example, the same effects as the point cloud generation device 100).
- FIG. 20 is a block diagram illustrating a configuration example of hardware of a computer that executes the series of processes described above by a program.
- In the computer shown in FIG. 20, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are connected to one another via a bus 904.
- the input / output interface 910 is also connected to the bus 904.
- An input unit 911, an output unit 912, a storage unit 913, a communication unit 914, and a drive 915 are connected to the input / output interface 910.
- the input unit 911 includes, for example, a keyboard, a mouse, a microphone, a touch panel, an input terminal, and the like.
- the output unit 912 includes, for example, a display, a speaker, an output terminal, and the like.
- the storage unit 913 includes, for example, a hard disk, a RAM disk, a nonvolatile memory, and the like.
- the communication unit 914 includes, for example, a network interface.
- the drive 915 drives a removable medium 921 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, the CPU 901 performs the above-described series of processes by, for example, loading a program stored in the storage unit 913 into the RAM 903 via the input/output interface 910 and the bus 904 and executing it.
- the RAM 903 also appropriately stores data necessary for the CPU 901 to execute various processes.
- The program executed by the computer can be applied by, for example, recording it on the removable medium 921 as a package medium or the like.
- In that case, the program can be installed in the storage unit 913 via the input/output interface 910 by mounting the removable medium 921 in the drive 915.
- This program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting. In that case, the program can be received by the communication unit 914 and installed in the storage unit 913.
- this program can be installed in the ROM 902 or the storage unit 913 in advance.
- the present technology can be applied to any configuration.
- For example, the present technology can be applied to a transmitter or a receiver (for example, a television receiver or a mobile phone) for satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, or distribution to terminals by cellular communication, or
- to various electronic devices such as a device (for example, a hard disk recorder or a camera) that records images on a medium such as an optical disk, a magnetic disk, or a flash memory, or reproduces images from such a storage medium.
- For example, the present technology can also be implemented as a partial configuration of an apparatus, such as a processor (for example, a video processor) as a system LSI (Large Scale Integration), a module (for example, a video module) using a plurality of processors, a unit (for example, a video unit) using a plurality of modules, or a set (for example, a video set) in which other functions are further added to a unit.
- the present technology can be applied to a network system including a plurality of devices.
- For example, the present technology may be implemented as cloud computing in which a plurality of devices share and jointly perform processing via a network.
- For example, the present technology may be implemented in a cloud service that provides services relating to images (moving images) to arbitrary terminals such as computers, AV (Audio Visual) devices, portable information processing terminals, and IoT (Internet of Things) devices.
- In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
- The systems, devices, processing units, and the like to which the present technology is applied can be used in any field, for example, transportation, medical care, crime prevention, agriculture, livestock industry, mining, beauty, factories, home appliances, weather, and nature monitoring. Their uses are also arbitrary.
- In this specification, a "flag" is information for identifying a plurality of states, and includes not only information used for identifying the two states of true (1) and false (0) but also information capable of identifying three or more states. Therefore, the values this "flag" can take may be, for example, the two values 1/0, or three or more values. That is, the number of bits constituting the "flag" is arbitrary: one bit or a plurality of bits. Also, since identification information (including flags) may take not only the form in which the identification information itself is included in a bit stream but also the form in which difference information of the identification information with respect to certain reference information is included in a bit stream, in this specification "flag" and "identification information" encompass not only that information itself but also difference information with respect to the reference information.
- In this specification, "associate" means, for example, making one piece of data usable (linkable) when processing another piece of data. That is, data associated with each other may be combined into one piece of data or may remain individual pieces of data.
- the information associated with the encoded data (image) may be transmitted on a different transmission path from the encoded data (image).
- information associated with encoded data (image) may be recorded on a recording medium different from the encoded data (image) (or another recording area of the same recording medium).
- Note that such "association" may apply to a part of the data instead of the entire data. For example, an image and information corresponding to that image may be associated with each other in arbitrary units, such as a plurality of frames, one frame, or a part of a frame.
- the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
- the configuration described above as a plurality of devices (or processing units) may be combined and configured as one device (or processing unit).
- a configuration other than those described above may be added to the configuration of each device (or each processing unit).
- a part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or other processing unit).
- the above-described program may be executed by an arbitrary device.
- the device only has to have necessary functions (functional blocks and the like) and can obtain necessary information.
- each step of one flowchart may be executed by one apparatus, or a plurality of apparatuses may share and execute the steps.
- when a plurality of processes are included in one step, the plurality of processes may be executed by one device, or may be shared and executed by a plurality of devices.
- a plurality of processes included in one step can be executed as a plurality of steps.
- the processing described as a plurality of steps can be collectively executed as one step.
- For the program executed by the computer, the processing of the steps describing the program may be executed in chronological order according to the order described in this specification, or may be executed in parallel, or individually at necessary timing such as when a call is made. That is, as long as no contradiction arises, the processing of each step may be executed in an order different from the order described above. Furthermore, the processing of the steps describing this program may be executed in parallel with the processing of another program, or may be executed in combination with the processing of another program.
- A plurality of the present technologies described in this specification can each be implemented independently, as long as no contradiction arises.
- some or all of the present technology described in any of the embodiments may be combined with some or all of the present technology described in other embodiments.
- some or all of the above-described arbitrary technology may be implemented in combination with another technology that is not described above.
- (1) An image processing device comprising: a point cloud generation unit that generates point cloud data by arranging points at intersections between a surface of a mesh and vectors whose start origins are position coordinates corresponding to a specified resolution.
- (2) The image processing device according to (1), wherein the point cloud generation unit performs an intersection determination between the surface and the vector, and, when it is determined that they intersect, calculates the coordinates of the intersection.
- (3) The image processing device according to (2), wherein the point cloud generation unit performs the intersection determination between the surface and the vectors oriented in each of the positive and negative directions of three mutually perpendicular axes.
- The image processing device according to any one of (2) to (16), further comprising a mesh shape restoration unit that restores the shape of the mesh from voxel data, wherein the point cloud generation unit generates the point cloud data with intersections between the vector and the surface restored by the mesh shape restoration unit as points.
- a lossless decoding unit that losslessly decodes the bitstream to generate Octree data; and
- an Octree decoding unit that generates the voxel data using the Octree data generated by the lossless decoding unit,
- wherein the mesh shape restoration unit restores the shape of the mesh from the voxel data generated by the Octree decoding unit.
- a position information encoding unit that encodes position information of the point cloud data
- the image processing device further comprising an Octree decoding unit that generates the voxel data using Octree data generated when the position information encoding unit encodes the position information.
- An image processing method comprising generating point cloud data by arranging points at intersections between a surface of a mesh and vectors whose start origins are position coordinates corresponding to a specified resolution.
- 100 point cloud generation device, 111 vector setting unit, 112 intersection determination unit, 113 auxiliary processing unit, 114 output unit, 300 decoding device, 311 lossless decoding unit, 312 Octree decoding unit, 313 Mesh shape restoration unit, 314 Point cloud generation unit, 315 Attribute decoding unit, 500 encoding device, 511 Voxel generation unit, 512 Geometry encoding unit, 513 Geometry decoding unit, 514 Attribute encoding unit, 515 bit stream generation unit, 521 Octree generation unit, 522 Mesh generation unit, 523 lossless encoding unit, 531 Octree decoding unit, 532 Mesh shape restoration unit, 533 Point cloud generation unit
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Image Generation (AREA)
- Image Processing (AREA)
Abstract
Description
1. Generation of point cloud
2. First embodiment (point cloud generation device)
3. Second embodiment (decoding device)
4. Third embodiment (encoding device)
5. Fourth embodiment (making Triangle soup scalable)
6. Supplementary notes
<Documents supporting technical contents and technical terms>
The scope disclosed in the present technology includes not only the contents described in the embodiments but also the contents described in the following non-patent documents, which were publicly known at the time of filing.
Non-patent document 2: (described above)
Non-patent document 3: TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (International Telecommunication Union), "Advanced video coding for generic audiovisual services", H.264, 04/2017
Non-patent document 4: TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (International Telecommunication Union), "High efficiency video coding", H.265, 12/2016
Non-patent document 5: Jianle Chen, Elena Alshina, Gary J. Sullivan, Jens-Rainer Ohm, Jill Boyce, "Algorithm Description of Joint Exploration Test Model 4", JVET-G1001_v1, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 7th Meeting: Torino, IT, 13-21 July 2017
Conventionally, there has been 3D data such as point clouds, which represent a three-dimensional structure by the position information, attribute information, and the like of a group of points, and meshes, which are composed of vertices, edges, and faces and define a three-dimensional shape using polygonal representation.
Since such point cloud data has a comparatively large data amount, an encoding method using voxels has been devised in order to compress the data amount by encoding or the like. A voxel is a three-dimensional region for quantizing the position information to be encoded.
Furthermore, constructing an Octree from such voxel data has been considered. An Octree is a tree-structured representation of voxel data. The value of each bit of the lowest nodes of the Octree indicates the presence or absence of a point in each voxel. For example, the value "1" indicates a voxel containing a point, and the value "0" indicates a voxel containing no point. In an Octree, one node corresponds to eight voxels. That is, each node of the Octree consists of 8 bits of data, and those 8 bits indicate the presence or absence of points in eight voxels.
In recent years, as described in, for example, non-patent document 2, it has been proposed to voxelize a target 3D object and then encode it by combining Octree encoding and mesh encoding (Triangle soup).
Therefore, exploiting the fact that the resolution of the output point cloud is the same as the resolution at which the input point cloud was voxelized, the number of voxel determinations is limited so that the point cloud is generated at high speed.
Next, this point cloud derivation method will be described more specifically. First, as in the example of FIG. 2, vectors Vi having the same direction and the same length as the edges of a bounding box containing the data to be encoded are generated at intervals of k*d. In FIG. 2, vectors Vi as indicated by arrows 23 are set for a mesh surface 22 existing inside a bounding box 21. Here, d is the quantization size used when voxelizing the bounding box, and k is an arbitrary natural number. That is, vectors Vi whose start origins are position coordinates corresponding to the specified voxel resolution are set.
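The layout of these vectors Vi can be sketched as follows (a hedged illustration; the function name, the z-parallel orientation, and the bounding-box tuples are assumptions):

```python
# Start origins laid out at pitch k*d on the z-min face of the bounding
# box; each vector spans the full depth of the box, like an edge of it.

def vector_origins(bbox_min, bbox_max, d, k=1):
    """Yield (origin, direction) pairs of z-parallel vectors Vi spaced
    k*d apart in x and y over the bounding box."""
    step = k * d
    x = bbox_min[0]
    while x <= bbox_max[0]:
        y = bbox_min[1]
        while y <= bbox_max[1]:
            yield (x, y, bbox_min[2]), (0, 0, bbox_max[2] - bbox_min[2])
            y += step
        x += step

origins = list(vector_origins((0, 0, 0), (4, 4, 4), d=1, k=2))  # pitch 2
```

With k = 2 and d = 1 over a 4-unit box, origins fall at x, y in {0, 2, 4}, giving a 3 by 3 grid of vectors.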
Note that the methods of intersection determination and intersection-coordinate calculation are arbitrary. For example, they may be obtained using Cramer's rule as in FIG. 3. For example, with P as the intersection coordinates, origin as the ray origin, ray as the direction vector, and t as a scalar value, a point on the ray is expressed by the linear equation P = origin + t * ray.
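One way to carry out this calculation (a sketch, not necessarily the exact formulation of FIG. 3) is the standard ray-triangle test that solves origin + t*ray = v0 + u*(v1 - v0) + v*(v2 - v0) for (t, u, v) by Cramer's rule, written with cross and dot products:

```python
# Ray-triangle intersection via Cramer's rule (Moller-Trumbore form).
# All helper names are illustrative, not taken from the specification.

def intersect_triangle(orig, ray, v0, v1, v2, eps=1e-9):
    """Return the intersection point P = orig + t*ray, or None if the
    ray misses the triangle (v0, v1, v2) or is parallel to its plane."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(ray, e2)
    det = dot(e1, pvec)           # determinant of the 3x3 system
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) / det     # Cramer's rule: first barycentric coord
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(ray, qvec) / det      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) / det       # scalar along the ray
    return (orig[0] + t*ray[0], orig[1] + t*ray[1], orig[2] + t*ray[2])
```

Each of the quotients u, v, and t is a ratio of determinants, that is, exactly Cramer's rule applied to the 3x3 linear system above.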
<Point cloud generation device>
Next, a configuration for realizing the processing described above will be described. FIG. 4 is a block diagram showing an example of the configuration of a point cloud generation device, which is one aspect of an image processing device to which the present technology is applied. The point cloud generation device 100 shown in FIG. 4 is a device that generates a point cloud from a mesh, as described in <1. Generation of point cloud>.
Next, an example of the flow of the point cloud generation processing executed by this point cloud generation device 100 will be described with reference to the flowchart of FIG. 5.
In the intersection determination described above, the intersection determination for the interior of a surface may be performed using vectors Vi that are sparser than those used for the intersection determination at the edges of the surface. For example, as in the example of FIG. 6, the intersection determination may be performed on a surface 201 using vectors Vi 202-1 through Vi 202-8. In this example, the intervals among vectors Vi 202-1 through Vi 202-3 and among vectors Vi 202-6 through Vi 202-8 are narrow. In other words, the intervals among vectors Vi 202-3 through Vi 202-6 are set wider than the intervals of the other vectors Vi. That is, the vectors Vi 202-1 through Vi 202-3 and Vi 202-6 through Vi 202-8 used for the intersection determination at the edges of the surface 201 are set at narrow intervals (dense), while the vectors Vi 202-3 through Vi 202-6 used for the intersection determination in the interior of the surface 201 are set at wide intervals (sparse).
Also, coordinates for which the intersection determination has been performed once need not be computed a second time. For example, as in the example of FIG. 7, when there are a plurality of mesh surfaces (surfaces 212 and 213) for one vector Vi 211, performing the intersection determinations for the one vector Vi 211 simultaneously can further speed up the processing.
Also, as shown in FIG. 8, when one vector Vi 221 intersects a plurality of triangles (surfaces 222 and 223) and there is a gap between those triangles, hole filling (denoising) may be performed by generating points (black dots in the figure) in that gap. By doing so, a more accurate point cloud can be generated. That is, a reduction in the image quality of the displayed image can be suppressed (typically, the image quality can be improved).
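The hole filling of FIG. 8 might be sketched like this (an assumed helper: given two hit parameters t0 and t1 on the same vector, extra points are generated in the gap at pitch d):

```python
# Fill the gap between two surface hits on the same vector by stepping
# along origin + t*direction at the voxel pitch d.

def fill_gap(origin, direction, t0, t1, d):
    """Return points strictly between the hits at t0 and t1, pitch d."""
    points = []
    steps = int((t1 - t0) / d)
    for i in range(1, steps):
        t = t0 + i * d
        points.append(tuple(o + t * u for o, u in zip(origin, direction)))
    return points

gap = fill_gap((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), t0=1.0, t1=4.0, d=1.0)
```

For hits at t0 = 1 and t1 = 4 with d = 1, the two interior points at t = 2 and t = 3 close the gap.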
Note that the intersection determinations described above may be performed as a plurality of processes in parallel. For example, the intersection determinations of a plurality of vectors against one surface of the mesh may be processed in parallel with one another. That is, the processing may be performed independently for each vector. By doing so, the intersection determination can be performed faster.
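Because each vector is processed independently, the per-vector tests can be dispatched to a worker pool; the sketch below uses the Python standard library with a deliberately trivial ray-plane "intersection test" as a stand-in:

```python
# Per-vector parallelism: each origin's test is independent, so the pool
# can map over them; order of results matches the order of origins.
from concurrent.futures import ThreadPoolExecutor

def hit_plane(origin, plane_z=0.0):
    """Intersect a -z ray from origin with the plane z = plane_z."""
    x, y, z = origin
    return (x, y, plane_z) if z >= plane_z else None

origins = [(x, y, 1.0) for x in range(4) for y in range(4)]
with ThreadPoolExecutor() as pool:
    hits = [p for p in pool.map(hit_plane, origins) if p is not None]
```

A real implementation would replace hit_plane with the full ray-triangle test; the dispatch pattern stays the same.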
<Decoding device>
FIG. 9 is a block diagram showing an example of the configuration of a decoding device, which is one aspect of an image processing device to which the present technology is applied. The decoding device 300 shown in FIG. 9 is a decoding device corresponding to the encoding device 500 of FIG. 11 described later; for example, it decodes a bitstream generated by this encoding device 500 and restores the point cloud data.
Next, an example of the flow of the decoding processing executed by the decoding device 300 will be described with reference to the flowchart of FIG. 10.
<Encoding device>
FIG. 11 is a block diagram showing an example of the configuration of an encoding device, which is one aspect of an image processing device to which the present technology is applied. The encoding device 500 shown in FIG. 11 is a device that encodes 3D data such as a point cloud using voxels and an Octree.
The Geometry encoding unit 512 includes an Octree generation unit 521, a Mesh generation unit 522, and a lossless encoding unit 523.
The Geometry decoding unit 513 includes an Octree decoding unit 531, a Mesh shape restoration unit 532, and a Point cloud generation unit 533.
Next, an example of the flow of the encoding processing executed by the encoding device 500 will be described with reference to the flowchart of FIG. 12.
<Triangle soupのスケーラブル化>
以上においては、Triangle soupにおいて、指定のボクセル解像度に対応する位置座標を開始原点とするベクトルとメッシュの面との交差点にポイントが生成され、ポイントクラウドデータが生成されるように説明した。これに限らず、メッシュから任意の解像度でポイントクラウドデータの生成するようにしてもよい。
例えば、図14の場合、図中縦方向のベクトルVi603も、図中横方向のベクトルVi603も、共に、図中に示される識別番号0、2、4、6、8のベクトルVi603が、1階層上位のベクトルVi603として採用されている。換言するに、1階層上位においては、図中に示される識別番号1、3、5、7のベクトルVi603(点線で示されるベクトルVi603)が間引かれている。
例えば、図14や図15の場合、1階層上位においては、図中縦方向のベクトルVi603も、図中横方向のベクトルVi603も、共に、半数のベクトルが間引かれている。つまり、ベクトルViの間隔dが、図中縦方向と横方向とで互いに同一である。
なお、ベクトルViとメッシュの面との交差点の一部においてポイントを生成するようにしてもよい。換言するに、交差点であってもポイントを生成しなくてもよい。つまり、ポイントを生成する交差点の数を低減させることにより、ポイントクラウドの低解像度化を実現する(すなわち、解像度のスケーラビリティを実現する)ようにしてもよい。
ベクトルViとメッシュの面との交差点に位置しないポイントを生成し、ポイントクラウドデータに含めるようにしてもよい。例えば、図18に示されるように、交差点でなくても、ベクトルVi上の、メッシュの面602(三角形)の各辺に近似する位置にポイント611を生成し、ポイントクラウドデータに含めるようにしてもよい。図18において、1つの点のみ符号を付してあるが、白い丸で示されるポイントは全て上述のように生成されたポイント611である。
Each of the methods described above in this embodiment can be applied in combination with any plurality of the others. Each of these methods can also be applied in combination with any of the methods described above in <Generation of a point cloud>.
A desired method (or combination of methods) may also be selected from some or all of the methods described above in this specification and applied. In that case, the selection method is arbitrary. For example, all applicable patterns may be evaluated and the best one selected. This makes it possible to generate the point cloud data by the method best suited to the three-dimensional structure and the like.
Each of the methods described above in this embodiment can, like the methods described in <1. Generation of a point cloud>, be applied to the point cloud generation device 100 described in the first embodiment. In that case, the configuration of the point cloud generation device 100 is the same as that described with reference to Fig. 4.
Likewise, each of the methods described above in this embodiment can, like the methods described in <1. Generation of a point cloud>, be applied to the decoding device 300 described in the second embodiment. In that case, the configuration of the decoding device 300 is the same as that described with reference to Fig. 9.
<Computer>
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed on a computer. Here, the computer includes a computer built into dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions when various programs are installed on it.
Although the above description has dealt with applying the present technology to the encoding and decoding of point cloud data, the present technology is not limited to these examples and can be applied to the encoding and decoding of 3D data of any standard. That is, as long as they do not contradict the present technology described above, the specifications of the various processes such as the encoding and decoding schemes, and of the various data such as the 3D data and metadata, are arbitrary. Some of the processes and specifications described above may also be omitted as long as this does not contradict the present technology.
Systems, apparatuses, processing units, and the like to which the present technology is applied can be used in any field, for example, transportation, medical care, crime prevention, agriculture, livestock farming, mining, beauty care, factories, home appliances, weather, and nature monitoring. Their uses are likewise arbitrary.
In this specification, a "flag" is information for identifying a plurality of states, and includes not only information used to identify the two states of true (1) and false (0) but also information capable of identifying three or more states. Therefore, the value this "flag" can take may be, for example, the two values 1/0, or three or more values. That is, the number of bits constituting this "flag" is arbitrary: one bit or a plurality of bits. Further, since identification information (including flags) may be included in a bitstream not only as the identification information itself but also as difference information of the identification information relative to some reference information, in this specification "flag" and "identification information" encompass not only that information itself but also difference information relative to reference information.
(1) An image processing apparatus including:
a point cloud generation unit that generates point cloud data by arranging a point at an intersection of a face of a mesh and a vector whose start origin is a position coordinate corresponding to a specified resolution.
(2) The image processing apparatus according to (1), wherein the point cloud generation unit
performs intersection determination between the face and the vector, and,
when the face and the vector are determined to intersect, calculates the coordinates of the intersection.
(3) The image processing apparatus according to (2), wherein the point cloud generation unit performs intersection determination between the face and the vectors oriented in the positive and the negative direction along each of three mutually perpendicular axes.
(4) The image processing apparatus according to (3), wherein, when the coordinate values of a plurality of intersections overlap, the point cloud generation unit keeps any one intersection of the overlapping group of intersections and deletes the other intersections.
(5) The image processing apparatus according to any one of (2) to (4), wherein the point cloud generation unit performs intersection determination between the face and the vectors whose start origins are located within the range of the vertices of the face.
(6) The image processing apparatus according to any one of (2) to (5), wherein, when the calculated coordinates of the intersection are outside a bounding box, the point cloud generation unit clips the coordinates of the intersection into the bounding box.
(7) The image processing apparatus according to any one of (2) to (6), wherein, when the calculated coordinates of the intersection are outside a bounding box, the point cloud generation unit deletes the intersection.
(8) The image processing apparatus according to any one of (2) to (7), wherein the point cloud generation unit performs intersection determination for the interior of the face using vectors sparser than those used for intersection determination for the edges of the face.
(9) The image processing apparatus according to any one of (2) to (8), wherein, when the vector intersects a plurality of the faces and a space exists between the plurality of faces, the point cloud generation unit adds a point in the space.
(10) The image processing apparatus according to any one of (2) to (9), wherein the point cloud generation unit performs the intersection determinations of a plurality of the vectors against one of the faces in parallel with one another.
(11) The image processing apparatus according to any one of (2) to (10), wherein the point cloud generation unit performs the intersection determinations of a plurality of the faces against one of the vectors in parallel with one another.
(12) The image processing apparatus according to any one of (2) to (11), wherein the start origin of the vector is a position coordinate corresponding to a specified voxel resolution.
(13) The image processing apparatus according to any one of (2) to (12), wherein the start origin of the vector is a position coordinate corresponding to a power of two of the specified voxel resolution.
(14) The image processing apparatus according to any one of (2) to (13),
wherein the positions of the start origins of the vectors along three mutually perpendicular axes are mutually independent.
(15) The image processing apparatus according to any one of (2) to (14), wherein the spacings of the start origins of the vectors along three mutually perpendicular axes are mutually independent.
(16) The image processing apparatus according to any one of (2) to (15), wherein the point cloud generation unit includes, in the point cloud data, points not located at the intersections.
(17) The image processing apparatus according to any one of (2) to (16), further including
a mesh shape reconstruction unit that reconstructs the shape of the mesh from voxel data,
wherein the point cloud generation unit generates the point cloud data using, as points, the intersections of the vector and the face reconstructed by the mesh shape reconstruction unit.
(18) The image processing apparatus according to (17), further including:
a lossless decoding unit that losslessly decodes a bitstream to generate Octree data; and
an Octree decoding unit that generates the voxel data using the Octree data generated by the lossless decoding unit,
wherein the mesh shape reconstruction unit reconstructs the shape of the mesh from the voxel data generated by the Octree decoding unit.
(19) The image processing apparatus according to (17), further including:
a position information encoding unit that encodes position information of point cloud data; and
an Octree decoding unit that generates the voxel data using Octree data generated when the position information encoding unit encodes the position information.
(20) An image processing method including:
generating point cloud data by arranging a point at an intersection of a face of a mesh and a vector whose start origin is a position coordinate corresponding to a specified resolution.
Claims (20)
- An image processing apparatus comprising: a point cloud generation unit that generates point cloud data by arranging a point at an intersection of a face of a mesh and a vector whose start origin is a position coordinate corresponding to a specified resolution.
- The image processing apparatus according to claim 1, wherein the point cloud generation unit performs intersection determination between the face and the vector and, when the face and the vector are determined to intersect, calculates the coordinates of the intersection.
- The image processing apparatus according to claim 2, wherein the point cloud generation unit performs intersection determination between the face and the vectors oriented in the positive and the negative direction along each of three mutually perpendicular axes.
- The image processing apparatus according to claim 3, wherein, when the coordinate values of a plurality of intersections overlap, the point cloud generation unit keeps any one intersection of the overlapping group of intersections and deletes the other intersections.
- The image processing apparatus according to claim 2, wherein the point cloud generation unit performs intersection determination between the face and the vectors whose start origins are located within the range of the vertices of the face.
- The image processing apparatus according to claim 2, wherein, when the calculated coordinates of the intersection are outside a bounding box, the point cloud generation unit clips the coordinates of the intersection into the bounding box.
- The image processing apparatus according to claim 2, wherein, when the calculated coordinates of the intersection are outside a bounding box, the point cloud generation unit deletes the intersection.
- The image processing apparatus according to claim 2, wherein the point cloud generation unit performs intersection determination for the interior of the face using vectors sparser than those used for intersection determination for the edges of the face.
- The image processing apparatus according to claim 2, wherein, when the vector intersects a plurality of the faces and a space exists between the plurality of faces, the point cloud generation unit adds a point in the space.
- The image processing apparatus according to claim 2, wherein the point cloud generation unit performs the intersection determinations of a plurality of the vectors against one of the faces in parallel with one another.
- The image processing apparatus according to claim 2, wherein the point cloud generation unit performs the intersection determinations of a plurality of the faces against one of the vectors in parallel with one another.
- The image processing apparatus according to claim 2, wherein the start origin of the vector is a position coordinate corresponding to a specified voxel resolution.
- The image processing apparatus according to claim 2, wherein the start origin of the vector is a position coordinate corresponding to a power of two of the specified voxel resolution.
- The image processing apparatus according to claim 2, wherein the positions of the start origins of the vectors along three mutually perpendicular axes are mutually independent.
- The image processing apparatus according to claim 2, wherein the spacings of the start origins of the vectors along three mutually perpendicular axes are mutually independent.
- The image processing apparatus according to claim 2, wherein the point cloud generation unit includes, in the point cloud data, points not located at the intersections.
- The image processing apparatus according to claim 2, further comprising a mesh shape reconstruction unit that reconstructs the shape of the mesh from voxel data, wherein the point cloud generation unit generates the point cloud data using, as points, the intersections of the vector and the face reconstructed by the mesh shape reconstruction unit.
- The image processing apparatus according to claim 17, further comprising: a lossless decoding unit that losslessly decodes a bitstream to generate Octree data; and an Octree decoding unit that generates the voxel data using the Octree data generated by the lossless decoding unit, wherein the mesh shape reconstruction unit reconstructs the shape of the mesh from the voxel data generated by the Octree decoding unit.
- The image processing apparatus according to claim 17, further comprising: a position information encoding unit that encodes position information of point cloud data; and an Octree decoding unit that generates the voxel data using Octree data generated when the position information encoding unit encodes the position information.
- An image processing method comprising generating point cloud data by arranging a point at an intersection of a face of a mesh and a vector whose start origin is a position coordinate corresponding to a specified resolution.
Priority Applications (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3115203A CA3115203A1 (en) | 2018-10-02 | 2019-09-18 | Image processing apparatus and method |
JP2020550268A JP7424299B2 (ja) | 2018-10-02 | 2019-09-18 | 画像処理装置および方法 |
KR1020217007109A KR20210070271A (ko) | 2018-10-02 | 2019-09-18 | 화상 처리 장치 및 방법 |
US17/278,497 US11568602B2 (en) | 2018-10-02 | 2019-09-18 | Image processing apparatus and method using point cloud generation and a surface of a mesh |
EP19868424.3A EP3843046A4 (en) | 2018-10-02 | 2019-09-18 | IMAGE PROCESSING DEVICE AND METHOD |
BR112021005937-7A BR112021005937A2 (pt) | 2018-10-02 | 2019-09-18 | aparelho e método de processamento de imagem. |
CN201980063608.XA CN112771582B (zh) | 2018-10-02 | 2019-09-18 | 图像处理设备和方法 |
MX2021003538A MX2021003538A (es) | 2018-10-02 | 2019-09-18 | Aparato de procesamiento de imagen y metodo. |
AU2019355381A AU2019355381A1 (en) | 2018-10-02 | 2019-09-18 | Image processing device and method |
SG11202102923PA SG11202102923PA (en) | 2018-10-02 | 2019-09-18 | Image processing apparatus and method |
PH12021550654A PH12021550654A1 (en) | 2018-10-02 | 2021-03-23 | Image apparatus device and method |
US18/086,027 US11922579B2 (en) | 2018-10-02 | 2022-12-21 | Image processing apparatus and method for image processing by deriving voxel and mesh data to generate point cloud data |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018187482 | 2018-10-02 | ||
JP2018-187482 | 2018-10-02 | ||
JP2019114627 | 2019-06-20 | ||
JP2019-114627 | 2019-06-20 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/278,497 A-371-Of-International US11568602B2 (en) | 2018-10-02 | 2019-09-18 | Image processing apparatus and method using point cloud generation and a surface of a mesh |
US18/086,027 Continuation US11922579B2 (en) | 2018-10-02 | 2022-12-21 | Image processing apparatus and method for image processing by deriving voxel and mesh data to generate point cloud data |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020071114A1 true WO2020071114A1 (ja) | 2020-04-09 |
Family
ID=70054769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/036469 WO2020071114A1 (ja) | 2018-10-02 | 2019-09-18 | 画像処理装置および方法 |
Country Status (13)
Country | Link |
---|---|
US (2) | US11568602B2 (ja) |
EP (1) | EP3843046A4 (ja) |
JP (1) | JP7424299B2 (ja) |
KR (1) | KR20210070271A (ja) |
CN (1) | CN112771582B (ja) |
AU (1) | AU2019355381A1 (ja) |
BR (1) | BR112021005937A2 (ja) |
CA (1) | CA3115203A1 (ja) |
MX (1) | MX2021003538A (ja) |
PH (1) | PH12021550654A1 (ja) |
SG (1) | SG11202102923PA (ja) |
TW (1) | TW202025746A (ja) |
WO (1) | WO2020071114A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021065536A1 (ja) * | 2019-10-01 | 2021-04-08 | Sony Corporation | Information processing device and method |
JP2022521991A (ja) * | 2019-09-03 | 2022-04-13 | Tencent America LLC | Techniques for generalized trisoup geometry coding |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11584448B2 (en) * | 2020-11-12 | 2023-02-21 | Rivian Ip Holdings, Llc | Systems and methods for joining a vehicle structure |
CN114466212A (zh) * | 2022-02-07 | 2022-05-10 | Baidu Online Network Technology (Beijing) Co., Ltd. | Live-streaming method and apparatus, electronic device, and medium |
WO2023167430A1 (ko) * | 2022-03-04 | 2023-09-07 | LG Electronics Inc. | Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0584638A * | 1991-09-26 | 1993-04-06 | Hitachi Ltd | Parallel processing method for inverse offset operations in curved-surface machining |
JP2012069762A * | 2010-09-24 | 2012-04-05 | Fujifilm Corp | Nanoimprinting method and substrate processing method using the same |
JP2014002696A * | 2012-06-21 | 2014-01-09 | Toyota Motor Corp | Design data generation device, generation method therefor, and program |
JP2016184245A * | 2015-03-25 | 2016-10-20 | Toyota Motor Corporation | Method for handing over data between a particle model and a mesh model |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6126603A (en) * | 1997-05-07 | 2000-10-03 | General Electric Company | Method and apparatus for segmenting color flow mode data using velocity information in three-dimensional ultrasound imaging |
KR100298789B1 (ko) * | 1998-04-29 | 2001-09-06 | 윤종용 | 그래픽 처리에 있어서 클리핑 처리방법 |
WO2003031005A2 (en) * | 2001-10-09 | 2003-04-17 | Massachusetts Institute Of Technology | Methods and apparatus for detecting and correcting penetration between objects |
GB0329534D0 (en) * | 2003-12-20 | 2004-01-28 | Ibm | Method for determining the bounding voxelisation of a 3d polygon |
KR100695142B1 (ko) * | 2004-03-08 | 2007-03-14 | 삼성전자주식회사 | 적응적 2의 n 제곱 진트리 생성방법 및 이를 이용한 3차원 체적 데이터 부호화/복호화 방법 및 장치 |
KR100738107B1 (ko) * | 2006-02-15 | 2007-07-12 | 삼성전자주식회사 | 3차원 포인트 기반 모델링 장치 및 방법 |
KR100797400B1 (ko) * | 2006-12-04 | 2008-01-28 | 한국전자통신연구원 | 주성분분석 및 자동상관을 이용한 단백질 구조 비교 장치및 그 방법 |
WO2009130622A1 (en) * | 2008-04-24 | 2009-10-29 | Koninklijke Philips Electronics N.V. | Dose-volume kernel generation |
US8502818B1 (en) * | 2010-07-12 | 2013-08-06 | Nvidia Corporation | System and method for surface tracking |
US8849015B2 (en) * | 2010-10-12 | 2014-09-30 | 3D Systems, Inc. | System and apparatus for haptically enabled three-dimensional scanning |
US9159162B2 (en) * | 2011-12-28 | 2015-10-13 | St. Jude Medical, Atrial Fibrillation Division, Inc. | Method and system for generating a multi-dimensional surface model of a geometric structure |
US9805497B2 (en) * | 2013-02-05 | 2017-10-31 | Reuven Bakalash | Collision-culling of lines over polygons |
US20170109462A1 (en) * | 2013-11-27 | 2017-04-20 | Akademia Gorniczo-Hutnicza Im. Stanislawa Staszica W Krakowie | System and a method for determining approximate set of visible objects in beam tracing |
CN105631936A (zh) * | 2014-10-31 | 2016-06-01 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Point cloud repair method and *** |
KR20160071774A (ko) * | 2014-12-12 | 2016-06-22 | 삼성전자주식회사 | 영상 처리를 위한 영상 처리 장치, 방법 및 기록 매체 |
US10706608B2 (en) * | 2016-01-19 | 2020-07-07 | Nvidia Corporation | Tree traversal with backtracking in constant time |
US10573091B2 (en) * | 2017-02-22 | 2020-02-25 | Andre R. Vincelette | Systems and methods to create a virtual object or avatar |
US11514613B2 (en) * | 2017-03-16 | 2022-11-29 | Samsung Electronics Co., Ltd. | Point cloud and mesh compression using image/video codecs |
US10825244B1 (en) * | 2017-11-07 | 2020-11-03 | Arvizio, Inc. | Automated LOD construction for point cloud |
2019
- 2019-09-18 MX MX2021003538A patent/MX2021003538A/es unknown
- 2019-09-18 BR BR112021005937-7A patent/BR112021005937A2/pt not_active Application Discontinuation
- 2019-09-18 SG SG11202102923PA patent/SG11202102923PA/en unknown
- 2019-09-18 KR KR1020217007109A patent/KR20210070271A/ko unknown
- 2019-09-18 CA CA3115203A patent/CA3115203A1/en not_active Abandoned
- 2019-09-18 AU AU2019355381A patent/AU2019355381A1/en not_active Abandoned
- 2019-09-18 EP EP19868424.3A patent/EP3843046A4/en active Pending
- 2019-09-18 WO PCT/JP2019/036469 patent/WO2020071114A1/ja active Application Filing
- 2019-09-18 CN CN201980063608.XA patent/CN112771582B/zh active Active
- 2019-09-18 JP JP2020550268A patent/JP7424299B2/ja active Active
- 2019-09-18 US US17/278,497 patent/US11568602B2/en active Active
- 2019-09-23 TW TW108134151A patent/TW202025746A/zh unknown
2021
- 2021-03-23 PH PH12021550654A patent/PH12021550654A1/en unknown
2022
- 2022-12-21 US US18/086,027 patent/US11922579B2/en active Active
Non-Patent Citations (6)
Title |
---|
JIANLE CHEN, ELENA ALSHINA, GARY J. SULLIVAN, JENS-RAINER OHM, JILL BOYCE: "Algorithm Description of Joint Exploration Test Model 4", JVET-G1001_V1, JOINT VIDEO EXPLORATION TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 7TH MEETING: TORINO, IT, 13 July 2017 (2017-07-13) |
OHJI NAKAGAMI, PHIL CHOU, MAJA KRIVOKUCA, KHALED MAMMOU, ROBERT COHEN, VLADYSLAV ZAKHARCHENKO, GAELLE MARTIN-COCHER: "Second Working Draft for PCC Categories 1, 3", ISO/IEC JTC1/SC29/WG11, MPEG 2018/N17533, April 2018 (2018-04-01) |
R. MEKURIA, K. BLOM, P. CESAR: "Design, Implementation and Evaluation of a Point Cloud Codec for Tele-Immersive Video" |
See also references of EP3843046A4 |
TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU, ADVANCED VIDEO CODING FOR GENERIC AUDIOVISUAL SERVICES, April 2017 (2017-04-01) |
TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU, HIGH EFFICIENCY VIDEO CODING, December 2016 (2016-12-01) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2022521991A (ja) * | 2019-09-03 | 2022-04-13 | Tencent America LLC | Techniques for generalized trisoup geometry coding |
JP7262602B2 (ja) | 2019-09-03 | 2023-04-21 | Tencent America LLC | Techniques for generalized trisoup geometry coding |
WO2021065536A1 (ja) * | 2019-10-01 | 2021-04-08 | Sony Corporation | Information processing device and method |
Also Published As
Publication number | Publication date |
---|---|
PH12021550654A1 (en) | 2022-02-21 |
MX2021003538A (es) | 2021-05-27 |
SG11202102923PA (en) | 2021-04-29 |
CN112771582B (zh) | 2024-01-12 |
JP7424299B2 (ja) | 2024-01-30 |
EP3843046A4 (en) | 2021-10-27 |
US20230126000A1 (en) | 2023-04-27 |
US20220036654A1 (en) | 2022-02-03 |
BR112021005937A2 (pt) | 2021-06-29 |
JPWO2020071114A1 (ja) | 2021-09-09 |
KR20210070271A (ko) | 2021-06-14 |
EP3843046A1 (en) | 2021-06-30 |
CN112771582A (zh) | 2021-05-07 |
AU2019355381A1 (en) | 2021-05-20 |
TW202025746A (zh) | 2020-07-01 |
CA3115203A1 (en) | 2020-04-09 |
US11568602B2 (en) | 2023-01-31 |
US11922579B2 (en) | 2024-03-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11910026B2 (en) | Image processing apparatus and method | |
WO2020071114A1 (ja) | Image processing device and method | |
JP7384159B2 (ja) | Image processing device and method | |
WO2019198523A1 (ja) | Image processing device and method | |
JP7327166B2 (ja) | Image processing device and method | |
WO2020071115A1 (ja) | Image processing device and method | |
WO2021010200A1 (ja) | Information processing device and method | |
WO2020145143A1 (ja) | Information processing device and method | |
WO2021002214A1 (ja) | Information processing device and method | |
US11790567B2 (en) | Information processing apparatus and method | |
WO2022230941A1 (ja) | Information processing device and method | |
WO2020262020A1 (ja) | Information processing device and method | |
JP2022051968A (ja) | Information processing device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19868424 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020550268 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2101001706 Country of ref document: TH |
|
ENP | Entry into the national phase |
Ref document number: 3115203 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2019868424 Country of ref document: EP Effective date: 20210322 |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112021005937 Country of ref document: BR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019355381 Country of ref document: AU Date of ref document: 20190918 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112021005937 Country of ref document: BR Kind code of ref document: A2 Effective date: 20210326 |