CN114025146A - Dynamic point cloud geometric compression method based on scene flow network and time entropy model - Google Patents

Dynamic point cloud geometric compression method based on scene flow network and time entropy model

Info

Publication number
CN114025146A
CN114025146A
Authority
CN
China
Prior art keywords
point cloud
information
scene flow
motion vector
compression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111285773.5A
Other languages
Chinese (zh)
Other versions
CN114025146B (en)
Inventor
叶振虎
杨柏林
江照意
邹文钦
丁璐赟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN202111285773.5A priority Critical patent/CN114025146B/en
Publication of CN114025146A publication Critical patent/CN114025146A/en
Application granted granted Critical
Publication of CN114025146B publication Critical patent/CN114025146B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a dynamic point cloud geometric compression method based on a scene flow network and a temporal entropy model. Aimed at the geometric compression of dynamic point clouds, the method uses a scene flow network to estimate the motion vectors of the previous-frame point cloud, thereby exploiting temporal redundancy; it then treats the motion vectors as point cloud attributes and encodes them with the attribute compression tools of MPEG (Moving Picture Experts Group) to exploit spatial redundancy; finally, it introduces a temporal entropy model network to encode the residual between the predicted frame and the current frame in a hidden space, achieving geometric compression of dynamic point clouds. The method addresses the optimized compression of massive time-sequential dynamic point cloud data and provides technical support for the wider application and popularization of three-dimensional dynamic point clouds.

Description

Dynamic point cloud geometric compression method based on scene flow network and time entropy model
Technical Field
The invention relates to a dynamic point cloud geometric compression method, belongs to the technical field of artificial intelligence and GIS information, and particularly relates to a dynamic point cloud geometric compression method based on a scene flow network and a time entropy model.
Background
A point cloud is a collection of surface sample points of a three-dimensional (or higher-dimensional) geometric model, each point containing geometric information (x, y, z) and corresponding attribute information such as color (r, g, b), reflectance, and transparency. A dynamic point cloud is a sequence of point clouds continuous in time. Unlike mesh data, point clouds contain no topological information in space and have no point-to-point correspondence in time, and they contain a great deal of noise, which makes it extremely difficult to remove spatial and temporal redundancy effectively.
On the other hand, with the development of sensing devices, point clouds have become ever easier to acquire, and they show great application potential in many fields, such as immersive 3D telepresence, VR, free-viewpoint replay of sports, and autonomous driving. Meanwhile, the data volume of high-resolution dynamic point clouds keeps growing, and massive dynamic point cloud data puts enormous pressure on the storage and transmission capacity of hardware. Research on the compression and storage of dynamic point cloud data therefore has very important practical significance.
According to the available literature, many researchers at home and abroad have worked on dynamic point cloud compression in recent years and proposed a series of compression schemes, including XOR coding (encoding the difference between the octree structures of adjacent frames), graph-based dynamic point cloud compression, and compression based on ICP (Iterative Closest Point) combined with intra-frame coding. All of these achieve some degree of compression, but their compression ratios remain low.
Motion estimation and residual compression are critical to the geometric compression of dynamic point clouds, but earlier motion estimation methods, such as graph-based estimation and ICP (Iterative Closest Point), have low estimation accuracy, and earlier residual compression methods, such as the XOR (exclusive-or) method and block-based intra-frame coding, produce a large coding volume.
Therefore, designing and implementing a compression method that effectively removes the geometric redundancy of dynamic point clouds has strong practical significance and application value.
Disclosure of Invention
The invention addresses the above problems in the prior art and provides a dynamic point cloud geometric compression method based on a scene flow network and a temporal entropy model.
Aimed at the geometric compression of dynamic point clouds, the method uses a scene flow network to estimate the motion vectors of the previous-frame point cloud, thereby exploiting temporal redundancy; it then treats the motion vectors as point cloud attributes and encodes them with the attribute compression tools of MPEG (Moving Picture Experts Group) to exploit spatial redundancy; finally, it introduces a temporal entropy model network to encode the residual between the predicted frame and the current frame in a hidden space, achieving geometric compression of dynamic point clouds. The method addresses the optimized compression of massive time-sequential dynamic point cloud data and provides technical support for the wider application and popularization of three-dimensional dynamic point clouds.
The invention adopts the following technical scheme:
Step one: a motion estimation step based on a scene flow network, used to estimate the motion vectors of the previous-frame point cloud relative to the current-frame point cloud;
Step two: a motion vector coding and motion compensation step, used to encode the motion vectors estimated in step one and to motion-compensate the decoded previous-frame point cloud with the decoded motion vectors, obtaining the predicted point cloud;
Step three: a residual compression step, used to encode the difference information between the predicted point cloud and the original point cloud.
The invention has the following beneficial effects: by introducing a scene flow network, the motion vectors of the previous-frame point cloud can be estimated quickly and accurately, effectively removing temporal redundancy. By treating the motion vectors as point cloud attributes and encoding them with the attribute compression tools of MPEG, the motion vectors can be coded efficiently while spatial redundancy is exploited effectively. The temporal entropy model network greatly reduces the amount of residual coding. Finally, the whole framework uses sparse convolution networks, which greatly reduces memory consumption and improves running speed.
Drawings
Fig. 1 shows the overall framework of dynamic point cloud geometric compression based on a scene flow network and a temporal entropy model according to an embodiment of the invention.
Detailed Description
The technical scheme of the invention is further described below through the following embodiment and the accompanying drawing.
Embodiment:
This embodiment provides a dynamic point cloud geometric compression method based on a scene flow network and a temporal entropy model, as shown in Fig. 1, which specifically comprises:
the method comprises the following steps: scene flow estimation
Firstly, the decoded previous frame point cloud and the current frame point cloud are scaled and quantized, and a certain number of points are randomly sampled from the previous frame point cloud and the current frame point cloud to be input into a scene flow network for processing. The sampled points are subjected to several layers of sparse convolution with the step length of 2 to extract the multi-scale features of the point cloud, then scene flow information is estimated in a bottom-up mode, and the scene flow information of the current layer is estimated in each layer by using a scene flow estimation module.
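As an illustration of the multi-scale feature extraction just described, below is a minimal sketch assuming a MinkowskiEngine-style sparse convolution API; the patent specifies stride-2 sparse convolutions but names no library, and the channel widths here are likewise illustrative assumptions:

```python
# Sketch only: MinkowskiEngine is an assumed choice of sparse-conv library.
import torch
import MinkowskiEngine as ME

class FeaturePyramid(torch.nn.Module):
    """Stacks stride-2 sparse convolutions; each level halves the resolution."""
    def __init__(self, channels=(1, 32, 64, 128)):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            ME.MinkowskiConvolution(cin, cout, kernel_size=3, stride=2, dimension=3)
            for cin, cout in zip(channels[:-1], channels[1:]))
        self.relu = ME.MinkowskiReLU()

    def forward(self, x):
        # x: ME.SparseTensor built from the quantized point coordinates
        feats = []
        for conv in self.convs:
            x = self.relu(conv(x))   # spatial resolution halves here
            feats.append(x)
        return feats                 # multi-scale features, fine to coarse
```

The per-level features feed the scene flow estimation module described next.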
The scene flow estimation module estimates the scene flow mainly through a cost volume sub-module and a scene flow predictor sub-module. The cost volume sub-module aggregates the similarity between points, mainly in a patch-to-patch manner. The scene flow predictor sub-module predicts the scene flow of the current layer mainly from the features of the decoded previous-frame point cloud, the features of the current-frame point cloud, the upsampled scene flow of the previous layer, and the cost volume. After the motion vectors of the sampled points are obtained, the motion vectors of all points of the previous frame are obtained by interpolation, as sketched below.
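The patent leaves the interpolation unspecified; the sketch below assumes inverse-distance-weighted k-nearest-neighbour interpolation (k = 3 is an arbitrary illustrative choice) to propagate the sampled motion vectors to every point of the previous frame:

```python
# Runnable sketch; the IDW k-NN scheme is an assumption, not the patent's method.
import numpy as np
from scipy.spatial import cKDTree

def interpolate_motion(sampled_pts, sampled_mv, all_pts, k=3, eps=1e-8):
    """sampled_pts (S,3), sampled_mv (S,3), all_pts (N,3) -> motion (N,3)."""
    dist, idx = cKDTree(sampled_pts).query(all_pts, k=k)  # both (N,k)
    w = 1.0 / (dist + eps)                                # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return (sampled_mv[idx] * w[..., None]).sum(axis=1)

# Example: propagate motion from a 10000-point sample to a 100000-point frame
prev = np.random.rand(100_000, 3)
sel = np.random.choice(len(prev), 10_000, replace=False)
mv_full = interpolate_motion(prev[sel], 0.01 * np.random.randn(10_000, 3), prev)
```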
Step two: Motion vector compression and motion compensation
The estimated motion vectors are compressed and decompressed using the attribute compression tools of MPEG to obtain the decompressed motion vectors, which are then used to motion-compensate the decoded previous-frame point cloud, yielding the predicted point cloud.
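A minimal sketch of this motion-compensation step, assuming voxelized geometry: the predicted cloud is the decoded previous frame displaced point-by-point by the decompressed motion vectors. The re-quantization and deduplication at the end are an assumed post-processing step; the patent itself only specifies the displacement:

```python
import numpy as np

def motion_compensate(prev_decoded, motion_hat, voxel_size=1.0):
    """prev_decoded, motion_hat: (N,3) arrays. Returns the predicted cloud."""
    warped = prev_decoded + motion_hat                     # per-point translation
    # snap to the voxel grid and drop duplicate occupied voxels (assumption)
    vox = np.unique(np.round(warped / voxel_size).astype(np.int64), axis=0)
    return vox * voxel_size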
Step three: residual compression
The difference between the predicted point cloud and the current point cloud is encoded with the temporal entropy model network. First, an encoder maps the predicted point cloud and the current-frame point cloud to the hidden variables Y1 and Y in a hidden space; a hidden variable Y is represented by its position information C_Y and the corresponding feature information F_Y. Subtracting Y1 from Y in the hidden space gives the hidden-space difference Y_res. The position information of Y is losslessly compressed with octree coding; the coding of Y_res is then handled as follows. The position information of the difference Y_res is obtained by subtracting the position information of Y1 from that of Y, and the feature information of the difference Y_res is first quantized and then losslessly compressed with arithmetic coding. The probability distribution of the feature information of the difference is assumed to follow a Gaussian mixture, and the distribution of each component is approximated by a Gaussian distribution (mean μ, variance σ), so only one network needs to be designed to obtain the Gaussian parameters.
A hidden variable Z is obtained by applying two sparse convolution layers with stride 2 to the concatenation of Y1 and Y. The feature information F_Z of Z is first quantized and then losslessly compressed with arithmetic coding, and a fully factorized entropy model estimates the probability distribution of F_Z. Arithmetic decoding of the compressed feature information F_Z yields the decoded hidden variable Ẑ, from which Z1 is obtained through two sparse convolution layers with stride 2. The hidden variable Y1 of the predicted point cloud is passed through three sparse convolution layers to obtain Y2, and the concatenation of Y2 and Z1 is passed through three more sparse convolution layers to estimate the probability distribution of the feature information of the difference.
Octree decoding of the compressed position information of Y yields the position information of the current point cloud's hidden variable Ŷ; subtracting the position information of Y1 from it gives the position information of the decoded difference Ŷ_res. Arithmetic decoding of the compressed feature information of the difference yields the feature information of Ŷ_res, and the decoded difference Ŷ_res is assembled from this position and feature information. Adding the decoded difference Ŷ_res to the hidden variable Y1 of the predicted point cloud gives the hidden variable Ŷ of the current point cloud, and Ŷ is passed through a decoder to obtain the decoded point cloud.
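The rate implied by the Gaussian model above can be made concrete with a toy sketch: the latent residual is quantized and its ideal arithmetic-coding cost is evaluated by integrating the Gaussian over each quantization bin. Here F_Y and F_Y1 are assumed already aligned on shared coordinates (a simplification), and mu/sigma stand in for the parameters that, in the patent, the network estimates from Y2 and Z1:

```python
# Toy sketch; a single Gaussian per element is used in place of the mixture.
import numpy as np
from scipy.stats import norm

def residual_bits(F_Y, F_Y1, mu, sigma):
    """All arguments (N, C). Returns the quantized residual and its bit cost."""
    y_res = np.round(F_Y - F_Y1)                 # quantized latent residual
    # probability mass of each quantization bin under N(mu, sigma^2)
    p = norm.cdf(y_res + 0.5, mu, sigma) - norm.cdf(y_res - 0.5, mu, sigma)
    return y_res, float(-np.log2(np.clip(p, 1e-9, 1.0)).sum())
```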
Embodiment:
The dataset used for testing in this embodiment is the dynamic point cloud sequence soldier from MPEG. As shown in the overall flowchart of Fig. 1, the scene flow network and the temporal entropy model network are trained first; here, part of the point cloud data in AMASS is selected for training. The input current-frame point cloud and the decoded previous-frame point cloud are first scaled down by a factor of 2 and quantized, and 100000 points are randomly sampled and fed into the scene flow network to estimate the motion vectors of the previous-frame point cloud. From the resulting motion vectors, the motion vectors of all points of the previous frame are obtained by interpolation and then scaled back up by a factor of 2.
The motion vectors are then treated as attribute information of the point cloud and encoded with the lifting transform of MPEG to obtain a bitstream. The bitstream is then decoded with the lifting transform to obtain the decoded motion vectors, which are used to motion-compensate the decoded previous-frame point cloud, yielding the predicted point cloud.
Finally, the current-frame point cloud and the predicted point cloud are input into the temporal entropy model network to obtain the final decoded point cloud, which is placed in the decoded-frame buffer.
This implementation and other methods were each tested on the bpp, D1, and D2 metrics; the results table survives only as images in the source. Here bpp denotes the average number of coded bits needed per vertex (smaller is better), D1 is a point-to-point distortion metric (larger is better), and D2 is a point-to-plane distortion metric (larger is better). The results show that this embodiment achieves the best results, at the lowest bpp, under both distortion metrics D1 and D2, demonstrating that the invention effectively improves the compression ratio compared with previous methods.
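For reference, below is a sketch of the two simpler metrics following the usual MPEG PCC definitions (an assumption, since the patent gives no formulas): bpp is coded bits divided by point count, and D1 is the symmetric point-to-point error, commonly reported as a PSNR so that larger is better. D2 would additionally project each error vector onto the reference surface normal and is omitted here:

```python
import numpy as np
from scipy.spatial import cKDTree

def bpp(total_bits, num_points):
    """Average number of coded bits per point."""
    return total_bits / num_points

def d1_psnr(a, b, peak):
    """Symmetric point-to-point PSNR between clouds a (N,3) and b (M,3)."""
    mse_ab = (cKDTree(b).query(a)[0] ** 2).mean()   # nearest-neighbour errors a -> b
    mse_ba = (cKDTree(a).query(b)[0] ** 2).mean()   # nearest-neighbour errors b -> a
    return 10.0 * np.log10(3 * peak ** 2 / max(mse_ab, mse_ba))
```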

Claims (6)

1. A dynamic point cloud geometric compression method based on a scene flow network and a temporal entropy model, characterized by comprising the following steps:
step one: a motion estimation step based on a scene flow network, used to estimate the motion vectors of the previous-frame point cloud relative to the current-frame point cloud;
step two: a motion vector coding and motion compensation step, used to encode the motion vectors estimated in step one and to motion-compensate the decoded previous-frame point cloud with the decoded motion vectors, obtaining the predicted point cloud;
step three: a residual compression step, used to encode the difference information between the predicted point cloud and the original point cloud.
2. The scene flow network and temporal entropy model-based dynamic point cloud geometric compression method of claim 1, wherein: the scene flow estimation module in the scene flow network estimates the scene flow mainly through a cost volume sub-module and a scene flow predictor sub-module, the cost volume sub-module aggregating the similarity between points mainly in a patch-to-patch manner; the scene flow predictor sub-module predicts the scene flow of the current layer mainly from the features of the decoded previous-frame point cloud, the features of the current-frame point cloud, the upsampled scene flow of the previous layer, and the cost volume.
3. The scene flow network and temporal entropy model-based dynamic point cloud geometric compression method of claim 1, wherein: the estimated motion vectors are compressed and decompressed using the attribute compression tools of MPEG to obtain decompressed motion vectors, and the decoded previous-frame point cloud is then motion-compensated with the decompressed motion vectors to obtain the predicted point cloud.
4. The scene flow network and temporal entropy model-based dynamic point cloud geometric compression method of claim 1, wherein: the difference between the predicted point cloud and the original point cloud is encoded with a temporal entropy model network.
5. The method of claim 4, wherein the method comprises the steps of:
mapping the predicted point cloud and the current-frame point cloud to hidden variables Y1 and Y in a hidden space by using an encoder;
subtracting Y1 from Y in the hidden space to obtain the hidden-space difference Y_res;
losslessly compressing the position information of Y with octree coding;
quantizing the feature information of the difference Y_res and then losslessly compressing it with arithmetic coding;
obtaining a hidden variable Z by applying two sparse convolution layers with stride 2 to the concatenation of Y1 and Y; quantizing the feature information F_Z of Z, then losslessly compressing it with arithmetic coding, and estimating the probability distribution of F_Z with a fully factorized entropy model;
arithmetically decoding the compressed feature information F_Z to obtain the decoded hidden variable Ẑ, and obtaining Z1 from Ẑ through two sparse convolution layers with stride 2;
passing the hidden variable Y1 of the predicted point cloud through three sparse convolution layers to obtain Y2, and passing the concatenation of Y2 and Z1 through three sparse convolution layers to estimate the probability distribution of the feature information of the difference;
octree-decoding the compressed position information of Y to obtain the position information of the current point cloud's hidden variable Ŷ, and subtracting the position information of Y1 from it to obtain the position information of the decoded difference Ŷ_res; arithmetically decoding the compressed feature information of the difference Y_res to obtain the feature information of Ŷ_res, the decoded difference Ŷ_res being formed from this position and feature information;
adding the decoded difference Ŷ_res to the hidden variable Y1 of the predicted point cloud to obtain the hidden variable Ŷ of the current point cloud, and passing Ŷ through a decoder to obtain the decoded point cloud.
6. The method of claim 5, wherein the position information of the difference Y_res is obtained by subtracting the position information of Y1 from the position information of Y.
CN202111285773.5A 2021-11-02 2021-11-02 Dynamic point cloud geometric compression method based on scene flow network and time entropy model Active CN114025146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111285773.5A CN114025146B (en) 2021-11-02 2021-11-02 Dynamic point cloud geometric compression method based on scene flow network and time entropy model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111285773.5A CN114025146B (en) 2021-11-02 2021-11-02 Dynamic point cloud geometric compression method based on scene flow network and time entropy model

Publications (2)

Publication Number Publication Date
CN114025146A (en) 2022-02-08
CN114025146B CN114025146B (en) 2023-11-17

Family

ID=80059612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111285773.5A Active CN114025146B (en) 2021-11-02 2021-11-02 Dynamic point cloud geometric compression method based on scene flow network and time entropy model

Country Status (1)

Country Link
CN (1) CN114025146B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170347120A1 (en) * 2016-05-28 2017-11-30 Microsoft Technology Licensing, Llc Motion-compensated compression of dynamic voxelized point clouds
CN108322742A (en) * 2018-02-11 2018-07-24 北京大学深圳研究生院 A kind of point cloud genera compression method based on intra prediction
US20190116357A1 (en) * 2017-10-12 2019-04-18 Mitsubishi Electric Research Laboratories, Inc. System and method for Inter-Frame Predictive Compression for Point Clouds
CN110264502A (en) * 2019-05-17 2019-09-20 华为技术有限公司 Point cloud registration method and device
US20200151915A1 (en) * 2018-05-09 2020-05-14 Peking University Shenzhen Graduate School Hierarchical division-based point cloud attribute compression method
CN111476822A (en) * 2020-04-08 2020-07-31 浙江大学 Laser radar target detection and motion tracking method based on scene flow
US20200304829A1 (en) * 2019-03-22 2020-09-24 Tencent America LLC Method and apparatus for interframe point cloud attribute coding
CN111866521A (en) * 2020-07-09 2020-10-30 浙江工商大学 Video image compression artifact removing method combining motion compensation and generation type countermeasure network
CN112862858A (en) * 2021-01-14 2021-05-28 浙江大学 Multi-target tracking method based on scene motion information
CN113012063A (en) * 2021-03-05 2021-06-22 北京未感科技有限公司 Dynamic point cloud repairing method and device and computer equipment
CN113281718A (en) * 2021-06-30 2021-08-20 江苏大学 3D multi-target tracking system and method based on laser radar scene flow estimation

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170347120A1 (en) * 2016-05-28 2017-11-30 Microsoft Technology Licensing, Llc Motion-compensated compression of dynamic voxelized point clouds
CN109196559A (en) * 2016-05-28 2019-01-11 微软技术许可有限责任公司 The motion compensation of dynamic voxelization point cloud is compressed
US20190116357A1 (en) * 2017-10-12 2019-04-18 Mitsubishi Electric Research Laboratories, Inc. System and method for Inter-Frame Predictive Compression for Point Clouds
CN108322742A (en) * 2018-02-11 2018-07-24 北京大学深圳研究生院 A kind of point cloud genera compression method based on intra prediction
US20200151915A1 (en) * 2018-05-09 2020-05-14 Peking University Shenzhen Graduate School Hierarchical division-based point cloud attribute compression method
US20200304829A1 (en) * 2019-03-22 2020-09-24 Tencent America LLC Method and apparatus for interframe point cloud attribute coding
CN110264502A (en) * 2019-05-17 2019-09-20 华为技术有限公司 Point cloud registration method and device
CN111476822A (en) * 2020-04-08 2020-07-31 浙江大学 Laser radar target detection and motion tracking method based on scene flow
CN111866521A (en) * 2020-07-09 2020-10-30 浙江工商大学 Video image compression artifact removing method combining motion compensation and generation type countermeasure network
CN112862858A (en) * 2021-01-14 2021-05-28 浙江大学 Multi-target tracking method based on scene motion information
CN113012063A (en) * 2021-03-05 2021-06-22 北京未感科技有限公司 Dynamic point cloud repairing method and device and computer equipment
CN113281718A (en) * 2021-06-30 2021-08-20 江苏大学 3D multi-target tracking system and method based on laser radar scene flow estimation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KONG Jianhong; YANG Chao; YU Xiaohui; QI Guangyuan; WANG Zekun: "Viewpoint-based adaptive multi-level-of-detail dynamic rendering of three-dimensional point clouds", Science Technology and Engineering, no. 12
YANG Bailin; ZHANG Zhiyong; WANG Xun; PAN Zhigeng: "A prediction-and-reconstruction-based model transmission mechanism for lossy mobile networks", Journal of Computer-Aided Design & Computer Graphics, no. 01

Also Published As

Publication number Publication date
CN114025146B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
WO2022063055A1 (en) 3d point cloud compression system based on multi-scale structured dictionary learning
US20200021856A1 (en) Hierarchical point cloud compression
US20200258262A1 (en) Method and device for predictive encoding/decoding of a point cloud
CN113272866A (en) Point cloud compression using space-filling curves for detail level generation
CN108174218B (en) Video coding and decoding system based on learning
CN113613010A (en) Point cloud geometric lossless compression method based on sparse convolutional neural network
CN105357540A (en) Method and apparatus for decoding video
CN110602494A (en) Image coding and decoding system and method based on deep learning
JP2015504545A (en) Predictive position coding
KR20140089426A (en) Predictive position decoding
CN104967850A (en) Method and apparatus for encoding and decoding image by using large transform unit
EP2723071A1 (en) Encoder, decoder and method
WO2022042538A1 (en) Block-based point cloud geometric inter-frame prediction method and decoding method
CN115606188A (en) Point cloud encoding and decoding method, encoder, decoder and storage medium
CN117354523A (en) Image coding, decoding and compressing method for frequency domain feature perception learning
Wang et al. The alpha parallelogram predictor: A lossless compression method for motion capture data
CN114025146B (en) Dynamic point cloud geometric compression method based on scene flow network and time entropy model
CN115393452A (en) Point cloud geometric compression method based on asymmetric self-encoder structure
CN115239563A (en) Point cloud attribute lossy compression device and method based on neural network
Hajizadeh et al. Predictive compression of animated 3D models by optimized weighted blending of key‐frames
Lu et al. Image Compression Based on Mean Value Predictive Vector Quantization.
CN117915107B (en) Image compression system, image compression method, storage medium and chip
Lv et al. A survey on motion capture data compression algorithm
CN116437089B (en) Depth video compression method based on key target
CN117915114B (en) Point cloud attribute compression method, device, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant