CN113784125A - Point cloud attribute prediction method and device - Google Patents

Point cloud attribute prediction method and device

Info

Publication number
CN113784125A
Authority
CN
China
Prior art keywords
point
point cloud
attribute
coded
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110954048.6A
Other languages
Chinese (zh)
Inventor
陈建文
任青山
尹茜
赵丽丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yizhineng Technology Co ltd
Original Assignee
Beijing Yizhineng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhineng Technology Co ltd filed Critical Beijing Yizhineng Technology Co ltd
Priority to CN202110954048.6A priority Critical patent/CN113784125A/en
Publication of CN113784125A publication Critical patent/CN113784125A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/124 Quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application belongs to the technical field of point cloud data processing, and in particular relates to a point cloud attribute prediction method and device. The disclosed method determines K neighboring points of the current point to be coded, determines a prediction mode, and derives the attribute prediction value of the current point to be coded according to that mode. The invention effectively improves compression efficiency while keeping encoding and decoding complexity essentially unchanged. By combining a weighting mode with a nearest-neighbor prediction mode, the method attenuates abrupt attribute differences between a neighboring point and the current point caused by large differences in the angle between points or in the reflection coefficient of the regions where the points lie, reduces their influence on the prediction of the current point to be coded, improves prediction accuracy, and lowers the prediction residual. As a result, the accuracy of laser radar point cloud reflectivity attribute prediction is improved and the predictive value of each point's neighbors in the laser radar point cloud sequence is better exploited.

Description

Point cloud attribute prediction method and device
Technical Field
The application belongs to the technical field of point cloud data processing, and in particular relates to a point cloud attribute prediction method and device.
Background
Three-dimensional point clouds are an important digital representation of the real world. With the rapid development of three-dimensional scanning devices (such as laser radar and depth cameras), the precision and resolution of point clouds keep increasing. High-precision point clouds are widely used in many fields (e.g., three-dimensional modeling and scientific visualization). In particular, three-dimensional point cloud data is becoming one of the most important core data types in emerging applications such as virtual reality, augmented reality, autonomous driving, and robot localization and navigation, and provides technical support for much active research. A point cloud is obtained by sampling the surface of an object with a three-dimensional scanning device; a single frame typically contains points on the order of millions, and each point carries geometric information as well as attribute information such as color and reflectivity, so the data volume is enormous. This huge data volume poses major challenges for data storage and transmission, which makes point cloud compression very important.
Disclosure of Invention
The present disclosure is intended to solve the above technical problems, at least to some extent, based on the inventors' discovery and recognition of the following facts and problems. The main frameworks for point cloud compression are TMC13 v12.0 (Test Model for Category 1&3, version 12.0), provided by the international standards organization MPEG (Moving Picture Experts Group), and PCRM v2.0 (Point Cloud Reference Software Model, version 2.0), provided by the point cloud compression working group of the China AVS (Audio Video coding Standard). Both test platforms adopt Geometry-based Point Cloud Compression (G-PCC), which encodes the geometric information and the attribute information of the point cloud separately. In the test platform PCRM v2.0, for the attribute information of point clouds and in particular the reflectivity attribute of laser radar point cloud data sets, attribute predictive coding based on interpolation prediction first arranges all points in a certain order (Morton order or Hilbert order), and on that basis attribute prediction is performed differentially: the sorted point cloud is traversed point by point. For the first point there is no reference point for prediction, so its predicted value is 0; if the attribute value of the current point to be coded is A_0, the prediction residual is R_0 = A_0. When a certain loss of attribute information is acceptable, the prediction residual is quantized and the quantized residual is entropy coded in order to save code rate. Decoding the quantized residual code stream of the first point and performing inverse quantization yields its prediction residual (quantization and inverse quantization are lossy operations, so this residual is not necessarily equal to R_0); adding this residual to the predicted value 0 of the first point gives the attribute reconstruction value of the first point. For each remaining point to be coded, when its reflectivity attribute is encoded, the K nearest neighbors of the current coding point are searched in the set of already coded points and used as prediction points; the reciprocal of the Manhattan distance between each of the K prediction points and the current point to be coded is used as a weight, and the weighted average of the attribute reconstruction values of the K neighbors is taken as the attribute prediction value of the current point to be coded. The quantized residual is then obtained and entropy coded as for the first point. These steps are repeated until all points in the point cloud have been processed, yielding the code stream of the laser radar point cloud reflectivity attribute.
However, for laser radar point clouds the reflectivity attribute is strongly correlated with the scanning angle and with the reflection coefficient, so the above weighting scheme has the following problem: when the K prediction points with the smallest Manhattan distance to the current point to be coded are searched on the basis of the Hilbert or Morton order, the reflectivity correlation of some of these points is weakened by other influencing factors, so that the difference between their attribute values and that of the current point to be coded becomes too large, which increases the prediction residual and reduces compression efficiency.
In view of this, the present disclosure provides a point cloud attribute prediction method and a device thereof, so as to solve the problems in the related art and improve the compression performance of the laser radar point cloud attribute information (reflectivity).
According to a first aspect of the present disclosure, a point cloud attribute prediction method is provided, including:
determining K neighboring points of the current point to be coded;
determining a point cloud attribute prediction mode;
and determining the attribute predicted value of the current point to be coded according to the prediction mode.
Optionally, the determining K neighbor points of the current point to be encoded includes:
(1) Let the Cartesian coordinates of the k-th point in the original point cloud be (x_k, y_k, z_k), k = 0, ..., K-1, where K is the number of points in the original point cloud;
(2) quantize the Cartesian coordinates of the original point cloud, the quantized Cartesian coordinates being X_k = round(x_k / QS), Y_k = round(y_k / QS), Z_k = round(z_k / QS), where QS is a preset geometric quantization step, QS > 0, and round(s) is the operator that returns the integer nearest to s;
(3) reconstruct the point cloud, including:
(3-1) inverse-quantizing the Cartesian coordinates of the point cloud, the inverse-quantized coordinates being x_k' = round(X_k * QS), y_k' = round(Y_k * QS), z_k' = round(Z_k * QS), so as to obtain the geometric information of the reconstructed point cloud;
(3-2) computing a new attribute value for every point in the reconstructed point cloud and re-coloring the point cloud attributes, which includes:
(3-2-1) denoting the geometric information of the original and reconstructed point clouds by (P_i)_{i=0...K-1} and (P̃_i)_{i=0...K_rec-1}, and the original attribute values of the original points P_i by (A_i)_{i=0...K-1}, where K and K_rec are the total numbers of points in the original and reconstructed point clouds respectively;
(3-2-2) for each point P̃_i in the reconstructed point cloud, creating its relation to Q(i), where Q(i) = (P_k(i))_{k∈{1,...,D(i)}} is the set of points in the original point cloud that are quantized to P̃_i and D(i) is the number of points contained in Q(i);
(3-2-3) computing the reconstructed attribute value Ã_i of each point P̃_i in the reconstructed point cloud as the weighted average
Ã_i = ( Σ_{k=1}^{D(i)} w_k(i) · A_k(i) ) / ( Σ_{k=1}^{D(i)} w_k(i) ),
where A_k(i) is the original attribute value corresponding to the original point P_k(i) in the original point cloud and w_k(i) is the weight of each original point, w_k(i) = 1 / ( ||P_k(i)/QS − P̃_i||_1 + ε ), with P_k(i)/QS the original point divided by the geometric quantization step QS, ||·||_1 the 1-norm, and ε a constant, ε > 0;
(4) traverse the geometric coordinates of all points in the reconstructed point cloud and obtain the Hilbert code corresponding to every point in the reconstructed point cloud by iteratively querying the lookup table HilbertTable[12][64][2]; sort all Hilbert codes in ascending order to obtain the Hilbert order;
(5) let the current point to be coded be P_cur with geometric coordinates (x_cur, y_cur, z_cur); according to the Hilbert order, let the M points preceding the current point to be coded P_cur be P_j with geometric coordinates (x_j, y_j, z_j), j = 0, ..., M-1; compute the Manhattan distance between the current point to be coded and each of the M points with the formula |x_cur − x_j| + |y_cur − y_j| + |z_cur − z_j|, and select from the M points the K points with the smallest Manhattan distance to the current point to be coded P_cur as its K nearest neighbors, K < M (a sketch of this neighbor search follows this list).
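A minimal sketch of the neighbor search in step (5) is given below, written in Python for illustration. It assumes the points are already sorted in Hilbert order; the function name and the default values of M and K are placeholders chosen for this example and are not specified by the present disclosure.

```python
# Illustrative sketch: select the K nearest neighbors of the current point among its
# M predecessors in the coding (Hilbert) order, using Manhattan (L1) distance.
def k_nearest_previous(points, cur_idx, M=128, K=3):
    """points: list of (x, y, z) integer coordinates already sorted in Hilbert order.
    Returns the indices of the K previous points closest to points[cur_idx]."""
    xc, yc, zc = points[cur_idx]
    start = max(0, cur_idx - M)
    candidates = []
    for j in range(start, cur_idx):                      # only already-coded points
        xj, yj, zj = points[j]
        d = abs(xc - xj) + abs(yc - yj) + abs(zc - zj)   # |x_cur-x_j| + |y_cur-y_j| + |z_cur-z_j|
        candidates.append((d, j))
    candidates.sort(key=lambda t: t[0])                  # nearest first
    return [j for _, j in candidates[:K]]
```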
Optionally, determining the prediction mode includes:
(1) sorting the K neighboring points of the current point to be coded P_cur from near to far to obtain the sequence P_1 P_2 ... P_K, where P_1 is the neighbor with the smallest Manhattan distance to the current point to be coded P_cur;
(2) letting the attribute value of P_1 be A_1, and letting the attribute value of P_K, the point with the largest Manhattan distance to the current point to be coded, be A_K;
(3) setting a threshold A_Diff;
(4) determining the prediction mode according to the relation between the absolute value |A_1 − A_K| of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K and the preset threshold A_Diff:
if the absolute value of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K does not exceed the threshold, i.e. |A_1 − A_K| ≤ A_Diff, the weighting mode is used to obtain the attribute prediction value A of the current point to be coded; if it exceeds the threshold, i.e. |A_1 − A_K| > A_Diff, the nearest-neighbor prediction mode is used to obtain the attribute prediction value A of the current point to be coded (a sketch of this mode decision follows this list).
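The mode decision of steps (1)-(4) can be summarized with the short Python sketch below. It assumes the K neighbors are already sorted by Manhattan distance (nearest first), that all distances are positive, and that the threshold follows the A_Diff = 3 × attrQuantStep + 30 rule given later in the description; the weighted average is shown as inverse-distance weighting, consistent with the background section, and all names are illustrative rather than taken from the reference software.

```python
# Illustrative sketch of the threshold-based choice between the weighting mode and the
# nearest-neighbor prediction mode.
def predict_attribute(neigh_attrs, dists, attr_quant_step):
    """neigh_attrs: reconstructed attribute values A_1..A_K of the K neighbors, nearest first;
    dists: their Manhattan distances to the current point (assumed > 0)."""
    a_diff = 3 * attr_quant_step + 30                  # preset threshold A_Diff
    a1, ak = neigh_attrs[0], neigh_attrs[-1]           # nearest (P_1) and farthest (P_K) neighbor
    if abs(a1 - ak) <= a_diff:
        # Weighting mode: inverse-Manhattan-distance weighted average of the neighbors
        inv_w = [1.0 / d for d in dists]
        return sum(iw * a for iw, a in zip(inv_w, neigh_attrs)) / sum(inv_w)
    # Nearest-neighbor mode: copy the attribute of the closest neighbor
    return a1
```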
According to a second aspect of the present disclosure, a point cloud attribute prediction apparatus is provided, which includes a processor, a memory, a communication interface, and a communication bus; the memory has stored thereon a computer readable program executable by the processor; the communication bus realizes the connection communication between the processor and the memory; the processor, when executing the computer readable program, implements:
determining K neighboring points of the current point to be coded;
determining a point cloud attribute prediction mode;
and determining the attribute predicted value of the current point to be coded according to the prediction mode.
According to the embodiments of the present disclosure, by combining the weighting mode with the nearest-neighbor prediction mode, the spatial correlation between each point of the laser radar point cloud and its neighboring points is better exploited and the prediction accuracy is improved, so that the prediction residual is reduced and the coding efficiency of the laser radar point cloud attribute information is effectively improved.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flow diagram of a point cloud attribute prediction method according to one embodiment of the present disclosure.
Fig. 2 is a block flow diagram of a specific implementation of the point cloud attribute prediction method on PCRM v2.0, according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of the geometric quantization step sizes under the PCRM v2.0 common test conditions.
Fig. 4 shows the Hilbert-order lookup table HilbertTable[12][64][2].
Fig. 5 is a schematic diagram of Hilbert-order sorting.
Fig. 6 is a flowchart of another embodiment of the point cloud attribute prediction method provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flow diagram illustrating a point cloud attribute prediction method according to an embodiment of the present disclosure, where the point cloud attribute prediction method according to this embodiment may be applied to a user device.
As shown in fig. 1, the point cloud attribute prediction method may include the following steps:
In step 1, K neighboring points of the current point to be coded are determined;
in one embodiment, determining K neighbor points of the current point to be encoded includes:
(1) Let the Cartesian coordinates of the k-th point in the original point cloud be (x_k, y_k, z_k), k = 0, ..., K-1, where K is the number of points in the original point cloud.
Encoding of the geometric part of the point cloud information starts here: for the point cloud data to be coded, the geometric position information, i.e. the three-dimensional coordinate information of each point, is processed first. When a certain limited loss of geometric information is acceptable and the input point cloud contains duplicate points (several points with identical geometric coordinates), only one point may be kept at that geometric coordinate position; that is, redundant duplicate points are removed, and the coordinates of the points in the point cloud are converted into integer coordinates through quantization.
(2) Quantize the Cartesian coordinates of the original point cloud; the quantized Cartesian coordinates are X_k = round(x_k / QS), Y_k = round(y_k / QS), Z_k = round(z_k / QS), where QS is a preset geometric quantization step, QS > 0, and round(s) is the operator that returns the integer nearest to s.
(3) Reconstruct the point cloud, including:
(3-1) inverse-quantizing the Cartesian coordinates of the point cloud, the inverse-quantized coordinates being x_k' = round(X_k * QS), y_k' = round(Y_k * QS), z_k' = round(Z_k * QS), so as to obtain the geometric information of the reconstructed point cloud;
(3-2) computing a new attribute value for every point in the reconstructed point cloud and re-coloring the point cloud attributes, which includes:
(3-2-1) denoting the geometric information of the original and reconstructed point clouds by (P_i)_{i=0...K-1} and (P̃_i)_{i=0...K_rec-1}, and the original attribute values of the original points P_i by (A_i)_{i=0...K-1}, where K and K_rec are the numbers of points in the original and reconstructed point clouds respectively;
(3-2-2) for each point P̃_i in the reconstructed point cloud, creating its relation to Q(i), where Q(i) = (P_k(i))_{k∈{1,...,D(i)}} is the set of points in the original point cloud that are quantized to P̃_i and D(i) is the number of points contained in Q(i);
(3-2-3) computing the reconstructed attribute value Ã_i of each point P̃_i in the reconstructed point cloud as the weighted average
Ã_i = ( Σ_{k=1}^{D(i)} w_k(i) · A_k(i) ) / ( Σ_{k=1}^{D(i)} w_k(i) ),
where A_k(i) is the original attribute value corresponding to the original point P_k(i) in the original point cloud and w_k(i) is the weight of each original point, w_k(i) = 1 / ( ||P_k(i)/QS − P̃_i||_1 + ε ), with P_k(i)/QS the original point divided by the geometric quantization step QS, ||·||_1 the 1-norm, and ε a constant, ε > 0; in one embodiment of the invention ε may take the value 0.1 (a re-coloring sketch is given after this list).
(4) Traverse the geometric coordinates of all points in the reconstructed point cloud and obtain the Hilbert code corresponding to every point in the reconstructed point cloud by iteratively querying the lookup table HilbertTable[12][64][2] shown in FIG. 4; sort all Hilbert codes in ascending order to obtain the Hilbert order shown in FIG. 5.
(5) Let the current point to be coded be P_cur with geometric coordinates (x_cur, y_cur, z_cur); according to the Hilbert order, let the M points preceding the current point to be coded P_cur be P_j with geometric coordinates (x_j, y_j, z_j), j = 0, ..., M-1; compute the Manhattan distance |x_cur − x_j| + |y_cur − y_j| + |z_cur − z_j| between the current point to be coded and each of the M points, and select from the M points the K points with the smallest Manhattan distance to the current point to be coded P_cur as its K nearest neighbors, K < M.
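The re-coloring of step (3-2) can be illustrated with the brute-force Python sketch below. The grouping of original points by quantized position and the helper names are assumptions made for this example; the PCRM implementation is more efficient, and Python's built-in round() is used here merely as a stand-in for the nearest-integer operator.

```python
# Illustrative sketch: transfer attributes from the original point cloud to the reconstructed
# one as an inverse-distance weighted average over the original points quantized to each
# reconstructed position.
def recolor(orig_points, orig_attrs, rec_quant, qs, eps=0.1):
    """orig_points: original Cartesian coordinates P_i; orig_attrs: their attributes A_i;
    rec_quant: quantized integer coordinates of the reconstructed points; qs: step QS;
    eps: the constant epsilon > 0 (0.1 is the value mentioned in this embodiment)."""
    rec_attrs = []
    for q in rec_quant:
        num, den = 0.0, 0.0
        for p, a in zip(orig_points, orig_attrs):
            # Q(i): original points whose quantized position coincides with this point
            if tuple(round(c / qs) for c in p) == tuple(q):
                d1 = sum(abs(c / qs - qc) for c, qc in zip(p, q))  # ||P_k(i)/QS - reconstructed point||_1
                w = 1.0 / (d1 + eps)                               # w_k(i)
                num += w * a
                den += w
        rec_attrs.append(num / den if den > 0.0 else 0.0)
    return rec_attrs
```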
In step 2, determining a point cloud attribute prediction mode;
In one embodiment, determining the prediction mode comprises:
(1) sorting the K neighboring points of the current point to be coded P_cur from near to far to obtain the sequence P_1 P_2 ... P_K, where P_1 is the neighbor with the smallest Manhattan distance to the current point to be coded P_cur;
(2) letting the attribute value of P_1 be A_1, and letting the attribute value of P_K, the point with the largest Manhattan distance to the current point to be coded, be A_K;
(3) setting a threshold A_Diff;
(4) determining the prediction mode according to the relation between the absolute value |A_1 − A_K| of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K and the preset threshold A_Diff:
if the absolute value of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K does not exceed the threshold, i.e. |A_1 − A_K| ≤ A_Diff, the weighting mode is used to obtain the attribute prediction value A of the current point to be coded,
A = ( Σ_{i=1}^{K} A_i / w_i ) / ( Σ_{i=1}^{K} 1 / w_i ),
where w_i is a weight whose value is equal to the Manhattan distance between the i-th neighboring point and the current point to be coded, so that each neighbor contributes in inverse proportion to its distance. If the absolute value of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K exceeds the threshold, i.e. |A_1 − A_K| > A_Diff, the nearest-neighbor prediction mode is used to obtain the attribute prediction value A of the current point to be coded, A = A_j with j = 1, i.e. A = A_1.
By combining the weighting mode with the nearest-neighbor prediction mode, the spatial correlation between each point of the laser radar point cloud and its nearest neighbors is better exploited and the accuracy of point cloud attribute prediction is improved, so that the prediction residual is reduced and the coding efficiency of the laser radar point cloud attribute information is effectively improved.
In one embodiment, the threshold A_Diff is set as A_Diff = 3 × attrQuantStep + 30, where attrQuantStep is the attribute quantization parameter. Under the PCRM v2.0 common test conditions, attrQuantStep can be set to 8, 16, 24, 32, 40 or 48. In the attribute-lossless case, i.e. when the attribute information of the reconstructed point cloud is identical to that of the original point cloud, the attribute quantization parameter is 0 and the threshold reduces to 30.
Corresponding to the point cloud attribute prediction method, the point cloud attribute prediction device is provided, and in one embodiment of the disclosure, the point cloud attribute prediction device is as shown in fig. 6 and comprises a processor, a memory, a communication interface and a communication bus; the memory has stored thereon a computer readable program executable by the processor; the communication bus realizes the connection communication between the processor and the memory; the processor, when executing the computer readable program, implements:
determining K neighboring points of the current point to be coded;
determining a point cloud attribute prediction mode;
and determining the attribute predicted value of the current point to be coded according to the prediction mode.
In order that those skilled in the art will better understand the disclosure, the following detailed description is given with reference to the accompanying drawings and the following examples.
In step 1, as shown in fig. 2, this example is implemented on the point cloud compression reference platform PCRM v2.0. For an input original point cloud, the geometric position information of the point cloud, i.e. the three-dimensional coordinates of each point, is processed first. The Cartesian coordinates of the k-th point in the original point cloud may be written as (x_k, y_k, z_k), k = 0, ..., K-1, where K is the number of points in the point cloud. When a certain loss of geometric information is acceptable and the input point cloud contains duplicate points (points with identical geometric coordinates), only one point is kept at that geometric coordinate position, i.e. redundant duplicate points are removed, and the coordinates of the points in the point cloud are converted into integer coordinates through quantization. The geometric quantization step is denoted QS; under the PCRM v2.0 common test conditions it is set as shown in fig. 3 and can take the values listed for R1 to R6 for the two geometric bit widths (the geometric bit width is the maximum number of bits of the binary representation of the Cartesian coordinate components of all points in the point cloud). The quantized coordinates are X_k = round(x_k / QS), Y_k = round(y_k / QS), Z_k = round(z_k / QS), where the round(s) function returns the integer nearest to s (a quantization sketch is given below).
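The geometric quantization and the matching inverse quantization just described can be sketched as follows in Python; round-half-away-from-zero is assumed here as the nearest-integer rule, and the helper names are illustrative only.

```python
import math

# Illustrative sketch of X_k = round(x_k / QS) and x_k' = round(X_k * QS).
def nearest_int(s):
    """Round s to the nearest integer, halves away from zero (an assumed convention)."""
    return int(math.floor(abs(s) + 0.5)) * (1 if s >= 0 else -1)

def quantize(p, qs):
    """p = (x_k, y_k, z_k) Cartesian coordinates; qs = geometric quantization step QS > 0."""
    return tuple(nearest_int(c / qs) for c in p)

def dequantize(q, qs):
    """Inverse quantization of integer coordinates back to reconstructed positions."""
    return tuple(nearest_int(c * qs) for c in q)
```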
for encoding the geometric information, an octree structure is used, the entire point cloud is fitted to a cube by means of morton codes, and the cube is continuously divided into eight subcubes until each subcube contains only a single point. The geometric octree is constructed by Morton code, each coordinate value is expressed by N bits, and the coordinate of the k point can be expressed as
Figure BDA0003219582830000082
Figure BDA0003219582830000083
Figure BDA0003219582830000084
The Morton code corresponding to the k-th point can be expressed as
Figure BDA0003219582830000085
Representing every three bits as an octal number
Figure BDA0003219582830000086
The morton code corresponding to the k point can be expressed as
Figure BDA0003219582830000087
According to Morton code, priority is given to root node according to breadth
Figure BDA0003219582830000088
(level 0) a geometric octree is constructed. First, all the points are counted according to the 0 th octal of the Morton code
Figure BDA0003219582830000089
Divided into 8 child nodes:
all of
Figure BDA00032195828300000810
Is divided into 0 th child node
Figure BDA00032195828300000811
In (1),
all of
Figure BDA00032195828300000812
Is divided into 1 st child node
Figure BDA00032195828300000813
In
·…
All of
Figure BDA00032195828300000814
Is divided into 7 th child node
Figure BDA00032195828300000815
In
The nodes of the first layer of the octree are divided, and thus, 8 nodes are formed. Secondly, 8 bits are used for dividing the information of 8 nodes
Figure BDA00032195828300000816
Representing root nodes
Figure BDA00032195828300000817
Whether 8 child nodes are occupied. If it is not
Figure BDA00032195828300000818
At least one point in the point cloud, its corresponding bit bk1, whereas if the child node does not contain any point, bk0. Thirdly, according to the 1 st octal number of the Morton code of the geometric position
Figure BDA00032195828300000819
For occupied node in the first layer
Figure BDA00032195828300000820
Further dividing the node into 8 sub-nodes; using 8 bits
Figure BDA00032195828300000821
Occupancy information (l) representing its child nodesnIs the serial number of the occupied node, N is 0, …, N1-1,N1Representing the number of nodes the first layer is occupied). The fourth step is that the first step is that,
Figure BDA00032195828300000822
then according to t-th octal number in Morton code of geometric position
Figure BDA00032195828300000823
The occupied node in the layer t-2, 3, … is further divided into eight sub-nodes; using eight bits in combination
Figure BDA00032195828300000824
Indicates occupation information (l) of its child nodenIs the serial number of the occupied node, N is 0, …, Nt-1,NtIndicating the number of nodes that the t-th layer is occupied). And fifthly, for the t-th layer which is N-1, all the nodes become leaf nodes. If the encoder configuration allows for repetition points, the number of repetition points on each occupied leaf node needs to be recorded in the code stream. To this end, the position of the point is replaced by the position-occupying information of each node of the tree, the space-occupying code of which contains eight bits (b) each time an octree node is divided7b6b5b4b3b2b1b0) Each bit is entropy-coded, which indicates the occupation of eight child nodes of the node. And after the geometric part of the point cloud information is coded, processing the attribute information based on the decoded geometric position. Firstly, point cloud is reconstructed, inverse quantization conversion is carried out on Cartesian coordinates of the point cloud, and the point cloud coordinates after inverse quantization conversion are xk′=round(Xk*QS),yk′=round(Yk*QS),zk′=round(ZkQS) to obtain geometric information of the reconstructed point cloud; given the geometry andand re-coloring the attribute information and the decoded geometric information of the reconstructed point cloud, namely calculating a new attribute value for each point in the reconstructed point cloud so as to minimize the attribute error of the reconstructed point cloud and the original point cloud. In the general test condition, a quantization-based fast re-coloring method is mainly used, and the method is implemented as follows: the first step is to set the geometrical information of the original point cloud and the reconstructed point cloud as (P)i)i=0…N-1And
Figure BDA0003219582830000091
(wherein N and NrecPoints in the original point cloud and the reconstructed point cloud respectively); second, for each point in the reconstructed point cloud
Figure BDA0003219582830000092
Let Q (i) become (P)k(i))k∈{1,…,D(i)}Is quantized into the original point cloud
Figure BDA0003219582830000093
D (i) is the number of points included in Q (i); thirdly, each point in the point cloud is reconstructed
Figure BDA0003219582830000094
Calculating the reconstructed attribute value, and expressing the calculation formula as
Figure BDA0003219582830000095
Wherein A isk(i) Representing point Pk(i) The corresponding original attribute value. w is ak(i) Representing the weighted weight of each origin, formula for calculating the weight
Figure BDA0003219582830000096
Wherein
Figure BDA0003219582830000097
Represents the origin point Pk(i) Divided by the geometric quantization step size, ∈ is a constant. Before attribute prediction is carried out, Hilbert reordering is carried out on point clouds, and then differential prediction is carried out. Traversal reconstructionBased on the geometrical coordinates of the point cloud, iteratively querying a table HilbertTable [12] shown in FIG. 4][64][2]And obtaining the Hilbert code corresponding to the current point to be coded. Fig. 5 shows how to reorder the point clouds in the order from small to large according to the hilbert code. After reordering, attribute prediction is carried out in a differential mode, and in the first step, the reordered point cloud is traversed. For the first point, no reference point is predicted, the predicted value is 0, and the attribute value of the current point to be coded is set as A0Then predict residual R0=A0For the residual points, calculating an attribute predicted value A 'of the current point to be coded according to the attribute reconstructed values of a plurality of previous points of the current point to be coded in the Hilbert sequence'i. Setting the attribute value of the current point to be coded as AiThen the prediction residual is Ri=Ai-A′iAnd repeating the second step until the whole frame point cloud is traversed. Specifically, let the current point to be coded be PcurIn predicting the attribute value A of the current point to be codedcurThen, the first M points of the current point to be coded in Hilbert sequence are stored, and K (K) with the nearest Manhattan distance to the current point to be coded is selected<M) points are used as K nearest neighbors of the current point to be coded.
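To make the Morton-code construction and the occupancy codes described in this embodiment concrete, the following Python sketch interleaves the bits of the quantized coordinates into a Morton code and derives the eight-bit occupancy code of one octree node; the function names and the per-point recomputation are simplifications made for illustration, not the PCRM implementation.

```python
# Illustrative sketch: Morton code by bit interleaving (z, y, x per bit level) and the
# occupancy byte b7..b0 of an octree node at a given subdivision level.
def morton_code(x, y, z, nbits):
    """Interleave the nbits of z, y and x into a single 3*nbits-bit Morton code."""
    code = 0
    for level in range(nbits - 1, -1, -1):                 # most significant bit first
        code = (code << 3) | (((z >> level) & 1) << 2) \
                           | (((y >> level) & 1) << 1) \
                           | ((x >> level) & 1)
    return code

def occupancy_byte(points_in_node, level, nbits):
    """Bit k of the result is 1 if child k of the node contains at least one point;
    child index k is the octal digit of the Morton code at this subdivision level."""
    occ = 0
    for (x, y, z) in points_in_node:
        k = (morton_code(x, y, z, nbits) >> (3 * (nbits - 1 - level))) & 0b111
        occ |= 1 << k
    return occ
```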
In step 2, the prediction mode needs to be determined. The K neighboring points of P_cur found in step 1 are sorted from near to far as P_1 P_2 ... P_K: the point closest to the current point P_cur is P_1, whose reconstructed attribute value is denoted A_1, and the point farthest from the current point is P_K, whose reconstructed attribute value is denoted A_K. The prediction mode is determined by comparing the absolute value |A_1 − A_K| of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K with a preset threshold A_Diff. In this scheme the threshold is set as A_Diff = 3 × attrQuantStep + 30, where attrQuantStep is the attribute quantization parameter (under the PCRM v2.0 common test conditions attrQuantStep can be set to 8, 16, 24, 32, 40 or 48). In the attribute-lossless case (the attribute information of the reconstructed point cloud is identical to that of the original point cloud) the attribute quantization parameter is 0 and the threshold reduces to 30. When the absolute value of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K does not exceed the threshold, i.e. |A_1 − A_K| ≤ A_Diff, the attribute prediction value A of the current point is obtained with the weighting mode; otherwise, i.e. when |A_1 − A_K| > A_Diff, the attribute prediction value A of the current point is obtained with the nearest-neighbor prediction mode.

In step 3, if the weighting mode is used, the prediction value of the current point is
A = ( Σ_{i=1}^{K} A_i / w_i ) / ( Σ_{i=1}^{K} 1 / w_i ),
where the weight w_i is the Manhattan distance between the i-th neighboring point and the current point; if the nearest-neighbor prediction mode is used, the prediction value of the current point is A = A_1. The prediction residual is then R = A_cur − A; the corresponding quantized residual is obtained by quantizing R with the attribute quantization step and a rounding-control parameter offset, inverse quantization of the quantized residual yields the reconstructed residual, and the reconstructed attribute value of the current point is the prediction value A plus the reconstructed residual. The same procedure is applied to the remaining points, and the quantized residual of each point is coded to obtain the code stream of the point cloud reflectivity attribute (a residual-coding sketch follows).
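Step 3 can be summarized with the Python sketch below, which forms the prediction residual, quantizes it, and rebuilds the reconstructed attribute exactly as a decoder would. The quantizer shown here (uniform step with an offset-controlled rounding) is an assumption made for illustration; the text only states that offset controls rounding, and the lossless branch corresponds to an attribute quantization step of 0.

```python
# Illustrative sketch of residual formation, quantization, inverse quantization and
# attribute reconstruction for one point.
def code_residual(a_cur, a_pred, attr_quant_step, offset=0.5):
    """a_cur: attribute value A_cur of the current point; a_pred: its prediction value A."""
    res = a_cur - a_pred                                   # R = A_cur - A
    if attr_quant_step == 0:                               # lossless attribute coding
        q_res, rec_res = res, res
    else:
        sign = 1 if res >= 0 else -1
        q_res = sign * int((abs(res) + offset * attr_quant_step) // attr_quant_step)
        rec_res = q_res * attr_quant_step                  # inverse quantization
    a_rec = a_pred + rec_res                               # reconstructed attribute value
    return q_res, a_rec                                    # q_res is what gets entropy coded
```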
According to the nearest-neighbor-based laser radar point cloud reflectivity attribute prediction method of the embodiments of the present disclosure, the combination of the weighting mode and the nearest-neighbor prediction mode weakens the influence, on the prediction of the current point to be coded, of abrupt attribute changes of a neighboring point relative to the current point caused by the angle between the points or by an excessive difference in the reflection coefficients of the regions where the points lie; this improves the prediction accuracy, reduces the prediction residual, and improves the efficiency of laser radar point cloud reflectivity attribute prediction.
The above embodiments are merely specific examples of the present disclosure, which are intended to illustrate rather than limit the technical solutions of the present disclosure, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments disclosed herein, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (7)

1. A point cloud attribute prediction method is characterized by comprising the following steps:
step 1, determining K neighboring points of a current point to be coded;
step 2, determining a point cloud attribute prediction mode;
and 3, determining the attribute predicted value of the current point to be coded according to the prediction mode.
2. The method of predicting point cloud attributes according to claim 1, wherein determining K neighboring points of the current point to be encoded comprises:
(1) letting the Cartesian coordinates of the k-th point in the original point cloud be (x_k, y_k, z_k), k = 0, ..., K-1, where K is the number of points in the original point cloud;
(2) quantizing the Cartesian coordinates of the original point cloud, the quantized Cartesian coordinates being X_k = round(x_k / QS), Y_k = round(y_k / QS), Z_k = round(z_k / QS), where QS is a preset geometric quantization step, QS > 0, and round(s) is the operator that returns the integer nearest to s;
(3) reconstructing the point cloud, including:
(3-1) inverse-quantizing the Cartesian coordinates of the point cloud, the inverse-quantized coordinates being x_k' = round(X_k * QS), y_k' = round(Y_k * QS), z_k' = round(Z_k * QS), to obtain the geometric information of the reconstructed point cloud;
(3-2) calculating a new attribute value for each point in the reconstructed point cloud and re-coloring the point cloud attributes, including:
(3-2-1) setting the geometric information of the original point cloud and the reconstructed point cloud as (P_i)_{i=0...K-1} and (P̃_i)_{i=0...K_rec-1}, and setting the original attribute values of the original points P_i as (A_i)_{i=0...K-1}, where K and K_rec are the numbers of points in the original point cloud and the reconstructed point cloud respectively;
(3-2-2) for each point P̃_i in the reconstructed point cloud, creating its relation to Q(i), where Q(i) = (P_k(i))_{k∈{1,...,D(i)}} is the set of points in the original point cloud quantized to P̃_i and D(i) is the number of points included in Q(i);
(3-2-3) calculating the reconstructed attribute value Ã_i of each point P̃_i in the reconstructed point cloud as
Ã_i = ( Σ_{k=1}^{D(i)} w_k(i) · A_k(i) ) / ( Σ_{k=1}^{D(i)} w_k(i) ),
where A_k(i) is the original attribute value corresponding to the original point P_k(i) in the original point cloud and w_k(i) is the weight of each original point, w_k(i) = 1 / ( ||P_k(i)/QS − P̃_i||_1 + ε ), with P_k(i)/QS the original point divided by the geometric quantization step QS, ||·||_1 the 1-norm, and ε a constant, ε > 0;
(4) traversing the geometric coordinates of all points in the reconstructed point cloud, obtaining the Hilbert code corresponding to each point in the reconstructed point cloud by iteratively querying the lookup table HilbertTable[12][64][2], and sorting all Hilbert codes in ascending order to obtain the Hilbert order;
(5) setting the current point to be coded as P_cur with geometric coordinates (x_cur, y_cur, z_cur), and, according to the Hilbert order, setting the M points preceding the current point to be coded P_cur as P_j with geometric coordinates (x_j, y_j, z_j), j = 0, ..., M-1; calculating the Manhattan distance between the current point to be coded and each of the M points with the formula |x_cur − x_j| + |y_cur − y_j| + |z_cur − z_j|, and selecting from the M points the K points with the smallest Manhattan distance to the current point to be coded P_cur as the K nearest neighbors of the current point to be coded P_cur, K < M.
3. The point cloud attribute prediction method of claim 1, wherein the determining a prediction mode comprises:
(1) sorting the K neighboring points of the current point to be coded P_cur from near to far to obtain the sequence P_1 P_2 ... P_K, where P_1 is the neighbor with the smallest Manhattan distance to the current point to be coded P_cur;
(2) setting the attribute value of P_1 as A_1, and setting the attribute value of P_K, the point with the largest Manhattan distance to the current point to be coded, as A_K;
(3) setting a threshold A_Diff;
(4) determining the prediction mode according to the relation between the absolute value |A_1 − A_K| of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K and the preset threshold A_Diff:
if the absolute value of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K does not exceed the threshold, i.e. |A_1 − A_K| ≤ A_Diff, obtaining the attribute prediction value A of the current point to be coded with the weighting mode; if the absolute value of the attribute difference between the nearest neighbor P_1 and the farthest neighbor P_K exceeds the threshold, i.e. |A_1 − A_K| > A_Diff, obtaining the attribute prediction value A of the current point to be coded with the nearest-neighbor prediction mode.
4. The point cloud attribute prediction method of claim 3, wherein the threshold A_Diff is set as: A_Diff = 3 × attrQuantStep + 30, where attrQuantStep is an attribute quantization parameter.
5. The point cloud attribute prediction method of claim 3, wherein in the weighting mode the attribute prediction value A of the current point to be coded is obtained as
A = ( Σ_{i=1}^{K} A_i / w_i ) / ( Σ_{i=1}^{K} 1 / w_i ),
where w_i is a weight whose value is equal to the Manhattan distance between the i-th neighboring point and the current point to be coded.
6. The point cloud attribute prediction method of claim 3, wherein the nearest-neighbor prediction mode obtains the attribute prediction value A of the current point to be coded as A = A_j, j = 1.
7. The point cloud attribute prediction equipment is characterized by comprising a processor, a memory, a communication interface and a communication bus; the memory has stored thereon a computer readable program executable by the processor; the communication bus realizes the connection communication between the processor and the memory; the processor, when executing the computer readable program, implements the point cloud attribute prediction method of any of claims 1-6.
CN202110954048.6A 2021-08-19 2021-08-19 Point cloud attribute prediction method and device Pending CN113784125A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110954048.6A CN113784125A (en) 2021-08-19 2021-08-19 Point cloud attribute prediction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110954048.6A CN113784125A (en) 2021-08-19 2021-08-19 Point cloud attribute prediction method and device

Publications (1)

Publication Number Publication Date
CN113784125A true CN113784125A (en) 2021-12-10

Family

ID=78838427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110954048.6A Pending CN113784125A (en) 2021-08-19 2021-08-19 Point cloud attribute prediction method and device

Country Status (1)

Country Link
CN (1) CN113784125A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200111236A1 (en) * 2018-10-03 2020-04-09 Apple Inc. Point cloud compression using fixed-point numbers
CN110572655A (en) * 2019-09-30 2019-12-13 北京大学深圳研究生院 method and equipment for encoding and decoding point cloud attribute based on neighbor weight parameter selection and transmission
CN111145090A (en) * 2019-11-29 2020-05-12 鹏城实验室 Point cloud attribute encoding method, point cloud attribute decoding method, point cloud attribute encoding equipment and point cloud attribute decoding equipment
CN111242997A (en) * 2020-01-13 2020-06-05 北京大学深圳研究生院 Filter-based point cloud attribute prediction method and device
CN111405284A (en) * 2020-03-30 2020-07-10 北京大学深圳研究生院 Point cloud density-based attribute prediction method and device
CN113179410A (en) * 2021-06-10 2021-07-27 上海交通大学 Point cloud attribute coding and decoding method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HONGLIAN WEI 等: "Enhanced Intra Prediction Scheme in Point Cloud Attribute Compression", 2019 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING(VCIP) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024082854A1 (en) * 2022-10-19 2024-04-25 中兴通讯股份有限公司 Point cloud attribute prediction method, and device and storage medium
CN115688004A (en) * 2022-11-08 2023-02-03 中国民用航空总局第二研究所 Target attribute determination method, medium and device based on Hilbert coding
CN115688004B (en) * 2022-11-08 2023-09-29 中国民用航空总局第二研究所 Target attribute determining method, medium and device based on Hilbert coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination