WO2023087159A1 - Method for generating DVS data based on AVS motion estimation coding - Google Patents

Method for generating DVS data based on AVS motion estimation coding

Info

Publication number
WO2023087159A1
Authority
WO
WIPO (PCT)
Prior art keywords
dvs
motion estimation
avs
video
data
Prior art date
Application number
PCT/CN2021/131039
Other languages
English (en)
French (fr)
Inventor
张伟民
张世雄
龙仕强
魏文应
Original Assignee
广东博华超高清创新中心有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东博华超高清创新中心有限公司 filed Critical 广东博华超高清创新中心有限公司
Publication of WO2023087159A1 publication Critical patent/WO2023087159A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors

Definitions

  • The invention relates to the field of computer vision, and in particular to a method for generating DVS data based on AVS motion estimation coding.
  • A DVS (Dynamic Vision Sensor, also known as an event camera) is a computer vision sensor that has emerged in recent years.
  • A DVS mainly captures pixel changes within its field of view, in particular those caused by object motion, and derives motion information by accumulating these changes.
  • Unlike an ordinary camera, a DVS does not return color images but an event map, i.e., the position, direction, and timestamp of object motion within the observed area. It is mainly used to capture motion and change, and is favored by industry for its speed, good privacy protection, and small data volume.
  • In the prior art, video stream data is generally decoded into complete image frames, after which an optical flow algorithm performs motion estimation on the video. This decodes a large amount of redundant data, and the optical flow algorithm, especially deep-neural-network-based optical flow, consumes enormous computation.
  • The invention provides a method for generating DVS data based on AVS motion estimation coding. Using the characteristics of AVS motion estimation coding, simulated DVS data is generated: RGB video is encoded with an AVS encoder to obtain motion vectors, and the motion vectors are encoded by a DVS encoder to produce simulated DVS data, so that DVS data is generated at low computational cost.
  • The method can thus effectively generate simulated DVS data with little computation, solving the problem that generating DVS data from video via optical flow motion estimation is computationally expensive, and generating DVS data quickly.
  • A method for generating DVS data based on AVS motion estimation coding comprises the following steps: S1, read the video; S2, obtain motion estimation: obtain the motion estimation of adjacent predicted frames sharing the same reference frame and compute the residual of the two adjacent predicted frames; and S3, generate DVS data: using the similarity between AVS and DVS motion estimation, generate DVS data from the residual of the adjacent predicted frames.
  • In step S1, an AVS codec is used to encode and decode the video: video in other formats is converted with the AVS encoder, and the AVS decoder decodes the video to obtain the AVS-decoded video data stream.
  • In step S2, the AVS decoder's ability to read motion estimation vectors is used to obtain the motion estimation vectors of the video frames and compute the residual of two adjacent predicted frames.
  • In step S2, the positions where pixel blocks change in the current video frame relative to the adjacent previous frame are determined, together with the motion direction and timestamp of the corresponding pixel blocks.
  • In step S2, when calculating the data, a timestamp is simulated from the frame rate of the video, using the formula t_n = n / F, where t_n is the timestamp of the n-th frame, n is the frame index, and F is the video frame rate.
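As a minimal sketch of the timestamp formula t_n = n / F (the function name is invented for illustration):

```python
def frame_timestamp(n: int, fps: float) -> float:
    """Simulated timestamp (in seconds) of the n-th frame: t_n = n / F."""
    return n / fps

# In a 30 fps video, frame 15 falls half a second in.
half_second = frame_timestamp(15, 30.0)
```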
  • In step S3, the pixel block positions produced by step S2, the corresponding motion directions, and the simulated timestamps are used as input data of the DVS encoder; after encoding by the DVS encoder, the DVS data is output.
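Assuming the per-block motion vectors have already been read from the AVS decoder, steps S2-S3 can be sketched as follows (the dictionary layout, function name, and plain-tuple event format are assumptions made for this illustration, not the patent's actual encoder interface):

```python
from typing import Dict, List, Tuple

Coord = Tuple[int, int]  # (x, y) block position or (dx, dy) motion vector

def generate_dvs_events(motion_vectors: Dict[Coord, Coord],
                        n: int, fps: float) -> List[tuple]:
    """Keep the blocks whose motion vector is nonzero (i.e., that changed),
    attach the simulated timestamp t_n = n / fps, and emit DVS-style
    (x, y, direction, timestamp) events."""
    t = n / fps
    return [(x, y, (dx, dy), t)
            for (x, y), (dx, dy) in sorted(motion_vectors.items())
            if (dx, dy) != (0, 0)]
```

Blocks with a zero vector are dropped, mirroring the fact that a DVS only reports pixels that changed.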
  • The present invention provides a DVS data generation method based on AVS motion estimation encoding.
  • The optical flow method is no longer used for data generation; instead, following the motion estimation characteristics of AVS encoding and decoding, the block-coding motion estimation of the video encoder is used: the motion estimation coding function of the AVS coding standard quickly obtains motion estimation vectors from the video stream, the vectors obtained by the AVS decoder serve as the motion estimation vectors required for DVS coding, and finally, at extremely low computational cost, the DVS encoder generates DVS data through DVS encoding. This effectively solves the problem that optical flow motion estimation makes DVS data generation computationally expensive, and realizes DVS data generation with little computation.
  • Fig. 1 is a flowchart of the DVS data generation method based on AVS motion estimation coding of the present invention.
  • Fig. 2 is a schematic diagram of the AVS motion estimation coding involved in the method.
  • Fig. 3 is a schematic diagram of simulating DVS data in the present invention.
  • The principle of the present invention is to use the AVS motion estimation coding feature to obtain motion estimation from the AVS video stream and thereby generate DVS data: an AVS codec encodes and decodes the video stream to obtain motion estimation between video frames, and these motion estimates are used to generate DVS data, achieving DVS data generation at low computational cost.
  • Read the video: using the AVS codec standard, uniformly encode the video into the AVS video coding format and decode it with the AVS decoder to obtain the AVS-decoded video data stream.
  • The motion estimation coding of each encoder may differ, so a unified motion estimation coding specification is required.
  • The AVS codec standard has a mature motion estimation coding specification and, being a relatively new coding standard, has absorbed much experience and enjoys a latecomer's advantage in the coding field. Therefore, in the present invention, the AVS codec is used to encode and decode the video: video in other formats is converted with the AVS encoder and decoded with the AVS decoder to obtain the AVS-decoded video data stream.
  • Fig. 2 is a schematic diagram of the AVS motion estimation coding involved in the present invention.
  • In Fig. 2, the reference frame has a 4*4 pixel block. The difference between the predicted frame and the reference frame is that the pixel block has moved from the upper-left corner of the image to the lower-right corner, while the other pixel areas are unchanged.
  • Estimating the direction and distance of such block motion is usually called motion estimation.
  • Motion estimation can therefore describe the difference between the two frames (predicted frame and reference frame) and the specific location of the change.
  • The object of the present invention is to determine which pixel blocks of the current video frame have changed relative to the adjacent previous frame (i.e., the positions where pixel blocks changed), together with the motion direction and timestamp of the corresponding pixel blocks.
  • AVS motion estimation includes a motion vector, which comprises a motion direction and a motion distance. It is therefore easy to compute which pixel blocks in the current frame have changed relative to the previous adjacent frame: with the same reference frame, the sum of the motion vectors is the change; with different reference frames, the sum of the motion vectors plus the difference between the two reference frames is the change.
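The summation rule above can be sketched as follows (the function name and tuple layout are assumptions for illustration; the reference-frame difference is reduced to a single offset vector here):

```python
from typing import List, Tuple

Vector = Tuple[int, int]  # (dx, dy) displacement in pixels

def block_displacement(mvs: List[Vector],
                       ref_offset: Vector = (0, 0)) -> Vector:
    """Net displacement of a pixel block between two predicted frames.

    mvs        -- the block's motion vectors in the frames being compared
    ref_offset -- the difference between the two reference frames; leave
                  it (0, 0) when both predictions share one reference frame.
    """
    dx = sum(v[0] for v in mvs) + ref_offset[0]
    dy = sum(v[1] for v in mvs) + ref_offset[1]
    return (dx, dy)
```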
  • The corresponding timestamp can be simulated during data calculation from the frame rate of the video, using the formula t_n = n / F, where t_n is the timestamp of the n-th frame, n is the frame index, and F is the video frame rate.
  • The size of the pixel block is adaptive in the AVS encoder and can be 4*4, 8*8, 16*16, etc.
  • DVS data consists of position, motion direction, and timestamp, where position is a required element and motion direction and timestamp are optional elements.
  • In older methods, the video frames are decoded and fed into an optical flow model, which predicts the position and direction of object motion.
  • Optical flow is computationally heavy, especially optical flow based on deep convolutional neural networks, and consumes a great deal of computing power.
  • In step S2, the changed pixel block positions, the corresponding motion directions, and the simulated timestamps are obtained at very low computational cost. These S2 outputs are simply used as the input data of the DVS encoder, which, after encoding, outputs the DVS data, thereby simulating DVS data (as shown in Fig. 3). This completes all operations of DVS data generation.
  • The optical flow method is computationally complex and time-consuming: it predicts object motion by computing the residual of two image frames.
  • The residual calculation is extremely time-consuming.
  • The method of the present invention exploits the motion estimation coding performed during AVS encoding: it computes the residual of the motion estimation of adjacent predicted frames sharing the same reference frame and uses the residual to simulate DVS data, directly reusing the residual already computed during video encoding.
  • By contrast, when a coded video is input, the optical flow method must fully decode it into images before computing the residual. The method of the present invention uses the residual already computed during video encoding and needs no residual calculation of its own.
  • The method therefore omits the most expensive step, the residual calculation, so its computational cost is far smaller than that of the optical flow method, solving the heavy-computation problem caused by optical flow prediction in existing methods.
  • The DVS data generation method based on AVS motion estimation coding is suitable for computer vision sensors.
  • A DVS mainly captures pixel changes within its field of view, in particular those caused by object motion, and derives motion information by accumulating these changes.
  • In the AVS standard, video coding has a motion estimation function, and the motion vectors of pixel blocks that change between video frames are computed by the motion estimation module.
  • The method of the present invention can effectively generate simulated DVS data with little computation, solving the problem that generating DVS data from video via optical flow motion estimation is computationally expensive, and generating DVS data quickly.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method for generating DVS data based on AVS motion estimation coding: read the video (S1); obtain motion estimation (S2): obtain the motion estimation of adjacent predicted frames sharing the same reference frame and compute the residual of the two adjacent predicted frames; and generate DVS data: using the similarity between AVS and DVS motion estimation, generate DVS data from the residual of the adjacent predicted frames (S3). Simulated DVS data is generated effectively at low computational cost, solving the problem that generating DVS data from video via optical flow motion estimation is computationally expensive, and generating DVS data quickly.

Description

Method for generating DVS data based on AVS motion estimation coding
Technical field
The present invention relates to the field of computer vision, and in particular to a method for generating DVS data based on AVS motion estimation coding.
Background art
With the popularization of smart terminal devices, the sensors carried by these devices have become increasingly diverse. The DVS (Dynamic Vision Sensor, also known as an event camera) is a computer vision sensor that has emerged in recent years. A DVS mainly captures pixel changes within its field of view, in particular those caused by object motion, and derives motion information by accumulating these changes. Unlike an ordinary camera, a DVS does not return color images but an event map, i.e., the position, direction, and timestamp of object motion within the observed area. It is mainly used to capture motion and change, and is favored by industry for its speed, good privacy protection, and small data volume. However, since DVS has not yet seen large-scale commercial application, little DVS data has been collected, while neural network algorithms based on deep learning require massive training data sets when DVS-related algorithms are designed and trained. At present, optical flow algorithms are mostly used to simulate DVS data, but optical flow computation consumes a great deal of computing power. Meanwhile, the Audio Video coding Standard (AVS), led by China, is being gradually adopted, and the third-generation AVS3 standard has been released. In the AVS standard, video coding has a motion estimation function, and the motion vectors of pixel blocks that change between video frames are computed by the motion estimation module.
In the prior art, video stream data is generally decoded into complete image frames, after which an optical flow algorithm performs motion estimation on the video. This decodes a large amount of redundant data, and the optical flow algorithm, especially deep-neural-network-based optical flow, consumes enormous computation.
Summary of the invention
The present invention provides a method for generating DVS data based on AVS motion estimation coding. Using the characteristics of AVS motion estimation coding, simulated DVS data is generated: RGB video is encoded with an AVS encoder to obtain motion vectors, and the motion vectors are encoded by a DVS encoder to produce simulated DVS data, so that DVS data is generated at low computational cost. The method can effectively generate simulated DVS data with little computation, solving the problem that generating DVS data from video via optical flow motion estimation is computationally expensive, and generating DVS data quickly.
The technical solution of the present invention is as follows:
A method for generating DVS data based on AVS motion estimation coding comprises the following steps: S1, read the video; S2, obtain motion estimation: obtain the motion estimation of adjacent predicted frames sharing the same reference frame and compute the residual of the two adjacent predicted frames; and S3, generate DVS data: using the similarity between AVS and DVS motion estimation, generate DVS data from the residual of the adjacent predicted frames.
Preferably, in the above method, in step S1, an AVS codec is used to encode and decode the video: video in other formats is converted with the AVS encoder, and the AVS decoder decodes the video to obtain the AVS-decoded video data stream.
Preferably, in the above method, in step S2, the AVS decoder's ability to read motion estimation vectors is used to obtain the motion estimation vectors of the video frames and compute the residual of the two adjacent predicted frames.
Preferably, in the above method, in step S2, the positions where pixel blocks change in the current video frame relative to the adjacent previous frame are determined, together with the motion direction and timestamp of the corresponding pixel blocks.
Preferably, in the above method, in step S2, during data calculation a timestamp is simulated from the frame rate of the video, using the formula:
t_n = n / F
where t_n is the timestamp of the n-th frame, n is the frame index, and F is the video frame rate.
Preferably, in the above method, in step S3, the pixel block positions produced in step S2, the corresponding motion directions, and the simulated timestamps are used as input data of the DVS encoder; after encoding by the DVS encoder, the DVS data is output.
The technical solution of the present invention has the following beneficial effects:
The present invention provides a DVS data generation method based on AVS motion estimation coding. In the DVS data generation process, the optical flow method is no longer used; instead, following the motion estimation characteristics of AVS encoding and decoding, the block-coding motion estimation of the video encoder is used: the motion estimation coding function of the AVS coding standard quickly obtains motion estimation vectors from the video stream, the vectors obtained by the AVS decoder serve as the motion estimation vectors required for DVS coding, and finally, at extremely low computational cost, the DVS encoder generates DVS data through DVS encoding. This effectively solves the problem that optical flow motion estimation makes DVS data generation computationally expensive, and realizes DVS data generation with little computation.
For a better understanding of the concept, working principle, and effects of the invention, the invention is described in detail below through specific embodiments with reference to the accompanying drawings.
Brief description of the drawings
To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below.
Fig. 1 is a flowchart of the DVS data generation method based on AVS motion estimation coding of the present invention;
Fig. 2 is a schematic diagram of the AVS motion estimation coding involved in the DVS data generation method based on AVS motion estimation coding of the present invention; and
Fig. 3 is a schematic diagram of simulating DVS data in the present invention.
Detailed description of the embodiments
To make the purpose, technical methods, and advantages of the invention clearer, the invention is further described in detail below with reference to the drawings and specific examples. These examples are merely illustrative and do not limit the invention.
The principle of the present invention is to use the AVS motion estimation coding feature to obtain motion estimation from the AVS video stream and thereby generate DVS data: the invention uses an AVS codec to encode and decode the video stream to obtain motion estimation between video frames, and uses these motion estimates to generate DVS data, achieving DVS data generation at low computational cost.
As shown in Fig. 1, the specific steps of the DVS data generation method based on AVS motion estimation coding of the invention are as follows:
S1. Read the video: using the AVS codec standard, uniformly encode the video into the AVS video coding format and decode it with the AVS decoder to obtain the AVS-decoded video data stream.
At present there are many common video codecs, such as MPEG-4, H.264, and AVS. In inter-frame predictive block coding, the motion estimation coding of each codec may differ, so the invention requires a unified motion estimation coding specification. The AVS codec standard has a mature motion estimation coding specification and, being a relatively new coding standard, has absorbed much experience and enjoys a latecomer's advantage in the coding field. Therefore, the invention uses an AVS codec to encode and decode the video: video in other formats is converted with the AVS encoder and decoded with the AVS decoder to obtain the AVS-decoded video data stream.
S2. Obtain motion estimation: obtain the motion estimation of adjacent predicted frames sharing the same reference frame and compute the residual of the two adjacent predicted frames. The AVS decoder's ability to read motion estimation vectors is used to obtain the motion estimation vectors of the video frames and compute the residual of the two adjacent predicted frames.
In the AVS coding standard, block coding is used for inter-frame compression, and block coding includes a motion estimation function. Fig. 2 is a schematic diagram of the AVS motion estimation coding involved in the invention. As shown in Fig. 2, the reference frame has a 4*4 pixel block; the difference between the predicted frame and the reference frame is that the pixel block has moved from the upper-left corner of the image to the lower-right corner, while the other pixel areas are unchanged. Estimating the direction and distance of such block motion is usually called motion estimation. Clearly, motion estimation can describe the difference between the two frames (predicted frame and reference frame) and the specific location of the change. The object of the invention is to determine which pixel blocks of the current video frame have changed relative to the adjacent previous frame (i.e., the positions where pixel blocks changed), together with the motion direction and timestamp of the corresponding pixel blocks. AVS motion estimation includes a motion vector, which comprises a motion direction and a motion distance. It is therefore easy to compute which pixel blocks in the current frame have changed relative to the previous adjacent frame: with the same reference frame, the sum of the motion vectors is the change; with different reference frames, the sum of the motion vectors plus the difference between the two reference frames is the change. The corresponding timestamp can be simulated during data calculation from the frame rate of the video, using the formula:
t_n = n / F
where t_n is the timestamp of the n-th frame, n is the frame index, and F is the video frame rate. In addition, the size of the pixel block is adaptive in the AVS encoder and can be 4*4, 8*8, 16*16, etc.
S3. Generate DVS data: using the similarity between AVS and DVS motion estimation, generate DVS data from the residual of the adjacent predicted frames.
DVS data consists of position, motion direction, and timestamp, where position is a required element and motion direction and timestamp are optional elements. In older methods, the video frames are decoded and fed into an optical flow model, which predicts the position and direction of object motion. Optical flow is computationally heavy, especially deep-convolutional-neural-network-based optical flow, and consumes a great deal of computing power. In the method of the invention, step S2 obtains the changed pixel block positions, the corresponding motion directions, and the simulated timestamps at very low computational cost; these S2 outputs simply serve as the input data of the DVS encoder, which, after encoding, outputs the DVS data, thereby simulating DVS data (as shown in Fig. 3). This completes all operations of DVS data generation.
The optical flow method is computationally complex and time-consuming: it predicts object motion by computing the residual of two image frames, and the residual calculation is extremely expensive. The method of the invention exploits the motion estimation coding performed during AVS encoding: it computes the residual of the motion estimation of adjacent predicted frames sharing the same reference frame and uses the residual to simulate DVS data, directly reusing the residual already computed during video encoding. When a coded video is input, the optical flow method must fully decode it into images before computing the residual, whereas the method of the invention uses the residual already computed during video encoding and needs no residual calculation of its own. Compared with the optical flow method, it therefore omits the most expensive step, the residual calculation, and its computational cost is far smaller, solving the heavy-computation problem caused by optical flow prediction in existing methods.
The above description presents the best embodiment according to the concept and working principle of the invention. The above embodiments should not be construed as limiting the scope of the claims; other embodiments and combinations of implementations according to the concept of the invention all fall within the scope of protection of the invention.
Industrial applicability
The method for generating DVS data based on AVS motion estimation coding is suitable for computer vision sensors. A DVS mainly captures pixel changes within its field of view, in particular those caused by object motion, and derives motion information by accumulating these changes. In the AVS standard, video coding has a motion estimation function, and the motion vectors of pixel blocks that change between video frames are computed by the motion estimation module. The method of the invention can effectively generate simulated DVS data with little computation, solving the problem that generating DVS data from video via optical flow motion estimation is computationally expensive, and generating DVS data quickly.

Claims (6)

  1. A method for generating DVS data based on AVS motion estimation coding, characterized in that it comprises the following steps:
    S1. read the video;
    S2. obtain motion estimation: obtain the motion estimation of adjacent predicted frames sharing the same reference frame, and compute the residual of the two adjacent predicted frames; and
    S3. generate DVS data: using the similarity between AVS and DVS motion estimation, generate DVS data from the residual of the adjacent predicted frames.
  2. The method for generating DVS data based on AVS motion estimation coding according to claim 1, characterized in that, in step S1, an AVS codec is used to encode and decode said video: video in other formats is converted with the AVS encoder, and the AVS decoder decodes the video to obtain the AVS-decoded video data stream.
  3. The method for generating DVS data based on AVS motion estimation coding according to claim 1, characterized in that, in step S2, the AVS decoder's ability to read motion estimation vectors is used to obtain the motion estimation vectors of the video frames and to compute the residual of said two adjacent predicted frames.
  4. The method for generating DVS data based on AVS motion estimation coding according to claim 1, characterized in that, in step S2, the positions where pixel blocks change in the current video frame relative to the adjacent previous frame are determined, together with the motion direction and timestamp of the corresponding pixel blocks.
  5. The method for generating DVS data based on AVS motion estimation coding according to claim 4, characterized in that, in step S2, during data calculation a timestamp is simulated from the frame rate of the video, using the formula:
    t_n = n / F
    where t_n is the timestamp of the n-th frame, n is the frame index, and F is the video frame rate.
  6. The method for generating DVS data based on AVS motion estimation coding according to claim 1, characterized in that, in step S3, the pixel block positions produced in step S2, the corresponding motion directions, and the simulated timestamps are used as input data of the DVS encoder; after encoding by the DVS encoder, the DVS data is output.
PCT/CN2021/131039 2021-11-16 2021-11-17 Method for generating DVS data based on AVS motion estimation coding WO2023087159A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111352163.2A CN114071156A (zh) 2021-11-16 2021-11-16 Method for generating DVS data based on AVS motion estimation coding
CN202111352163.2 2021-11-16

Publications (1)

Publication Number Publication Date
WO2023087159A1 true WO2023087159A1 (zh) 2023-05-25

Family

ID=80272574

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131039 WO2023087159A1 (zh) 2021-11-16 2021-11-17 Method for generating DVS data based on AVS motion estimation coding

Country Status (2)

Country Link
CN (1) CN114071156A (zh)
WO (1) WO2023087159A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006118383A1 (en) * 2005-04-29 2006-11-09 Samsung Electronics Co., Ltd. Video coding method and apparatus supporting fast fine granular scalability
CN101272488A (zh) * 2007-03-23 2008-09-24 展讯通信(上海)有限公司 降低lcd显示运动模糊的视频解码方法和装置
CN101835047A (zh) * 2010-04-30 2010-09-15 中山大学 一种基于残差下降率的快速UMHexagonS运动估计算法
EP2343901A1 (en) * 2010-01-08 2011-07-13 Research In Motion Limited Method and device for video encoding using predicted residuals
US20110170598A1 (en) * 2010-01-08 2011-07-14 Xun Shi Method and device for video encoding using predicted residuals
CN102387360A (zh) * 2010-09-02 2012-03-21 乐金电子(中国)研究开发中心有限公司 视频编解码帧间图像预测方法及视频编解码器
CN102843554A (zh) * 2011-06-21 2012-12-26 乐金电子(中国)研究开发中心有限公司 帧间图像预测编解码方法及视频编解码器
CN109819263A (zh) * 2017-11-22 2019-05-28 腾讯科技(深圳)有限公司 视频编码方法、装置、计算机设备及存储介质


Also Published As

Publication number Publication date
CN114071156A (zh) 2022-02-18

Similar Documents

Publication Publication Date Title
CN111405283B (zh) 基于深度学习的端到端视频压缩方法、***及存储介质
JP6843239B2 (ja) 符号化ユニットの深さ特定方法及び装置
CN103096055B (zh) 一种图像信号帧内预测及解码的方法和装置
WO2016141609A1 (zh) 图像预测方法和相关设备
WO2016065873A1 (zh) 图像预测方法及相关装置
WO2016131229A1 (zh) 用于视频图像编码和解码的方法、编码设备和解码设备
WO2023016155A1 (zh) 图像处理方法、装置、介质及电子设备
CN103826125B (zh) 用于已压缩监控视频的浓缩分析方法和装置
CN107155112A (zh) 一种多假设预测的压缩感知视频处理方法
CN105554502A (zh) 基于前景背景分离的分布式压缩感知视频编解码方法
CN111310594B (zh) 一种基于残差纠正的视频语义分割方法
CN103051891B (zh) 确定数据流内分块预测编码的视频帧的块的显著值的方法和装置
Guo et al. Learning cross-scale weighted prediction for efficient neural video compression
CN116600119B (zh) 视频编码、解码方法、装置、计算机设备和存储介质
WO2023087159A1 (zh) 一种基于avs运动估计编码的dvs数据生成方法
WO2021031225A1 (zh) 一种运动矢量导出方法、装置及电子设备
CN113709483B (zh) 一种插值滤波器系数自适应生成方法及装置
CN103997635B (zh) 自由视点视频的合成视点失真预测方法及编码方法
Luo et al. Super-High-Fidelity Image Compression via Hierarchical-ROI and Adaptive Quantization
Wang et al. A fast perceptual surveillance video coding (PSVC) based on background model-driven JND estimation
CN112672150A (zh) 基于视频预测的视频编码方法
Liu et al. BIRD-PCC: Bi-Directional Range Image-Based Deep Lidar Point Cloud Compression
Mamou et al. Multi-chart geometry video: A compact representation for 3D animations
CN110944211A (zh) 用于帧内预测的插值滤波方法、装置、介质及电子设备
CN116828184B (zh) 视频编码、解码方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21964325

Country of ref document: EP

Kind code of ref document: A1