CN111709945A - Video copy detection method based on deep local features - Google Patents

Video copy detection method based on deep local features

Info

Publication number
CN111709945A
Authority
CN
China
Prior art keywords
video
fusion
extracting
feature map
local features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010691138.6A
Other languages
Chinese (zh)
Other versions
CN111709945B (en)
Inventor
贾宇
张家亮
董文杰
曹亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanglian Anrui Network Technology Co ltd
Original Assignee
Chengdu 30kaitian Communication Industry Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu 30kaitian Communication Industry Co ltd filed Critical Chengdu 30kaitian Communication Industry Co ltd
Priority to CN202010691138.6A priority Critical patent/CN111709945B/en
Publication of CN111709945A publication Critical patent/CN111709945A/en
Application granted granted Critical
Publication of CN111709945B publication Critical patent/CN111709945B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video copy detection method based on deep local features, which comprises the following steps: (1) extracting frame images from video data and constructing an image pyramid at different scales; (2) constructing a deep convolutional neural network model that extracts feature maps from the input image pyramid and fuses them into a fused feature map; (3) training the deep convolutional neural network model by metric learning; (4) extracting the fused feature map from the image pyramid with the trained deep convolutional neural network model; (5) extracting key points from the fused feature map by maximum suppression, and extracting the corresponding local features at those key points; (6) performing video copy detection according to the local features. The method extracts features faster and represents local content more strongly than traditional local feature algorithms, so it can accurately detect copied videos that have undergone a variety of complex transformations, and it is highly robust.

Description

Video copy detection method based on deep local features
Technical Field
The invention relates to the technical field of multimedia information processing, in particular to a video copy detection method based on deep local features.
Background
In the current mobile internet era, the complexity of multimedia video data, the emergence of all kinds of video editing software, and the breadth of video sources make it increasingly difficult to keep tampered video data from spreading unchecked. Network supervision departments that want to supervise online multimedia video data effectively cannot rely on manual review and user reports alone.
Current solutions rely on traditional image processing or on global feature extraction. Traditional algorithms process video inefficiently and with low accuracy; global feature extraction handles ordinarily edited video well, but struggles with edited video that has undergone a variety of complex transformations. Both approaches therefore fall short for the multimedia video now circulating on the Internet.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the above problems, a video copy detection method based on deep local features is provided.
The technical scheme adopted by the invention is as follows:
a video copy detection method based on depth local features comprises the following steps:
(1) extracting frame images from video data, and constructing an image pyramid by using different scales;
(2) constructing a deep convolutional neural network model, extracting a feature map from an input image pyramid, and performing feature fusion on the feature map to obtain a fusion feature map;
(3) training the deep convolutional neural network model by using a metric learning mode;
(4) extracting a fusion characteristic graph from the image pyramid by using the trained deep convolution neural network model;
(5) extracting key points from the fusion feature map by using maximum suppression, and extracting corresponding local features according to the key points;
(6) and performing video copy detection according to the local characteristics.
Further, the deep convolutional neural network model is a fully convolutional model comprising n-1 convolutional layers and 1 fusion convolutional layer, wherein:
the (n-i)th to (n-1)th convolutional layers are used to extract feature maps from the input image pyramid;
the fusion convolutional layer is used to fuse the feature maps extracted by the (n-i)th to (n-1)th convolutional layers into a fused feature map; 2 ≤ i ≤ n-1, and both i and n are integers.
Furthermore, each of the (n-i)th to (n-1)th convolutional layers has 128 convolution channels.
Further, the (n-1)th convolutional layer has a 1 × 1 convolution kernel used to convolve the feature map down to 1 × 1 size, and the feature map output by this layer serves as the global feature for model training.
Further, step (6) comprises the following sub-steps:
(6.1) performing steps (1) to (5) on the library video to obtain the local features of the library video;
(6.2) performing steps (1) to (5) on the video to be detected to obtain its local features;
(6.3) performing random sample consensus spatial verification between the local features of the video to be detected and the local features of the library video, and filtering out non-relevant matching points;
(6.4) calculating the similarity over the remaining matching points;
(6.5) sorting the similarity results to obtain the source video data result.
Preferably, the similarity is calculated by vector inner product.
Preferably, the frame images extracted from the video data in step (1) are key frame images.
In summary, by adopting the above technical scheme, the invention achieves the following beneficial effects:
The method extracts a fused feature map with a deep convolutional neural network model and obtains key points by maximum suppression, so it can extract efficient local features that describe video frame images comprehensively. Compared with traditional local feature extraction algorithms, extraction is faster and the local features are more expressive, so copied videos that have undergone a variety of complex transformations can be detected accurately and robustly. This gives network supervision departments a feasible technical scheme for supervising the large amount of tampered multimedia video data spread wantonly across the Internet.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a video copy detection method based on a deep local feature according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a deep convolutional neural network model according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of key point and local feature extraction of the present invention.
Fig. 4 is a diagram of the effect of video copy detection according to the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples indicate preferred embodiments, are intended for purposes of illustration only, and do not limit the scope of the invention. The components of the embodiments generally described and illustrated in the figures may be arranged and designed in a wide variety of configurations, so the following detailed description does not limit the claimed scope but merely represents selected embodiments. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The techniques involved in the present invention are explained as follows:
convolutional Neural Networks (CNN) are a class of feed forward Neural Networks (fed forward Neural Networks) that contain convolution computations and have a deep structure, and are one of the representative algorithms for deep learning (deep).
Metric Learning (Metric Learning) is an algorithm that is central in the tasks of fine-grained classification, retrieval, face, etc., and can learn the subtle differences of images through training.
The features and properties of the present invention are described in further detail below with reference to examples.
As shown in fig. 1, the video copy detection method based on deep local features provided in this embodiment includes the following steps:
S1, extracting frame images from the video data, and then constructing an image pyramid at different scales;
Video data is a collection of images over time, so a video can be processed by extracting frame images. Extracting every frame on the time scale, however, produces a great deal of redundant information, so it is preferable to extract key frame images. Key frame extraction exploits the correlation between video frames: of several similar frames, only one is kept, which reduces redundancy and improves the visual representation of the video data. For example, key frames can be selected by judging the format and content of the video frames, comparing features such as color, texture, and structure, and filtering out similar pictures so that only one frame is extracted per scene; this part is prior art and is not repeated here. A sketch of this step is given below.
S2, constructing a deep convolutional neural network model that extracts feature maps from the input image pyramid and fuses them into a fused feature map;
As shown in fig. 2, the deep convolutional neural network model is a fully convolutional model comprising n-1 convolutional layers and 1 fusion convolutional layer, with no pooling layer, so as to retain as much of the original image information as possible; wherein:
the (n-i)th to (n-1)th convolutional layers are used to extract feature maps from the input image pyramid;
the fusion convolutional layer is used to fuse the feature maps extracted by the (n-i)th to (n-1)th convolutional layers into a fused feature map; 2 ≤ i ≤ n-1, and both i and n are integers. That is, the fusion convolutional layer fuses the feature maps of the last several convolutional layers.
In some embodiments, each of the (n-i)th to (n-1)th convolutional layers has 128 convolution channels, so that the dimension of the subsequently extracted local features stays at 128 and the feature maps extracted by these layers are normalized to a common scale, which strengthens the information in the fused feature map.
In some embodiments, the (n-1)th convolutional layer has a 1 × 1 convolution kernel used to convolve the feature map down to 1 × 1 size, and the feature map output by this layer serves as the global feature for model training. A sketch of such a model is given below.
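For illustration only, a minimal PyTorch sketch of the model of fig. 2, assuming n = 5 and i = 3; the 3 × 3 kernels and strides of the early layers, and the pooling that stands in for reducing the last layer's output to 1 × 1, are assumptions of this example:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionFCN(nn.Module):
    """Fully convolutional model: n-1 conv layers plus 1 fusion conv layer, no pooling layer."""

    def __init__(self, n=5, i=3, channels=128):
        super().__init__()
        self.i = i
        # Layers 1 .. n-2: ordinary 3x3 convolutions (strides are an illustrative choice).
        self.body = nn.ModuleList()
        in_ch = 3
        for _ in range(n - 2):
            self.body.append(nn.Conv2d(in_ch, channels, kernel_size=3, stride=2, padding=1))
            in_ch = channels
        # Layer n-1: 1x1 convolution kernel, as the description specifies.
        self.last = nn.Conv2d(channels, channels, kernel_size=1)
        # Fusion convolution over the concatenated maps of layers n-i .. n-1.
        self.fuse = nn.Conv2d(channels * i, channels, kernel_size=1)

    def forward(self, x):
        feats = []
        for conv in self.body:
            x = F.relu(conv(x))
            feats.append(x)
        x = self.last(x)
        feats.append(x)
        # Fuse the last i feature maps after resizing them to a common scale.
        tail = feats[-self.i:]
        size = tail[0].shape[-2:]
        tail = [F.interpolate(f, size=size, mode='bilinear', align_corners=False)
                for f in tail]
        fused = self.fuse(torch.cat(tail, dim=1))  # the fused feature map of step S2
        # Stand-in for the patent's reduce-to-1x1 output: a global feature for training.
        global_feat = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return fused, global_feat

# Example: fused, g = FusionFCN()(torch.randn(1, 3, 224, 224))
```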
S3, training the deep convolutional neural network model by metric learning;
Metric learning makes the model learn the subtle differences between images and thereby improves detection precision. Specifically, the ArcFace loss function, which incorporates angular information, is adopted; unlike the traditional Triplet Loss, the model converges easily and the learned information is richer. An illustrative sketch of this loss is given below.
S4, extracting the fused feature map from the image pyramid with the trained deep convolutional neural network model;
S5, as shown in fig. 3, extracting key points from the fused feature map by maximum suppression, and extracting the corresponding local features at those key points (a sketch is given below);
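For illustration only, a minimal sketch of step S5, reading "maximum suppression" as keeping only the local maxima of a per-location response computed from the fused feature map; the 3 × 3 window and the score threshold are assumptions of this example:

```python
import torch
import torch.nn.functional as F

def extract_local_features(fused, score_threshold=0.1, window=3):
    """fused: (1, C, H, W) fused feature map -> (K, 2) key points and (K, C) descriptors."""
    # Per-location response: the L2 norm of the C-dimensional activation.
    score = fused.norm(dim=1, keepdim=True)                    # (1, 1, H, W)
    local_max = F.max_pool2d(score, window, stride=1, padding=window // 2)
    keep = (score == local_max) & (score > score_threshold)    # suppress non-maxima
    ys, xs = torch.nonzero(keep[0, 0], as_tuple=True)
    descriptors = fused[0, :, ys, xs].t()                      # one local feature per key point
    descriptors = F.normalize(descriptors, dim=1)              # unit length: inner product = cosine
    return torch.stack([ys, xs], dim=1), descriptors
```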
s6, video copy detection is carried out according to the local features:
s61, obtaining the local characteristics of the library video through the steps S1-S5, wherein the local characteristics can be understood as a local characteristic library of the library video which is configured in advance and used for detecting the video to be detected subsequently;
s62, processing the video to be detected by steps S1-S5 to obtain the local characteristics of the video; it should be noted that, if the pyramid is constructed on the key frame image and the local feature is obtained for the library video, the pyramid is constructed on the key frame image and the local feature is obtained for the video to be detected;
s63, performing random consistency spatial validation (RANSAC) on the local features of the video to be detected and the local features of the library video, and filtering out non-relevant matching points;
s64, calculating similarity according to the residual matching points by adopting a vector inner product mode;
s65, sorting the similarity calculation results to obtain the source video data result, as shown in fig. 4.
As can be seen from the above, the present invention has the following beneficial effects:
The method extracts a fused feature map with a deep convolutional neural network model and obtains key points by maximum suppression, so it can extract efficient local features that describe video frame images comprehensively. Compared with traditional local feature extraction algorithms, extraction is faster and the local features are more expressive, so copied videos that have undergone a variety of complex transformations can be detected accurately and robustly. This gives network supervision departments a feasible technical scheme for supervising the large amount of tampered multimedia video data spread wantonly across the Internet.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (7)

1. A video copy detection method based on deep local features, characterized by comprising the following steps:
(1) extracting frame images from video data and constructing an image pyramid at different scales;
(2) constructing a deep convolutional neural network model that extracts feature maps from the input image pyramid and performs feature fusion on them to obtain a fused feature map;
(3) training the deep convolutional neural network model by metric learning;
(4) extracting the fused feature map from the image pyramid with the trained deep convolutional neural network model;
(5) extracting key points from the fused feature map by maximum suppression, and extracting the corresponding local features at those key points;
(6) performing video copy detection according to the local features.
2. The video copy detection method based on deep local features according to claim 1, wherein the deep convolutional neural network model is a fully convolutional model comprising n-1 convolutional layers and 1 fusion convolutional layer, wherein:
the (n-i)th to (n-1)th convolutional layers are used to extract feature maps from the input image pyramid;
the fusion convolutional layer is used to fuse the feature maps extracted by the (n-i)th to (n-1)th convolutional layers into a fused feature map; 2 ≤ i ≤ n-1, and both i and n are integers.
3. The video copy detection method based on deep local features according to claim 2, wherein each of the (n-i)th to (n-1)th convolutional layers has 128 convolution channels.
4. The video copy detection method based on deep local features according to claim 2, wherein the (n-1)th convolutional layer has a 1 × 1 convolution kernel used to convolve the feature map down to 1 × 1 size, and the feature map output by this layer serves as the global feature for model training.
5. The video copy detection method based on deep local features according to claim 1, wherein step (6) comprises the following sub-steps:
(6.1) performing steps (1) to (5) on the library video to obtain the local features of the library video;
(6.2) performing steps (1) to (5) on the video to be detected to obtain its local features;
(6.3) performing random sample consensus spatial verification between the local features of the video to be detected and the local features of the library video, and filtering out non-relevant matching points;
(6.4) calculating the similarity over the remaining matching points;
(6.5) sorting the similarity results to obtain the source video data result.
6. The video copy detection method based on deep local features according to claim 5, wherein the similarity is calculated by vector inner product.
7. The video copy detection method based on deep local features according to any one of claims 1 to 6, wherein the frame images extracted from the video data in step (1) are key frame images.
CN202010691138.6A 2020-07-17 2020-07-17 Video copy detection method based on deep local features Active CN111709945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010691138.6A CN111709945B (en) Video copy detection method based on deep local features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010691138.6A CN111709945B (en) Video copy detection method based on deep local features

Publications (2)

Publication Number Publication Date
CN111709945A true CN111709945A (en) 2020-09-25
CN111709945B CN111709945B (en) 2023-06-30

Family

ID=72546636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010691138.6A Active CN111709945B (en) Video copy detection method based on deep local features

Country Status (1)

Country Link
CN (1) CN111709945B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI776668B (en) * 2021-09-07 2022-09-01 台達電子工業股份有限公司 Image processing method and image processing system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376003A (en) * 2013-08-13 2015-02-25 深圳市腾讯计算机***有限公司 Video retrieval method and device
CN106845499A (en) * 2017-01-19 2017-06-13 清华大学 A kind of image object detection method semantic based on natural language
CN106991373A (en) * 2017-03-02 2017-07-28 中国人民解放军国防科学技术大学 A kind of copy video detecting method based on deep learning and graph theory
CN108197566A (en) * 2017-12-29 2018-06-22 成都三零凯天通信实业有限公司 Monitoring video behavior detection method based on multi-path neural network
US20190279014A1 (en) * 2016-12-27 2019-09-12 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting object keypoint, and electronic device
CN110781350A (en) * 2019-09-26 2020-02-11 武汉大学 Pedestrian retrieval method and system oriented to full-picture monitoring scene
CN111126412A (en) * 2019-11-22 2020-05-08 复旦大学 Image key point detection method based on characteristic pyramid network
WO2020098225A1 (en) * 2018-11-16 2020-05-22 北京市商汤科技开发有限公司 Key point detection method and apparatus, electronic device and storage medium
CN111241338A (en) * 2020-01-08 2020-06-05 成都三零凯天通信实业有限公司 Depth feature fusion video copy detection method based on attention mechanism
CN111275044A (en) * 2020-02-21 2020-06-12 西北工业大学 Weak supervision target detection method based on sample selection and self-adaptive hard case mining

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104376003A (en) * 2013-08-13 2015-02-25 深圳市腾讯计算机***有限公司 Video retrieval method and device
US20190279014A1 (en) * 2016-12-27 2019-09-12 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting object keypoint, and electronic device
CN106845499A (en) * 2017-01-19 2017-06-13 清华大学 A kind of image object detection method semantic based on natural language
CN106991373A (en) * 2017-03-02 2017-07-28 中国人民解放军国防科学技术大学 A kind of copy video detecting method based on deep learning and graph theory
CN108197566A (en) * 2017-12-29 2018-06-22 成都三零凯天通信实业有限公司 Monitoring video behavior detection method based on multi-path neural network
WO2020098225A1 (en) * 2018-11-16 2020-05-22 北京市商汤科技开发有限公司 Key point detection method and apparatus, electronic device and storage medium
CN110781350A (en) * 2019-09-26 2020-02-11 武汉大学 Pedestrian retrieval method and system oriented to full-picture monitoring scene
CN111126412A (en) * 2019-11-22 2020-05-08 复旦大学 Image key point detection method based on characteristic pyramid network
CN111241338A (en) * 2020-01-08 2020-06-05 成都三零凯天通信实业有限公司 Depth feature fusion video copy detection method based on attention mechanism
CN111275044A (en) * 2020-02-21 2020-06-12 西北工业大学 Weak supervision target detection method based on sample selection and self-adaptive hard case mining

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI776668B (en) * 2021-09-07 2022-09-01 台達電子工業股份有限公司 Image processing method and image processing system

Also Published As

Publication number Publication date
CN111709945B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
Torralba et al. Labelme: Online image annotation and applications
Ding et al. Point cloud saliency detection by local and global feature fusion
Parikh et al. Exploring tiny images: The roles of appearance and contextual information for machine and human object recognition
CN112232134B (en) Human body posture estimation method based on hourglass network and attention mechanism
CN111241338B (en) Depth feature fusion video copy detection method based on attention mechanism
CN111754396A (en) Face image processing method and device, computer equipment and storage medium
CN111079539A (en) Video abnormal behavior detection method based on abnormal tracking
CN113761359B (en) Data packet recommendation method, device, electronic equipment and storage medium
CN112818904A (en) Crowd density estimation method and device based on attention mechanism
CN116071709A (en) Crowd counting method, system and storage medium based on improved VGG16 network
CN115131218A (en) Image processing method, image processing device, computer readable medium and electronic equipment
Niu et al. Image retargeting quality assessment based on registration confidence measure and noticeability-based pooling
CN110163095B (en) Loop detection method, loop detection device and terminal equipment
Weng et al. A survey on improved GAN based image inpainting
CN111709945A Video copy detection method based on deep local features
Zheng et al. Pose flow learning from person images for pose guided synthesis
CN114998814B (en) Target video generation method and device, computer equipment and storage medium
Xi et al. Reconstructing piecewise planar scenes with multi-view regularization
Zeng et al. Multi-view self-supervised learning for 3D facial texture reconstruction from single image
CN114329050A (en) Visual media data deduplication processing method, device, equipment and storage medium
CN111695526B (en) Network model generation method, pedestrian re-recognition method and device
Yin Albert et al. Identifying and Monitoring Students’ Classroom Learning Behavior Based on Multisource Information
Liu 3DSportNet: 3D sport reconstruction by quality-aware deep multi-video summation
Jam et al. V-LinkNet: Learning Contextual Inpainting Across Latent Space of Generative Adversarial Network
Raj Learning Augmentation Policy Schedules for Unsuperivsed Depth Estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220517

Address after: 518000 22nd floor, building C, Shenzhen International Innovation Center (Futian science and Technology Plaza), No. 1006, Shennan Avenue, Xintian community, Huafu street, Futian District, Shenzhen, Guangdong Province

Applicant after: Shenzhen wanglian Anrui Network Technology Co.,Ltd.

Address before: Floor 4-8, unit 5, building 1, 333 Yunhua Road, high tech Zone, Chengdu, Sichuan 610041

Applicant before: CHENGDU 30KAITIAN COMMUNICATION INDUSTRY Co.,Ltd.

GR01 Patent grant