CN109788346B - Video file configuration analysis method - Google Patents

Video file configuration analysis method

Info

Publication number
CN109788346B
CN109788346B (application CN201810944771.4A)
Authority
CN
China
Prior art keywords
pixel point
video
equipment
file
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810944771.4A
Other languages
Chinese (zh)
Other versions
CN109788346A (en
Inventor
朱丽萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Liangzi Technology Co ltd
Original Assignee
Shenzhen Liangzi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Liangzi Technology Co ltd filed Critical Shenzhen Liangzi Technology Co ltd
Priority to CN201810944771.4A priority Critical patent/CN109788346B/en
Publication of CN109788346A publication Critical patent/CN109788346A/en
Application granted granted Critical
Publication of CN109788346B publication Critical patent/CN109788346B/en

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a video file configuration and analysis method in which a video file is analyzed by a video file configuration and analysis system. The system comprises: an on-site display device for displaying red characters corresponding to the children's drama identification signal when that signal is received; a video storage device for pre-storing a video file database, the video file database storing every video folder, each video folder holding a single target file containing the video content together with a configuration file corresponding to that target file; and a folder searching device for acquiring the name of the video file selected by the user, searching the video file database for the corresponding video folder based on that name, acquiring the target file containing the video content from the video folder, and playing the target file.

Description

Video file configuration analysis method
Technical Field
The invention relates to the field of video analysis, in particular to a video file configuration analysis method.
Background
Frequently used video file formats include the AVI file format and the QuickTime file format.
The AVI file format is a digital audio and video file format developed by Microsoft that conforms to the RIFF file specification. It was originally introduced with Microsoft Video for Windows and is directly supported by most operating systems, such as Windows 95/98 and OS/2. The AVI format allows video and audio to be interleaved and played back synchronously, and it supports 256 colors and RLE compression. However, AVI does not prescribe a compression standard: it is only a standard at the container level and offers no codec compatibility, so an AVI file produced with one compression algorithm can only be played back with the corresponding decompression algorithm. Common AVI playback drivers include Microsoft Video 1 in Video for Windows and Windows 95/98, and Indeo Video from Intel.
The QuickTime file format is an audio and video file format developed by Apple for storing audio and video information. It offers advanced video and audio capabilities and supports all mainstream operating system platforms, including Apple Mac OS and Microsoft Windows. The QuickTime file format supports 24-bit color and advanced integrated compression techniques such as RLE and JPEG, provides over 150 video effects, and supplies over 200 MIDI-compatible sounds and devices. QuickTime includes key features for Internet applications: it can deliver real-time digital streaming, workflow, and file playback over the Internet. In addition, QuickTime incorporates a virtual-reality technology called QuickTime VR, with which a user, through interactive mouse or keyboard control, can view a 360-degree scene around a point or observe an object from any angle in space. With its leading multimedia technology, cross-platform support, small storage footprint, independence from technical details, and high openness, QuickTime has gained wide acceptance in the industry.
Disclosure of Invention
The invention provides a video file configuration analysis method, aiming to solve the prior-art technical problem that the various items of video data are difficult to analyze.
The invention has at least the following two important points:
(1) the target file containing video content in the folder is analyzed electronically to identify its video category; in particular, whether the target file is a children's drama is evaluated from the accumulated count of child objects across the video images;
(2) a preset sliding window whose size is mapped from the resolution of the high-definition image is used, and the processing applied to each RGB channel of the pixel point to be processed is determined from the distribution of R-channel values along each direction within the preset sliding window, thereby obtaining the most effective reference values for image filtering.
According to an aspect of the present invention, there is provided a video file configuration parsing method comprising parsing a video file using a video file configuration parsing system, the system comprising: an on-site display device for displaying red characters corresponding to the children's drama identification signal when that signal is received; and a video storage device for pre-storing a video file database, the video file database storing every video folder, each video folder holding a single target file containing the video content together with a configuration file corresponding to that target file.
More specifically, the video file configuration parsing system further comprises:
the folder searching device is used for acquiring the name of the video file selected by the user, searching the video file database for the corresponding video folder based on that name, acquiring the target file containing the video content from the video folder, and playing the target file.
More specifically, the video file configuration parsing system further comprises:
the file analysis device is connected with the folder searching device and is used for acquiring the configuration file corresponding to the target file in the video folder and reading it to obtain the corresponding trailer duration; and the image intercepting device is connected with the folder searching device and the file analysis device respectively, and is used for intercepting every frame in the trailer segment of the target file based on the trailer duration and outputting each such frame as a content extraction image.
More specifically, the video file configuration parsing system further comprises:
the pixel point analysis device is connected with the image intercepting device and is used for receiving the content extraction image and classifying every pixel point in it as a noise point or a non-noise point: the device detects the various kinds of noise in the content extraction image to obtain the noise regions in it, marks every pixel point inside a noise region as a noise point, and marks every pixel point outside the noise regions as a non-noise point. The resolution measuring device is used for receiving the content extraction image, extracting its resolution, and mapping that resolution to a preset sliding window of corresponding size, where the higher the resolution, the larger the side length of the mapped window. The dynamic processing device is connected with the resolution measuring device and is used for acquiring the preset sliding window and applying dynamic filtering to every pixel point in the content extraction image.
The dynamic filtering performed by the dynamic processing device works as follows. Each pixel point of the content extraction image is taken in turn as the object pixel point, and the pixel points inside the preset sliding window whose centroid is the object pixel point are taken as the pixel points to be evaluated. Four mean square errors of R-channel values are then computed over the pixel points to be evaluated with the object pixel point excluded, one along each of the horizontal, vertical, main diagonal, and auxiliary diagonal directions through the object pixel point within the preset sliding window. The minimum of the four mean square errors is found, and the mean R-channel, G-channel, and B-channel values of the pixel points to be evaluated along the direction corresponding to that minimum, again excluding the object pixel point, become the processed R-channel, G-channel, and B-channel values of the object pixel point.
The signal integration device is connected with the dynamic processing device and is used for assembling the processed image corresponding to the content extraction image from the processed R, G, and B channel values of its pixel points. The region segmentation device is connected with the signal integration device and is used for receiving each frame of processed image and performing the following on it: identifying, based on a preset human-body gray threshold, whether each pixel point in the processed image is a human-body pixel point; forming one or more sub-images from those pixel points; and determining, based on child body-shape characteristics, whether each sub-image corresponds to a child object, thereby obtaining the number of child objects in the processed image. The quantity analysis device is connected with the region segmentation device and is used for accumulating the number of child objects over every processed image to obtain the total child count, then dividing that total by the number of processed images to obtain an evaluation reference value. The drama identification device is connected with the on-site display device and the quantity analysis device respectively, and is used for sending out the drama identification signal when the evaluation reference value exceeds the limit value.
Drawings
Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
fig. 1 is a data structure diagram illustrating a content extraction image captured by an image capturing apparatus of a video file configuration parsing system according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Video files are among the most important multimedia content on the Internet. The term mainly refers to multimedia files containing real-time audio and video information, which usually originate from a video input device. Video generally denotes the various techniques for capturing, recording, processing, storing, transmitting, and reproducing a series of still images as electrical signals. When a sequence of images changes at more than 24 frames per second, the human eye, by the persistence-of-vision principle, can no longer distinguish the individual still pictures; the sequence appears as a smooth, continuous visual effect and is therefore called video. Video technology was originally developed for television systems but has since evolved into many different formats that let consumers record video. Advances in networking have also enabled recorded video segments to be streamed over the Internet and received and played by computers. Video and film are different technologies: film uses photography to capture dynamic images as a series of still photographs.
In order to overcome the above defects, the invention provides a video file configuration and analysis method, which comprises analyzing a video file using a video file configuration and analysis system. The system can effectively solve the corresponding technical problems.
Fig. 1 is a data structure diagram illustrating a content extraction image captured by an image capturing apparatus of a video file configuration parsing system according to an embodiment of the present invention.
The video file configuration and analysis system according to the embodiment of the invention comprises:
the on-site display device is used for displaying red characters corresponding to the children's drama identification signal when that signal is received;
the video storage device is used for pre-storing a video file database; the video file database stores every video folder, and each video folder holds a single target file containing the video content together with a configuration file corresponding to that target file.
Next, the detailed configuration of the video file configuration and analysis system according to the present invention will be further described.
The video file configuration parsing system further comprises:
the folder searching device is used for acquiring the name of the video file selected by the user, searching the video file database for the corresponding video folder based on that name, acquiring the target file containing the video content from the video folder, and playing the target file.
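The folder-search step above can be sketched as follows. This is an illustrative reading, not the patentee's code: the on-disk layout (one sub-folder per title, exactly one video file plus one `.cfg` configuration file inside), the file extensions, and the function name are all assumptions.

```python
from pathlib import Path

def find_video(db_root: str, video_name: str):
    """Locate the video folder named after the selected title and return
    (target_file, config_file).  Layout assumed: db_root/<title>/ holds
    exactly one video file and one .cfg configuration file."""
    folder = Path(db_root) / video_name
    if not folder.is_dir():
        raise FileNotFoundError(f"no video folder for '{video_name}'")
    videos = [p for p in folder.iterdir() if p.suffix in (".avi", ".mov", ".mp4")]
    configs = [p for p in folder.iterdir() if p.suffix == ".cfg"]
    if len(videos) != 1 or len(configs) != 1:
        raise ValueError("folder must hold exactly one video and one config file")
    return videos[0], configs[0]
```

The returned target file would then be handed to the player, and the configuration file to the file analysis device described below.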
The video file configuration parsing system further comprises:
the file analysis device is connected with the folder searching device and is used for acquiring the configuration file corresponding to the target file in the video folder and reading it to obtain the corresponding trailer duration;
and the image intercepting device is connected with the folder searching device and the file analysis device respectively, and is used for intercepting every frame in the trailer segment of the target file based on the trailer duration and outputting each such frame as a content extraction image.
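A minimal sketch of the trailer interception, assuming OpenCV for decoding (the patent names no library) and a trailer duration given in seconds. Only the frame-index arithmetic is fixed by the text; everything else is an implementation choice.

```python
def trailer_start_frame(total_frames: int, fps: float, trailer_seconds: float) -> int:
    """Index of the first frame of the trailer segment at the end of the video."""
    return max(0, total_frames - int(round(trailer_seconds * fps)))

def extract_trailer_frames(video_path: str, trailer_seconds: float):
    """Return every frame in the final `trailer_seconds` of the video,
    i.e. the 'content extraction images'."""
    import cv2  # OpenCV (pip install opencv-python); an assumption, not named in the patent
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, trailer_start_frame(total, fps, trailer_seconds))
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()
    return frames
```

For a 40-second video at 25 fps with a 10-second trailer, interception starts at frame 750 of 1000.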
The video file configuration parsing system further comprises:
the pixel point analysis device is connected with the image intercepting device and is used for receiving the content extraction image and classifying every pixel point in it as a noise point or a non-noise point: the device detects the various kinds of noise in the content extraction image to obtain the noise regions in it, marks every pixel point inside a noise region as a noise point, and marks every pixel point outside the noise regions as a non-noise point;
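The patent does not specify the noise detector, so the following is a crude stand-in for illustration only: a pixel is flagged as a noise point when its gray value deviates strongly from the median of its neighbourhood (which catches salt-and-pepper-style outliers). The window size and threshold are arbitrary.

```python
import numpy as np

def noise_mask(gray: np.ndarray, win: int = 3, thresh: int = 40) -> np.ndarray:
    """Boolean mask of noise points: True where the pixel deviates from
    the median of its win x win neighbourhood by more than thresh."""
    pad = win // 2
    padded = np.pad(gray.astype(np.int16), pad, mode="edge")
    h, w = gray.shape
    # Stack the win*win shifted views of the image, then take the per-pixel median.
    stacks = [padded[i:i + h, j:j + w] for i in range(win) for j in range(win)]
    med = np.median(np.stack(stacks), axis=0)
    return np.abs(gray.astype(np.int16) - med) > thresh
```

On a black image with a single bright pixel, only that pixel is flagged.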
the resolution measuring device is used for receiving the content extraction image, extracting its resolution, and mapping that resolution to a preset sliding window of corresponding size, where the higher the resolution of the content extraction image, the larger the side length of the mapped preset sliding window;
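The resolution-to-window mapping could look like the sketch below. The patent only requires the window to grow with resolution; the breakpoints and the odd side lengths here are illustrative assumptions.

```python
def window_side(width: int, height: int) -> int:
    """Map image resolution to an odd sliding-window side length.
    Breakpoints are illustrative; the text only requires monotone growth."""
    pixels = width * height
    if pixels <= 640 * 480:
        return 3
    if pixels <= 1280 * 720:
        return 5
    if pixels <= 1920 * 1080:
        return 7
    return 9
```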
the dynamic processing device is connected with the resolution measuring device and is used for acquiring the preset sliding window and applying dynamic filtering to every pixel point in the content extraction image;
the dynamic filtering that the dynamic processing device applies to every pixel point in the content extraction image works as follows: each pixel point of the content extraction image is taken in turn as the object pixel point, and the pixel points inside the preset sliding window whose centroid is the object pixel point are taken as the pixel points to be evaluated; four mean square errors of R-channel values are then computed over the pixel points to be evaluated with the object pixel point excluded, one along each of the horizontal, vertical, main diagonal, and auxiliary diagonal directions through the object pixel point within the preset sliding window; the minimum of the four mean square errors is found, and the mean R-channel, G-channel, and B-channel values of the pixel points to be evaluated along the direction corresponding to that minimum, again excluding the object pixel point, become the processed R-channel, G-channel, and B-channel values of the object pixel point;
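The directional filtering just described can be sketched as follows. This is an illustrative reading of the text, not the patentee's code: "mean square error" is taken as the variance of the R-channel values, the main diagonal runs lower-left to upper-right as defined later in the document, and border pixels whose window would leave the image are left unchanged for simplicity.

```python
import numpy as np

def directional_filter(img: np.ndarray, side: int = 5) -> np.ndarray:
    """For each interior pixel, pick the window direction (horizontal,
    vertical, main diagonal, anti-diagonal) whose R-channel values vary
    least around the object pixel (object pixel excluded), then replace
    the pixel's R, G and B values with the means along that direction.
    `img` is an H x W x 3 uint8 array in RGB order."""
    assert side % 2 == 1, "window side must be odd"
    r = side // 2
    src = img.astype(np.float64)
    out = src.copy()
    h, w, _ = src.shape
    offsets = [d for d in range((-r), r + 1) if d != 0]   # object pixel excluded
    # (dy, dx) unit steps; main diagonal runs lower-left -> upper-right.
    directions = [(0, 1), (1, 0), (-1, 1), (1, 1)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            best_mse, best_mean = None, None
            for dy, dx in directions:
                line = np.array([src[y + d * dy, x + d * dx] for d in offsets])
                mse = np.var(line[:, 0])                  # spread of R values
                if best_mse is None or mse < best_mse:
                    best_mse, best_mean = mse, line.mean(axis=0)
            out[y, x] = best_mean
    return out.astype(np.uint8)
```

On an image whose rows are constant, the horizontal direction has zero R-channel variance, so every interior pixel keeps its original value: the filter smooths across the least-varying direction, preserving edges that run along it.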
the signal integration device is connected with the dynamic processing device and is used for assembling the processed image corresponding to the content extraction image from the processed R-channel, G-channel, and B-channel values of every pixel point in the content extraction image;
the region segmentation device is connected with the signal integration device and is used for receiving each frame of processed image and performing the following on it: identifying, based on a preset human-body gray threshold, whether each pixel point in the processed image is a human-body pixel point; forming one or more sub-images from those pixel points; and determining, based on child body-shape characteristics, whether each sub-image corresponds to a child object, thereby obtaining the number of child objects in the processed image;
the quantity analysis device is connected with the region segmentation device and is used for accumulating the number of child objects over every processed image to obtain the total child count, then dividing that total by the number of processed images to obtain an evaluation reference value;
and the drama identification device is connected with the on-site display device and the quantity analysis device respectively, and is used for sending out the drama identification signal when the evaluation reference value exceeds the limit value.
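The quantity-analysis and identification steps reduce to a short calculation, sketched below. The per-frame child counts are assumed to come from the region segmentation device; the signal names are illustrative, not from the patent.

```python
def evaluate_drama(child_counts, limit: float) -> str:
    """Accumulate per-frame child counts, divide by the number of processed
    frames to get the evaluation reference value, and emit the identification
    signal when it exceeds the limit value (signal names illustrative)."""
    if not child_counts:
        return "NO_FRAMES"
    reference = sum(child_counts) / len(child_counts)
    return "DRAMA_IDENTIFIED" if reference > limit else "DRAMA_NOT_IDENTIFIED"
```

With counts of 3, 4, and 5 children over three frames the reference value is 4, so a limit of 2 triggers the identification signal.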
In the video file configuration parsing system: the drama identification device is also used for sending out a drama-unidentified signal when the evaluation reference value does not exceed the limit value.
In the video file configuration parsing system: the dynamic processing device comprises a data receiving sub-device, a horizontal direction evaluation sub-device, a vertical direction evaluation sub-device, a main diagonal direction evaluation sub-device, an auxiliary diagonal direction evaluation sub-device, and a data output sub-device.
In the video file configuration parsing system: the main diagonal direction is a direction from the lower left corner of the preset sliding window to the upper right corner of the preset sliding window.
In the video file configuration parsing system: the secondary diagonal direction is a direction from a lower right corner of the preset sliding window to an upper left corner of the preset sliding window.
In addition, the drama identification device is implemented as a graphics processor. A graphics processing unit (GPU), also called a display core, visual processor, or display chip, is a microprocessor dedicated to image operations on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smartphones).
The graphics processor converts and drives the display information required by the computer system, provides line-scanning signals to the display, and controls its correct output. It is an important element connecting the display to the personal computer motherboard and one of the important devices for human-machine interaction. The graphics card is an important component of the computer host: it is responsible for outputting the displayed graphics and is especially important for people engaged in professional graphic design.
The processor of the graphics card is the GPU, the "heart" of the graphics card, analogous to the CPU, except that the GPU is designed specifically to perform the complex mathematical and geometric calculations necessary for graphics rendering. The fastest GPUs integrate even more transistors than ordinary CPUs.
With the video file configuration and analysis system above, and aimed at the prior-art difficulty of analyzing the various items of video data, the video category of the target file containing video content in the folder is identified through electronic analysis; more importantly, whether the target file is a children's drama is evaluated from the accumulated count of child objects across the video images. Meanwhile, a preset sliding window of corresponding size is mapped from the resolution of the high-definition image, and the processing applied to each RGB channel of the pixel point to be processed is determined from the distribution of R-channel values along each direction within the preset sliding window, so that the most effective reference values for image filtering are obtained.
It is to be understood that while the present invention has been described in conjunction with its preferred embodiments, it is not intended to be limited to those embodiments. It will be apparent to those skilled in the art that many changes, modifications, and equivalent substitutions can be made to the embodiments without departing from the scope of the invention. Therefore, any simple modification, equivalent change, or adaptation of the above embodiments made according to the technical essence of the present invention remains within the scope of protection of its technical solution.

Claims (1)

1. A video file configuration parsing method, the method comprising parsing a video file using a video file configuration parsing system, wherein the video file configuration parsing system comprises:
the on-site display equipment is used for displaying red characters corresponding to the drama identification signal when the drama identification signal is received;
the video storage equipment is used for pre-storing a video file database, the video file database stores all video folders, and a single target file comprising video content and a configuration file corresponding to the target file are placed in each video folder;
the folder searching device is used for acquiring the name of a video file selected by a user according to the selection of the user, searching a corresponding video folder in a video file database based on the name of the video file, acquiring a target file comprising video content from the video folder, and playing the target file;
the file analysis device is connected with the folder searching device and used for acquiring a configuration file corresponding to the target file in the video folder and reading the configuration file to acquire the corresponding trailer duration;
the image intercepting equipment is respectively connected with the folder searching equipment and the file analyzing equipment and is used for intercepting each frame of image at the tail position from the target file based on the tail duration time and outputting each frame of image at the tail position as a content extraction image;
the pixel point analysis equipment is connected with the image interception equipment and used for receiving the content extraction image, judging noise points of all pixel points in the content extraction image and determining that each pixel point is a noise point or a non-noise point, wherein the pixel point analysis equipment detects various noises in the content extraction image to obtain each noise area in the content extraction image, confirms the pixel point in a certain noise area as a noise point and confirms the pixel point outside each noise area as a non-noise point;
the resolution measuring equipment is used for receiving the content extraction image, extracting the resolution of the content extraction image and mapping a preset sliding window with a corresponding size based on the resolution of the content extraction image, wherein the larger the resolution of the content extraction image is, the larger the radial length of the mapped preset sliding window is;
the dynamic processing equipment is connected with the resolution measuring equipment and used for acquiring the preset sliding window and carrying out dynamic filtering processing on each pixel point in the content extraction image;
the dynamic filtering that the dynamic processing device applies to each pixel point in the content extraction image comprises: taking each pixel point of the content extraction image in turn as an object pixel point, and taking the pixel points inside the preset sliding window whose centroid is the object pixel point as the pixel points to be evaluated; computing four mean square errors of R-channel values over the pixel points to be evaluated with the object pixel point excluded, one along each of the horizontal, vertical, main diagonal, and auxiliary diagonal directions through the object pixel point within the preset sliding window; finding the minimum of the four mean square errors; and taking the mean R-channel, G-channel, and B-channel values of the pixel points to be evaluated along the direction corresponding to that minimum, again excluding the object pixel point, as the processed R-channel, G-channel, and B-channel values of the object pixel point;
the signal integration equipment is connected with the dynamic processing equipment and is used for acquiring a processed image corresponding to the content extraction image based on the processed R channel value, the processed G channel value and the processed B channel value of each pixel point in the content extraction image;
the region segmentation device is connected with the signal integration device and used for receiving the processed image of each frame and executing the following actions on the processed image: identifying whether each pixel point in the processed image is a human body pixel point or not based on a preset human body gray threshold, forming one or more sub-images based on each pixel point in the processed image, and determining whether each sub-image corresponds to a child object or not based on child body shape characteristics so as to obtain the number of child objects in the processed image;
the quantity analysis equipment is connected with the region segmentation equipment and used for receiving and accumulating the quantity of the child objects in each processed image to obtain the total number of the children, and dividing the total number of the children by the quantity of the processed images to obtain an evaluation reference value;
the drama identification device is respectively connected with the on-site display device and the quantity analysis device and is used for sending out a drama identification signal when the evaluation reference value exceeds a limit value;
the drama identification device is also used for sending out a drama unidentified signal when the evaluation reference value does not exceed the limit value;
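The quantity-analysis and drama-identification steps reduce to an average-and-threshold check; a minimal sketch, with the signal names chosen here for illustration:

```python
def drama_identification(child_counts_per_frame, limit_value):
    """Sum the child-object counts over all processed frames, divide by
    the number of frames to obtain the evaluation reference value, and
    emit the corresponding signal."""
    total_children = sum(child_counts_per_frame)
    reference = total_children / len(child_counts_per_frame)
    if reference > limit_value:
        return "drama identification signal"
    return "drama unidentified signal"
```

With counts of 3, 4 and 5 children over three frames the reference value is 4, so a limit value of 3 is exceeded and the identification signal is sent.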
the dynamic processing device comprises a data receiving sub-device, a horizontal direction evaluation sub-device, a vertical direction evaluation sub-device, a main diagonal direction evaluation sub-device, an auxiliary diagonal direction evaluation sub-device and a data output sub-device;
the main diagonal direction is a direction from the lower left corner of the preset sliding window to the upper right corner of the preset sliding window;
the secondary diagonal direction is a direction from a lower right corner of the preset sliding window to an upper left corner of the preset sliding window.
CN201810944771.4A 2018-08-19 2018-08-19 Video file configuration analysis method Expired - Fee Related CN109788346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810944771.4A CN109788346B (en) 2018-08-19 2018-08-19 Video file configuration analysis method

Publications (2)

Publication Number Publication Date
CN109788346A CN109788346A (en) 2019-05-21
CN109788346B (en) 2021-01-22

Family

ID=66496263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810944771.4A Expired - Fee Related CN109788346B (en) 2018-08-19 2018-08-19 Video file configuration analysis method

Country Status (1)

Country Link
CN (1) CN109788346B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883230A (en) * 2010-05-31 2010-11-10 中山大学 Digital television actor retrieval method and system
CN101981576A (en) * 2008-03-31 2011-02-23 杜比实验室特许公司 Associating information with media content using objects recognized therein
CN103714094A (en) * 2012-10-09 2014-04-09 富士通株式会社 Equipment and method for recognizing objects in video
CN103870513A (en) * 2012-12-18 2014-06-18 华为技术有限公司 Method, equipment and system for video processing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE512411T1 (en) * 2003-12-05 2011-06-15 Koninkl Philips Electronics Nv SYSTEM AND METHOD FOR THE INTEGRATED ANALYSIS OF INTRINSIC AND EXTRINSIC AUDIOVISUAL DATA


Similar Documents

Publication Publication Date Title
EP2109313B1 (en) Television receiver and method
WO2020108082A1 (en) Video processing method and device, electronic equipment and computer readable medium
CN104519401B (en) Video segmentation point preparation method and equipment
US8416332B2 (en) Information processing apparatus, information processing method, and program
EP3794557A1 (en) Point cloud mapping
US8218025B2 (en) Image capturing apparatus, image capturing method, and computer program product
US7908547B2 (en) Album creating apparatus, album creating method and program
US7643070B2 (en) Moving image generating apparatus, moving image generating method, and program
WO2020244553A1 (en) Subtitle border-crossing processing method and apparatus, and electronic device
CN113126937B (en) Display terminal adjusting method and display terminal
US20180137835A1 (en) Removing Overlays from a Screen to Separately Record Screens and Overlays in a Digital Medium Environment
US9535928B2 (en) Combining information of different levels for content-based retrieval of digital pathology images
CN109743566B (en) Method and equipment for identifying VR video format
CN108076359B (en) Business object display method and device and electronic equipment
CN109309868B (en) Video file Command Line Parsing system
CN114598893B (en) Text video realization method and system, electronic equipment and storage medium
US7826667B2 (en) Apparatus for monitor, storage and back editing, retrieving of digitally stored surveillance images
CN109788346B (en) Video file configuration analysis method
US20220084314A1 (en) Method for obtaining multi-dimensional information by picture-based integration and related device
CN112055247B (en) Video playing method, device, system and storage medium
CN109672917B (en) Video file reading and analyzing method
US20110064311A1 (en) Electronic apparatus and image search method
CN113052067A (en) Real-time translation method, device, storage medium and terminal equipment
CN108280834B (en) Video area determines method and device
US20200092429A1 (en) Image processing apparatus, image processing method, program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201225

Address after: 2105Q, 21/F, Shihong Building, 2095 Nanlian Bixin Road, Longgang Street, Longgang District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Liangzi Technology Co.,Ltd.

Address before: 215347 No. 317 Qianjin West Road, Kunshan City, Suzhou City, Jiangsu Province

Applicant before: Zhu Liping

GR01 Patent grant

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210122

Termination date: 20210819