CN109672917B - Video file reading and analyzing method - Google Patents


Info

Publication number
CN109672917B
CN109672917B (application CN201810906187.XA)
Authority
CN
China
Prior art keywords
image
video
equipment
window
age
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810906187.XA
Other languages
Chinese (zh)
Other versions
CN109672917A (en)
Inventor
张利军
邹培利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zou Peili
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201810906187.XA priority Critical patent/CN109672917B/en
Publication of CN109672917A publication Critical patent/CN109672917A/en
Application granted granted Critical
Publication of CN109672917B publication Critical patent/CN109672917B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/426 Internal components of the client; Characteristics thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a video file reading and analysis method, which comprises analyzing video file reads using a video file reading and analysis system. The system comprises: a video playing device for acquiring the name of the video file selected by a user, searching a video file database for the corresponding video folder based on that name, acquiring from the video folder a target file comprising the video content, and playing the target file; a configuration file reading device for acquiring the configuration file corresponding to the target file in the video folder and reading it to obtain the corresponding leader (opening title sequence) duration; and a content extraction device, connected to the video playing device and the configuration file reading device respectively, for extracting from the target file each frame located within the leader based on the leader duration and outputting each such frame as a content extraction image.

Description

Video file reading and analyzing method
Technical Field
The invention relates to the field of video files, in particular to a video file reading and analyzing method.
Background
Video files are an important category of Internet multimedia content. The term mainly refers to multimedia files containing real-time audio and video information, which usually originate from a video input device.
Video generally refers to the family of techniques for capturing, recording, processing, storing, transmitting, and reproducing a series of still images as electrical signals. When continuous images change at more than 24 frames per second, the human eye, by the persistence-of-vision principle, can no longer distinguish the individual still pictures; the sequence appears as a smooth, continuous visual effect, and such a continuous picture sequence is called video. Video technology was originally developed for television systems, but has since evolved into a variety of formats that make it convenient for consumers to record video. Advances in networking have also made it possible to stream recorded video segments over the Internet, to be received and played back by computers. Video and film are different technologies, though both take advantage of photography to capture dynamic images as a series of still photographs.
Disclosure of Invention
The invention provides a video file reading and analysis method, which aims to solve the technical problem of low precision in current video file data reading and analysis.
The invention has at least the following two important points:
(1) on the basis of targeted analysis of the leader content, a consistency evaluation is performed between the leader age type and the configured age type, and an evaluation result is output, providing an important basis for judgments that include the video production level;
(2) each target contour region in the image is normalized to obtain a normalized region whose area equals a preset fixed area; the centroids of the one or more normalized regions are placed at the same position to overlap them; the overlapped shape is fitted to obtain a fitting pattern; and the fitting pattern is reduced by a factor determined by the instant contrast of the image to obtain the corresponding filtering window, thereby achieving refined filtering of the image content.
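The normalization-and-overlap operation in point (2) can be sketched as follows. This is an illustrative reconstruction, not the patent's disclosed implementation: the patent gives no formulas, so the shoelace area, the vertex-mean centroid, and the target area of 100 are all assumptions.

```python
import math

def polygon_area(pts):
    # Shoelace formula for a simple polygon given as a vertex list
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def centroid(pts):
    # Vertex mean, used here as a simple stand-in for the region centroid
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def normalise_region(pts, target_area=100.0):
    """Scale a contour about its centroid so its area equals target_area,
    then translate the centroid to the origin so that several normalised
    regions can be overlapped at the same position."""
    k = math.sqrt(target_area / polygon_area(pts))
    cx, cy = centroid(pts)
    return [((x - cx) * k, (y - cy) * k) for x, y in pts]

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # area 16
norm = normalise_region(square)             # area becomes 100, centroid (0, 0)
```

Overlapping then amounts to drawing every normalised region in the same coordinate frame; the subsequent shape fitting is left open here, since the patent does not specify a fitting method.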
According to an aspect of the present invention, there is provided a video file reading and analysis method comprising analyzing video file reads using a video file reading and analysis system, the system comprising: a video playing device for acquiring the name of the video file selected by a user, searching a video file database for the corresponding video folder based on that name, acquiring from the video folder a target file comprising the video content, and playing the target file; a configuration file reading device, connected to the video playing device, for acquiring the configuration file corresponding to the target file in the video folder and reading it to obtain the corresponding leader duration; a content extraction device, connected to the video playing device and the configuration file reading device respectively, for extracting from the target file each frame located within the leader based on the leader duration and outputting each such frame as a content extraction image; a contrast extraction device, connected to the content extraction device and to the window scaling device, for receiving the content extraction image and measuring its contrast to obtain the instant contrast of the content extraction image; a pixel point combination device for receiving the content extraction image, acquiring the pixel value of each pixel point, determining whether each pixel point is an edge pixel point based on the pixel values of its surrounding pixel points, and combining the edge pixel points into one or more target contour regions in the content extraction image; a fitting processing device, connected to the pixel point combination device, for receiving the one or more target contour regions, normalizing each target contour region to obtain a normalized region whose area equals a preset fixed area, placing the centroids of the resulting one or more normalized regions at the same position to overlap them, and fitting the overlapped shape to obtain a fitting pattern; a window scaling device, connected to the fitting processing device and the contrast extraction device respectively, for receiving the fitting pattern and reducing it by a factor determined by the instant contrast of the content extraction image to obtain the corresponding filtering window; a window processing device, connected to the pixel point combination device and the window scaling device respectively, for receiving the filtering window and performing median filtering on the content extraction image with that window to obtain and output a window processing image; a content identification device, connected to the window processing device, for receiving each frame of window processing image and, for each one, acquiring the image areas in which the human body objects are respectively located, identifying the clothing age corresponding to each image area as a reference clothing age, and outputting each reference clothing age; an age identification device, connected to the content identification device, for receiving the reference clothing ages corresponding to each window processing image and determining the leader age type based on them; and an age evaluation device, connected to the age identification device, for receiving the leader age type, performing a consistency evaluation between the leader age type and the age type configured for the target file, and outputting an evaluation result.
Drawings
Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
fig. 1 is a data distribution diagram illustrating a frame of image data currently acquired by a content extraction device of a video file reading analysis system according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Common video file formats include the following:
the AVI file format is a digital audio and video file format developed by Microsoft that conforms to the RIFF file specification. It was originally used in Microsoft Video for Windows and is directly supported by most operating systems, such as Windows 95/98 and OS/2. The AVI format allows video and audio to be played back synchronously in interlaced fashion and supports 256-color and RLE compression, but the AVI format does not itself mandate a compression standard; it is therefore only a standard at the container-interface level and offers no compatibility guarantee, and AVI files produced with different compression algorithms can only be played back with the corresponding decompression algorithms. Common AVI playback drivers are chiefly Video 1 in Microsoft Video for Windows or Windows 95/98, and Indeo Video from Intel.
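As background to the AVI/RIFF relationship described above (not part of the patent): every AVI file begins with a 12-byte RIFF header, which can be checked in a few lines of Python. The sample bytes below are synthetic, for illustration only.

```python
import struct

def read_avi_header(data: bytes):
    """Parse the 12-byte header that opens a RIFF file.
    A valid AVI has riff_tag == b'RIFF' and form_type == b'AVI '."""
    riff_tag, chunk_size = struct.unpack('<4sI', data[:8])
    form_type = data[8:12]
    return riff_tag, chunk_size, form_type

# Synthetic header: RIFF tag, chunk size 1024, 'AVI ' form type
header = b'RIFF' + struct.pack('<I', 1024) + b'AVI '
tag, size, form = read_avi_header(header)
# tag == b'RIFF', size == 1024, form == b'AVI '
```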
The QuickTime file format is an audio and video file format developed by Apple for storing audio and video information. It has advanced video and audio capabilities and supports all mainstream operating system platforms, including Apple Mac OS and Microsoft Windows. The QuickTime file format supports 24-bit color, supports advanced integrated compression techniques such as RLE and JPEG, provides over 150 video effects, and supports more than 200 MIDI-compatible sound devices. QuickTime includes key features for Internet applications, providing real-time digital streaming, workflow, and file playback over the Internet. In addition, QuickTime adopts a virtual-reality technique known as QuickTime VR, with which a user can, through interactive mouse or keyboard control, observe a 360-degree scene around a given point or observe an object from any angle in space. With its leading multimedia technology, cross-platform characteristics, small storage footprint, independence from technical details, and highly open architecture, QuickTime has gained wide acceptance in the industry.
In order to overcome the above disadvantages, the present invention provides a video file reading and analysis method, which comprises analyzing video file reads using a video file reading and analysis system. The system can effectively solve the corresponding technical problems.
Fig. 1 is a data distribution diagram illustrating a frame of image data currently acquired by a content extraction device of a video file reading analysis system according to an embodiment of the present invention.
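The configuration-file and frame-extraction steps carried out by the configuration file reading device and content extraction device can be sketched as below. The patent does not disclose the configuration file's layout, so the INI format, the `[leader]` section, and the `duration_s` key are all hypothetical.

```python
import configparser

def read_leader_duration(cfg_text: str) -> float:
    # Hypothetical layout: a [leader] section holding the leader duration
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    return cfg.getfloat('leader', 'duration_s')

def leader_frame_indices(duration_s: float, fps: float) -> range:
    # Indices of every frame whose timestamp falls within the leader
    return range(int(duration_s * fps))

cfg_text = "[leader]\nduration_s = 90\n"
frames = leader_frame_indices(read_leader_duration(cfg_text), fps=25.0)
# 90 s at 25 fps -> 2250 leader frames to output as content extraction images
```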
The video file reading and analyzing system according to the embodiment of the invention comprises:
the video playing device is used for acquiring the name of the video file selected by a user, searching a video file database for the corresponding video folder based on that name, acquiring from the video folder a target file comprising the video content, and playing the target file;
the configuration file reading device is connected to the video playing device and is used for acquiring the configuration file corresponding to the target file in the video folder and reading it to obtain the corresponding leader duration;
the content extraction device is connected to the video playing device and the configuration file reading device respectively and is used for extracting from the target file each frame located within the leader based on the leader duration and outputting each such frame as a content extraction image;
the contrast extraction device is connected to the content extraction device and to the window scaling device, and is used for receiving the content extraction image and measuring its contrast to obtain the instant contrast of the content extraction image;
the pixel point combination device is used for receiving the content extraction image, acquiring the pixel value of each pixel point, determining whether each pixel point is an edge pixel point based on the pixel values of its surrounding pixel points, and combining the edge pixel points into one or more target contour regions in the content extraction image;
the fitting processing device is connected to the pixel point combination device and is used for receiving the one or more target contour regions, normalizing each target contour region to obtain a normalized region whose area equals a preset fixed area, placing the centroids of the resulting one or more normalized regions at the same position to overlap them, and fitting the overlapped shape to obtain a fitting pattern;
the window scaling device is connected to the fitting processing device and the contrast extraction device respectively, and is used for receiving the fitting pattern and reducing it by a factor determined by the instant contrast of the content extraction image to obtain the corresponding filtering window;
the window processing device is connected to the pixel point combination device and the window scaling device respectively, and is used for receiving the filtering window and performing median filtering on the content extraction image with that window to obtain and output a window processing image;
the content identification device is connected to the window processing device and is used for receiving each frame of window processing image and, for each one, acquiring the image areas in which the human body objects are respectively located, identifying the clothing age corresponding to each image area as a reference clothing age, and outputting each reference clothing age;
the age identification device is connected to the content identification device and is used for receiving the reference clothing ages corresponding to each window processing image and determining the leader age type based on them;
and the age evaluation device is connected to the age identification device and is used for receiving the leader age type, performing a consistency evaluation between the leader age type and the age type configured for the target file, and outputting an evaluation result.
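The median filtering step performed by the window processing device can be sketched as follows. The patent derives the window from the scaled fitting pattern; the fixed square window used here is a simplification for illustration.

```python
from statistics import median

def median_filter(img, window=3):
    """Median-filter a single-channel image (list of rows) with a
    square window of odd side length, clamping at the borders."""
    h, w = len(img), len(img[0])
    r = window // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            neigh = [img[ny][nx]
                     for ny in range(max(0, y - r), min(h, y + r + 1))
                     for nx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = median(neigh)
    return out

noisy = [
    [10, 10, 10],
    [10, 255, 10],   # single impulse-noise pixel
    [10, 10, 10],
]
clean = median_filter(noisy)
# clean[1][1] == 10: the impulse is removed
```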
Next, the detailed structure of the video file reading and analyzing system of the present invention will be further described.
In the video file reading and analysis system: in the age identification device, determining the leader age type based on the reference clothing ages corresponding to the window processing images comprises: taking the era type corresponding to the reference clothing age that occurs most frequently as the leader age type.
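The most-frequent rule above amounts to a simple majority vote; the era labels below are made up for illustration.

```python
from collections import Counter

def leader_age_type(reference_clothing_ages):
    # The era type occurring most frequently among the per-frame
    # clothing-age identifications becomes the leader age type
    return Counter(reference_clothing_ages).most_common(1)[0][0]

ages = ['1980s', '1930s', '1980s', '1980s', 'modern']
# leader_age_type(ages) -> '1980s'
```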
in the video file reading analysis system: in the pixel point combination device, determining whether a pixel point is an edge pixel point based on pixel values of surrounding pixel points includes: and determining whether the pixel points are edge pixel points or not based on the red component values of the surrounding pixel points.
In the video file reading analysis system: in the pixel point combination device, determining whether a pixel point is an edge pixel point based on pixel values of surrounding pixel points includes: and determining whether the pixel points are edge pixel points or not based on the blue component values of the surrounding pixel points.
In the video file reading analysis system: in the pixel point combination device, determining whether a pixel point is an edge pixel point based on pixel values of surrounding pixel points includes: and determining whether the pixel points are edge pixel points or not based on the green component values of the surrounding pixel points.
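A minimal sketch of the per-component edge test in the variants above, shown for a single component (e.g. green). The 4-neighbourhood and the threshold of 30 are assumptions; the patent only states that the decision uses the surrounding pixels' component values.

```python
def is_edge_pixel(channel, x, y, threshold=30):
    """channel: 2-D list holding one colour component per pixel.
    A pixel counts as an edge pixel when any 4-neighbour's component
    value differs from its own by more than `threshold`."""
    h, w = len(channel), len(channel[0])
    centre = channel[y][x]
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and abs(channel[ny][nx] - centre) > threshold:
            return True
    return False

green = [
    [10, 10, 200],
    [10, 10, 200],
    [10, 10, 200],
]
# is_edge_pixel(green, 1, 1) -> True   (right neighbour jumps to 200)
# is_edge_pixel(green, 0, 0) -> False  (all neighbours are close in value)
```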
In the video file reading and analysis system: in the window scaling device, reducing the fitting pattern based on the instant contrast of the content extraction image to obtain the corresponding filtering window comprises: the higher the instant contrast of the content extraction image, the smaller the factor by which the fitting pattern is reduced.
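The monotonic rule above (higher instant contrast, smaller reduction factor) leaves the exact mapping open; one possible sketch, with the form k/(1+contrast) purely an assumption:

```python
def reduction_factor(instant_contrast: float, k: float = 4.0) -> float:
    """Monotone-decreasing mapping from instant contrast to the factor
    by which the fitting pattern is reduced (never below 1, i.e. the
    pattern is never enlarged). Both k and the functional form are
    assumptions; the patent only fixes the monotonic relationship."""
    return max(1.0, k / (1.0 + instant_contrast))

low, high = reduction_factor(0.2), reduction_factor(1.0)
# low ≈ 3.33 and high == 2.0: higher contrast gives a smaller reduction
```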
In addition, in the video file reading and analysis system: the age evaluation device is implemented using an MCU chip.
MCUs may be classified by memory type into chips without on-chip ROM and chips with on-chip ROM. A chip without on-chip ROM (the 8031 is a typical example) must be used with an externally connected EPROM. Chips with on-chip ROM are further classified into on-chip EPROM types (typically the 87C51), on-chip mask-ROM types (typically the 8051), on-chip FLASH types (typically the 89C51), and so on; some companies also provide chips with one-time-programmable (OTP) on-chip ROM (typically the 97C51). Mask-ROM MCUs are inexpensive, but the program is fixed at the factory, so they suit applications whose program never changes; FLASH-ROM MCU programs can be erased and rewritten repeatedly, offering great flexibility at a higher price, which suits price-insensitive applications and development use; OTP-ROM MCUs are priced between the two and, being one-time programmable, suit applications that need some flexibility at low cost, especially electronic products with frequently updated functions and rapid mass production.
By adopting the above video file reading and analysis system, and aiming at the prior-art technical problem of low precision in video file data reading and analysis, the invention performs, on the basis of targeted analysis of the leader content, a consistency evaluation between the leader age type and the configured age type and outputs an evaluation result, providing an important basis for judgments that include the video production level. Furthermore, each target contour region in the image is normalized to obtain a normalized region whose area equals a preset fixed area; the centroids of the one or more normalized regions are placed at the same position to overlap them; the overlapped shape is fitted to obtain a fitting pattern; and the fitting pattern is reduced based on the instant contrast of the image to obtain the corresponding filtering window, thereby achieving refined filtering of the image content.
It is to be understood that while the present invention has been described in conjunction with its preferred embodiments, it is not intended that the invention be limited to those embodiments. It will be apparent to those skilled in the art that many changes, modifications, and equivalent substitutions can be made to the embodiments without departing from the scope of the invention. Therefore, any simple modification, equivalent change, or refinement made to the above embodiments according to the technical essence of the present invention remains within the scope of protection of the technical solution of the present invention.

Claims (6)

1. A video file reading analysis method comprising analyzing a video file reading using a video file reading analysis system, the video file reading analysis system comprising:
the video playing device is used for acquiring the name of a video file selected by a user according to the selection of the user, searching a corresponding video folder in a video file database based on the name of the video file, acquiring a target file comprising video content from the video folder, and playing the target file;
the configuration file reading device is connected with the video playing device and used for acquiring a configuration file corresponding to the target file in the video folder and reading the configuration file to acquire corresponding leader duration;
the content extraction equipment is respectively connected with the video playing equipment and the configuration file reading equipment and is used for extracting from the target file each frame located within the leader based on the leader duration and outputting each such frame as a content extraction image;
the contrast extraction equipment is connected with the content extraction equipment, is used for receiving the content extraction image, and is also used for being connected with the window scaling equipment and carrying out contrast acquisition on the content extraction image so as to obtain the instant contrast of the content extraction image;
the pixel point combination equipment is used for receiving the content extraction image, acquiring the pixel value of each pixel point in the content extraction image, determining whether the pixel point is an edge pixel point or not based on the pixel values of surrounding pixel points, and combining each edge pixel point in the content extraction image into one or more target contour regions in the content extraction image based on the edge pixel points;
the fitting processing equipment is connected with the pixel point combination equipment and used for receiving the one or more target contour areas, carrying out normalization processing on each target contour area to obtain a normalization area with the area as a preset fixed area, obtaining one or more normalization areas respectively corresponding to the one or more target contour areas, placing the centroids of the one or more normalization areas at the same position to realize the overlapping operation of the one or more normalization areas, and fitting the overlapped shape to obtain a fitting processing pattern;
the window scaling equipment is respectively connected with the fitting processing equipment and the contrast extraction equipment and is used for receiving the fitting processing pattern and correspondingly reducing the fitting processing pattern based on the instant contrast of the content extraction image to obtain a corresponding filtering window;
the window processing equipment is respectively connected with the pixel point combination equipment and the window scaling equipment and is used for receiving the filtering window and executing corresponding median filtering processing on the content extraction image by adopting the filtering window so as to obtain and output a window processing image;
the content identification device is connected with the window processing device and used for receiving each frame of window processing image and executing the following actions on the window processing image: acquiring each image area in which each human body object in the window processing image is respectively positioned, identifying the clothing age corresponding to each image area to be used as a reference clothing age, and acquiring and outputting each reference clothing age;
the age identification device is connected with the content identification device and used for receiving each reference clothing age corresponding to each window processing image and determining the leader age type based on each reference clothing age corresponding to each window processing image;
the age assessment equipment is connected with the age identification equipment and used for receiving the leader age type, performing consistency assessment on the leader age type and the age type set by the target file and outputting an assessment result;
in the pixel point combination device, determining whether a pixel point is an edge pixel point based on pixel values of surrounding pixel points includes: and determining whether the pixel points are edge pixel points or not based on the green component values of the surrounding pixel points.
2. The method of claim 1, wherein:
in the age identification device, determining the leader age type based on the reference clothing ages corresponding to the window processing images comprises: taking the era type corresponding to the reference clothing age that occurs most frequently as the leader age type.
3. The method of claim 2, wherein:
in the pixel point combination device, determining whether a pixel point is an edge pixel point based on pixel values of surrounding pixel points includes: and determining whether the pixel points are edge pixel points or not based on the red component values of the surrounding pixel points.
4. The method of claim 3, wherein:
in the pixel point combination device, determining whether a pixel point is an edge pixel point based on pixel values of surrounding pixel points includes: and determining whether the pixel points are edge pixel points or not based on the blue component values of the surrounding pixel points.
5. The method of claim 4, wherein:
in the window scaling device, reducing the fitting processing pattern based on the contrast of the content extraction image to obtain a corresponding filtering window comprises: when the fitting processing pattern occupies 6 pixel points or fewer, taking the fitting processing pattern directly as the corresponding filtering window.
6. The method of claim 5, wherein:
in the window scaling device, reducing the fitting processing pattern based on the instant contrast of the content extraction image to obtain a corresponding filtering window comprises: the higher the instant contrast of the content extraction image, the smaller the factor by which the fitting processing pattern is reduced.
CN201810906187.XA 2018-08-10 2018-08-10 Video file reading and analyzing method Active CN109672917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810906187.XA CN109672917B (en) 2018-08-10 2018-08-10 Video file reading and analyzing method


Publications (2)

Publication Number Publication Date
CN109672917A CN109672917A (en) 2019-04-23
CN109672917B true CN109672917B (en) 2020-05-08

Family

ID=66142024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810906187.XA Active CN109672917B (en) 2018-08-10 2018-08-10 Video file reading and analyzing method

Country Status (1)

Country Link
CN (1) CN109672917B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521565A (en) * 2011-11-23 2012-06-27 浙江晨鹰科技有限公司 Garment identification method and system for low-resolution video
CN105260747A (en) * 2015-09-30 2016-01-20 广东工业大学 Clothing identification method based on clothes concurrent information and multitask learning
CN106462979A (en) * 2014-04-17 2017-02-22 电子湾有限公司 Fashion preference analysis

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8068676B2 (en) * 2007-11-07 2011-11-29 Palo Alto Research Center Incorporated Intelligent fashion exploration based on clothes recognition


Also Published As

Publication number Publication date
CN109672917A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN108024079B (en) Screen recording method, device, terminal and storage medium
TWI253860B (en) Method for generating a slide show of an image
EP2109313B1 (en) Television receiver and method
JP2019504532A (en) Information processing method and terminal
US10546557B2 (en) Removing overlays from a screen to separately record screens and overlays in a digital medium environment
CN111726689B (en) Video playing control method and device
CN108510557B (en) Image tone mapping method and device
US11211097B2 (en) Generating method and playing method of multimedia file, multimedia file generation apparatus and multimedia file playback apparatus
WO2020244553A1 (en) Subtitle border-crossing processing method and apparatus, and electronic device
US6611629B2 (en) Correcting correlation errors in a composite image
US9258458B2 (en) Displaying an image with an available effect applied
US11302045B2 (en) Image processing apparatus, image providing apparatus,control methods thereof, and medium
US8244005B2 (en) Electronic apparatus and image display method
CN108076359B (en) Business object display method and device and electronic equipment
CN111340848A (en) Object tracking method, system, device and medium for target area
CN107770487B (en) Feature extraction and optimization method, system and terminal equipment
CN109672917B (en) Video file reading and analyzing method
CN109309868B (en) Video file Command Line Parsing system
US10789693B2 (en) System and method for performing pre-processing for blending images
US11915480B2 (en) Image processing apparatus and image processing method
CN106303366B (en) Video coding method and device based on regional classification coding
CN109788346B (en) Video file configuration analysis method
CN111242116B (en) Screen positioning method and device
CN110930354B (en) Video picture content analysis system for smooth transition of image big data
CN109309877B (en) Video file reading and analysis system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Lijun

Inventor after: Zou Peili

Inventor before: Zhang Lijun

TA01 Transfer of patent application right

Effective date of registration: 20200110

Address after: 361006 Room 516, No.12 Jiaxing Li, Huli District, Xiamen City, Fujian Province

Applicant after: Zou Peili

Address before: 210009 No. 152 Jiangsu Road, Gulou District, Nanjing City, Jiangsu Province

Applicant before: Zhang Lijun

GR01 Patent grant