CN114007116A - Video processing method and video processing device - Google Patents

Video processing method and video processing device

Info

Publication number
CN114007116A
CN114007116A
Authority
CN
China
Prior art keywords
video
video file
file
audio
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210002654.2A
Other languages
Chinese (zh)
Inventor
任国斌 (Ren Guobin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kaixin Chuangda Shenzhen Technology Development Co ltd
Original Assignee
Kaixin Chuangda Shenzhen Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kaixin Chuangda Shenzhen Technology Development Co ltd
Priority to CN202210002654.2A
Publication of CN114007116A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43079Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on multiple devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to the technical field of video processing, and in particular to a video processing method and a video processing device. The method comprises the following steps: acquiring a video file, analyzing the attributes of the video file, determining the video file format from those attributes, and selecting a default video conversion format; converting the acquired video file into the default video format, acquiring the attributes of the video file, and analyzing the video file characteristics; capturing the audio content in the video file according to the video file characteristics, and ending the process if no audio content is captured. The invention also provides a processing system for adding subtitles to a video in post-production. Used together with the described method, the system makes post-production subtitling more intelligent and greatly reduces the difficulty of the operation, lowering the expertise required and allowing more people to complete post-production subtitling easily.

Description

Video processing method and video processing device
Technical Field
The invention relates to the technical field of video processing, in particular to a video processing method and a video processing device.
Background
Video generally refers to the family of technologies for capturing, recording, processing, storing, transmitting and reproducing a series of still images in the form of electrical signals. When the images change at more than 24 frames per second, the human eye cannot distinguish the individual still images because of the persistence of vision, and the sequence appears as a smooth, continuous picture, which is called video. Video technology was originally developed for television systems, but it has since evolved into many formats that allow consumers to record video, and the growth of network technology has also enabled video clips to be distributed on the Internet as streaming media that computers can receive and play. Video and film are different technologies; film captures moving images as a series of still photographs.
In the prior art, people add subtitles to videos by post-processing them through manual means, and the software and programs used for adding subtitles are complex and varied. As a result, users who have similar occasional needs in their daily work either cannot operate the tools or find them difficult and time-consuming to use, which affects their working efficiency.
Disclosure of Invention
Technical problem solved
In view of the deficiencies of the prior art, the invention provides a video processing method and a video processing device, solving the problem that there is no simple, easy-to-operate system and method for adding subtitles to a video in post-production; the related technologies require varied software, demand considerable expertise, and therefore cannot be widely used.
Technical solution
In order to achieve the above purpose, the invention is realized by the following technical solution:
In a first aspect, a video processing method includes the following steps:
Step 1: acquiring a video file, analyzing the attributes of the video file, determining the video file format from those attributes, and selecting a default video conversion format;
Step 2: converting the acquired video file into the default video format, acquiring the attributes of the video file, and analyzing the video file characteristics;
Step 3: capturing the audio content in the video file according to the video file characteristics, and ending the process if no audio content is captured;
Step 4: capturing the audio file content in the video file according to the video file characteristics;
Step 5: analyzing the frame rate of the video file, and acquiring the single-frame pictures at that frame rate;
Step 6: mapping the audio start times in the audio file to the acquired single-frame pictures;
Step 7: capturing the single-frame pictures of the mouth-opening animation of the virtual living body (virtual character) in the video file;
Step 8: extracting all single-frame pictures of the virtual character's mouth-opening animation in the video file, and mapping the audio file to all of those single-frame pictures;
Step 9: establishing a language database, transmitting the audio content to the language database for translation, acquiring the translated audio content, generating subtitles, and matching the translated audio content and subtitles to the audio start times in the video file or to the virtual character's mouth-opening animation in the video file.
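The nine steps above can be summarized in the following Python-style sketch. It is only an illustrative outline under stated assumptions: every helper it calls (read_video_attributes, convert_format, capture_audio, extract_frames, map_audio_starts_to_frames, detect_mouth_opening_frames, translate_audio, attach_subtitles) is a hypothetical placeholder and not part of the disclosed implementation.

    # Hypothetical sketch of Steps 1-9; all helper functions are assumed placeholders.
    PREFERRED_FORMATS = ("AVI", "Flash", "MPEG-4")            # priority options of Step 1

    def process_video(path, language_database):
        attrs = read_video_attributes(path)                    # Step 1: acquire the file and analyze its attributes
        target = attrs.format if attrs.format in PREFERRED_FORMATS else "MPEG-4"
        video = convert_format(path, target)                   # Step 2: convert to the default format
        audio = capture_audio(video)                           # Steps 3-4: capture the audio content
        if audio is None:
            return video                                       # Step 3: end when no audio is captured
        frames = extract_frames(video, attrs.frame_rate)       # Step 5: single-frame pictures at the frame rate
        start_map = map_audio_starts_to_frames(audio, frames)  # Step 6: align audio start times with pictures
        mouth_frames = detect_mouth_opening_frames(frames)     # Steps 7-8: mouth-opening pictures of the virtual character
        subtitles = translate_audio(audio, language_database)  # Step 9: translate and generate subtitles
        return attach_subtitles(video, subtitles, start_map, mouth_frames)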
Further, the video conversion formats in Step 1 include: AVI, WMV, MPEG, QuickTime, RealVideo, Flash and MPEG-4, of which the three formats AVI, Flash and MPEG-4 are set as the priority options.
Further, the distinguishing characteristic used when analyzing the video file characteristics in Step 2 is whether the target video file contains an audio file.
Further, sub-node audio start times are set within the audio start time in Step 6, and the correspondence between a sub-node audio start time and its single-frame picture follows the same principle as Step 6.
Further, the caption language for the audio content translated in Step 9 is selected according to the usage scenario or usage requirement.
In a second aspect, a video processing apparatus comprises:
the control terminal, which is the master control end of the system and is used for issuing control commands to be executed by the lower-level modules;
the receiving module, which is used for receiving the video file;
the capturing module, which is used for capturing the audio content in the video file received by the receiving module;
the coordination module, which is used for coordinating the playback delay so that each frame picture of the video file plays in synchronization with the audio file;
the database, which is used for storing language data and provides the translation basis on which the translation module translates the audio file;
the translation module, which is used for translating the audio content in the video file and internally converting the translated content into text;
the matching module, which is used for matching the video file against the text converted from the translated content by the translation module, thereby forming a new video file;
the verification module, which is used for playing the synthesized video file and checking the played video content against the audio file;
and the output module, which is used for outputting the synthesized video file.
Furthermore, the receiving module is provided with an identification unit for identifying the format of the video file received by the receiving module; the identification unit operates synchronously to identify the format of the received video file when the receiving module is triggered to start operating.
Still further, the capturing module comprises:
the analysis module, which is used for analyzing whether an audio file exists in the video file received by the receiving module and whether that audio file contains language features;
the processing module, which is used for processing the extraneous (noise) audio in the audio file; the processing module operates when the analysis module determines that an audio file with language features exists in the video file received by the receiving module;
the scene determination unit, which is used for assisting the processing module in supporting the analysis module, judging the content scene in the video file, and determining whether an audio file with language features exists in that content scene.
Furthermore, the matching module comprises a selection unit for selecting the type of text into which the translation module converts the audio file, so that the matching module can match the video file with the text converted from the audio file.
Further, the video file size calculation formula is as follows:
D = (X + Y) / 8 · Qt
in the formula: X is the video encoding rate, in Kbps;
Y is the audio encoding rate, in Kbps;
Qt is the total length of the video, in seconds;
D is the video file size, in MB.
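As a minimal illustration, the following Python snippet evaluates the formula exactly as stated; note, as an observation rather than part of the disclosure, that with X and Y in Kbps and Qt in seconds the product (X + Y) / 8 · Qt is a number of kilobytes, so an optional division by 1024 is included to express the result in megabytes.

    # Evaluates the stated formula D = (X + Y) / 8 * Qt.
    # x_kbps: video encoding rate in Kbps; y_kbps: audio encoding rate in Kbps; qt_seconds: total length in seconds.
    def video_file_size(x_kbps: float, y_kbps: float, qt_seconds: float, in_megabytes: bool = False) -> float:
        size = (x_kbps + y_kbps) / 8 * qt_seconds        # kilobytes with the stated units
        return size / 1024 if in_megabytes else size     # optional KB to MB conversion (assumption)

    # Example: a 10-minute clip encoded at 2000 Kbps video and 128 Kbps audio.
    print(video_file_size(2000, 128, 600))               # 159600.0
    print(video_file_size(2000, 128, 600, True))         # approximately 155.9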
Advantageous effects
Compared with the known prior art, the technical solution provided by the invention has the following beneficial effects:
1. The invention provides a method for adding subtitles to a video in post-production; the method can process the video and add the subtitles, making post-production subtitling more convenient.
2. The invention provides a processing system for adding subtitles to a video in post-production. Used together with the method described herein, the system makes post-production subtitling more intelligent, greatly reduces the difficulty of the operation, lowers the expertise it requires, allows more people to complete post-production subtitling easily, brings greater convenience, and improves efficiency.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flow chart of a video processing method;
FIG. 2 is a schematic diagram of a video processing apparatus;
The reference numerals in the drawings denote: 1. a control terminal; 2. a receiving module; 21. an identification unit; 3. a capturing module; 31. an analysis module; 32. a processing module; 321. a scene determination unit; 4. a coordination module; 5. a database; 6. a translation module; 7. a matching module; 71. a selection unit; 8. a verification module; 9. an output module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present invention will be further described with reference to the following examples.
Example 1
A video processing method of this embodiment, as shown in fig. 1, includes the following steps:
Step 1: acquiring a video file, analyzing the attributes of the video file, determining the video file format from those attributes, and selecting a default video conversion format;
Step 2: converting the acquired video file into the default video format, acquiring the attributes of the video file, and analyzing the video file characteristics;
Step 3: capturing the audio content in the video file according to the video file characteristics, and ending the process if no audio content is captured;
Step 4: capturing the audio file content in the video file according to the video file characteristics;
Step 5: analyzing the frame rate of the video file, and acquiring the single-frame pictures at that frame rate;
Step 6: mapping the audio start times in the audio file to the acquired single-frame pictures;
Step 7: capturing the single-frame pictures of the mouth-opening animation of the virtual living body (virtual character) in the video file;
Step 8: extracting all single-frame pictures of the virtual character's mouth-opening animation in the video file, and mapping the audio file to all of those single-frame pictures;
Step 9: establishing a language database, transmitting the audio content to the language database for translation, acquiring the translated audio content, generating subtitles, and matching the translated audio content and subtitles to the audio start times in the video file or to the virtual character's mouth-opening animation in the video file.
As shown in fig. 1, the video conversion formats in Step 1 include: AVI, WMV, MPEG, QuickTime, RealVideo, Flash and MPEG-4, of which the three formats AVI, Flash and MPEG-4 are set as the priority options.
This setting can effectively improve the adaptability of the method in use, thereby offering users more choices.
As shown in fig. 1, the distinguishing characteristic used when analyzing the video file characteristics in Step 2 is whether the target video file contains an audio file.
As shown in fig. 1, sub-node audio start times are set within the audio start time in Step 6, and the correspondence between a sub-node audio start time and its single-frame picture follows the same principle as Step 6.
Setting sub-node audio start times can effectively improve the accuracy with which the video and audio stay aligned while the whole video plays, thereby improving the video processing result.
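One way to picture the correspondence of Step 6 and of the sub-node audio start times is to map each start time, in seconds, to the index of the single-frame picture shown at that instant. The function below is an illustrative assumption about how such a mapping could be computed, not text taken from the disclosure.

    # Maps audio (and sub-node) start times, in seconds, to single-frame picture indices
    # at a given frame rate; an illustrative assumption, not the disclosed implementation.
    def start_times_to_frames(start_times_s, frame_rate_fps):
        return [round(t * frame_rate_fps) for t in start_times_s]

    # Example: at 25 frames per second, audio segments starting at 0.0 s, 1.48 s and 3.0 s
    # correspond to single-frame pictures 0, 37 and 75.
    print(start_times_to_frames([0.0, 1.48, 3.0], 25))   # [0, 37, 75]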
As shown in fig. 1, the caption language for the audio content translated in Step 9 is selected according to the usage scenario or usage requirement.
Example 2
In a specific embodiment, a video processing apparatus according to the present invention, as shown in fig. 2, includes:
the control terminal 1, which is the master control end of the system and is used for issuing control commands to be executed by the lower-level modules;
the receiving module 2, which is used for receiving the video file;
the capturing module 3, which is used for capturing the audio content in the video file received by the receiving module 2;
the coordination module 4, which is used for coordinating the playback delay so that each frame picture of the video file plays in synchronization with the audio file;
the database 5, which is used for storing language data and provides the translation basis on which the translation module 6 translates the audio file;
the translation module 6, which is used for translating the audio content in the video file and internally converting the translated content into text;
the matching module 7, which is used for matching the video file against the text converted from the translated content by the translation module 6, thereby forming a new video file;
the verification module 8, which is used for playing the synthesized video file and checking the played video content against the audio file;
and the output module 9, which is used for outputting the synthesized video file.
During operation of the system, the control terminal 1 controls the receiving module 2 to receive video files. The capturing module 3 then captures the audio content in the video file received by the receiving module 2, after which the coordination module 4 is controlled to match the captured audio content to the video file. The database 5 assists the translation module 6 in translating the language content of the audio, the matching module 7 then forms subtitles from the translated text and matches them to the corresponding positions in the video file, and finally, after verification by the verification module 8, the output module 9 exports the processed, synthesized video.
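The order in which the control terminal 1 drives the modules can be pictured with the following Python skeleton. The class and method names are hypothetical stand-ins for modules 1 to 9, introduced only to show the control flow; they are not disclosed interfaces.

    # Hypothetical skeleton of the control flow; each module object is assumed to expose
    # the single operation attributed to it in the description above.
    class ControlTerminal:                                     # control terminal 1
        def __init__(self, receiver, capturer, coordinator, translator, matcher, verifier, output):
            self.receiver, self.capturer, self.coordinator = receiver, capturer, coordinator
            self.translator, self.matcher, self.verifier, self.output = translator, matcher, verifier, output

        def run(self, source):
            video = self.receiver.receive(source)              # receiving module 2
            audio = self.capturer.capture(video)               # capturing module 3
            self.coordinator.align(video, audio)               # coordination module 4
            text = self.translator.translate(audio)            # translation module 6, backed by the database 5
            combined = self.matcher.match(video, text)         # matching module 7 forms the subtitled video
            self.verifier.check(combined, audio)               # verification module 8
            return self.output.export(combined)                # output module 9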
As shown in fig. 2, the receiving module 2 is provided with an identification unit 21 for identifying the format of the video file received by the receiving module 2; the identification unit 21 operates synchronously to identify the format of the received video file when the receiving module is triggered to start operating.
This setting improves, to a certain degree, the efficiency with which the system processes video, thereby saving the time required for video processing.
Example 3
As shown in fig. 2, the capturing module 3 includes:
the analysis module 31, which is used for analyzing whether an audio file exists in the video file received by the receiving module 2 and whether that audio file contains language features;
the processing module 32, which is used for processing the extraneous (noise) audio in the audio file; the processing module 32 operates when the analysis module 31 determines that an audio file with language features exists in the video file received by the receiving module 2;
the scene determination unit 321, which is used for assisting the processing module 32 in supporting the analysis module 31, judging the content scene in the video file, and determining whether an audio file with language features exists in that content scene.
When the capturing module 3 captures an audio file in a video file, the analysis module 31 examines the audio file to determine whether language audio is present. During this process, the processing module 32 can synchronously remove noise from the audio file, helping the analysis module 31 distinguish more reliably whether language audio exists. The scene determination unit 321 further assists the processing module 32 in supporting the analysis module 31 by judging the content scene in the video file and determining whether language audio exists in that scene.
As shown in fig. 2, the matching module 7 includes a selection unit 71 for selecting the type of text into which the translation module 6 converts the audio file, so that the matching module 7 matches the video file with the converted text.
This setting can effectively improve the intelligence of the system when adding subtitles to a video file, and several translated subtitle tracks can be added to the video file synchronously, which makes the subsequently synthesized video file more convenient for people to use.
As shown in fig. 2, when the verification module 8, while checking the played video content against the audio file, detects that the video and audio are delayed or mismatched, it transmits a signal that triggers the coordination module 4 to operate again; after the newly synthesized video file has been processed, the output module 9 is controlled to export the video file.
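The check that re-triggers the coordination module 4 can be pictured as a simple drift test between matched video and audio timestamps, as in the sketch below; the 40 ms tolerance is an assumed illustrative value, not one given in the disclosure.

    # Assumed illustrative check: report a delay or mismatch when the matched video and
    # audio timestamps (in seconds) drift apart by more than a tolerance.
    def needs_resync(video_time_s: float, audio_time_s: float, tolerance_s: float = 0.040) -> bool:
        return abs(video_time_s - audio_time_s) > tolerance_s

    print(needs_resync(12.500, 12.510))   # False: a 10 ms drift is within tolerance
    print(needs_resync(12.500, 12.590))   # True: a 90 ms drift would re-trigger the coordination module 4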
The video file size calculation formula is as follows:
D = (X + Y) / 8 · Qt
in the formula: X is the video encoding rate, in Kbps;
Y is the audio encoding rate, in Kbps;
Qt is the total length of the video, in seconds;
D is the video file size, in MB.
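As a worked example of the formula: for a video of total length Qt = 600 seconds with a video encoding rate X = 2000 Kbps and an audio encoding rate Y = 128 Kbps, (2000 + 128) / 8 · 600 = 159,600; since Kbps and seconds yield kilobytes, dividing by 1024 expresses the result as approximately 155.9 MB (this unit interpretation is an observation, not part of the disclosure).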
In summary, the invention provides a method for adding subtitles to a video in post-production; the method can process the video and add the subtitles, making post-production subtitling more convenient.
The invention also provides a processing system which, used together with the described method, makes post-production subtitling more intelligent, greatly reduces the difficulty of the operation, lowers the required expertise, allows more people to complete post-production subtitling easily, brings greater convenience, and improves efficiency.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A video processing method, comprising the steps of:
step 1: acquiring a video file, analyzing the attributes of the video file, determining the video file format from those attributes, and selecting a default video conversion format;
step 2: converting the acquired video file into the default video format, acquiring the attributes of the video file, and analyzing the video file characteristics;
step 3: capturing the audio content in the video file according to the video file characteristics, and ending the process if no audio content is captured;
step 4: capturing the audio file content in the video file according to the video file characteristics;
step 5: analyzing the frame rate of the video file, and acquiring the single-frame pictures at that frame rate;
step 6: mapping the audio start times in the audio file to the acquired single-frame pictures;
step 7: capturing the single-frame pictures of the mouth-opening animation of the virtual living body (virtual character) in the video file;
step 8: extracting all single-frame pictures of the virtual character's mouth-opening animation in the video file, and mapping the audio file to all of those single-frame pictures;
step 9: establishing a language database, transmitting the audio content to the language database for translation, acquiring the translated audio content, generating subtitles, and matching the translated audio content and subtitles to the audio start times in the video file or to the virtual character's mouth-opening animation in the video file.
2. The video processing method according to claim 1, wherein the video conversion formats in Step 1 include: AVI, WMV, MPEG, QuickTime, RealVideo, Flash and MPEG-4, of which the three formats AVI, Flash and MPEG-4 are set as the priority options.
3. The video processing method according to claim 1, wherein the distinguishing characteristic used when analyzing the video file characteristics in Step 2 is whether the target video file contains an audio file.
4. The video processing method according to claim 1, wherein sub-node audio start times are set within the audio start time in Step 6, and the correspondence between a sub-node audio start time and its single-frame picture follows the same principle as Step 6.
5. The video processing method according to claim 1, wherein the caption language for the audio content translated in Step 9 is selected according to the usage scenario or usage requirement.
6. A video processing apparatus that implements the video processing method according to any one of claims 1 to 5, comprising:
the control terminal (1), which is the master control end of the system and is used for issuing control commands to be executed by the lower-level modules;
the receiving module (2), which is used for receiving the video file;
the capturing module (3), which is used for capturing the audio content in the video file received by the receiving module (2);
the coordination module (4), which is used for coordinating the playback delay so that each frame picture of the video file plays in synchronization with the audio file;
the database (5), which is used for storing language data and provides the translation basis on which the translation module (6) translates the audio file;
the translation module (6), which is used for translating the audio content in the video file and internally converting the translated content into text;
the matching module (7), which is used for matching the video file against the text converted from the translated content by the translation module (6), thereby forming a new video file;
the verification module (8), which is used for playing the synthesized video file and checking the played video content against the audio file;
and the output module (9), which is used for outputting the synthesized video file.
7. The video processing apparatus according to claim 6, wherein the receiving module (2) is provided with an identification unit (21) for identifying the format of the video file received by the receiving module (2), and the identification unit (21) operates synchronously to identify the format of the received video file when the receiving module is triggered to start operating.
8. The video processing apparatus according to claim 6, wherein the capturing module (3) comprises:
the analysis module (31), which is used for analyzing whether an audio file exists in the video file received by the receiving module (2) and whether that audio file contains language features;
the processing module (32), which is used for processing the extraneous (noise) audio in the audio file, the processing module (32) operating when the analysis module (31) determines that an audio file with language features exists in the video file received by the receiving module (2);
and the scene determination unit (321), which is used for assisting the processing module (32) in supporting the analysis module (31), judging the content scene in the video file, and determining whether an audio file with language features exists in that content scene.
9. The video processing apparatus according to claim 6, wherein the matching module (7) comprises a selection unit (71) for selecting the type of text into which the translation module (6) converts the audio file, so that the matching module (7) can match the video file with the text converted from the audio file.
10. The video processing apparatus according to claim 6, wherein the video file size calculation formula is as follows:
D = (X + Y) / 8 · Qt
in the formula: X is the video encoding rate, in Kbps;
Y is the audio encoding rate, in Kbps;
Qt is the total length of the video, in seconds;
D is the video file size, in MB.
CN202210002654.2A 2022-01-05 2022-01-05 Video processing method and video processing device Pending CN114007116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210002654.2A CN114007116A (en) 2022-01-05 2022-01-05 Video processing method and video processing device


Publications (1)

Publication Number Publication Date
CN114007116A (en) 2022-02-01

Family

ID=79932565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210002654.2A Pending CN114007116A (en) 2022-01-05 2022-01-05 Video processing method and video processing device

Country Status (1)

Country Link
CN (1) CN114007116A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103297710A (en) * 2013-06-19 2013-09-11 江苏华音信息科技有限公司 Audio and video recorded broadcast device capable of marking Chinese and foreign language subtitles automatically in real time for Chinese
CN103309855A (en) * 2013-06-18 2013-09-18 江苏华音信息科技有限公司 Audio-video recording and broadcasting device capable of translating speeches and marking subtitles automatically in real time for Chinese and foreign languages
CN106791913A (en) * 2016-12-30 2017-05-31 深圳市九洲电器有限公司 Digital television program simultaneous interpretation output intent and system
CN108401192A (en) * 2018-04-25 2018-08-14 腾讯科技(深圳)有限公司 Video stream processing method, device, computer equipment and storage medium
CN109658919A (en) * 2018-12-17 2019-04-19 深圳市沃特沃德股份有限公司 Interpretation method, device and the translation playback equipment of multimedia file
CN109862422A (en) * 2019-02-28 2019-06-07 腾讯科技(深圳)有限公司 Method for processing video frequency, device, computer readable storage medium and computer equipment
CN110610444A (en) * 2019-08-27 2019-12-24 格局商学教育科技(深圳)有限公司 Background data management system based on live broadcast teaching cloud
CN111683266A (en) * 2020-05-06 2020-09-18 厦门盈趣科技股份有限公司 Method and terminal for configuring subtitles through simultaneous translation of videos


Similar Documents

Publication Publication Date Title
US10181325B2 (en) Audio-visual speech recognition with scattering operators
EP2326091B1 (en) Method and apparatus for synchronizing video data
CN101860704B (en) Display device for automatically closing image display and realizing method thereof
WO2018107605A1 (en) System and method for converting audio/video data into written records
US7706663B2 (en) Apparatus and method for embedding content information in a video bit stream
EP2960905A1 (en) Method and device of displaying a neutral facial expression in a paused video
US20020051081A1 (en) Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor
US20030068087A1 (en) System and method for generating a character thumbnail sequence
WO2009056038A1 (en) A method and device for describing and capturing video object
CN109640056B (en) USB camera monitoring system and method based on Android platform
EP1648172A1 (en) System and method for embedding multimedia editing information in a multimedia bitstream
CN102724492B (en) Method and system for transmitting and playing video images
JP2002238027A (en) Video and audio information processing
JP2002142175A (en) Recording playback apparatus capable of extracting and searching index information at once
WO2007120337A2 (en) Selecting key frames from video frames
CN103024607B (en) Method and apparatus for showing summarized radio
CN111063079A (en) Binocular living body face detection method and device based on access control system
CN111666446B (en) Method and system for judging automatic video editing material of AI
CN110379130B (en) Medical nursing anti-falling system based on multi-path high-definition SDI video
CN110570862A (en) voice recognition method and intelligent voice engine device
CN114007116A (en) Video processing method and video processing device
US20040205655A1 (en) Method and system for producing a book from a video source
US20040252977A1 (en) Still image extraction from video streams
CN101340525B (en) Video overlapping system and method for picture-in-picture and cash amount summing
KR102457176B1 (en) Electronic apparatus and method for generating contents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220201