WO2023169240A1 - Subtitle synchronization method and apparatus, set-top box, and computer-readable storage medium (字幕同步方法、装置、机顶盒及计算机可读存储介质) - Google Patents

Subtitle synchronization method and apparatus, set-top box, and computer-readable storage medium

Info

Publication number
WO2023169240A1
WO2023169240A1 · PCT/CN2023/078379 · CN2023078379W
Authority
WO
WIPO (PCT)
Prior art keywords
subtitle
current target
playback
objects
time
Prior art date
Application number
PCT/CN2023/078379
Other languages
English (en)
French (fr)
Inventor
杨连发
Original Assignee
湖南国科微电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 湖南国科微电子股份有限公司 filed Critical 湖南国科微电子股份有限公司
Publication of WO2023169240A1 publication Critical patent/WO2023169240A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43074Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of additional data with content streams on the same device, e.g. of EPG data or interactive icon with a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Definitions

  • the present disclosure relates to the field of multimedia technology, and in particular, to a subtitle synchronization method, device, set-top box and computer-readable storage medium.
  • Existing subtitle technologies include inline subtitles and external subtitles, and different application scenarios can use different subtitle technologies. For example, in foreign videos most character dialogue is in a foreign language, so Chinese external subtitles can be used to help users understand the video content.
  • External subtitles are subtitle files independent of the code stream file. When playing a video, the external subtitle file must be parsed independently before the subtitles can be displayed. External subtitle support is mostly implemented as an independent extension by each player, so every player that needs to support external subtitles must implement the function on its own.
  • the synchronous display of external subtitles and videos depends on the current video playback time. In order to achieve subtitle synchronization, the timestamps of subtitles need to be compared within a specific time, which requires frequent data exchange across threads or processes.
  • embodiments of the present disclosure provide a subtitle synchronization method, device, set-top box, and computer-readable storage medium.
  • embodiments of the present disclosure provide a subtitle synchronization method, which is applied to a subtitle server.
  • the subtitle server is communicatively connected to a player.
  • the method includes:
  • embodiments of the present disclosure provide a subtitle synchronization device, which is applied to a subtitle server.
  • the subtitle server is communicatively connected to the player.
  • the device includes:
  • An acquisition module configured to receive the code stream file name from the playback end, and obtain the corresponding subtitle file according to the code stream file name;
  • a parsing module used to parse the subtitle file to obtain multiple subtitle sentences
  • a processing module used to instantiate the multiple subtitle sentences to obtain multiple subtitle objects
  • a determining module configured to receive the current video playback time from the playback end, and determine the current target subtitle object from a plurality of the subtitle objects according to the current video playback time;
  • a display module is used to display the current target subtitle object.
  • an embodiment of the present disclosure provides a set-top box, including a memory and a processor.
  • the memory is used to store a computer program.
  • when run on the processor, the computer program executes the subtitle synchronization method provided in the first aspect.
  • embodiments of the present disclosure provide a computer-readable storage medium that stores a computer program that executes the subtitle synchronization method provided in the first aspect when running on a processor.
  • the subtitle synchronization method, device, set-top box and computer-readable storage medium described above receive the code stream file name from the playback end, obtain the corresponding subtitle file according to the code stream file name, and parse the subtitle file to obtain multiple subtitle sentences; instantiate the multiple subtitle sentences to obtain multiple subtitle objects; receive the current video playback time from the playback end, and determine the current target subtitle object from the multiple subtitle objects according to the current video playback time; and display the current target subtitle object.
  • the subtitle synchronization scheme provided by this embodiment instantiates the subtitle sentences of the subtitle file into multiple subtitle objects, determines the current target subtitle object from the multiple subtitle objects based on the current video playback time, and displays it, reducing the number of cross-process communications and improving the subtitle synchronization effect.
  • Figure 1 shows a schematic flow chart of a subtitle synchronization method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic structural diagram of a subtitle synchronization device provided by an embodiment of the present disclosure.
  • An embodiment of the present disclosure provides a subtitle synchronization method.
  • the subtitle synchronization method of this embodiment is applied to a subtitle server, and the subtitle server is communicatively connected to the player.
  • the subtitle synchronization method in this embodiment includes:
  • Step S101 Receive a code stream file name from the playback end, and obtain a corresponding subtitle file according to the code stream file name.
  • the playback end is a video player (MediaPlayer), and the video player can be installed on the set-top box.
  • the subtitle server is an external subtitle system independent of the video player.
  • the subtitle server can exchange data with the player through cross-process communication.
  • the subtitle server can be ported quickly.
  • the current video playback time can be obtained through the getCurrentPosition interface on the playback end.
  • the subtitle server runs in the background as an independent service.
  • the subtitle server registers the subtitle service with the service manager so that the player can obtain the subtitle service through the binder function.
  • the player communicates with the subtitle server.
  • the player obtains a subtitle service instance from the subtitle server, creates a subtitle session object through the subtitle service instance, and displays the external subtitles by operating the subtitle session object.
  • the player creates a subtitle time provider object and provides the current video playback time to the subtitle server through the subtitle time provider object, thereby facilitating the subtitle server to perform subtitle synchronization operations.
  • step S101 includes the following steps:
  • the subtitle server performs the searching, parsing and display of external subtitle files.
  • according to the file name of the code stream file sent by the player to the subtitle server, a subtitle file with the same name is searched for in the same directory as the code stream file.
  • the subtitle file has the same name as the code stream file but a different suffix.
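As an illustration of this lookup, here is a minimal sketch (not the patent's implementation; the function name and the probed suffix list are assumptions):

```python
from pathlib import Path
from typing import Optional

# Hypothetical suffixes an external-subtitle system might probe for.
SUBTITLE_SUFFIXES = (".srt", ".idx", ".sub")

def find_subtitle_file(stream_path: str) -> Optional[str]:
    """Look next to the code stream file for a subtitle file with the
    same base name but a subtitle suffix."""
    stream = Path(stream_path)
    for suffix in SUBTITLE_SUFFIXES:
        candidate = stream.with_suffix(suffix)
        if candidate.exists():
            return str(candidate)
    return None
```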
  • Step S102 Analyze the subtitle file to obtain multiple subtitle sentences.
  • text subtitles are in srt format, with the file suffix srt; a text subtitle includes the subtitle serial number, subtitle start display time, subtitle end display time, and subtitle content;
  • Graphic subtitles are in DVDsub format.
  • Image subtitles contain two files, with the suffix idx and sub.
  • the idx file is an index file and contains the display time of the subtitles.
  • the sub file stores the data of each subtitle picture. Subtitle files in different formats can be parsed according to different standards to obtain multiple subtitle sentences.
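For the text (srt) case, parsing into subtitle sentences can be sketched as follows, assuming the conventional block layout "serial number / start --> end / text lines" (illustrative only, not the patent's parser; field names are assumptions):

```python
import re

# Matches timestamps like 00:00:01,000 (srt) or 00:00:01.000
_TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")

def _to_ms(stamp: str) -> int:
    h, m, s, ms = map(int, _TIME.match(stamp).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def parse_srt(text: str):
    """Split an srt document into subtitle sentences with start/end
    display times in milliseconds."""
    sentences = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        start, end = [t.strip() for t in lines[1].split("-->")]
        sentences.append({
            "index": int(lines[0]),
            "start_ms": _to_ms(start),
            "end_ms": _to_ms(end),
            "text": "\n".join(lines[2:]),
        })
    return sentences
```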
  • step S102 may include the following steps:
  • An empty subtitle sentence is generated according to the subtitle-free period.
  • the subtitle server determines the file format of the subtitle file and performs corresponding processing according to the file format to obtain real subtitle sentences and subtitle-free periods.
  • each subtitle-free period corresponds to an empty subtitle sentence whose display content is empty; the real subtitle sentences and the empty subtitle sentences together make up the multiple subtitle sentences.
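The gap-filling step described above can be sketched like this (a minimal illustration; the field names `start_ms`, `end_ms`, `text` and the `total_ms` parameter are assumptions, not the patent's data model):

```python
def fill_gaps(real_sentences, total_ms):
    """Insert empty subtitle sentences so that every period of the
    video is covered by exactly one subtitle sentence."""
    sentences = []
    cursor = 0
    for s in sorted(real_sentences, key=lambda s: s["start_ms"]):
        if s["start_ms"] > cursor:  # subtitle-free period before s
            sentences.append({"start_ms": cursor,
                              "end_ms": s["start_ms"], "text": ""})
        sentences.append(s)
        cursor = s["end_ms"]
    if cursor < total_ms:  # trailing subtitle-free period
        sentences.append({"start_ms": cursor,
                          "end_ms": total_ms, "text": ""})
    return sentences
```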
  • Step S103 Instantiate the multiple subtitle sentences to obtain multiple subtitle objects.
  • the subtitle server instantiates each real subtitle sentence and empty subtitle sentence into a corresponding subtitle object according to the display time and display content of each subtitle sentence.
  • if the current time period contains a real subtitle sentence, the real subtitle sentence is instantiated into a real subtitle object whose start display time is the start time of the real subtitle sentence, whose end display time is the end time of the real subtitle sentence, and whose content is set to the actual text.
  • if the current time period contains an empty subtitle sentence, the empty subtitle sentence is instantiated into an empty subtitle object whose start display time is the end time of the previous subtitle object, whose end display time is the start display time of the next subtitle object, and whose content is set to empty.
  • step S103 includes:
  • the plurality of subtitle objects are stored in a subtitle queue according to a preset display time sequence, wherein the starting display time of each subtitle object in the subtitle queue is the end display time of the adjacent previous subtitle object.
  • each instantiated subtitle object includes its start display time and end display time; the multiple subtitle objects are stored in the subtitle queue in the preset display time order.
  • the start display time of each subtitle object in the queue is the end display time of the adjacent previous subtitle object; based on the start and end display times of each subtitle object, the subtitle object corresponding to the current video playback time can be determined from the subtitle queue.
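Building the queue and checking the contiguity invariant described above could look like this sketch (assumed names; the patent does not specify a data structure):

```python
from collections import deque

def build_queue(sentences):
    """Store subtitle objects in display-time order. Each object's
    start display time must equal the end display time of the
    adjacent previous object (as produced by the gap-filling step)."""
    ordered = sorted(sentences, key=lambda s: s["start_ms"])
    for prev, cur in zip(ordered, ordered[1:]):
        # sanity-check the invariant stated in the text
        assert cur["start_ms"] == prev["end_ms"], "queue is not contiguous"
    return deque(ordered)
```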
  • Step S104 Receive the current video playback time from the playback end, and determine the current target subtitle object from the multiple subtitle objects according to the current video playback time.
  • when the player starts playing a video, it establishes a communication connection with the subtitle server; the subtitle server starts running, obtains the player's current video playback time and, using the current video playback time and the display start and end times of each subtitle object, finds among the multiple subtitle objects the current target subtitle object matching the current video playback time.
  • step S104 includes:
  • Matching is performed based on the current video playback time and the starting display time corresponding to each subtitle object, and a current target subtitle object matching the current video playback time is determined in the subtitle queue.
  • since the subtitle objects are stored in the subtitle queue in the preset display time order, the subtitle object whose start display time matches the current video playback time can be found in the queue and used as the current target subtitle object. By matching the current video playback time against the start display time of each subtitle object, the current target subtitle object can be located quickly, improving the efficiency of the lookup.
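One way such a match could be implemented is a binary search over the ordered start times (an illustrative sketch under the contiguous-queue assumption; not the patent's algorithm):

```python
import bisect

def find_current(queue, playback_ms):
    """Return the subtitle object whose display interval contains
    playback_ms, or None if playback_ms is out of range."""
    starts = [s["start_ms"] for s in queue]
    i = bisect.bisect_right(starts, playback_ms) - 1
    if 0 <= i < len(queue) and playback_ms < queue[i]["end_ms"]:
        return queue[i]
    return None
```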
  • because the subtitle objects are stored in the subtitle queue in the preset display time order, when video playback receives no pause, fast-forward, or rewind operation, the subtitle objects in the queue can be displayed sequentially in time order, achieving synchronized display of subtitle objects and video playback.
  • Step S105 Display the current target subtitle object.
  • the current target subtitle object will be displayed until the end display time point.
  • the method further includes:
  • a subtitle object adjacent to the current target subtitle object is sequentially obtained from the subtitle queue and displayed as the current target subtitle object.
  • the subtitle server only needs to obtain the current video playback time to determine the current target subtitle object the first time a subtitle object is displayed synchronously; after that, it can sequentially take the next adjacent subtitle object from the subtitle queue and display it as the current target subtitle object, without repeatedly obtaining the current video playback time, reducing cross-process communication and saving system resources.
  • the next undisplayed subtitle object can be obtained from the subtitle queue as the current target subtitle object.
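The "synchronize once, then walk the queue" idea can be sketched as a simple render loop (the `display` callback and the wall-clock timing model are assumptions, not the patent's API):

```python
import time

def render_loop(queue, display, first_playback_ms, clock=time.monotonic):
    """After a single initial synchronization against the player's
    reported position, display queued subtitle objects in order
    instead of repeatedly querying the player across processes."""
    t0 = clock() * 1000 - first_playback_ms  # map wall clock to media time
    for obj in queue:
        now = clock() * 1000 - t0
        if obj["end_ms"] <= now:      # already past this object
            continue
        if obj["start_ms"] > now:     # wait until it is due
            time.sleep((obj["start_ms"] - now) / 1000)
        display(obj["text"])
```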
  • the subtitle server can run as a separate background service and can be readily extended within other media frameworks; it does not need to depend on a specific player.
  • the subtitle synchronization on the subtitle server is less dependent on the current playback time of the video, so the number of cross-process communications is greatly reduced, and the system performance is greatly improved.
  • displaying the current target subtitle object in step S105 includes:
  • the display time of the current target subtitle object can be accurately controlled to ensure that the subtitle display is consistent with the video playback progress.
  • the method further includes:
  • the current video playback time is re-received from the playback end, the current target subtitle object is re-determined from the plurality of subtitle objects according to the re-received current video play time, and the re-determined current target subtitle object is displayed.
  • the subtitle server can re-obtain the current video playback time after each synchronous display of a subtitle object, and then synchronously display the next subtitle object using the re-obtained current video playback time, which keeps the video playback time and the subtitle display time synchronized and improves the accuracy of synchronized subtitle display.
  • the subtitle server can adjust and display the subtitle object accordingly according to the video playback progress.
  • the subtitle synchronization method in this embodiment also includes:
  • Playback time update information is received from the playback end, and the current target subtitle object is determined from the plurality of subtitle objects according to the playback time update information, wherein the playback time update information is generated by the playback end according to the video synchronization request.
  • the playback control instructions may include pause playback instructions, resume playback instructions, and time selection instructions.
  • the following describes the process of the subtitle server adjusting the subtitle object according to the pause playback command, resume playback command, and time selection command.
  • the subtitle server receives a pause playback instruction from the playback end and, according to it, stops updating the current subtitle object; upon receiving a resume playback instruction from the playback end, it sends a first video synchronization request to the playback end according to the resume playback instruction.
  • the subtitle server receives the first playback time update information from the playback end, determines the next target subtitle object from the plurality of subtitle objects according to the first playback time update information, and displays it; the first playback time update information is generated by the playback end according to the first video synchronization request.
  • the player when pausing video playback, the player sends a pause instruction to the subtitle server, and simultaneously notifies the subtitle server to pause refreshing the subtitle object, and the subtitle server continues to display the currently displayed subtitle object.
  • when the video on the player end resumes playback, the player simultaneously sends a resume playback instruction to the subtitle server.
  • the subtitle server generates a first video synchronization request to the player according to the resume play instruction, and sends the first video synchronization request to the player.
  • the player generates first playback time update information according to the first video synchronization request and sends it to the subtitle server; the subtitle server determines the next target subtitle object from the multiple subtitle objects based on the first playback time update information, displays it, and restarts the synchronous rendering process of the subtitle objects.
  • the subtitle server receives a time selection instruction from the player, and sends a second video synchronization request to the player according to the time selection instruction;
  • Second playback time update information is received from the playback end; the current target subtitle object is determined from the plurality of subtitle objects according to the second playback time update information and displayed, wherein the second playback time update information is generated by the player according to the second video synchronization request.
  • when a playback time is selected, the player sends a time selection instruction to the subtitle server.
  • the subtitle server generates a second video synchronization request based on the time selection instruction.
  • the subtitle server sends the second video synchronization request to the playback end; the playback end receives it, generates second playback time update information according to it, and sends the information to the subtitle server.
  • the subtitle server receives the second playback time update information, determines the next target subtitle object from the plurality of subtitle objects according to it, displays it, and restarts the synchronous rendering process of the subtitle objects.
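The pause/resume/time-selection flow can be modeled with a toy session object (class and method names are assumptions; the cross-process request/response is reduced to a single callback for illustration):

```python
class SubtitleSession:
    """Toy model of the flow: stop refreshing on pause, and on resume
    or seek ask the player for its position (the "video synchronization
    request") to re-locate the current target subtitle object."""
    def __init__(self, queue, get_playback_ms):
        self.queue = queue
        self.get_playback_ms = get_playback_ms  # stands in for the cross-process call
        self.paused = False
        self.current = None

    def on_pause(self):
        self.paused = True           # keep showing the current object

    def on_resume(self):
        self.paused = False
        self._resync()

    def on_seek(self):
        self._resync()               # time selection also forces a resync

    def _resync(self):
        ms = self.get_playback_ms()
        self.current = next((s for s in self.queue
                             if s["start_ms"] <= ms < s["end_ms"]), None)
```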
  • the subtitle server in this embodiment exists independently of the player.
  • the subtitle server can communicate and interact with the player media framework of the Android system.
  • the subtitle server is independent of other modules; when other modules crash, it can still work normally without being affected by them.
  • the subtitle server is designed as a background service that can be called remotely by the player; it is easy to extend and implement, covers more external subtitle scenarios, and can easily be extended to graphic subtitles.
  • the subtitle server can achieve higher subtitle synchronization accuracy and improve system performance through less interaction with other processes.
  • real subtitle sentences and subtitle objects for subtitle-free periods are obtained along continuously increasing time.
  • the subtitle objects can thus be displayed sequentially in time order, reducing the time-comparison work during subtitle synchronization, refreshing subtitles in time, and improving the viewing experience.
  • the present disclosure implements a subtitle synchronization method that receives a code stream file name from the playback end, obtains the corresponding subtitle file according to the code stream file name, parses the subtitle file to obtain multiple subtitle sentences, instantiates the multiple subtitle sentences to obtain multiple subtitle objects, receives the current video playback time from the playback end, determines the current target subtitle object from the multiple subtitle objects according to the current video playback time, and displays the current target subtitle object.
  • the subtitle synchronization scheme provided by this embodiment instantiates the subtitle sentences of the subtitle file into multiple subtitle objects, determines the current target subtitle object from the multiple subtitle objects based on the current video playback time, and displays it, reducing the number of cross-process communications and improving the subtitle synchronization effect.
  • embodiments of the present disclosure provide a subtitle synchronization device, which is applied to a subtitle server.
  • the subtitle server is communicatively connected to the player.
  • the subtitle synchronization device 200 includes:
  • the acquisition module 201 is used to receive the code stream file name from the playback end, and obtain the corresponding subtitle file according to the code stream file name;
  • the parsing module 202 is used to parse the subtitle file to obtain multiple subtitle sentences;
  • the processing module 203 is used to instantiate the multiple subtitle sentences to obtain multiple subtitle objects;
  • Determining module 204 configured to receive the current video playback time from the playback end, and determine the current target subtitle object from a plurality of the subtitle objects according to the current video playback time;
  • the display module 205 is used to display the current target subtitle object.
  • the parsing module 202 is also used to parse the subtitle file to obtain real subtitle sentences and non-subtitle periods;
  • An empty subtitle sentence is generated according to the subtitle-free period.
  • the processing module 203 is further configured to instantiate each subtitle sentence according to the start display time and end display time corresponding to each subtitle sentence to obtain multiple subtitle objects;
  • the plurality of subtitle objects are stored in a subtitle queue according to a preset display time sequence, wherein the starting display time of each subtitle object in the subtitle queue is the end display time of the adjacent previous subtitle object.
  • the processing module 203 is further configured to match the current video playback time against the start display time corresponding to each subtitle object, and to determine, in the subtitle queue, the current target subtitle object that matches the current video playback time.
  • the processing module 203 is further configured to sequentially obtain the subtitle object adjacent to the current target subtitle object from the subtitle queue and display it as the current target subtitle object.
  • the display module 205 is also used to obtain the end display time of the current target subtitle object and display the current target subtitle object until the end display time point.
  • the display module 205 is also configured to receive a playback control instruction from the playback end, and send a video synchronization request to the playback end according to the playback control instruction;
  • Playback time update information is received from the playback end, and the current target subtitle object is determined from the plurality of subtitle objects according to the playback time update information, wherein the playback time update information is generated by the playback end according to the video synchronization request.
  • the subtitle synchronization device 200 provided in this embodiment can implement the subtitle synchronization method provided in Embodiment 1, which will not be described again in order to avoid duplication.
  • the subtitle synchronization device receives the code stream file name from the playback end and obtains the corresponding subtitle file according to the code stream file name; parses the subtitle file to obtain multiple subtitle sentences; instantiates the multiple subtitle sentences to obtain multiple subtitle objects; receives the current video playback time from the playback end and determines the current target subtitle object from the multiple subtitle objects according to the current video playback time; and displays the current target subtitle object.
  • the subtitle synchronization scheme provided by this embodiment instantiates the subtitle sentences of the subtitle file into multiple subtitle objects, determines the current target subtitle object from the multiple subtitle objects based on the current video playback time, and displays it, reducing the number of cross-process communications and improving the subtitle synchronization effect.
  • an embodiment of the present disclosure provides a set-top box, including a memory and a processor.
  • the memory stores a computer program.
  • when the computer program is run on the processor, the subtitle synchronization method provided in the above Embodiment 1 is executed.
  • the set-top box provided in this embodiment can implement the subtitle synchronization method provided in Embodiment 1, which will not be described again to avoid duplication.
  • An embodiment of the present disclosure also provides a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • the computer program executes the subtitle synchronization method provided in Embodiment 1 when running on a processor.
  • the computer-readable storage medium may be a read-only memory (Read-Only Memory, ROM for short), a random access memory (Random Access Memory, RAM for short), a magnetic disk or an optical disk, etc.
  • the computer-readable storage medium provided in this embodiment can implement the subtitle synchronization method provided in Embodiment 1. To avoid duplication, the details will not be described again.
  • the computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and includes several instructions to cause a terminal (which can be a mobile phone, computer, server, air conditioner, or network device, etc.) to execute the methods described in the various embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present disclosure provide a subtitle synchronization method and apparatus, a set-top box, and a computer-readable storage medium. The method includes: receiving a code stream file name from the playback end, and obtaining the corresponding subtitle file according to the code stream file name; parsing the subtitle file to obtain multiple subtitle sentences; instantiating the multiple subtitle sentences to obtain multiple subtitle objects; receiving the current video playback time from the playback end, and determining the current target subtitle object from the multiple subtitle objects according to the current video playback time; and displaying the current target subtitle object. The subtitle synchronization scheme provided by this embodiment instantiates the subtitle sentences of the subtitle file into multiple subtitle objects, determines the current target subtitle object from the multiple subtitle objects based on the current video playback time, and displays it, reducing the number of cross-process communications and improving the subtitle synchronization effect.

Description

Subtitle synchronization method and apparatus, set-top box, and computer-readable storage medium
Cross-reference to related applications
The present disclosure claims priority to the Chinese patent application No. 202210223615.5, entitled "Subtitle synchronization method and apparatus, set-top box, and computer-readable storage medium", filed with the China National Intellectual Property Administration on March 9, 2022, the entire contents of which are incorporated into the present disclosure by reference.
Technical field
The present disclosure relates to the field of multimedia technology, and in particular to a subtitle synchronization method and apparatus, a set-top box, and a computer-readable storage medium.
Background
Existing subtitle technologies include embedded subtitles and external subtitles, and different application scenarios can use different subtitle technologies. For example, in foreign videos most character dialogue is in a foreign language, so Chinese external subtitles can be used to help users understand the video content.
An external subtitle is a subtitle file independent of the code stream file; when playing a video, the external subtitle file must be parsed independently before the subtitles can be displayed. External subtitle support is mostly implemented as an independent extension by each player, so every player that needs to support external subtitles must implement the function on its own. The synchronized display of external subtitles and video depends on the current video playback time. To achieve subtitle synchronization, the timestamps of the subtitles must be compared within specific time windows, which requires frequent cross-thread or cross-process data exchange.
Summary
To solve the above technical problem, embodiments of the present disclosure provide a subtitle synchronization method and apparatus, a set-top box, and a computer-readable storage medium.
In a first aspect, embodiments of the present disclosure provide a subtitle synchronization method, applied to a subtitle server that is communicatively connected to a playback end, the method including:
receiving a code stream file name from the playback end, and obtaining the corresponding subtitle file according to the code stream file name;
parsing the subtitle file to obtain multiple subtitle sentences;
instantiating the multiple subtitle sentences to obtain multiple subtitle objects;
receiving the current video playback time from the playback end, and determining the current target subtitle object from the multiple subtitle objects according to the current video playback time;
displaying the current target subtitle object.
In a second aspect, embodiments of the present disclosure provide a subtitle synchronization apparatus, applied to a subtitle server that is communicatively connected to a playback end, the apparatus including:
an acquisition module configured to receive a code stream file name from the playback end, and obtain the corresponding subtitle file according to the code stream file name;
a parsing module configured to parse the subtitle file to obtain multiple subtitle sentences;
a processing module configured to instantiate the multiple subtitle sentences to obtain multiple subtitle objects;
a determining module configured to receive the current video playback time from the playback end, and determine the current target subtitle object from the multiple subtitle objects according to the current video playback time;
a display module configured to display the current target subtitle object.
In a third aspect, embodiments of the present disclosure provide a set-top box including a memory and a processor, the memory being configured to store a computer program that, when run on the processor, executes the subtitle synchronization method provided in the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing a computer program that, when run on a processor, executes the subtitle synchronization method provided in the first aspect.
The subtitle synchronization method and apparatus, set-top box, and computer-readable storage medium provided above receive a code stream file name from the playback end, obtain the corresponding subtitle file according to the code stream file name, parse the subtitle file to obtain multiple subtitle sentences, instantiate the multiple subtitle sentences to obtain multiple subtitle objects, receive the current video playback time from the playback end, determine the current target subtitle object from the multiple subtitle objects according to the current video playback time, and display the current target subtitle object. The subtitle synchronization scheme provided by this embodiment instantiates the subtitle sentences of the subtitle file into multiple subtitle objects, determines the current target subtitle object from the multiple subtitle objects based on the current video playback time, and displays it, reducing the number of cross-process communications and improving the subtitle synchronization effect.
Brief description of the drawings
To explain the technical solutions of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present disclosure and therefore should not be regarded as limiting the scope of protection of the present disclosure. In the drawings, similar components are given similar reference numerals.
Figure 1 shows a schematic flow chart of a subtitle synchronization method provided by an embodiment of the present disclosure;
Figure 2 shows a schematic structural diagram of a subtitle synchronization apparatus provided by an embodiment of the present disclosure.
Detailed description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure.
The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.
Hereinafter, the terms "include", "have" and their cognates used in various embodiments of the present disclosure are intended only to denote particular features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be understood as first excluding the existence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first", "second", "third", etc. are used only to distinguish descriptions and cannot be understood as indicating or implying relative importance.
Unless otherwise defined, all terms used here (including technical and scientific terms) have the same meanings as commonly understood by those of ordinary skill in the art to which the various embodiments of the present disclosure belong. Such terms (such as those defined in commonly used dictionaries) will be interpreted as having the same meaning as their contextual meaning in the relevant technical field, and will not be interpreted as having an idealized or overly formal meaning, unless clearly defined in the various embodiments of the present disclosure.
实施例1
本公开实施例提供了一种字幕同步方法,本实施例的字幕同步方法应用于字幕服务端,字幕服务端与播放端通信连接。
请参图1,本实施例的字幕同步方法包括:
步骤S101,从所述播放端接收码流文件名称,根据所述码流文件名称获取对应的字幕文件。
在本实施例中,播放端为视频播放器(MediaPlayer),视频播放器可以安装在机顶盒上。字幕服务端是独立于视频播放端之外的外挂字幕系统,可以与播放端通过跨进程通信进行数据交互,并且便于快速移植。可以通过播放端的getCurrentPosition接口获取当前视频播放时间。
具体的,字幕服务端作为独立的服务运行在后台,字幕服务端将字幕服务注册到服务管理器中,以供播放端通过binder函数获取字幕服务。播放端与字幕服务端通信,播放端从字幕服务端获取一个字幕服务实例,通过字幕服务实例创建字幕会话对象,通过操作字幕会话对象实现对外挂字幕的显示处理。在一实施方式中,播放端创建字幕时间提供者对象,通过字幕时间提供者对象向字幕服务端提供当前视频播放时间,方便字幕服务端进行字幕同步操作。
在一实施方式中,步骤S101包括以下步骤:
根据所述码流文件名称查找对应的目标码流文件;
获取所述目标码流文件的存储路径;
在所述存储路径下读取与所述码流文件名称对应的字幕文件。
具体的,字幕服务端进行外挂字幕文件的查找、解析和显示工作。根据播放端发送给字幕服务端的码流文件的文件名称,查找与码流文件同目录下的同名字幕文件,字幕文件的名称与码流文件的文件名称相同,后缀名不同。
步骤S102,对所述字幕文件进行解析,得到多个字幕语句。
在本实施例中,字幕文件的文件格式有多种。例如,文本字幕为srt格式,文件后缀名是srt,文本字幕包括字幕序号、字幕起始显示时间、字幕结束显示时间和字幕内容;图形字幕为dvdsub格式,包含两个文件,后缀名分别为idx和sub,idx文件是索引文件,包含字幕的显示时间,sub文件中保存每一张字幕图片的数据。不同格式的字幕文件可以根据不同规范标准进行解析,进而得到多个字幕语句。
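作为示意,下面用 Python 给出解析 srt 文本字幕的一个极简示例(其中 parse_srt、parse_srt_time 等函数名均为示例性假设,并非本方案限定的实现,且未覆盖 srt 规范的全部细节):

```python
import re

def parse_srt_time(ts: str) -> int:
    """将 "HH:MM:SS,mmm" 形式的 SRT 时间戳转换为毫秒。"""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

def parse_srt(text: str):
    """把 SRT 文本解析为字幕语句列表:(序号, 起始毫秒, 结束毫秒, 内容)。"""
    statements = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:          # 跳过不完整的字幕块
            continue
        index = int(lines[0])
        start_ts, end_ts = [t.strip() for t in lines[1].split("-->")]
        content = "\n".join(lines[2:])
        statements.append((index, parse_srt_time(start_ts),
                           parse_srt_time(end_ts), content))
    return statements

demo = """1
00:00:01,000 --> 00:00:03,000
你好

2
00:00:05,000 --> 00:00:06,500
再见
"""
print(parse_srt(demo))  # 输出: [(1, 1000, 3000, '你好'), (2, 5000, 6500, '再见')]
```

解析得到的元组即对应正文所述的“字幕语句”,其中起始/结束显示时间统一换算为毫秒,便于后续与当前视频播放时间比较。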
在一实施方式中,步骤S102可以包括以下步骤:
对所述字幕文件进行解析,得到真实字幕语句和无字幕时段;
根据所述无字幕时段生成空字幕语句。
在一实施方式中,字幕服务端确定字幕文件的文件格式,并根据不同的文件格式进行相应处理,得到真实字幕语句和无字幕时段;为使得视频的每个时段都对应有字幕对象,需要为无字幕时段生成显示内容为空的空字幕语句,由真实字幕语句及空字幕语句组成多个字幕语句。
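由无字幕时段生成空字幕语句、使真实字幕语句与空字幕语句在时间轴上连续覆盖整段视频的过程,可以用如下示例性代码示意(fill_gaps 为假设的函数名,语句以 (起始毫秒, 结束毫秒, 内容) 元组表示,仅为演示思路):

```python
def fill_gaps(statements, video_end_ms):
    """在真实字幕语句之间的无字幕时段插入内容为空的空字幕语句,
    使视频的每个时段都对应有字幕语句。
    statements 为按时间排序的 (start_ms, end_ms, content) 元组列表。"""
    result = []
    cursor = 0
    for start, end, content in statements:
        if start > cursor:                  # 无字幕时段 -> 生成空字幕语句
            result.append((cursor, start, ""))
        result.append((start, end, content))
        cursor = end
    if cursor < video_end_ms:               # 片尾的无字幕时段
        result.append((cursor, video_end_ms, ""))
    return result

stmts = [(1000, 3000, "你好"), (5000, 6500, "再见")]
print(fill_gaps(stmts, 8000))
```

这样得到的语句序列首尾相接,后续实例化为字幕对象后即可只按时间顺序推进,而无需在每个时刻都比较时间戳。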
步骤S103,对所述多个字幕语句进行实例化处理,得到多个字幕对象。
在一实施方式中,字幕服务端根据各字幕语句的显示时间和显示内容,将每一条真实字幕语句和空字幕语句实例化成对应的字幕对象。需要说明的是,如果当前的时间段内为真实字幕语句,则会将真实字幕语句实例化为一个真实字幕对象,该真实字幕对象的起始显示时间为真实字幕语句的起始显示时间,结束显示时间为真实字幕语句的结束显示时间,字幕的内容置为真实文字内容;如果当前的时间段内为空字幕语句,则会将空字幕语句实例化为一个空字幕对象,该空字幕对象的起始显示时间为上一条字幕对象的结束显示时间,结束显示时间为下一条字幕对象的起始显示时间,字幕的内容置为空。
在一实施方式中,步骤S103包括:
根据每个所述字幕语句对应的起始显示时间和结束显示时间,对每个所述字幕语句进行实例化处理,得到多个字幕对象;
按照预设显示时间顺序,将所述多个字幕对象存储在字幕队列中,其中,所述字幕队列中的每个字幕对象的起始显示时间为相邻上一字幕对象的结束显示时间。
在本实施方式中,实例化处理后的字幕对象包括各字幕对象的起始显示时间和结束显示时间,将多个字幕对象按照预设显示时间顺序存储在字幕队列中,且字幕队列中的每个字幕对象的起始显示时间为相邻上一字幕对象的结束显示时间,可以基于各字幕对象的起始显示时间和结束显示时间,从字幕队列中确定当前视频播放时间对应的字幕对象。
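上述实例化并按预设显示时间顺序入队的过程可示意如下(仅为基于元组的简化示意,build_queue 为假设名称;实际字幕对象还可包含字幕序号、字幕图片数据等字段):

```python
def build_queue(statements):
    """按预设显示时间顺序将字幕语句实例化为字幕对象队列。
    statements 为 (起始显示时间, 结束显示时间, 内容) 元组;
    队列中每个字幕对象的起始显示时间应等于相邻上一字幕对象的结束显示时间。"""
    queue = sorted(statements)
    for (_, prev_end, _), (cur_start, _, _) in zip(queue, queue[1:]):
        # 校验队列在时间轴上首尾相接(空字幕对象已填补无字幕时段)
        assert cur_start == prev_end, "字幕对象在时间轴上必须首尾相接"
    return queue

queue = build_queue([(1000, 3000, "你好"), (0, 1000, ""),
                     (3000, 5000, ""), (5000, 6500, "再见")])
print(queue)
```

队列一旦建好,字幕服务端便可依靠队列内部的时间顺序推进显示,这正是后文减少跨进程获取播放时间次数的前提。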
步骤S104,从所述播放端接收当前视频播放时间,根据所述当前视频播放时间从多个所述字幕对象中确定当前目标字幕对象。
具体来说,播放端开始播放视频时,播放端与字幕服务端建立通信连接,字幕服务端开始运行,获取播放端的当前视频播放时间,通过当前视频播放时间、各字幕对象的显示起止时间,从多个所述字幕对象中查找与当前视频播放时间匹配的当前目标字幕对象。
在一实施方式中,步骤S104包括:
根据所述当前视频播放时间和每个所述字幕对象对应的起始显示时间进行匹配,在所述字幕队列中确定与所述当前视频播放时间匹配的当前目标字幕对象。
具体来说,字幕服务开始运行后,由于字幕队列中是按照预设显示时间顺序将多个字幕对象依次存入的,根据当前视频播放时间可以从字幕队列中查找到起始显示时间与当前视频播放时间匹配的字幕对象,将查找到的字幕对象作为当前目标字幕对象。这样,可以通过当前视频播放时间和每个字幕对象对应的起始显示时间进行匹配,从字幕队列中查找到当前目标字幕对象,能够快速根据当前视频播放时间查找对应的当前字幕对象,提高查找当前目标字幕对象的效率。
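由于字幕队列按起始显示时间有序,根据当前视频播放时间匹配当前目标字幕对象时,可以采用二分查找加速,示意如下(find_current 为假设名称):

```python
import bisect

def find_current(queue, position_ms):
    """queue 为按起始显示时间排序的 (start_ms, end_ms, content) 元组列表。
    二分查找起始显示时间不大于当前播放时间的最后一个字幕对象,返回其下标。"""
    starts = [start for start, _, _ in queue]
    i = bisect.bisect_right(starts, position_ms) - 1
    return max(i, 0)

queue = [(0, 1000, ""), (1000, 3000, "你好"),
         (3000, 5000, ""), (5000, 6500, "再见")]
print(find_current(queue, 1500))  # 输出: 1,对应 "你好"
```

因为空字幕对象已填补全部无字幕时段,任何落在视频时长内的播放时间都能匹配到唯一的字幕对象。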
由于字幕队列中是根据预设显示时间顺序对字幕对象进行存储,在视频播放没有接收到暂停操作、快进操作、快退操作时,可以按照时间先后顺序依次显示字幕队列中的字幕对象,以达到字幕对象显示与视频播放同步的目的。
步骤S105,显示所述当前目标字幕对象。
具体来说,在当前目标字幕对象的结束显示时间之前,会一直显示当前目标字幕对象,直到结束显示时间点结束显示。
在一实施方式中,在所述当前目标字幕对象显示结束后,还包括:
依次从所述字幕队列中获取所述当前目标字幕对象的相邻下一个字幕对象作为当前目标字幕对象进行显示。
具体的,在当前目标字幕对象的结束显示时间达到时,可以依次从字幕队列中确定当前目标字幕对象的相邻下一个字幕对象,将相邻下一字幕对象作为当前目标字幕对象,并显示当前目标字幕对象。这样,字幕服务端只需要在首次进行字幕对象同步显示时,通过获取当前视频播放时间确定当前目标字幕对象,进行首次字幕同步显示后,可依次从字幕队列中获取相邻下一个字幕对象作为当前目标字幕对象进行显示,字幕服务端无需多次重复获取当前视频播放时间,减少跨进程通信,节约***资源。
这样,在视频播放过程中没有接收到暂停操作、快进操作、快退操作时,即视频处于正常播放状态时,可以从字幕队列中依次获取未显示的下一字幕对象作为当前目标字幕对象,以使得字幕对象显示与持续播放的视频保持时间同步。
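首次定位后按队列顺序依次显示字幕对象的过程,可以示意为一个生成器(play_from 为假设名称;实际显示时还需按各对象的结束显示时间定时刷新,此处从略):

```python
import bisect

def play_from(queue, first_position_ms):
    """首次同步时仅获取一次当前视频播放时间用于定位起点,
    此后按队列顺序依次产出字幕对象,无需重复跨进程查询播放时间。
    queue 为按起始显示时间排序的 (start_ms, end_ms, content) 元组列表。"""
    starts = [start for start, _, _ in queue]
    i = max(bisect.bisect_right(starts, first_position_ms) - 1, 0)
    while i < len(queue):
        yield queue[i]
        i += 1

queue = [(0, 1000, ""), (1000, 3000, "你好"),
         (3000, 5000, ""), (5000, 6500, "再见")]
shown = [content for _, _, content in play_from(queue, 1500)]
print(shown)  # 输出: ['你好', '', '再见']
```

可以看到,跨进程获取播放时间只发生在首次定位,这正是正文所述减少跨进程通信、节约系统资源的来源。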
字幕服务端可以作为一个单独的后台服务,在其他媒体框架中也能够很好地扩展,不必依赖于具体的播放器。字幕服务端的字幕同步对视频当前播放时间的依赖较少,所以跨进程通信的次数大大减少,系统性能大大提升。
在一实施方式中,步骤S105中的显示所述当前目标字幕对象,包括:
获取所述当前目标字幕对象的结束显示时间;
根据所述结束显示时间结束显示所述当前目标字幕对象。
这样,可以精确控制当前目标字幕对象的显示时间,保证字幕显示与视频播放进度一致。
在一实施方式中,所述显示所述当前目标字幕对象之后,还包括:
从所述播放端重新接收当前视频播放时间,根据重新接收的当前视频播放时间从多个所述字幕对象中重新确定当前目标字幕对象,对重新确定的当前目标字幕对象进行显示。
这样,字幕服务端可以在进行一次字幕对象同步显示后,重新获取当前视频播放时间,再一次通过重新获取的当前视频播放时间进行字幕对象的同步显示,可以保证视频播放时间与字幕对象显示时间的同步,提高字幕对象同步显示的精度。
补充说明的是,当视频播放进度进行调整时,字幕服务端可以根据视频播放进度对字幕对象进行相应的调整显示。具体的,本实施例的字幕同步方法还包括:
从所述播放端接收播放控制指令,根据所述播放控制指令向所述播放端发送视频同步请求;
从所述播放端接收播放时间更新信息,根据所述播放时间更新信息从所述多个字幕对象中确定当前目标字幕对象,其中,所述播放时间更新信息由所述播放端根据所述视频同步请求生成。
在本实施例中,播放控制指令可以包括暂停播放指令、恢复播放指令、选时指令。下面对字幕服务端根据暂停播放指令、恢复播放指令、选时指令进行字幕对象调整的过程进行说明。
在一实施方式中,字幕服务端从所述播放端接收暂停播放指令,根据所述暂停播放指令控制不对当前字幕对象进行更新;从所述播放端接收恢复播放指令,根据所述恢复播放指令向所述播放端发送第一视频同步请求。字幕服务端从所述播放端接收第一播放时间更新信息,根据所述第一播放时间更新信息从所述多个字幕对象中确定下一目标字幕对象,显示所述下一目标字幕对象,所述第一播放时间更新信息为所述播放端根据所述第一视频同步请求生成得到。
举例来说,若播放端调用暂停操作,则播放器在暂停视频播放时,向字幕服务端发送暂停播放指令,同步通知字幕服务端暂停刷新字幕对象,字幕服务端继续显示当前显示的字幕对象;直到播放端的视频恢复播放时,播放端同步向字幕服务端发送恢复播放指令,字幕服务端根据恢复播放指令生成第一视频同步请求并发送给播放端,播放端根据第一视频同步请求生成第一播放时间更新信息,向字幕服务端发送第一播放时间更新信息,字幕服务端根据第一播放时间更新信息从多个字幕对象中确定下一目标字幕对象,显示下一目标字幕对象,重新开始字幕对象的同步渲染处理。
在另一实施方式中,字幕服务端从所述播放端接收选时指令,根据所述选时指令向所述播放端发送第二视频同步请求;
从所述播放端接收第二播放时间更新信息,根据所述第二播放时间更新信息从所述多个字幕对象中确定当前目标字幕对象,显示所述当前目标字幕对象,所述第二播放时间更新信息为所述播放端根据所述第二视频同步请求生成得到。
举例来说,若播放端调用选时操作,播放端当前播放的位置发生了变化,播放端将向字幕服务端发送选时指令,字幕服务端根据选时指令生成第二视频同步请求,并向播放端发送第二视频同步请求,播放端接收第二视频同步请求,根据第二视频同步请求生成第二播放时间更新信息,向字幕服务端发送第二播放时间更新信息,字幕服务端接收第二播放时间更新信息,并根据第二播放时间更新信息从多个字幕对象中确定当前目标字幕对象,显示当前目标字幕对象,重新开始字幕对象的同步渲染处理。
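暂停、恢复、选时三类播放控制指令对字幕对象显示的影响,可以用如下极简状态机示意(SubtitleService、get_position 均为示例性假设;跨进程发送视频同步请求/播放时间更新信息的细节以回调函数代替):

```python
class SubtitleService:
    """播放控制指令处理的简化示意:暂停时不刷新当前字幕对象,
    恢复播放/选时时向播放端请求播放时间更新信息并重新定位。
    get_position 模拟播放端对视频同步请求返回的播放时间更新信息(毫秒)。"""

    def __init__(self, queue, get_position):
        self.queue = queue              # (start_ms, end_ms, content) 有序队列
        self.get_position = get_position
        self.paused = False
        self.index = 0

    def on_pause(self):
        self.paused = True              # 暂停播放指令:保持当前字幕对象不变

    def on_resume(self):
        self.paused = False
        self._resync()                  # 恢复播放指令:发送同步请求并重新定位

    def on_seek(self):
        self._resync()                  # 选时指令:播放位置变化,重新定位

    def _resync(self):
        pos = self.get_position()       # 相当于接收播放时间更新信息
        self.index = next(
            (i for i, (s, e, _) in enumerate(self.queue) if s <= pos < e),
            len(self.queue) - 1,
        )

    def current(self):
        return self.queue[self.index]

queue = [(0, 1000, ""), (1000, 3000, "你好"),
         (3000, 5000, ""), (5000, 6500, "再见")]
svc = SubtitleService(queue, lambda: 5200)
svc.on_seek()
print(svc.current())  # 输出: (5000, 6500, '再见')
```

该示意仅体现控制流:真正的实现中,同步请求与时间更新信息经由字幕服务端与播放端之间的跨进程通信传递。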
本实施例的字幕服务端独立于播放端存在,字幕服务端可以与安卓(Android)系统的播放器媒体框架进行通信和信息交互;字幕服务端能够独立于其他模块,在其他模块崩溃时不受其影响而仍能正常工作。字幕服务端被设计为一个可供播放器远程调用的后台服务,可以非常容易地扩展和实现,覆盖更多的外挂字幕场景,并可以方便地扩展图形字幕。字幕服务端通过与其他进程较少的交互就能实现较高的字幕同步准确性,提升系统性能。此外,本实施例中按连续时间增长获取真实字幕语句、无字幕时段的字幕对象,可以依据时间顺序依次显示字幕对象,减少字幕同步过程中的时间比较过程,能够及时刷新字幕,提升观影体验。
本公开实施提供的字幕同步方法,从所述播放端接收码流文件名称,根据所述码流文件名称获取对应的字幕文件;对所述字幕文件进行解析,得到多个字幕语句;对所述多个字幕语句进行实例化处理,得到多个字幕对象;从所述播放端接收当前视频播放时间,根据所述当前视频播放时间从多个所述字幕对象中确定当前目标字幕对象;显示所述当前目标字幕对象。本实施例提供的字幕同步方案,将字幕文件的字幕语句实例化处理为多个字幕对象,基于当前视频播放时间从多个字幕对象中确定当前目标字幕对象,并显示当前目标字幕对象,减少跨进程通信的次数,提升字幕同步效果。
实施例2
此外,本公开实施例提供了一种字幕同步装置,应用于字幕服务端,所述字幕服务端与播放端通信连接。
具体的,如图2所示,字幕同步装置200包括:
获取模块201,用于从所述播放端接收码流文件名称,根据所述码流文件名称获取对应的字幕文件;
解析模块202,用于对所述字幕文件进行解析,得到多个字幕语句;
处理模块203,用于对所述多个字幕语句进行实例化处理,得到多个字幕对象;
确定模块204,用于从所述播放端接收当前视频播放时间,根据所述当前视频播放时间从多个所述字幕对象中确定当前目标字幕对象;
显示模块205,用于显示所述当前目标字幕对象。
在一实施方式中,解析模块202,还用于对所述字幕文件进行解析,得到真实字幕语句和无字幕时段;
根据所述无字幕时段生成空字幕语句。
在一实施方式中,处理模块203,还用于根据每个所述字幕语句对应的起始显示时间和结束显示时间,对每个所述字幕语句进行实例化处理,得到多个字幕对象;
按照预设显示时间顺序,将所述多个字幕对象存储在字幕队列中,其中,所述字幕队列中的每个字幕对象的起始显示时间为相邻上一字幕对象的结束显示时间。
在一实施方式中,处理模块203,还用于根据所述当前视频播放时间和每个所述字幕对象对应的起始显示时间进行匹配,在所述字幕队列中确定与所述当前视频播放时间匹配的当前目标字幕对象。
在一实施方式中,处理模块203,还用于依次从所述字幕队列中获取所述当前目标字幕对象的相邻下一个字幕对象作为当前目标字幕对象进行显示。
在一实施方式中,显示模块205,还用于获取所述当前目标字幕对象的结束显示时间;
根据所述结束显示时间结束显示所述当前目标字幕对象。
在一实施方式中,显示模块205,还用于从所述播放端接收播放控制指令,根据所述播放控制指令向所述播放端发送视频同步请求;
从所述播放端接收播放时间更新信息,根据所述播放时间更新信息从所述多个字幕对象中确定当前目标字幕对象,其中,所述播放时间更新信息由所述播放端根据所述视频同步请求生成。
本实施例提供的字幕同步装置200可以实现实施例1所提供的字幕同步方法,为避免重复,在此不再赘述。
本实施例提供的字幕同步装置,从所述播放端接收码流文件名称,根据所述码流文件名称获取对应的字幕文件;对所述字幕文件进行解析,得到多个字幕语句;对所述多个字幕语句进行实例化处理,得到多个字幕对象;从所述播放端接收当前视频播放时间,根据所述当前视频播放时间从多个所述字幕对象中确定当前目标字幕对象;显示所述当前目标字幕对象。本实施例提供的字幕同步方案,将字幕文件的字幕语句实例化处理为多个字幕对象,基于当前视频播放时间从多个字幕对象中确定当前目标字幕对象,并显示当前目标字幕对象,减少跨进程通信的次数,提升字幕同步效果。
实施例3
此外,本公开实施例提供了一种机顶盒,包括存储器以及处理器,所述存储器存储有计算机程序,所述计算机程序在所述处理器上运行时执行上述实施例1所提供的字幕同步方法。
本实施例提供的机顶盒可以实现实施例1所提供的字幕同步方法,为避免重复,在此不再赘述。
实施例4
本公开实施例还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序在处理器上运行时执行实施例1所提供的字幕同步方法。
在本实施例中,计算机可读存储介质可以为只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、磁碟或者光盘等。
本实施例提供的计算机可读存储介质可以实现实施例1所提供的字幕同步方法,为避免重复,在此不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者终端不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者终端所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者终端中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本公开各个实施例所述的方法。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。

Claims (10)

  1. 一种字幕同步方法,其中,所述字幕同步方法应用于字幕服务端,所述字幕服务端与播放端通信连接,所述方法包括:
    从所述播放端接收码流文件名称,根据所述码流文件名称获取对应的字幕文件;
    对所述字幕文件进行解析,得到多个字幕语句;
    对所述多个字幕语句进行实例化处理,得到多个字幕对象;
    从所述播放端接收当前视频播放时间,根据所述当前视频播放时间从多个所述字幕对象中确定当前目标字幕对象;
    显示所述当前目标字幕对象。
  2. 根据权利要求1所述的字幕同步方法,其中,所述对所述字幕文件进行解析,得到多个字幕语句,包括:
    对所述字幕文件进行解析,得到真实字幕语句和无字幕时段;
    根据所述无字幕时段生成空字幕语句。
  3. 根据权利要求1所述的字幕同步方法,其中,所述对所述多个字幕语句进行实例化处理,得到多个字幕对象,包括:
    根据每个所述字幕语句对应的起始显示时间和结束显示时间,对每个所述字幕语句进行实例化处理,得到多个字幕对象;
    按照预设显示时间顺序,将所述多个字幕对象存储在字幕队列中,其中,所述字幕队列中的每个字幕对象的起始显示时间为相邻上一字幕对象的结束显示时间。
  4. 根据权利要求3所述的字幕同步方法,其中,所述根据所述当前视频播放时间从多个所述字幕对象中确定当前目标字幕对象,包括:
    根据所述当前视频播放时间和每个所述字幕对象对应的起始显示时间进行匹配,在所述字幕队列中确定与所述当前视频播放时间匹配的当前目标字幕对象。
  5. 根据权利要求3所述的方法,其中,在所述当前目标字幕对象显示结束后,所述字幕同步方法还包括:
    依次从所述字幕队列中获取所述当前目标字幕对象的相邻下一个字幕对象作为当前目标字幕对象进行显示。
  6. 根据权利要求1所述的方法,其中,所述显示所述当前目标字幕对象,还包括:
    获取所述当前目标字幕对象的结束显示时间;
    根据所述结束显示时间结束显示所述当前目标字幕对象。
  7. 根据权利要求1所述的方法,其中,所述显示所述当前目标字幕对象,包括:
    从所述播放端接收播放控制指令,根据所述播放控制指令向所述播放端发送视频同步请求;
    从所述播放端接收播放时间更新信息,根据所述播放时间更新信息从所述多个字幕对象中确定当前目标字幕对象,其中,所述播放时间更新信息由所述播放端根据所述视频同步请求生成。
  8. 一种字幕同步装置,其中,所述字幕同步装置应用于字幕服务端,所述字幕服务端与播放端通信连接,所述装置包括:
    获取模块,用于从所述播放端接收码流文件名称,根据所述码流文件名称获取对应的字幕文件;
    解析模块,用于对所述字幕文件进行解析,得到多个字幕语句;
    处理模块,用于对所述多个字幕语句进行实例化处理,得到多个字幕对象;
    确定模块,用于从所述播放端接收当前视频播放时间,根据所述当前视频播放时间从多个所述字幕对象中确定当前目标字幕对象;
    显示模块,用于显示所述当前目标字幕对象。
  9. 一种机顶盒,其中,所述机顶盒包括存储器以及处理器,所述存储器存储有计算机程序,所述计算机程序在所述处理器运行时执行权利要求1至7中任一项所述的字幕同步方法。
  10. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序在处理器上运行时执行权利要求1至7中任一项所述的字幕同步方法。
PCT/CN2023/078379 2022-03-09 2023-02-27 字幕同步方法、装置、机顶盒及计算机可读存储介质 WO2023169240A1 (zh)
