WO2022052838A1 - Video file processing method, apparatus, electronic device, and computer storage medium - Google Patents
Video file processing method, apparatus, electronic device, and computer storage medium
- Publication number
- WO2022052838A1 (PCT/CN2021/115733; CN2021115733W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- identification information
- video file
- interactive
- interaction
- preset
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/11—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4753—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for user identification, e.g. by entering a PIN or password
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4756—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/485—End-user interface for client configuration
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to the technical field of video processing, and in particular, to a video file processing method, apparatus, electronic device, and computer-readable storage medium.
- users can watch videos in video applications, and such applications usually provide a comment area or a message area.
- users can also use the "@" function to interact with other users.
- when user A @'s user B in a message, the system will prompt user B.
- user B can jump to the comment area to view the message according to the prompt, or, without user B viewing the comment area, the system can push user A's message to user B separately.
- the present disclosure provides a video file processing method, apparatus, electronic device, and computer-readable storage medium, which can solve the problem of poor interaction between users when watching videos.
- the technical solution is as follows:
- in a first aspect, a method for processing a video file is provided, comprising:
- in a preset first editing interface for an original video file, when a trigger instruction for a preset first interactive function is received, displaying a preset second editing interface; the second editing interface includes a preset interaction label;
- in a second aspect, a device for processing video files is provided, comprising:
- the first processing module is configured to, in the preset first editing interface for the original video file, display the preset second editing interface when receiving a trigger instruction for the preset first interactive function; the second editing interface includes a preset interactive label;
- a second processing module configured to receive the first identification information of the interactive object determined by the editor in the interactive label, and obtain the interactive label containing the first identification information;
- the third processing module is configured to generate a target video file including the interaction tag when receiving the editing completion instruction initiated by the editor, and publish the target video file.
- in a third aspect, an electronic device is provided, comprising a processor, a memory, and a bus;
- the bus is configured to connect the processor and the memory;
- the memory is configured to store operation instructions;
- the processor is configured to invoke the operation instructions, and the executable instructions cause the processor to perform operations corresponding to the video file processing method shown in the first aspect of the present disclosure.
- in a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored.
- when the program is executed by a processor, the method for processing a video file shown in the first aspect of the present disclosure is implemented.
- in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, a preset second editing interface is displayed, and the second editing interface includes the preset interaction label; the first identification information of the interactive object determined by the editor is then received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file containing the interactive tag is generated, and the target video file is published.
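- The three steps summarized above (display the second editing interface on a trigger instruction, receive the interactive object's identification information in the label, then generate and publish the tagged target video file) can be sketched as follows. This is a hedged illustration only; the class, method, and field names are assumptions and not part of the disclosure:

```python
class VideoEditorClient:
    """Illustrative stand-in for the application client described above."""

    def __init__(self):
        self.second_editing_interface_shown = False
        self.interactive_label = None

    def on_trigger_instruction(self):
        # Step 1: in the first editing interface, a trigger instruction for
        # the first interactive function displays the second editing interface.
        self.second_editing_interface_shown = True

    def receive_identification(self, first_identification_info):
        # Step 2: the editor determines the interactive object's identification
        # information, yielding an interactive label that contains it.
        self.interactive_label = {"@": first_identification_info}
        return self.interactive_label

    def complete_editing(self, original_video):
        # Step 3: on the editing-completion instruction, generate a target
        # video file that carries the interactive tag (publishing, i.e. the
        # upload to the server, is omitted from this sketch).
        return {"video": original_video, "tag": self.interactive_label}


client = VideoEditorClient()
client.on_trigger_instruction()
client.receive_identification("user_B_id")
target = client.complete_editing("original.mp4")
print(target["tag"])  # {'@': 'user_B_id'}
```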
- FIG. 1 is a schematic flowchart of a method for processing a video file according to an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart of a method for processing a video file according to another embodiment of the present disclosure
- FIG. 3 is a schematic diagram of a first editing interface in the present disclosure
- FIGS. 4A to 4C are schematic diagrams of an interface for editing an interactive label in a second editing interface according to the present disclosure
- FIGS. 5A to 5C are further schematic diagrams of an interface for editing an interactive label in the second editing interface according to the present disclosure;
- FIG. 6 is a schematic diagram of a playback interface when an interactive object plays a target video file in the present disclosure
- FIG. 7 is a schematic diagram of a playback interface after an interactive object clicks on the second prompt information in the present disclosure
- FIG. 8 is a schematic diagram of a playback interface when an editor plays a target video file in the present disclosure
- FIG. 9 is a schematic diagram of a playback interface when other users play a target video file in the present disclosure.
- FIG. 10 is a schematic diagram of a playback interface when any user in the present disclosure plays an updated target video file
- FIG. 11 is a schematic structural diagram of an apparatus for processing a video file according to another embodiment of the present disclosure.
- FIG. 12 is a schematic structural diagram of an electronic device for processing video files according to another embodiment of the present disclosure.
- the term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
- the term "based on" means "based at least in part on".
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
- the video file processing method, device, electronic device and computer-readable storage medium provided by the present disclosure are intended to solve the above technical problems in the prior art.
- a method for processing a video file includes:
- Step S101: in the preset first editing interface for the original video file, when receiving a trigger instruction for the preset first interactive function, display a preset second editing interface; the second editing interface includes a preset interactive label;
- an application client for playing video files and editing video files is installed in the terminal.
- the application client is preset with at least one playing interface for playing video files and at least one editing interface for editing video files.
- the application client for playing video files and the application client for editing video files may be the same or different, which may be set according to actual needs in practical applications; this is not limited in this embodiment of the present disclosure.
- the original video file may be a video file that the editor has finished shooting, which the editor can then edit.
- the editor can edit the original video file in each editing interface of the application client to obtain the edited video file, and then upload the edited video file to the server to share with others; or, without editing, upload the original video file directly to the server to share with others.
- the editor opens the preset first editing interface, and then imports the original video file and edits the original video file.
- the interactive function may be an "@" function; for example, an editor can @ one of his or her own friends.
- when the application client receives the trigger instruction for the first interactive function, it can display a preset second editing interface, and the second editing interface includes a preset interactive label, in which the editor can edit the identification information of the interactive object.
- Step S102: receive the first identification information of the interactive object determined by the editor in the interactive label, and obtain the interactive label containing the first identification information;
- the editor can determine the first identification information of the interactive object, so as to obtain the interactive label including the first identification information. For example, when the interactive function is @friend, the interactive object corresponding to the first interactive function is friend B whom editor A @'s, and the first identification information is B's ID (identity number), so that an interactive tag containing B's ID is obtained; this tag can be displayed in the video image when the video file is played.
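- As a hedged illustration of the example above (editor A @'s friend B, and the tag carries B's ID plus the text shown over the video image), one possible data shape is the following; all field and function names are assumptions made for illustration:

```python
from dataclasses import dataclass


@dataclass
class InteractiveTag:
    interactive_object_id: str  # first identification information, e.g. B's ID
    display_text: str           # text rendered over the video image


def make_at_tag(friend_id: str, friend_name: str) -> InteractiveTag:
    # When editor A @'s friend B, the tag stores B's ID and displays "@B".
    return InteractiveTag(interactive_object_id=friend_id,
                          display_text=f"@{friend_name}")


tag = make_at_tag("B_id_123", "B")
print(tag.display_text)  # @B
```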
- Step S103: when an editing completion instruction initiated by the editor is received, a target video file containing an interactive tag is generated, and the target video file is published.
- a virtual button for generating the target video file can be preset in the editing interface.
- the application client can generate the target video file containing the interactive tag based on the editing completion instruction, and publish the target video file.
- in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, a preset second editing interface is displayed, and the second editing interface includes a preset interactive label; the first identification information of the interactive object corresponding to the first interactive function is then received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file including the interactive tag is generated, and the target video file is published.
- a method for processing a video file includes:
- Step S201: in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, the preset second editing interface is displayed; the second editing interface includes a preset interactive label;
- an application client for playing video files and editing video files is installed in the terminal.
- the application client is preset with at least one playing interface for playing video files and at least one editing interface for editing video files.
- the terminal device may have the following characteristics:
- in terms of hardware, the device has a central processing unit, a memory, an input component, and an output component; that is to say, the device is often a microcomputer device with communication functions. In addition, it can have a variety of input methods, such as a keyboard, mouse, touch screen, microphone, and camera, and the input can be adjusted as needed. At the same time, devices often have multiple output methods, such as receivers and display screens, which can also be adjusted as needed;
- in terms of software, the device must have an operating system, such as Windows Mobile, Symbian, Palm, Android, or iOS. These operating systems are becoming more and more open, and personalized applications developed on these open operating system platforms emerge in an endless stream, such as address books, calendars, notepads, calculators, and various games, meeting the customized needs of users;
- the device has flexible access modes and high-bandwidth communication performance, and can automatically adjust the selected communication mode according to the selected service and the environment, so as to facilitate use by users.
- the device can support GSM (Global System for Mobile Communications), WCDMA (Wideband Code Division Multiple Access), CDMA2000 (Code Division Multiple Access 2000), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), Wi-Fi (Wireless Fidelity), WiMAX (Worldwide Interoperability for Microwave Access), etc., so as to adapt to networks of multiple standards, supporting not only voice services but also a variety of wireless data services;
- the equipment pays more attention to humanization, personalization, and multi-functionality.
- the equipment has changed from an "equipment-centered" model to a "people-centered" model, integrating embedded computing, control technology, artificial intelligence technology, and biometric authentication technology, which fully reflects a people-oriented purpose.
- the device can adjust the settings according to individual needs, making it more personalized.
- the device itself integrates a lot of software and hardware, and its functions are becoming more and more powerful.
- the application client for playing video files and the application client for editing video files may be the same or different, which may be set according to actual needs in practical applications; this is not limited in this embodiment of the present disclosure.
- the original video file may be a video file that the editor has finished shooting, which the editor can then edit.
- the editor can edit the original video file in each editing interface of the application client to obtain the edited video file, and then upload the edited video file to the server to share with others; or, without editing, upload the original video file directly to the server to share with others.
- the editor opens the preset first editing interface, and then imports the original video file and edits the original video file.
- the interactive function may be an "@" function; for example, an editor can @ one of his or her own friends.
- when the editor initiates a trigger instruction for the first interactive function, the application client can display the preset second editing interface after receiving the trigger instruction.
- the trigger instruction is generated in the following manner:
- the face recognition of the original video file is successful; or,
- the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
- the application client can perform face recognition on the original video file, and if the face recognition is successful, a trigger instruction can be generated; or, a virtual button corresponding to the first interactive function is preset in the first editing interface, and when the editor clicks the virtual button, the application client can generate a trigger instruction.
- the application client can perform face recognition on the original video file by first playing the original video file and then performing face recognition on the played video images; alternatively, the application client can play the original video file in the background and perform face recognition there.
- other methods for performing face recognition on video files are also applicable to the embodiments of the present disclosure, which are not limited in the embodiments of the present disclosure.
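- The two trigger paths described above (successful face recognition on the played video images, or the editor clicking the virtual button) can be sketched minimally as follows; the frame representation and the recognizer are stand-in assumptions, not any real face-recognition API:

```python
def face_recognition_trigger(frames, recognize_face) -> bool:
    # Play the video (in the foreground or in the background) and run face
    # recognition on each decoded frame; any success yields a trigger.
    return any(recognize_face(frame) for frame in frames)


def generate_trigger_instruction(face_recognized: bool,
                                 button_clicked: bool) -> bool:
    # A trigger instruction is generated when either path succeeds.
    return face_recognized or button_clicked


# Stand-in recognizer: flags frames containing a portrait marker.
frames = ["scenery", "portrait", "scenery"]
recognized = face_recognition_trigger(frames, lambda f: f == "portrait")
print(generate_trigger_instruction(recognized, False))  # True
```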
- the editor edits the original video in the first editing interface as shown in FIG. 3, and when the application client recognizes that there is a portrait in the current video image, the first prompt information 301 of the first interactive function can be displayed in the first editing interface, where the first interactive function may correspond to the virtual button 302 in the first editing interface.
- other virtual buttons may also be preset in the first editing interface, which may be set according to actual requirements in practical applications, which are not limited in the embodiments of the present disclosure.
- Step S202: receive the first identification information of the interactive object determined by the editor in the interactive label, and obtain the interactive label containing the first identification information;
- the editor can determine the first identification information of the interactive object, so as to obtain the interactive label including the first identification information. For example, when the interactive function is @friend, the interactive object corresponding to the first interactive function is friend B whom editor A @'s, and the first identification information is B's ID (identity number), so that an interactive tag containing B's ID is obtained; this tag can be displayed in the video image when the video file is played.
- the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
- the first identification information of the interactive object determined by the editor is received in the interactive label, and the interactive label containing the first identification information is obtained, including:
- when a selection instruction initiated by the editor for any identification information in the list is received, an interactive label including that identification information is generated.
- the second editing interface may include a preset interaction label and a preset identification information list, where the identification information list includes identification information of at least one interactive object.
- the application client may display the preset interactive label and the preset identification information list in the second editing interface.
- when the editor initiates a selection instruction for any identification information, the application client enters the identification information corresponding to the selection instruction into the preset interactive label; when the editor determines to generate an interactive label, an interactive label containing that identification information is generated.
- a preset interaction label 401 and an identification information list 402 are displayed, where the "@" symbol of the interaction function is preset in the interaction label.
- when the editor selects "Little Star" in the identification information list 402, the application client inputs "Little Star" into the interaction label 401, as shown in FIG. 4B.
- when the editor initiates the generation instruction for generating the interactive label, the application client, after receiving the generation instruction, generates the interactive label containing "Little Star", as shown in FIG. 4C.
- the identification information list in the second editing interface may be the editor's friend list, the editor's recently contacted friends, or other types of identification information lists, which may be set according to actual requirements; this is not limited in this embodiment of the present disclosure.
- the editor can also change the style of the interactive label. For example, in the interactive label shown in FIG. 4C , when the editor clicks on the interactive label, the style of the interactive label can be changed.
- the styles of the interactive labels may also be changed in other ways, which are not limited in this embodiment of the present disclosure.
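- The list-selection path above (the editor picks an entry such as "Little Star" from the identification information list, and the client enters it into the preset "@" label) might look like the following sketch; the function name and list contents are illustrative assumptions:

```python
def build_label_from_list(identification_list, selected_index):
    # The selection instruction carries which entry the editor chose; the
    # client enters that identification information into the preset "@" label.
    chosen = identification_list[selected_index]
    return f"@{chosen}"


friends = ["Little Star", "B", "C"]  # e.g. the editor's friend list
label = build_label_from_list(friends, 0)
print(label)  # @Little Star
```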
- the interactive label includes a preset first text box
- the first identification information of the interactive object determined by the editor is received in the interactive label, and the interactive label containing the first identification information is obtained, including:
- the second editing interface may also include a preset first text box.
- the application client may display the preset first text box in the second editing interface.
- the editor can directly input the instruction "@" of the interactive function and the identification information of the interactive object in the first text box, and then determine to generate an interactive label to generate an interactive label including any identification information.
- a preset first text box 501 is displayed, and the editor can input "@Little Star" in the first text box, as shown in FIG. 5B; when the editor initiates the generation instruction for generating the interactive label, the application client, after receiving the generation instruction, generates the interactive label containing "Little Star", as shown in FIG. 4C.
- a preset list of identification information is displayed, as shown in FIG. 5C .
- the editor can directly select the interactive object without inputting the identification information of the interactive object, which provides convenience for the editor.
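- The text-box path above (typing "@" alone pops up the identification information list so the editor can select an interactive object directly, while typing "@" plus identification information produces the label) can be sketched as follows; the handler name and return shapes are assumptions for illustration:

```python
def handle_text_box_input(text, identification_list):
    if text == "@":
        # Typing only "@" displays the preset identification information list
        # so the editor can pick an interactive object without typing its ID.
        return {"show_list": identification_list}
    if text.startswith("@"):
        # "@" plus identification information becomes the label directly.
        return {"label": text}  # e.g. "@Little Star"
    return {}


print(handle_text_box_input("@", ["Little Star"]))  # shows the list
print(handle_text_box_input("@Little Star", []))    # {'label': '@Little Star'}
```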
- interaction object and the object corresponding to face recognition may be the same or different.
- for example, if the object for which face recognition succeeded in the original video file is A, the interaction object that the editor @ can be A or B.
- the interaction tag may include identification information of one interaction object, or may include identification information of multiple interaction objects; for example, the editor may @ three interaction objects A, B, and C at the same time.
- Step S203: when receiving the editing completion instruction initiated by the editor, generate a target video file containing the interactive tag, and publish the target video file;
- a virtual button for generating the target video file can be preset in the editing interface.
- the application client can generate the target video file containing the interactive label based on the editing completion instruction and publish the target video file. For example, when the editor clicks "OK" in the lower right corner as shown in FIG. 4C, the editing completion instruction is triggered, and the application client generates the target video file containing the interactive tag based on the editing completion instruction.
- in practice, the target video file can be uploaded to the preset server for publishing; any user, including the editor of the target video file, can then initiate a playback request, and the preset server sends the target video file after receiving the playback request, thereby realizing sharing of the target video file.
- Step S204: when receiving a playback instruction for the target video file initiated by a player, acquire the target video file and the second identification information of the player;
- the application client can generate a playback request based on the playback instruction and send the playback request to the preset server to obtain the target video file, while also obtaining the second identification information of the player.
- the application client can obtain the second identification information of the player in addition to obtaining the target video file from the preset server.
- Step S205: if the second identification information is the same as the first identification information, then when the target video file is played, display the first identification information and the preset second prompt information of the second interactive function in the interactive label;
- if the acquired second identification information is the same as the above-mentioned first identification information, it means that the player is the above-mentioned interactive object; then, when the target video file is played in the playback interface, an interactive label is displayed at the same time, and the interactive label includes the first identification information of the interactive object and the preset second prompt information of the second interactive function, where the second interactive function can be a "comment" function and the second prompt information can be information that prompts the interactive object to comment.
- the target video file can be played and an interactive label displayed in the playback interface, where the interactive label includes the first identification information "@Little Star" and the second prompt information "click here to comment".
- Step S206: when receiving a click instruction for the second prompt information initiated by the player, display a preset second text box;
- when the player initiates a click instruction for the second prompt information, the application client can display a preset second text box after receiving the click instruction; the second text box is used to receive the interaction information input by the interactive object and is in an editable state.
- the preset second text box 701 can be displayed, and the second text box is in an editable state.
- Step S207: receive the interaction information input in the second text box;
- the interactive object can input interactive information in the second text box.
- the interactive object inputs the interactive information of "la la la la la la la la" in the second text box.
- if there is no interaction information in the interaction label, the second prompt information can be displayed; if there is interaction information in the interaction label, the interaction information can be directly displayed.
- Step S208: when a confirmation instruction is received, display the updated interaction label; the updated interaction label includes the interaction information;
- the application client sends the interaction information to the preset server, and the preset server uses the interaction information to update the interaction label of the target video file, so as to obtain an updated target video file containing the updated interaction label.
- after the preset server updates and obtains the updated target video file, any user who initiates a playback request obtains the updated target video file; when the user watches the updated target video file, he can see the updated interaction label, which includes the interaction information.
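A minimal server-side sketch of this update step is shown below. The storage layout and names (`video_store`, `update_interaction_label`) are assumptions for illustration, not the embodiment's API:

```python
# Illustrative sketch (hypothetical storage layout): the preset server keeps
# the interactive tag with the target video file and writes the received
# interaction information into it, producing the updated target video file.

video_store = {
    "video-1": {"tag": {"first_identification_info": "Little Star",
                        "interaction_info": None}},
}

def update_interaction_label(video_id, interaction_info):
    """Update the stored video file's interaction label with the comment."""
    video = video_store[video_id]
    video["tag"]["interaction_info"] = interaction_info
    return video  # subsequent playback requests receive this updated file

updated = update_interaction_label("video-1", "la la la la la la la la")
```

Because the update happens in place on the stored record, every later playback request for the same file automatically carries the comment.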
- Step S209: if the second identification information is different from the first identification information and is the same as the editor's third identification information, then when the target video file is played, display the first identification information and the preset third prompt information in the interactive label;
- if the second identification information is different from the first identification information and is the same as the editor's third identification information, it means that the player is not the interactive object but the editor; then the target video file is played in the playback interface and an interactive label is displayed at the same time, where the interactive label includes the first identification information and the preset third prompt information.
- the target video file can be played and the interactive tag displayed in the playback interface, where the interactive tag includes the first identification information "@Little Star" and the preset third prompt information "friend comments will be displayed here".
- Step S2010: if the second identification information is different from both the first identification information and the editor's third identification information, then when the target video file is played, display in the interactive label the first identification information and a data interface for viewing the related information corresponding to the first identification information;
- in this case, when the target video file is played in the playback interface, the interactive label includes the first identification information and a data interface for viewing the related information corresponding to the first identification information, such as a data interface for viewing the interactive object's personal home page, so that the user can click the data interface to view the personal home page of the interactive object.
- the target video file can be played and the interactive label displayed in the playback interface, where the interactive label includes the first identification information "@Little Star" and the related information entry "view personal homepage" corresponding to the first identification information; when the player clicks "View Personal Homepage", the personal homepage of "Little Star" can be displayed in the application client.
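The three cases of steps S205, S209, and S2010 reduce to comparing the player's identification information with those of the interactive object and the editor. A hedged sketch of that branch follows; the function and key names are illustrative, not from the disclosure:

```python
# Illustrative sketch of the branch in steps S205 / S209 / S2010
# (hypothetical function and key names, not the embodiment's API).

def label_content(second_id, first_id, third_id):
    """Decide what the interactive label shows for the current player.

    second_id: identification information of the player
    first_id:  identification information of the interactive object
    third_id:  identification information of the editor
    """
    if second_id == first_id:
        # Step S205: the player is the interactive object.
        return {"show": first_id, "prompt": "click here to comment"}
    if second_id == third_id:
        # Step S209: the player is the editor.
        return {"show": first_id,
                "prompt": "friend comments will be displayed here"}
    # Step S2010: the player is any other user.
    return {"show": first_id, "link": "view personal homepage"}
```

All three branches display the first identification information; only the accompanying prompt or data interface differs.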
- after the update, a playback request initiated by any user to the preset server is a playback request for the updated target video file, and the updated target video file can be obtained.
- in practice, the playback request only needs to include the identification information of the video file, and the latest video file can be obtained according to the identification information in the playback request; that is to say, when the preset server receives the playback request, if the target video file is stored in the preset server, it delivers the target video file, and if the updated target video file is stored, it delivers the updated target video file, without the user needing to distinguish between them.
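Because the playback request carries only the video file's identification information, the preset server can always hand back whichever version it currently stores. A sketch under assumed storage semantics (the class and method names `PresetServer`, `publish`, `handle_playback_request` are hypothetical):

```python
# Illustrative sketch: the preset server stores one record per video file
# identification information and overwrites it on update, so a playback
# request by that identification transparently returns the latest version.

class PresetServer:
    def __init__(self):
        self._files = {}

    def publish(self, video_id, video_file):
        # Publishing the original and storing the update use the same id.
        self._files[video_id] = video_file

    def handle_playback_request(self, video_id):
        # Deliver whatever is stored: the original or the updated file.
        return self._files[video_id]

server = PresetServer()
server.publish("video-1", {"version": "original"})
server.publish("video-1", {"version": "updated"})   # update overwrites
latest = server.handle_playback_request("video-1")
```

Keying storage on the identification information alone is what spares the user from distinguishing original and updated files.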
- after receiving the updated target video file delivered by the preset server, the application client can play the target video file in the playback interface and display the updated interactive label at the same time.
- the target video file is played in the playback interface as shown in FIG. 10, and the updated interactive label is displayed at the same time, where the updated interactive label includes the first identification information "@Little Star" and the interaction information "Little Star's comment: la la la la la la la la".
- in the embodiment of the present disclosure, in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, the preset second editing interface is displayed, and the second editing interface includes a preset interactive label; the first identification information of the interactive object determined by the editor is then received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file containing the interactive label is generated and published. Because the target video file contains the identification information of the interactive object, when the interactive object browses the target video file, he can comment directly in the interactive tag, which does not affect the browsing of the video file while still enabling interaction, thereby improving the interactive experience of the interactive object.
- in addition, the editor can also directly view the interaction information from the updated interactive tag without any page-turning operations, thereby improving the editor's interactive experience.
- FIG. 11 is a schematic structural diagram of an apparatus for processing a video file provided by another embodiment of the present disclosure. As shown in FIG. 11 , the apparatus in this embodiment may include:
- the first processing module 1101 is configured to display a preset second editing interface when receiving a trigger instruction for a preset first interactive function in the preset first editing interface for the original video file; the second editing interface includes a preset interactive label;
- the second processing module 1102 is configured to receive the first identification information of the interaction object corresponding to the first interaction function in the interaction label, and obtain the interaction label including the first identification information;
- the third processing module 1103 is configured to generate a target video file containing an interactive tag when an editing completion instruction initiated by the editor is received, and publish the target video file.
- the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
- the second processing module is specifically used for:
- a selection instruction for any identification information in the identification information list is received; when a generation instruction for generating an interactive label is received, an interactive label including any identification information is generated.
- the interactive label includes a preset first text box
- the second processing module is specifically used for: receiving the identification information input in the first text box; and, when receiving a generation instruction for generating an interactive label, generating an interactive label containing the input identification information.
- the fourth processing module is used to acquire the target video file and the second identification information of the player when receiving a playback instruction for the target video file initiated by the player;
- the fifth processing module is configured to display the first identification information and the preset second prompt information of the second interactive function in the interactive label when the target video file is played if the second identification information is the same as the first identification information ;
- the sixth processing module is used to display the preset second text box when receiving the click instruction for the second prompt information initiated by the player;
- a receiving module for receiving the interaction information input in the second text box
- the seventh processing module is used for displaying the updated interaction label when receiving the confirmation instruction; the updated interaction label includes interaction information.
- the eighth processing module is used to display the first identification information in the interactive label when the target video file is played if the second identification information is different from the first identification information and is the same as the editor's third identification information, and The default third prompt message.
- the ninth processing module is used for, if the second identification information is different from both the first identification information and the editor's third identification information, displaying in the interactive label, when the target video file is played, the first identification information and a data interface for viewing the related information corresponding to the first identification information.
- the trigger instruction is generated in the following manner:
- face recognition performed on the original video file in the first editing interface succeeds; or,
- the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
- the video file processing apparatus of this embodiment can execute the video file processing methods shown in the first embodiment and the second embodiment of the present disclosure, and the implementation principles thereof are similar, which will not be repeated here.
- in the embodiment of the present disclosure, in the preset first editing interface for the original video file, when a trigger instruction for the preset first interactive function is received, the preset second editing interface is displayed, and the second editing interface includes a preset interactive label; the first identification information of the interactive object determined by the editor is then received in the interactive label, and the interactive label containing the first identification information is obtained; when the editing completion instruction initiated by the editor is received, a target video file containing the interactive label is generated and published. Because the target video file contains the identification information of the interactive object, when the interactive object browses the target video file, he can comment directly in the interactive tag, which does not affect the browsing of the video file while still enabling interaction, thereby improving the interactive experience of the interactive object. In addition, the editor can also directly view the interaction information from the updated interactive tag without any page-turning operations, thereby improving the editor's interactive experience.
- FIG. 12 shows a schematic structural diagram of an electronic device 1200 suitable for implementing an embodiment of the present disclosure.
- the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 12 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
- the electronic device includes a memory and a processor, where the processor may be referred to as the processing device 1201 described below, and the memory may include at least one of the read-only memory (ROM) 1202, the random access memory (RAM) 1203, and the storage device 1208 described below.
- the electronic device 1200 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1201, which may perform various appropriate actions and processes according to a program stored in the read-only memory (ROM) 1202 or a program loaded from the storage device 1208 into the random access memory (RAM) 1203.
- the RAM 1203 also stores various programs and data required for the operation of the electronic device 1200.
- the processing device 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204.
- An input/output (I/O) interface 1205 is also connected to bus 1204 .
- the following devices may be connected to the I/O interface 1205: input devices 1206 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 1207 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 1208 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 1209. The communication device 1209 may allow the electronic device 1200 to communicate wirelessly or by wire with other devices to exchange data.
- although FIG. 12 shows an electronic device 1200 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via the communication device 1209, or from the storage device 1208, or from the ROM 1202.
- when the computer program is executed by the processing apparatus 1201, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
- in some embodiments, the client and server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: in the preset first editing interface for the original video file, when receiving a trigger instruction for the preset first interactive function, display a preset second editing interface, the second editing interface including a preset interaction label; receive, in the interaction label, the first identification information of the interaction object determined by the editor, and obtain an interactive label including the first identification information; and, when receiving an editing completion instruction initiated by the editor, generate a target video file including the interactive label, and publish the target video file.
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
- the modules or units involved in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module or unit does not constitute a limitation on the unit itself.
- exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logical Devices (CPLDs) and more.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- more specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disk read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
- Example 1 provides a video file processing method, including:
- in a preset first editing interface for an original video file, when a trigger instruction for a preset first interactive function is received, a preset second editing interface is displayed; the second editing interface includes a preset interaction label;
- the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
- the receiving, in the interaction label, the first identification information of the interaction object determined by the editor, and obtaining the interaction label including the first identification information includes:
- the interactive label includes a preset first text box
- the receiving, in the interaction label, the first identification information of the interaction object determined by the editor, and obtaining the interaction label including the first identification information includes:
- the identification information input in the first text box is received; when a generation instruction for generating an interactive label is received, an interaction label including the input identification information is generated.
- if the second identification information is the same as the first identification information, when the target video file is played, the first identification information and the preset second prompt information of the second interactive function are displayed in the interactive label;
- the updated interaction label is displayed; the updated interaction label includes the interaction information.
- if the second identification information is different from the first identification information and is the same as the editor's third identification information, then when the target video file is played, the first identification information and the preset third prompt information are displayed in the interactive label.
- if the second identification information is different from both the first identification information and the editor's third identification information, then when the target video file is played, the first identification information and a data interface for viewing the related information corresponding to the first identification information are displayed in the interaction label.
- the trigger instruction is generated in the following manner:
- the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
- Example 2 provides the apparatus of Example 1, including:
- the first processing module is configured to display a preset second editing interface when receiving a trigger instruction for a preset first interactive function in the preset first editing interface for the original video file; the second editing interface includes a preset interactive label;
- a second processing module configured to receive the first identification information of the interaction object corresponding to the first interaction function in the interaction label, and obtain the interaction label including the first identification information
- the third processing module is configured to generate a target video file including the interaction tag when receiving the editing completion instruction initiated by the editor, and publish the target video file.
- the second editing interface includes a preset identification information list, and the identification information list includes identification information of at least one interactive object;
- the second processing module is specifically used for:
- a selection instruction for any identification information in the identification information list is received; when a generation instruction for generating an interactive label is received, an interactive label including the any identification information is generated.
- the interactive label includes a preset first text box
- the second processing module is specifically used for: receiving the identification information input in the first text box; and, when receiving a generation instruction for generating an interactive label, generating an interactive label containing the input identification information.
- a fourth processing module configured to acquire the target video file and the second identification information of the player when receiving a playback instruction for the target video file initiated by the player;
- a fifth processing module configured to, if the second identification information is the same as the first identification information, display in the interactive label, when the target video file is played, the first identification information and the preset second prompt information of the second interactive function;
- a sixth processing module configured to display a preset second text box when receiving a click instruction initiated by the player for the second prompt information
- a receiving module configured to receive the interaction information input in the second text box
- the seventh processing module is configured to display the updated interaction label when receiving the confirmation instruction; the updated interaction label includes the interaction information.
- the eighth processing module is used for, if the second identification information is different from the first identification information, and is the same as the editor's third identification information, when playing the target video file, in the The first identification information and the preset third prompt information are displayed in the interactive label.
- the ninth processing module is configured to, if the second identification information is different from both the first identification information and the editor's third identification information, display in the interactive label, when the target video file is played, the first identification information and a data interface for viewing the related information corresponding to the first identification information.
- the trigger instruction is generated in the following manner:
- face recognition performed on the original video file in the first editing interface succeeds; or,
- the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
Claims (10)
- A video file processing method, comprising: in a preset first editing interface for an original video file, when a trigger instruction for a preset first interactive function is received, displaying a preset second editing interface, wherein the second editing interface comprises a preset interactive label; receiving, in the interactive label, first identification information of an interaction object determined by an editor, to obtain an interactive label containing the first identification information; and when an editing completion instruction initiated by the editor is received, generating a target video file containing the interactive label and publishing the target video file.
- The video file processing method according to claim 1, wherein the second editing interface comprises a preset identification information list, and the identification information list comprises identification information of at least one interaction object; and receiving, in the interactive label, the first identification information of the interaction object determined by the editor to obtain the interactive label containing the first identification information comprises: receiving a selection instruction for any piece of identification information in the identification information list; and when a generation instruction for generating an interactive label is received, generating an interactive label containing that piece of identification information.
- The video file processing method according to claim 1, wherein the interactive label contains a preset first text box; and receiving, in the interactive label, the first identification information of the interaction object determined by the editor to obtain the interactive label containing the first identification information comprises: receiving identification information entered in the first text box; and when a generation instruction for generating an interactive label is received, generating an interactive label containing the entered identification information.
- The video file processing method according to claim 1, further comprising: when a play instruction for the target video file initiated by a player is received, acquiring the target video file and second identification information of the player; if the second identification information is the same as the first identification information, displaying, in the interactive label when the target video file is played, the first identification information and second prompt information of a preset second interactive function; when a click instruction initiated by the player on the second prompt information is received, displaying a preset second text box; receiving interaction information entered in the second text box; and when a confirmation instruction is received, displaying an updated interactive label, wherein the updated interactive label comprises the interaction information.
- The video file processing method according to claim 1, further comprising: if the second identification information is different from the first identification information but the same as third identification information of the editor, displaying, in the interactive label when the target video file is played, the first identification information and preset third prompt information.
- The video file processing method according to claim 1, further comprising: if the second identification information is different from both the first identification information and the third identification information of the editor, displaying, in the interactive label when the target video file is played, the first identification information and a data interface for viewing related information corresponding to the first identification information.
- The video file processing method according to claim 1, wherein the trigger instruction is generated in one of the following manners: face recognition on the original video file in the first editing interface succeeds; or the editor triggers a virtual button corresponding to the first interactive function in the first editing interface.
- A video file processing apparatus, comprising: a first processing module configured to display a preset second editing interface in a preset first editing interface for an original video file when a trigger instruction for a preset first interactive function is received, wherein the second editing interface comprises a preset interactive label; a second processing module configured to receive, in the interactive label, first identification information of an interaction object determined by an editor, to obtain an interactive label containing the first identification information; and a third processing module configured to, when an editing completion instruction initiated by the editor is received, generate a target video file containing the interactive label and publish the target video file.
- An electronic device, comprising a processor, a memory and a bus, wherein the bus is configured to connect the processor and the memory; the memory is configured to store operation instructions; and the processor is configured to perform, by invoking the operation instructions, the video file processing method according to any one of claims 1 to 7.
- A computer-readable storage medium, wherein the computer storage medium is configured to store computer instructions which, when run on a computer, cause the computer to perform the video file processing method according to any one of claims 1 to 7.
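Claims 4 to 6 amount to a three-way comparison between the player's identification information and that of the tagged interaction object and the editor. The following is a minimal sketch of that dispatch under those claim conditions; all names (`InteractiveLabel`, `render_label`, `confirm_interaction`) are hypothetical and this is not the claimed implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractiveLabel:
    first_id: str  # identification information of the tagged interaction object
    interactions: List[str] = field(default_factory=list)  # confirmed interaction info

def render_label(label: InteractiveLabel, second_id: str, third_id: str) -> dict:
    """Decide what the interactive label shows when the target video is played.

    second_id: the player's identification information (claim 4)
    third_id:  the editor's identification information (claims 5 and 6)
    """
    if second_id == label.first_id:   # claim 4: the tagged object is watching
        return {"id": label.first_id, "show": "second_prompt_information"}
    if second_id == third_id:         # claim 5: the editor is watching
        return {"id": label.first_id, "show": "third_prompt_information"}
    # claim 6: any other player gets a data interface to the related information
    return {"id": label.first_id, "show": "related_information_interface"}

def confirm_interaction(label: InteractiveLabel, text: str) -> InteractiveLabel:
    # End of claim 4: on a confirmation instruction, the interaction information
    # typed into the second text box is added to the updated interactive label.
    label.interactions.append(text)
    return label
```

For example, a player whose identification matches `first_id` would be shown the second prompt information, and any interaction text they confirm is appended to the label.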
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022564729A JP7481490B2 (ja) | 2020-09-09 | 2021-08-31 | Video file processing method, apparatus, electronic device and computer storage medium |
BR112023001285A BR112023001285A2 (pt) | 2020-09-09 | 2021-08-31 | Video file processing method and apparatus, electronic device and computer storage medium |
KR1020227036625A KR102687787B1 (ko) | 2020-09-09 | 2021-08-31 | Video file processing method, apparatus, electronic device and computer storage medium |
EP21865893.8A EP4093042A4 (en) | 2020-09-09 | 2021-08-31 | VIDEO FILE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE AND COMPUTER STORAGE MEDIUM |
US17/887,138 US11889143B2 (en) | 2020-09-09 | 2022-08-12 | Video file processing method and apparatus, electronic device, and computer storage medium |
US18/541,783 US20240114197A1 (en) | 2020-09-09 | 2023-12-15 | Video file processing method and apparatus, electronic device, and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010943738.7A CN112040330B (zh) | 2020-09-09 | 2020-09-09 | Video file processing method and apparatus, electronic device and computer storage medium |
CN202010943738.7 | 2020-09-09 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/887,138 Continuation US11889143B2 (en) | 2020-09-09 | 2022-08-12 | Video file processing method and apparatus, electronic device, and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022052838A1 true WO2022052838A1 (zh) | 2022-03-17 |
Family
ID=73585150
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/115733 WO2022052838A1 (zh) | 2020-09-09 | 2021-08-31 | Video file processing method and apparatus, electronic device and computer storage medium |
Country Status (6)
Country | Link |
---|---|
US (2) | US11889143B2 (zh) |
EP (1) | EP4093042A4 (zh) |
JP (1) | JP7481490B2 (zh) |
CN (1) | CN112040330B (zh) |
BR (1) | BR112023001285A2 (zh) |
WO (1) | WO2022052838A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112040330B (zh) | 2020-09-09 | 2021-12-07 | 北京字跳网络技术有限公司 | Video file processing method and apparatus, electronic device and computer storage medium |
CN113419800B (zh) * | 2021-06-11 | 2023-03-24 | 北京字跳网络技术有限公司 | Interaction method, apparatus, medium and electronic device |
CN113655930B (zh) * | 2021-08-30 | 2023-01-10 | 北京字跳网络技术有限公司 | Information publishing method, information display method, apparatus, electronic device and medium |
CN113741757B (zh) * | 2021-09-16 | 2023-10-17 | 北京字跳网络技术有限公司 | Method and apparatus for displaying reminder information, electronic device and storage medium |
CN114430499B (zh) * | 2022-01-27 | 2024-02-06 | 维沃移动通信有限公司 | Video editing method, video editing apparatus, electronic device and readable storage medium |
CN115941841A (zh) * | 2022-12-06 | 2023-04-07 | 北京字跳网络技术有限公司 | Associated information display method, apparatus, device, storage medium and program product |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130254816A1 (en) * | 2012-03-21 | 2013-09-26 | Sony Corporation | Temporal video tagging and distribution |
CN103873945A (zh) * | 2014-02-21 | 2014-06-18 | 周良文 | System and method for social interaction with objects in a video program |
CN105847913A (zh) * | 2016-05-20 | 2016-08-10 | 腾讯科技(深圳)有限公司 | Method, mobile terminal and system for controlling live video streaming |
CN106446056A (zh) * | 2016-09-05 | 2017-02-22 | 奇异牛科技(深圳)有限公司 | System and method for defining labels based on pictures on a mobile terminal |
CN108289057A (zh) * | 2017-12-22 | 2018-07-17 | 北京达佳互联信息技术有限公司 | Data sharing method, system and mobile terminal |
US10063910B1 (en) * | 2017-10-31 | 2018-08-28 | Rovi Guides, Inc. | Systems and methods for customizing a display of information associated with a media asset |
CN110378247A (zh) * | 2019-06-26 | 2019-10-25 | 腾讯科技(深圳)有限公司 | Virtual object recognition method and apparatus, storage medium and electronic apparatus |
CN110460578A (zh) * | 2019-07-09 | 2019-11-15 | 北京达佳互联信息技术有限公司 | Method and apparatus for establishing an association relationship, and computer-readable storage medium |
CN110868639A (zh) * | 2019-11-28 | 2020-03-06 | 北京达佳互联信息技术有限公司 | Video synthesis method and apparatus |
CN111325004A (zh) * | 2020-02-21 | 2020-06-23 | 腾讯科技(深圳)有限公司 | File commenting and viewing method |
CN112040330A (zh) * | 2020-09-09 | 2020-12-04 | 北京字跳网络技术有限公司 | Video file processing method and apparatus, electronic device and computer storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2014274171B2 (en) | 2013-05-30 | 2016-08-11 | Facebook, Inc. | Tag suggestions for images on online social networks |
JP5939659B2 (ja) | 2014-09-17 | 2016-06-22 | イーキューデザイン株式会社 | Photograph and moving image providing system |
CN104581409A (zh) * | 2015-01-22 | 2015-04-29 | 广东小天才科技有限公司 | Virtual interactive video playing method and apparatus |
CN107484019A (zh) * | 2017-08-03 | 2017-12-15 | 乐蜜有限公司 | Video file publishing method and apparatus |
CN110049266A (zh) * | 2019-04-10 | 2019-07-23 | 北京字节跳动网络技术有限公司 | Video data publishing method and apparatus, electronic device and storage medium |
CN111523053A (zh) * | 2020-04-26 | 2020-08-11 | 腾讯科技(深圳)有限公司 | Information stream processing method and apparatus, computer device and storage medium |
CN111580724B (zh) * | 2020-06-28 | 2021-12-10 | 腾讯科技(深圳)有限公司 | Information interaction method, device and storage medium |
-
2020
- 2020-09-09 CN CN202010943738.7A patent/CN112040330B/zh active Active
-
2021
- 2021-08-31 EP EP21865893.8A patent/EP4093042A4/en active Pending
- 2021-08-31 BR BR112023001285A patent/BR112023001285A2/pt unknown
- 2021-08-31 JP JP2022564729A patent/JP7481490B2/ja active Active
- 2021-08-31 WO PCT/CN2021/115733 patent/WO2022052838A1/zh active Application Filing
-
2022
- 2022-08-12 US US17/887,138 patent/US11889143B2/en active Active
-
2023
- 2023-12-15 US US18/541,783 patent/US20240114197A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240114197A1 (en) | 2024-04-04 |
EP4093042A1 (en) | 2022-11-23 |
JP7481490B2 (ja) | 2024-05-10 |
EP4093042A4 (en) | 2023-05-24 |
CN112040330B (zh) | 2021-12-07 |
US20220394319A1 (en) | 2022-12-08 |
JP2023522759A (ja) | 2023-05-31 |
BR112023001285A2 (pt) | 2023-03-21 |
KR20220156910A (ko) | 2022-11-28 |
CN112040330A (zh) | 2020-12-04 |
US11889143B2 (en) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022052838A1 (zh) | Video file processing method and apparatus, electronic device and computer storage medium | |
US11621022B2 (en) | Video file generation method and device, terminal and storage medium | |
WO2022152064A1 (zh) | Video generation method and apparatus, electronic device and storage medium | |
US20240121468A1 (en) | Display method, apparatus, device and storage medium | |
CN111970571B (zh) | Video production method, apparatus, device and storage medium | |
JP2023539815A (ja) | Interaction method, apparatus, device and medium for meeting minutes | |
WO2023103889A1 (zh) | Video processing method and apparatus, electronic device and storage medium | |
CN111343074A (zh) | Video processing method, apparatus, device and storage medium | |
US11886484B2 (en) | Music playing method and apparatus based on user interaction, and device and storage medium | |
CN112241397A (zh) | Multimedia file sharing method and apparatus, electronic device and readable storage medium | |
CN112000267A (zh) | Information display method, apparatus, device and storage medium | |
WO2023155822A1 (zh) | Session method and apparatus, electronic device and storage medium | |
CN114363686B (zh) | Multimedia content publishing method, apparatus, device and medium | |
US20240103802A1 (en) | Method, apparatus, device and medium for multimedia processing | |
WO2024140270A1 (zh) | Special effect display method and apparatus, electronic device and storage medium | |
WO2024037491A1 (zh) | Media content processing method, apparatus, device and storage medium | |
WO2024037480A1 (zh) | Interaction method and apparatus, electronic device and storage medium | |
WO2023134558A1 (zh) | Interaction method, apparatus, electronic device, storage medium and program product | |
CN115981769A (zh) | Page display method, apparatus, device, computer-readable storage medium and product | |
CN115793936A (zh) | Information processing method, apparatus and electronic device | |
CN110366002B (zh) | Video file synthesis method, system, medium and electronic device | |
KR102687787B1 (ko) | Video file processing method, apparatus, electronic device and computer storage medium | |
EP4354885A1 (en) | Video generation method and apparatus, device, storage medium, and program product | |
WO2022161329A1 (zh) | Work display method and apparatus, electronic device and storage medium | |
WO2023143141A1 (zh) | Video playback setting method and apparatus, electronic device and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21865893 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021865893 Country of ref document: EP Effective date: 20220815 |
|
ENP | Entry into the national phase |
Ref document number: 20227036625 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2022564729 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202327005986 Country of ref document: IN |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112023001285 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112023001285 Country of ref document: BR Kind code of ref document: A2 Effective date: 20230124 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |