CN112261465A - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
CN112261465A
Authority
CN
China
Prior art keywords
target
video
information
virtual
screen
Legal status
Pending
Application number
CN202011158479.3A
Other languages
Chinese (zh)
Inventor
巫鹏
Current Assignee
Nanjing Weiwo Software Technology Co ltd
Original Assignee
Nanjing Weiwo Software Technology Co ltd
Application filed by Nanjing Weiwo Software Technology Co ltd
Priority to CN202011158479.3A
Publication of CN112261465A


Classifications

    • H04N21/4334: Content storage operation; recording operations
    • A63F13/49: Saving the game status; pausing or ending the game
    • A63F13/497: Partially or entirely replaying previous game actions
    • H04N21/2187: Live feed
    • H04N21/2743: Video hosting of uploaded data from client
    • H04N21/44016: Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/472: End-user interface for requesting content, additional data or services; end-user interface for interacting with content
    • H04N21/4781: Supplemental services; games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video processing method and device, and belongs to the field of communications. The method is applied to a first electronic device and comprises the following steps: when a target video recorded by a second electronic device is played, acquiring target information of a first video frame in the target video, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the first video frame; and when the first video frame is displayed, displaying the target object and the target information in an associated manner. The embodiments of the application address the prior-art problem that a large amount of information in a game video is easily hidden.

Description

Video processing method and device
Technical Field
The present application belongs to the field of communications, and in particular, relates to a video processing method and apparatus.
Background
With the rapid development of mobile communication technology, mobile and non-mobile electronic devices have become indispensable tools in many aspects of people's lives. The functions of the various application programs (APPs) on these electronic devices are steadily improving; they no longer serve only for communication, but also provide users with a variety of intelligent services, bringing great convenience to users' work and life.
At present, electronic games account for a large share of applications, for example the Multiplayer Online Battle Arena (MOBA) genre. By installing the client of a game application on an electronic device, the user communicates with the application's server through the client, thereby realizing multiplayer online tactical competition. With the rise of electronic games, watching game videos has become an important part of internet entertainment traffic; for example, a large number of users watch game videos in live or recorded form.
In general, a game video contains a large amount of implicit information, such as skills and virtual characters. Users familiar with the game can easily understand the video content, so the video is convenient for them to watch, but understanding the content of a game video is much harder for relatively unfamiliar users. When producing a game video, it is difficult to accommodate audiences at different levels of familiarity, especially for live game videos.
Therefore, in the prior art, a large amount of information in a game video is easily hidden, making it difficult to meet the viewing needs of audiences with different degrees of familiarity with the game.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video processing method and apparatus that can solve the prior-art problem that a large amount of information in a game video is easily hidden.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video processing method, applied to a first electronic device, the method comprising:
when a target video recorded by a second electronic device is played, acquiring target information of a first video frame in the target video, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the first video frame;
and when the first video frame is displayed, displaying the target object and the target information in an associated manner.
In a second aspect, an embodiment of the present application provides a video processing method, applied to a second electronic device, the method comprising:
in the process of recording a screen picture, acquiring target information in the screen picture, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the screen picture;
and synthesizing the target information and the screen picture into a target video, and sending the target video to a first electronic device.
In a third aspect, an embodiment of the present application further provides a video processing apparatus, applied to a first electronic device, the video processing apparatus comprising:
a playing module, configured to acquire target information of a first video frame in a target video when the target video recorded by a second electronic device is played, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the first video frame;
and a display module, configured to display the target object and the target information in an associated manner when the first video frame is displayed.
In a fourth aspect, an embodiment of the present application further provides a video processing apparatus, applied to a second electronic device, the apparatus comprising:
a recording module, configured to acquire target information in a screen picture in the process of recording the screen picture, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the screen picture;
and a storage module, configured to synthesize the target information and the screen picture into a target video and send the target video to a first electronic device.
In a fifth aspect, an embodiment of the present application further provides an electronic device, which includes a memory, a processor, and a program or instructions stored in the memory and executable on the processor, wherein when the processor executes the program or instructions, the steps of the video processing method described above are implemented.
In a sixth aspect, an embodiment of the present application further provides a readable storage medium, on which a program or instructions are stored, wherein when the program or instructions are executed by a processor, the steps of the video processing method described above are implemented.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the method according to the first aspect or the second aspect.
In the embodiments of the present application, target information in a screen picture is acquired in the process of recording the screen picture; the target information and the screen picture are synthesized into a target video, and the target video is sent to a first electronic device. When the first electronic device plays the target video recorded by the second electronic device, it acquires the target information of a first video frame in the target video and displays the target object and the target information in an associated manner when displaying the first video frame, so that the target information is not hidden. The user can therefore learn the detailed information in the first video frame from the target information, and key information is not missed because of the large amount of information in the first video frame.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic view of a scene of a video processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is a second flowchart of a video processing method according to an embodiment of the present application;
FIG. 4 shows a flow chart of a first example provided by an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a first example provided by an embodiment of the present application;
FIG. 6 shows a flow chart of a second example provided by an embodiment of the present application;
FIG. 7 shows one of the schematic diagrams of a second example provided by an embodiment of the present application;
fig. 8 shows a second schematic diagram of a second example provided by an embodiment of the present application;
fig. 9 shows a first block diagram of a video processing apparatus provided in an embodiment of the present application;
fig. 10 shows a second block diagram of a video processing apparatus provided in an embodiment of the present application;
fig. 11 shows a first block diagram of an electronic device provided by an embodiment of the present application;
fig. 12 shows a second block diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The video processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, the video processing method provided in the embodiments of the present application is applied to a first electronic device and a second electronic device. Optionally, the first electronic device and the second electronic device include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of mobile stations (MS), terminal devices, and the like. In fig. 1, the target video is a game video; the first electronic device serves as the receiving end and the second electronic device as the sending end. The second electronic device records the target video and sends it to the first electronic device (either directly or forwarded through a server), and the first electronic device can play the target video after receiving it.
With reference to fig. 2, the video processing performed by the second electronic device mainly includes the following steps:
step 201, in the process of recording a screen, acquiring target information in the screen; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operation and virtual articles in the screen;
for example, when the second electronic device performs a preset operation or receives a screen recording operation, recording a screen picture of the second electronic device; and if preset operation, such as running of a game APP of the second electronic device, the second electronic device automatically starts screen picture recording.
In the process of recording the screen picture, a target object in the screen picture is acquired, the target object comprising at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the screen picture.
Taking the recording of a game video as an example, a user operation is, for example, a selection operation on an icon in the screen interface, such as the user selecting a virtual article in the game interface. The virtual characters include the virtual characters in the screen interface, such as the virtual character corresponding to the user of the second electronic device and the virtual characters of other users playing in the same game. A virtual operation is an operation that is not performed by manipulating an icon on the screen interface, for example the user releasing a skill through a single key or a key combination on a keyboard. A virtual article is an article in the game, such as a piece of virtual equipment or a virtual prize.
Taking the recording of another video as an example, such as a video played by the user on the second electronic device, a user operation is, for example, a selection operation on an icon in the screen interface, such as the user selecting a certain icon. The virtual characters include the virtual characters in the played video. A virtual operation is an operation that is not performed by manipulating an icon on the screen interface, for example the user activating a function through a single key or a key combination on a keyboard. Virtual articles include, for example, red packets and virtual prizes.
The target information is the parameter information of the target object. For example, when the target object includes a user operation, the target information includes the icon corresponding to the user operation; when the target object includes a virtual character, the target information includes the name, characteristics, skills, and other information of the virtual character; when the target object includes a virtual operation, the target information includes the virtual character sending the virtual operation, the virtual character receiving it, the result of the operation, the operation process, and the like; when the target object includes a virtual article, the target information includes the name, characteristics, and the like of the virtual article.
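The application does not prescribe a concrete data format for the target information. Purely as a minimal sketch, and assuming a simple per-object record (all field names below are illustrative assumptions, not part of the disclosure), the parameter information described above could be modeled as follows:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TargetInfo:
    """Illustrative container for the parameter information of one target object."""
    object_type: str                        # "user_operation" | "virtual_character" | "virtual_operation" | "virtual_article"
    icon_id: Optional[str] = None           # user operation: the operated icon
    name: Optional[str] = None              # virtual character / article: name
    characteristics: List[str] = field(default_factory=list)
    skills: List[str] = field(default_factory=list)
    sender: Optional[str] = None            # virtual operation: sending virtual character
    receiver: Optional[str] = None          # virtual operation: receiving virtual character
    result: Optional[str] = None            # virtual operation: result / process summary
    position: Optional[Tuple[int, int]] = None  # pixel coordinates used for the associated display
```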
Step 202, synthesizing the target information and the screen picture into a target video, and sending the target video to the first electronic device.
After the target information is acquired, the target information and the recorded screen picture are synthesized into a target video, and the target video is sent to the first electronic device.
The second electronic device may record the target video in a live or recorded-broadcast manner. In the live mode, the duration of each recorded target video is short, for example within 5 minutes, and the recorded video is sent to the first electronic device in real time; in the recorded-broadcast mode, the video may be sent to the first electronic device after the second electronic device finishes recording.
Referring to fig. 3, the first electronic device performs the following video processing procedure on the target video:
step 301, when a target video recorded by a second electronic device is played, acquiring target information of a first video frame in the target video; the target information includes parameter information of a target object, and the target object includes at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the first video frame.
The target video is a video of the screen picture recorded by the second electronic device. For example, the second electronic device records its screen picture when a preset operation is performed or a screen recording operation is received; for instance, when a preset operation occurs, such as a game APP running on the second electronic device, the second electronic device automatically starts recording the screen picture.
When the first electronic device plays the target video recorded by the second electronic device, it acquires target information of a first video frame in the target video, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the screen picture.
Taking the target video being a game video as an example, a user operation is, for example, a selection operation on an icon in the screen interface, such as the user selecting a virtual article in the game interface. The virtual characters include the virtual characters in the screen interface, such as the virtual character of the user of the second electronic device and the virtual characters of other users playing the game with them. A virtual operation is an operation that is not performed by manipulating an icon on the screen interface, for example the user releasing a skill through a single key or a key combination on a keyboard. A virtual article is an article in the game, such as a piece of virtual equipment or a virtual prize.
Taking the target video being another video as an example, such as a video the user plays on the second electronic device, a user operation is, for example, a selection operation on an icon in the screen interface, such as the user selecting a certain icon. The virtual characters include the virtual characters in the played video. A virtual operation is an operation that is not performed by manipulating an icon on the screen interface, for example the user activating a function through a single key or a key combination on a keyboard. Virtual articles include, for example, red packets and virtual prizes.
The target information is the parameter information of the target object. For example, when the target object includes a user operation, the target information includes the icon corresponding to the user operation; when the target object includes a virtual character, the target information includes the name, characteristics, skills, and other information of the virtual character; when the target object includes a virtual operation, the target information includes the virtual character sending the virtual operation, the virtual character receiving it, the result of the operation, the operation process, and the like; when the target object includes a virtual article, the target information includes the name, characteristics, and the like of the virtual article.
Step 302, when the first video frame is displayed, the target object and the target information are displayed in an associated manner.
With reference to fig. 1, the second electronic device sends the recorded target video to the first electronic device. The screen interface of the second electronic device corresponds to a first video frame in the target video and contains three icons operated by the user, corresponding to icons A, B, and C; the second electronic device acquires the target information of all three and sends the target video to the first electronic device. When the first electronic device plays the target video, the target object and the target information are displayed in an associated manner, that is, the display interface of the first electronic device shows the target object and the target information at the same time and adds an association mark between them. As shown in fig. 1, in the display interface of the first electronic device, the target information (the display controls corresponding to skill 1, skill 2, and skill 3) is linked to icon A, icon B, and icon C by leader lines, respectively.
In this way, when a user watches a live or recorded game video, or another screen-recorded video, through the first electronic device, the user can understand the video content with the help of the target information. Taking a game video as an example, in a live or recorded game video, for instance when several teams fight a team battle, the commentator very quickly explains target information such as the skills used, positioning, each player's location, health, and whether skills hit. Because the amount of information is so large, a user cannot quickly pick out the detailed content from the picture, and the viewing experience is poor. In this embodiment, the content information in the picture is displayed as target information associated with the target object; as shown in fig. 1, icon A, icon B, and icon C are each displayed in association with their corresponding target information. The target information of the target object is thus displayed without affecting normal viewing, so the user can learn the details of the target video and the viewing experience is improved.
Taking the target video being another video as an example, for instance a screen recording made by the user of the second electronic device: when the target video is watched on the first electronic device, the video content can be understood with the help of the target information. For example, if the target video contains the user's selection of a certain icon in the screen interface, the target information may include the parameter information of that icon, such as its name and function. If the amount of information in the target video is too large, a user cannot quickly pick out the detailed content from the picture, and the viewing experience is poor. In the implementation of the present application, the content information in the picture is displayed as target information associated with the target object, and the target information is displayed without affecting normal viewing, so the user can learn the details of the target video and the viewing experience is improved.
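As an illustration of the associated display only (not a rendering method defined by the application), one way to draw a leader line between a target object and its target information box on a decoded frame is sketched below using Pillow; the coordinates and label text are assumed inputs:

```python
from PIL import Image, ImageDraw

def overlay_target_info(frame: Image.Image, icon_xy: tuple, box_xy: tuple, text: str) -> Image.Image:
    """Draw a leader line from the target object at icon_xy to an information box at box_xy,
    then render the target information text inside the box (cf. the associated display of fig. 1)."""
    draw = ImageDraw.Draw(frame)
    draw.line([icon_xy, box_xy], fill=(255, 255, 0), width=2)                 # leader line
    draw.rectangle([box_xy, (box_xy[0] + 160, box_xy[1] + 28)], outline=(255, 255, 0))
    draw.text((box_xy[0] + 4, box_xy[1] + 6), text, fill=(255, 255, 255))     # e.g. "skill 1"
    return frame
```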
In the embodiments of the present application, target information in a screen picture is acquired in the process of recording the screen picture; the target information and the screen picture are synthesized into a target video, and the target video is sent to a first electronic device. When the first electronic device plays the target video recorded by the second electronic device, it acquires the target information of a first video frame in the target video and displays the target object and the target information in an associated manner when displaying the first video frame, so that the target information is not hidden. The user can therefore learn the detailed information in the first video frame from the target information, and key information is not missed because of the large amount of information in the first video frame. The embodiments of the application thus address the prior-art problem that a large amount of information in a game video is easily hidden.
In an optional embodiment, in a case where the target object includes a user operation in the screen picture, acquiring the target information in the screen picture includes:
when a user operation is received, determining the icon corresponding to the user operation and acquiring the changed content information in the screen picture;
and determining the changed content information and the icon as the target information of the user operation.
Optionally, the video recording APP monitors the user's click operations and judges whether the screen picture changes; if a change in the screen picture is detected, the icon of the clicked area is extracted and the changed content information is recognised.
For example, the changed content information is shown in a pop-up prompt box, and the text of the prompt box can be recognised and copied as the changed content information by Optical Character Recognition (OCR). After a group of icons and the corresponding changed content information have been recognised, the user is prompted to check whether each icon matches its changed content information; if not, the user can correct the changed content information manually.
The changed content information and the icon are determined as the target information of the user operation, and a mapping relationship between the icon and the changed content information is established.
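A minimal sketch of this click-monitoring and mapping step is given below. It assumes Pillow for frame comparison and pytesseract as the OCR backend (the application only mentions OCR, not a specific engine); the region size and naming scheme are illustrative assumptions:

```python
from PIL import Image, ImageChops
import pytesseract  # assumed OCR backend; any OCR engine could be substituted

icon_to_change: dict = {}   # mapping: operated icon -> changed content information

def screen_changed(before: Image.Image, after: Image.Image) -> bool:
    """Return True if the screen picture changed after the click."""
    return ImageChops.difference(before, after).getbbox() is not None

def record_user_operation(before: Image.Image, after: Image.Image,
                          click_xy: tuple, icon_size: int = 48) -> None:
    """On a click, extract the icon of the clicked area and recognise the changed
    content (e.g. the text of a pop-up prompt box), then store both as the
    target information of the user operation."""
    if not screen_changed(before, after):
        return
    x, y = click_xy
    icon = after.crop((x - icon_size // 2, y - icon_size // 2,
                       x + icon_size // 2, y + icon_size // 2))
    changed_text = pytesseract.image_to_string(after)   # a real implementation would OCR only the changed region
    icon_to_change[f"icon_{x}_{y}"] = (icon, changed_text)
```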
In an optional embodiment, acquiring the target information in the screen picture includes:
acquiring an operation log of the user of the second electronic device, wherein the operation log is a record of the user's operations during the video recording period;
and acquiring the parameter information of the target object in the screen picture from the operation log.
Taking the target video being a game video as an example, the operation log is also called a "combat log"; it records, at specific times, the skills and articles used by all players in the game video, the damage they caused or received, and so on.
The virtual characters, the virtual articles used, the skills (virtual operations), and the resulting damage values contained in the operation log are extracted, as well as information such as which "virtual character" a "key" injury came from and whether it came from a "normal attack" or a "skill"; the above information is extracted as the parameter information of the target object.
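The application does not define a log format. Purely as an illustration, and assuming a simple line-oriented combat log of the form "time sender action receiver damage" (an assumption, not part of the disclosure), the extraction could look like this:

```python
import re
from dataclasses import dataclass
from typing import List

@dataclass
class LogEvent:
    time: str        # timestamp in the recording
    sender: str      # virtual character sending the operation
    action: str      # skill or virtual article used
    receiver: str    # virtual character receiving the operation
    damage: int      # damage caused

# Assumed log line, e.g. "00:12:03 PlayerA skill_m PlayerB 230"
LINE = re.compile(r"(?P<time>\S+)\s+(?P<sender>\S+)\s+(?P<action>\S+)\s+(?P<receiver>\S+)\s+(?P<damage>\d+)")

def parse_operation_log(text: str) -> List[LogEvent]:
    """Extract per-event parameter information of target objects from the combat log."""
    events = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            events.append(LogEvent(m["time"], m["sender"], m["action"],
                                   m["receiver"], int(m["damage"])))
    return events
```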
In an optional embodiment, in a case where the target object includes a user operation and/or a virtual article in the screen picture, acquiring the target information in the screen picture includes:
detecting that the target object in the screen picture is in a recovery (cooldown) state, and acquiring identification information of the target object as the target information.
If the operation log cannot be extracted, the target information may be determined from the state of the target object in the screen picture. For example, if the icon corresponding to a user operation, or a virtual article, is in a recovery (cooldown) state and cannot currently be used, the target object has just been used, and its identification information is taken as the target information.
Taking a game video as an example, when a skill (user operation) or a virtual article in the game enters cooldown, this indicates that the user has released or used the corresponding skill or article. The cooldown time of the skill shown in the video is recognised, and if the difference between the cooldown time and the current time is within a preset range, that is, the skill has just been released or the virtual article has just been used, the identification information of the target object is extracted as the target information.
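As a rough sketch of this fallback (the tolerance value and the way the remaining cooldown is obtained, e.g. by OCR of the countdown shown on the icon, are assumptions):

```python
def just_used(remaining_cooldown: float, full_cooldown: float, tolerance: float = 1.0) -> bool:
    """A skill or virtual article whose remaining cooldown is still close to its full
    cooldown has only just been released or used, so its identification information
    can be taken as target information even when no combat log is available."""
    return (full_cooldown - remaining_cooldown) <= tolerance

# usage: if just_used(remaining_cooldown=29.4, full_cooldown=30.0): record the target object
```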
In an alternative embodiment, synthesizing the target information and the screen picture into a target video includes:
embedding the target information into the video frame data of the corresponding screen picture to obtain the target video, i.e. fusing the target information with the video frame data, for example by appending the coordinate information and text in the target information after the data of each video frame;
or
storing the recorded video frame data and the target information in two separate data packages to obtain the target video, i.e. the target information and the corresponding video frame data are stored separately and are loaded and parsed together during playback.
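Both storage modes can be illustrated with a simple sketch; a real player would more likely use a container format's metadata or side-data tracks, so the length-prefixed JSON blob and the file layout below are assumptions for illustration only:

```python
import json
from typing import Dict, List

def embed_per_frame(frames: List[bytes], infos: Dict[int, list]) -> List[bytes]:
    """Mode 1: append the coordinates and text of the target information after each
    frame's data, here as a length-prefixed JSON blob."""
    out = []
    for i, frame in enumerate(frames):
        payload = json.dumps(infos.get(i, []), ensure_ascii=False).encode("utf-8")
        out.append(frame + len(payload).to_bytes(4, "big") + payload)
    return out

def store_separately(frames: List[bytes], infos: Dict[int, list],
                     video_path: str, info_path: str) -> None:
    """Mode 2: keep the video frame data and the target information in two packages;
    the player loads and parses the info file alongside the video."""
    with open(video_path, "wb") as f:
        for frame in frames:
            f.write(frame)
    with open(info_path, "w", encoding="utf-8") as f:
        json.dump(infos, f, ensure_ascii=False)
```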
In an optional embodiment, in a case where the target object includes a virtual operation in the screen picture, the target information includes the sending character information and the receiving character information of the virtual operation.
A virtual operation is, for example, a skill in a game video. The character information records the relevant parameters of the corresponding virtual character, such as the character's name and the skill information associated with the character; the target information describes the sending character information of the skill (i.e., the character information of the character releasing the skill) and the receiving character information (i.e., the character information of the character on which the skill acts).
As a first example, referring to fig. 4 and taking the target video to be a game video, fig. 4 shows an implementation of a recorded-broadcast scenario in an embodiment of the present application, which mainly includes the following steps:
step 401, start screen recording.
The game client and the screen recording APP are opened at the same time, the version of the current game client is identified, the pages in the game client are traversed, and the corresponding description page is triggered whenever an article or a skill appears.
Recording software monitors user operation and acquires icons and icon descriptions.
Step 402, receiving a user operation and acquiring the changed content information in the screen picture.
The recording software monitors the user's click operations and, on each click, checks whether the screen picture changes; if a change in the screen picture is detected, the icon of the clicked area is extracted and the changed content information is recognised. The changed content information and the icon are determined as the target information of the user operation, and a mapping relationship between the icon and the changed content information is established.
Step 403, finishing recording the target video and uploading it to a server.
After all the icons and descriptions have been traversed, the screen recording APP asks the user for final confirmation; once the user confirms that they are correct, the second electronic device uploads the target video to the server.
Step 404, selecting a template during the upload process and completing the upload.
Using the icons and descriptions (target information) matched against the preset template, the skill/article icons within the video are identified.
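As a sketch of how a preset template icon could be located in a frame (the OpenCV-based matching and the score threshold are assumptions; the application does not specify the matching technique):

```python
import cv2
import numpy as np

def find_icon(frame_bgr: np.ndarray, template_bgr: np.ndarray, threshold: float = 0.85):
    """Locate a preset skill/article icon template inside a video frame.
    Returns the top-left corner of the best match if its score exceeds the threshold, else None."""
    scores = cv2.matchTemplate(frame_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc if max_val >= threshold else None
```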
As shown in fig. 5, when the target video is played, the icons of the respective target objects are displayed on the video page, as shown at S1, and a prompt box prompting the user to view more information is displayed in association with them, as shown at S2.
As a second example, referring to fig. 6 and taking the target video to be a game video, fig. 6 shows an implementation of a live broadcast scenario in an embodiment of the present application, which mainly includes the following steps:
step 601, starting screen recording.
The game client and the screen recording APP are opened at the same time, and the version of the current game client is identified.
Step 602, extracting the operation log and determining the target information.
Step 603, if the combat log cannot be extracted, determining target information according to the state of the target object in the screen.
In a case where the battle log (i.e., the operation log) cannot be extracted, cooldown of a skill or article in the game is used instead. For example, when a skill (user operation) or a virtual article in the game enters cooldown, the icon displays a countdown, as shown at S3 in fig. 7, indicating that the user has released or used the corresponding skill or article. The cooldown time of the skill shown in the video is recognised, and if the difference between the cooldown time and the current time is within a preset range, it can be considered that the skill has just been released or the virtual article has just been used, and the identification information of the target object is extracted as the target information.
In this case, as shown in fig. 7, when the target information is displayed, it is presented only on the corresponding virtual character (virtual character A; the target information presentation box is shown at S4), and no associated virtual character is indicated.
Step 604, synthesizing the target information and the screen picture into a target video.
For example, the target information includes the coordinate position of a virtual character in the picture. Because the virtual characters in a game are highly recognisable and players are unlikely to confuse them, the different virtual characters in the current game can be marked in the video; skill releases and kill situations are obtained from the "combat log" or from "skill cooldown recognition", and the coordinates of the players are marked on each video frame.
There are two ways of storing the target information:
The first: the target information is fused with the video. The coordinate information and text in the target information are appended after the data of each video frame in an embedded manner.
The second: the target information is stored separately from the video and is loaded and parsed during playback.
Step 605, adding a highlight replay function.
When the currently recorded video is used for live broadcasting, a skill-release prompt is dynamically shown at video playing ends that support the function, according to the extracted skill or article usage information and the position coordinates of the virtual characters, and a highlight replay function is added.
Optionally, the source and target of the "key injury" extracted in step 602 may be marked with different colours and lines in the display, and auxiliary lines may be added according to the relevance of the "skills" or "articles" used by the game's virtual characters; the user can choose to turn these on and off.
For example, as shown in fig. 8, if "player A released skill m at player B" is recorded in the combat log, a corresponding skill prompt is generated between players A and B on the corresponding picture: the target information prompt box for skill m is shown at S4, and the sending and receiving virtual characters of skill m are indicated by the arrow S5.
In addition, if a large number of skills or articles are used within a short time and a "kill" occurs, that period is regarded as a highlight moment, and its start time and end time are recorded.
The game video or picture provides a highlight replay function; the user can enter it manually, or the replay can start after the game video or picture has played normally. During the highlight replay, the playing speed of the video is reduced to slow motion, and the usage prompt information for skills and articles is supplemented at the same time, so that a viewer can easily understand, in a short time, exactly what happened in a complex game environment.
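A possible way to mark such highlight moments from the parsed log events is sketched below; the event shape, window length, and event-count threshold are assumptions, not values given in the application:

```python
from typing import List, Tuple

def detect_highlights(events: List[Tuple[float, bool]],
                      window: float = 5.0, min_events: int = 4) -> List[Tuple[float, float]]:
    """events: (timestamp_in_seconds, is_kill) tuples sorted by time.
    A burst of at least min_events events within `window` seconds that contains a kill
    is treated as a highlight moment; its start and end times are recorded."""
    highlights = []
    times = [t for t, _ in events]
    i = 0
    while i < len(events):
        j = i
        while j < len(events) and times[j] - times[i] <= window:
            j += 1
        burst = events[i:j]
        if len(burst) >= min_events and any(kill for _, kill in burst):
            highlights.append((times[i], times[j - 1]))   # start and end of the highlight
            i = j
        else:
            i += 1
    return highlights
```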
Step 606, controlling the playing speed during live broadcasting.
In a live broadcast environment, while the viewer is watching a highlight replay they are not watching the currently ongoing picture, so the missed part is cached locally.
After the highlight replay ends, the cached content continues to play, and the playing speed is dynamically adjusted according to the information density in the cached "combat log".
When there is little interaction between the game's virtual characters, the playing speed is increased appropriately so that the current progress catches up with the normal live progress as quickly as possible.
Step 607, completing the video recording.
After the recorded video has been processed, it is stored locally, and the video and the target information are uploaded to a server.
When the first electronic device subsequently plays the target video, the user can choose to load the target information and show the usage of skills and articles and the kill situations.
In addition, the playing speed can be adjusted automatically according to the information density marked in step 606: when fewer players appear in the video, the playing speed of the game is increased, and when more players appear, the normal playing speed is restored; the highlight replay function is also provided.
In the live broadcast service, a service for adding prompt target information is provided. By recording target information for the highlight moments that flash by in the game, the presentation of important in-game information is improved, and viewers can quickly understand the game content. When the amount of information is small, playback can be accelerated and repetitive segments that lack interaction between players can be filtered out, which improves video-watching efficiency.
In the embodiments of the present application, target information in a screen picture is acquired in the process of recording the screen picture; the target information and the screen picture are synthesized into a target video, and the target video is sent to a first electronic device. When the first electronic device plays the target video recorded by the second electronic device, it acquires the target information of a first video frame in the target video and displays the target object and the target information in an associated manner when displaying the first video frame, so that the target information is not hidden. The user can therefore learn the detailed information in the first video frame from the target information, and key information is not missed because of the large amount of information in the first video frame.
Referring to fig. 2, the present application further provides a video processing method applied to a second electronic device. Optionally, the second electronic device includes various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of mobile stations (MS), terminal devices, and the like.
The method comprises the following steps:
Step 201, in the process of recording a screen picture, acquiring target information in the screen picture, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the screen picture.
For example, the second electronic device records its screen picture when a preset operation is performed or a screen recording operation is received; for instance, when a preset operation occurs, such as a game APP running on the second electronic device, the second electronic device automatically starts recording the screen picture.
In the process of recording the screen picture, a target object in the screen picture is acquired, the target object comprising at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the screen picture.
Taking the recording of a game video as an example, a user operation is, for example, a selection operation on an icon in the screen interface, such as the user selecting a virtual article in the game interface. The virtual characters include the virtual characters in the screen interface, such as the virtual character of the user of the second electronic device and the virtual characters of other users playing the game with them. A virtual operation is an operation that is not performed by manipulating an icon on the screen interface, for example the user releasing a skill through a single key or a key combination on a keyboard. A virtual article is an article in the game, such as a piece of virtual equipment or a virtual prize.
Taking the recording of another video as an example, such as a video played by the user on the second electronic device, a user operation is, for example, a selection operation on an icon in the screen interface, such as the user selecting a certain icon. The virtual characters include the virtual characters in the played video. A virtual operation is an operation that is not performed by manipulating an icon on the screen interface, for example the user activating a function through a single key or a key combination on a keyboard. Virtual articles include, for example, red packets and virtual prizes.
The target information is the parameter information of the target object. For example, when the target object includes a user operation, the target information includes the icon corresponding to the user operation; when the target object includes a virtual character, the target information includes the name, characteristics, skills, and other information of the virtual character; when the target object includes a virtual operation, the target information includes the virtual character sending the virtual operation, the virtual character receiving it, the result of the operation, the operation process, and the like; when the target object includes a virtual article, the target information includes the name, characteristics, and the like of the virtual article.
Step 202, synthesizing the target information and the screen picture into a target video, and sending the target video to the first electronic device.
After the target information is acquired, the target information and the recorded screen picture are synthesized into a target video, and the target video is sent to the first electronic device.
The second electronic device may record the target video in a live or recorded-broadcast manner. In the live mode, the duration of each recorded target video is short, for example within 5 minutes, and the recorded video is sent to the first electronic device in real time; in the recorded-broadcast mode, the video may be sent to the first electronic device after the second electronic device finishes recording.
In this way, when a user watches a live or recorded game video, or another screen-recorded video, through the first electronic device, the user can understand the video content with the help of the target information. Taking a game video as an example, in a live or recorded game video, for instance when several teams fight a team battle, the commentator very quickly explains target information such as the skills used, positioning, each player's location, health, and whether skills hit. Because the amount of information is so large, a user cannot quickly pick out the detailed content from the picture, and the viewing experience is poor. In this embodiment, the content information in the picture is displayed as target information associated with the target object; as shown in fig. 1, icon A, icon B, and icon C are each displayed in association with their corresponding target information. The target information of the target object is thus displayed without affecting normal viewing, so the user can learn the details of the target video and the viewing experience is improved.
In an optional embodiment, in a case where the target object includes a user operation in the screen picture, acquiring the target information in the screen picture includes:
when a user operation is received, determining the icon corresponding to the user operation and acquiring the changed content information in the screen picture;
and determining the changed content information and the icon as the target information of the user operation.
In an optional embodiment, acquiring the target information in the screen picture includes:
acquiring an operation log of the user of the second electronic device;
and acquiring the parameter information of the target object in the screen picture from the operation log.
In an optional embodiment, in a case where the target object includes a user operation and/or a virtual article in the screen picture, acquiring the target information in the screen picture includes:
detecting that the target object in the screen picture is in a recovery (cooldown) state, and acquiring identification information of the target object as the target information.
In an alternative embodiment, synthesizing the target information and the screen picture into a target video includes:
embedding the target information into the video frame data of the corresponding screen picture to obtain the target video;
or
storing the recorded video frame data and the target information in two separate data packages to obtain the target video.
In an optional embodiment, in a case where the target object includes a virtual operation in the screen picture, the target information includes the sending character information and the receiving character information of the virtual operation.
In the embodiments of the present application, target information in a screen picture is acquired in the process of recording the screen picture; the target information and the screen picture are synthesized into a target video, and the target video is sent to a first electronic device, so that when the first electronic device plays the target video recorded by the second electronic device and displays the first video frame, it displays the target object and the target information in an associated manner, preventing the target information from being hidden. The user can therefore learn the detailed information in the first video frame from the target information, and key information is not missed because of the large amount of information in the first video frame. The embodiments of the application thus address the prior-art problem that a large amount of information in a game video is easily hidden.
Referring to fig. 3, the present application further provides a video processing method applied to a first electronic device. Optionally, the first electronic device includes various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of mobile stations, terminal devices, and the like.
The method comprises the following steps:
step 301, when a target video recorded by a second electronic device is played, acquiring target information of a first video frame in the target video; the target information includes parameter information of a target object, and the target object includes at least one of a user operation, a virtual character, a virtual operation, and a virtual article in the first video frame.
The target video is a video of the screen picture recorded by the second electronic device. For example, the second electronic device records its screen picture when a preset operation is performed or a screen recording operation is received; for instance, when a preset operation occurs, such as a game APP running on the second electronic device, the second electronic device automatically starts recording the screen picture.
When a target video recorded by second electronic equipment is played by first electronic equipment, acquiring target information of a first video frame in the target video, wherein the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operations and virtual articles in a screen picture;
taking the target video as the game video as an example, the user operation is, for example, a user selection operation on an icon in the screen interface, for example, the user selects a virtual article in the game interface; the virtual characters include virtual characters in the screen interface, such as virtual characters of the user of the second electronic device and virtual characters of other users playing the game simultaneously therewith. Virtual operations are operations that are not performed by manipulating icons on the screen interface, such as user release of skills by a single key or combination of keys in a keyboard. The virtual article is an article such as a virtual device or a virtual prize in the game.
Taking the target video as another video, for example, the user plays the video through the second electronic device, and the user operation is, for example, a user selection operation on an icon in the screen interface, for example, the user selects a certain icon in the screen interface. The virtual roles comprise virtual roles in the played video; virtual operations are operations that are not performed by manipulating icons on the screen interface, for example, a user activating a function by a single key or a combination of keys on a keyboard. Virtual items such as red packs, virtual prizes, and the like.
The target information is parameter information of the target object. For example, when the target object includes a user operation, the target information includes the icon corresponding to the user operation; when the target object comprises a virtual character, the target information comprises the name, characteristics, skills and other information of the virtual character; when the target object comprises a virtual operation, the target information comprises the virtual character that sends the virtual operation, the virtual character that receives it, the result of the virtual operation, the operation process, and the like; when the target object includes a virtual article, the target information includes the name, characteristics and so on of the virtual article. One possible data model for these types is sketched below.
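The following Kotlin sketch shows one possible data model for these target-object types and their parameter information; the class and field names are illustrative assumptions rather than a format defined by this disclosure.

```kotlin
// Illustrative data model (a sketch, not the claimed implementation) for the
// target-object types and their parameter information described above.
sealed class TargetObject {
    // A user operation, e.g. a selection on icon A of the screen interface.
    data class UserOperation(
        val iconId: String,
        val changedContent: String
    ) : TargetObject()

    // A virtual character with its name, characteristics and skills.
    data class VirtualCharacter(
        val name: String,
        val characteristics: List<String>,
        val skills: List<String>
    ) : TargetObject()

    // A virtual operation, e.g. a skill released through a key combination.
    data class VirtualOperation(
        val sendingCharacter: String,
        val receivingCharacter: String,
        val result: String,
        val process: String
    ) : TargetObject()

    // A virtual article, e.g. a piece of virtual equipment or a prize.
    data class VirtualArticle(
        val name: String,
        val characteristics: List<String>
    ) : TargetObject()
}

// Target information of one video frame: the frame index plus the parameter
// information of every target object detected in that frame.
data class FrameTargetInfo(val frameIndex: Int, val objects: List<TargetObject>)
```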
Step 302, when the first video frame is displayed, the target object and the target information are displayed in an associated manner.
With reference to fig. 1, the second electronic device sends the recorded target video to the first electronic device. A screen interface of the second electronic device corresponds to a first video frame in the target video, and that screen interface includes three user-operated icons: icon A, icon B and icon C. The second electronic device acquires the target information of these three icons and sends the target video to the first electronic device. When the first electronic device plays the target video, the target object and the target information are displayed in an associated manner; that is, the target object and the target information are displayed simultaneously in the display interface of the first electronic device, and an association identifier links them. As shown in fig. 1, in the display interface of the first electronic device the target information (display controls corresponding to skill 1, skill 2 and skill 3) is connected to icon A, icon B and icon C by lead lines, respectively.
Therefore, when the user watches a live or recorded game video, or another screen-recorded video, through the first electronic device, the user can understand the video content with the help of the target information. Taking a game video as an example: in a live or recorded game video, when several teams fight a team battle, a commentator may rapidly talk through details such as the skills used, the players' positions, blood volume and skill hit status. Because the amount of information is so large, the user cannot quickly pick out these details from the picture, and the viewing experience is poor. In the present embodiment, the content information in the picture is displayed as target information in association with the target object; as shown in fig. 1, icon A, icon B and icon C are each displayed in association with their corresponding target information. The target information of the target object is thus displayed without affecting normal viewing, the user can learn the details of the target video, and the viewing experience is improved. A sketch of building such an associated overlay is shown below.
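The sketch below assumes the playing side knows the on-screen bounding box of each target object in the first video frame and shows how the target information could be paired with its target object and connected by a lead line without covering the icon itself. The Rect and Overlay types and the label placement are illustrative assumptions.

```kotlin
// Sketch of the associated display step on the playing side: given the
// bounding box of each target object and its target information, build an
// overlay that places the label near the icon and records the lead line.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

data class Overlay(
    val labelArea: Rect,              // where the target-information control is drawn
    val label: String,                // e.g. "skill 1"
    val leadLineFrom: Pair<Int, Int>, // point on the icon
    val leadLineTo: Pair<Int, Int>    // point on the label area
)

fun buildOverlays(iconBounds: Map<String, Rect>, infoByIcon: Map<String, String>): List<Overlay> =
    infoByIcon.mapNotNull { (iconId, label) ->
        val bounds = iconBounds[iconId] ?: return@mapNotNull null
        // Place the label just above the icon so the icon itself stays visible.
        val labelArea = Rect(bounds.left, bounds.top - 60, bounds.right, bounds.top - 20)
        Overlay(
            labelArea = labelArea,
            label = label,
            leadLineFrom = (bounds.left + bounds.right) / 2 to bounds.top,
            leadLineTo = (labelArea.left + labelArea.right) / 2 to labelArea.bottom
        )
    }
```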
Optionally, in this embodiment of the application, in a case where the target object includes a user operation in the first video frame,
the target information includes: the icon corresponding to the user operation and the changed content information generated based on the user operation in the first video frame.
Optionally, in this embodiment of the application, when the target object includes a virtual operation in the first video frame, the target information includes sending role information and receiving role information of the virtual operation.
In the embodiment of the application, when a target video recorded by the second electronic device is played, the first electronic device acquires target information of a first video frame in the target video and, when displaying the first video frame, displays the target object and the target information in an associated manner, so that the target information is not obscured. The user can therefore learn the detailed content of the first video frame from the target information and is less likely to miss key information when the first video frame carries a large amount of information. The embodiment of the application thus addresses the problem in the prior art that much of the information in a game video is easily obscured.
Having described the video processing method provided by the embodiments of the present application, the video processing apparatus provided by the embodiments of the present application will be described below with reference to the accompanying drawings.
It should be noted that, in the video processing method provided in the embodiment of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiments of the present application, the video processing method provided herein is described by taking a video processing apparatus that executes the video processing method as an example.
Referring to fig. 9, an embodiment of the present application further provides a video processing apparatus 900, which is applied to a second electronic device, where the apparatus 900 includes:
a recording module 901, configured to acquire target information in a screen during a process of recording the screen; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operation and virtual articles in the screen;
a storage module 902, configured to synthesize the target information and the screen into a target video, and send the target video to the first electronic device.
Optionally, in this embodiment of the present application, the recording module 901 includes:
the determining submodule is used for determining, in a case where the target object comprises a user operation in the screen picture, the icon corresponding to the user operation and acquiring the changed content information in the screen picture when the user operation is received;
and the first storage submodule is used for determining the changed content information and the icon as the target information of the user operation.
Optionally, in this embodiment of the present application, the recording module 901 includes:
the log obtaining submodule is used for obtaining an operation log of the user of the second electronic device;
and the parameter acquisition submodule is used for acquiring the parameter information of the target object in the screen picture from the operation log; a sketch of this extraction step is shown below.
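As a sketch of this extraction step, the snippet below assumes a simple line-oriented log format of the form "timestamp|objectType|objectId|key=value;key=value"; the disclosure does not fix any log syntax, so this format and the helper names are assumptions for illustration only.

```kotlin
// Sketch of extracting parameter information from an operation log.
// Assumed (not specified by the disclosure) log line format:
//   "timestamp|objectType|objectId|key=value;key=value"
data class LoggedParameters(
    val objectType: String,              // e.g. "virtualOperation"
    val objectId: String,                // e.g. "skill_3"
    val parameters: Map<String, String>  // e.g. sender, receiver, result
)

fun parseOperationLog(logLines: List<String>): List<LoggedParameters> =
    logLines.mapNotNull { line ->
        val fields = line.split('|')
        if (fields.size < 4) return@mapNotNull null   // skip malformed lines
        val parameters = fields[3].split(';')
            .mapNotNull { entry ->
                val kv = entry.split('=', limit = 2)
                if (kv.size == 2) kv[0] to kv[1] else null
            }
            .toMap()
        LoggedParameters(objectType = fields[1], objectId = fields[2], parameters = parameters)
    }
```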
Optionally, in this embodiment of the present application, the recording module 901 includes:
and the identification acquisition submodule is used for, in a case where the target object comprises a user operation and/or a virtual article in the screen picture, detecting that the target object in the screen picture is in a recovery state and acquiring identification information of the target object as the target information; one possible reading of this check is sketched below.
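Reading the recovery state as a cooldown-like state (an assumption, since the disclosure does not define it further), the identification information of objects still recovering could be collected as follows; the ScreenObject type and its fields are hypothetical.

```kotlin
// Sketch of the recovery-state check, treating "recovery state" as a
// cooldown-like state. ScreenObject and its fields are hypothetical.
data class ScreenObject(val id: String, val remainingRecoveryMs: Long)

// Keep only the identification information of objects still recovering,
// to be stored as target information.
fun collectRecoveringObjectIds(objects: List<ScreenObject>): List<String> =
    objects.filter { it.remainingRecoveryMs > 0 }
        .map { it.id }
```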
Optionally, in this embodiment of the present application, the storage module 902 includes:
the second storage submodule is used for embedding the target information into the video frame data of the corresponding screen picture to obtain the target video;
or
the third storage submodule is used for storing the recorded video frame data and the target information data in two separate data packets to obtain the target video; both packaging options are sketched below.
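The two packaging options can be contrasted with the following sketch: either the per-frame target information is embedded alongside the corresponding video frame data, or the recorded frames and the target information are kept in two separate data packets and re-associated by frame index on the playing side. The FrameInfo shape and the raw byte-array frame representation are illustrative assumptions.

```kotlin
// Sketch contrasting the two packaging options for the target video.
data class FrameInfo(val frameIndex: Int, val targetInfoJson: String)

// Option 1: embed the target information with each corresponding video frame.
data class EmbeddedFrame(val frameData: ByteArray, val info: FrameInfo?)

fun embedPerFrame(frames: List<ByteArray>, infoByFrame: Map<Int, FrameInfo>): List<EmbeddedFrame> =
    frames.mapIndexed { index, frame -> EmbeddedFrame(frame, infoByFrame[index]) }

// Option 2: keep the recorded frames and the target information in two
// separate data packets, re-associated by frame index on the playing side.
data class TargetVideoPackets(val videoPacket: List<ByteArray>, val infoPacket: List<FrameInfo>)

fun packSeparately(frames: List<ByteArray>, infoByFrame: Map<Int, FrameInfo>): TargetVideoPackets =
    TargetVideoPackets(
        videoPacket = frames,
        infoPacket = infoByFrame.values.sortedBy { it.frameIndex }
    )
```

Embedding keeps each frame and its information trivially synchronized, while the two-packet form leaves the recorded video data untouched; the disclosure leaves the choice between them open.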
Optionally, in this embodiment of the application, in a case that the target object includes a virtual operation in the screen, the target information includes sending role information and receiving role information of the virtual operation.
In the embodiment of the present application, the recording module 901 obtains target information in a screen during a process of recording the screen; the storage module 902 synthesizes the target information and the screen picture into a target video, and sends the target video to the first electronic device, so that when the first electronic device plays the target video recorded by the second electronic device and displays the first video frame, the target object and the target information are displayed in an associated manner, and the target information is prevented from being hidden; therefore, the user can know the detail information in the first video frame according to the target information, and key information is prevented from being missed due to the large amount of information in the first video frame.
Referring to fig. 10, an embodiment of the present application further provides a video processing apparatus 1000, which is applied to a first electronic device, and the apparatus 1000 includes:
the playing module 1001 is configured to acquire target information of a first video frame in a target video when the target video recorded by a second electronic device is played; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operations and virtual articles in the first video frame;
a display module 1002, configured to display the target object and the target information in an associated manner when the first video frame is displayed.
Optionally, in this embodiment of the application, in a case where the target object includes a user operation in the first video frame,
the target information includes: the icon corresponding to the user operation and the changed content information generated based on the user operation in the first video frame.
Optionally, in this embodiment of the application, when the target object includes a virtual operation in the first video frame, the target information includes sending role information and receiving role information of the virtual operation.
In the embodiment of the application, when a target video recorded by a second electronic device is played, a playing module 1001 acquires target information of a first video frame in the target video, and when the first video frame is displayed, a display module 1002 displays the target object and the target information in an associated manner, so that the target information is prevented from being hidden; therefore, the user can know the detail information in the first video frame according to the target information, and key information is prevented from being missed due to the large amount of information in the first video frame.
The video processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented by the video processing apparatus in the method embodiments of fig. 1 to fig. 8, and for avoiding repetition, details are not repeated here.
Optionally, as shown in fig. 11, an electronic device 1100 is further provided in an embodiment of the present application, and includes a processor 1101, a memory 1102, and a program or an instruction stored in the memory 1102 and executable on the processor 1101, where the program or the instruction is executed by the processor 1101 to implement each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 12 is a schematic hardware configuration diagram of an electronic device 1200 implementing various embodiments of the present application;
the electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensor 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, processor 1210, and power source 1211. Those skilled in the art will appreciate that the electronic device 1200 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1210 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 1210 is configured to, when a target video recorded by a second electronic device is played, obtain target information of a first video frame in the target video; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operations and virtual articles in the first video frame;
a display unit 1206, configured to display the target object and the target information in association when the first video frame is displayed.
Optionally, the target information includes: the icon corresponding to the user operation and the changed content information generated based on the user operation in the first video frame.
Optionally, in a case where the target object includes a virtual operation in the first video frame, the target information includes sending role information and receiving role information of the virtual operation.
Or
A processor 1210, configured to obtain target information in a screen during a process of recording the screen; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operation and virtual articles in the screen;
the memory 1209 is configured to synthesize the target information and the screen into a target video, and send the target video to the first electronic device.
Optionally, the processor 1210 is configured to, when a user operation is received, determine the icon corresponding to the user operation, acquire the changed content information in the screen picture, and determine the changed content information and the icon as the target information of the user operation.
Optionally, the processor 1210 is configured to obtain an operation log of the user of the second electronic device and to acquire the parameter information of the target object in the screen picture from the operation log.
Optionally, the processor 1210 is configured to detect that the target object in the screen is in a recovery state, and acquire identification information of the target object as target information.
Optionally, the processor 1210 is configured to embed the target information into the video frame data of the corresponding screen picture to obtain the target video; or to store the recorded video frame data and the target information data in two separate data packets to obtain the target video.
Optionally, in a case where the target object includes a virtual operation in the screen, the target information includes sending role information and receiving role information of the virtual operation.
In the embodiment of the application, target information in the screen picture is acquired while the screen picture is being recorded; the target information and the screen picture are synthesized into a target video, and the target video is sent to the first electronic device. When the first electronic device plays the target video recorded by the second electronic device, it acquires the target information of the first video frame in the target video and, when displaying the first video frame, displays the target object and the target information in an associated manner, so that the target information is not obscured. The user can therefore learn the detailed content of the first video frame from the target information and is less likely to miss key information when the first video frame carries a large amount of information.
It should be understood that, in the embodiment of the present application, the input Unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042, and the Graphics Processing Unit 12041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1207 includes a touch panel 12071 and other input devices 12072. A touch panel 12071, also referred to as a touch screen. The touch panel 12071 may include two parts of a touch detection device and a touch controller. Other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1209 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. Processor 1210 may integrate an application processor, which handles primarily the operating system, user interface, applications, etc., and a modem processor, which handles primarily wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, virtual article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, virtual article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, virtual article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video processing method applied to a first electronic device is characterized by comprising the following steps:
when a target video recorded by second electronic equipment is played, acquiring target information of a first video frame in the target video; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operations and virtual articles in the first video frame;
and when the first video frame is displayed, the target object and the target information are displayed in an associated manner.
2. The video processing method according to claim 1, wherein, in a case where the target object includes a user operation in the first video frame,
the target information includes: the icon corresponding to the user operation and the changed content information generated based on the user operation in the first video frame.
3. The video processing method according to claim 1,
in a case where the target object includes a virtual operation in the first video frame, the target information includes transmission character information and reception character information of the virtual operation.
4. A video processing method applied to a second electronic device is characterized by comprising the following steps:
in the process of recording a screen picture, acquiring target information in the screen picture; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operation and virtual articles in the screen;
and synthesizing the target information and the screen picture into a target video, and sending the target video to first electronic equipment.
5. The video processing method according to claim 4, wherein in a case where the target object includes a user operation in the screen, the acquiring target information in the screen includes:
when user operation is received, determining an icon corresponding to the user operation, and acquiring the change content information in the screen picture;
and determining the changed content information and the icon as target information of the user operation.
6. The video processing method according to claim 4, wherein in the case where the target object includes a user operation and/or a virtual article in the screen, the acquiring target information in the screen includes:
and detecting that the target object in the screen picture is in a recovery state, and acquiring identification information of the target object as target information.
7. The video processing method according to claim 4, wherein said synthesizing the target information into the target video with the screen includes:
embedding the target information into video frame data of the corresponding screen picture to obtain a target video;
or
And respectively storing the recorded video frame data and the target information data into two data packets to obtain the target video.
8. The video processing method according to claim 4, wherein in a case where the target object includes a virtual operation in the screen, the target information includes sending character information and receiving character information of the virtual operation.
9. A video processing apparatus applied to a first electronic device, the apparatus comprising:
the playing module is used for acquiring target information of a first video frame in a target video when the target video recorded by second electronic equipment is played; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operations and virtual articles in the first video frame;
and the display module is used for displaying the target object and the target information in an associated manner when the first video frame is displayed.
10. A video processing apparatus applied to a second electronic device, the apparatus comprising:
the recording module is used for acquiring target information in a screen picture in the process of recording the screen picture; the target information comprises parameter information of a target object, and the target object comprises at least one of user operation, virtual roles, virtual operation and virtual articles in the screen;
and the storage module is used for synthesizing the target information and the screen picture into a target video and sending the target video to the first electronic equipment.