CN113473215B - Screen recording method, device, terminal and storage medium - Google Patents

Screen recording method, device, terminal and storage medium

Info

Publication number
CN113473215B
CN113473215B (application CN202110830451.8A)
Authority
CN
China
Prior art keywords
screen recording
audio data
data
video
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110830451.8A
Other languages
Chinese (zh)
Other versions
CN113473215A (en)
Inventor
胡文昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110830451.8A priority Critical patent/CN113473215B/en
Publication of CN113473215A publication Critical patent/CN113473215A/en
Application granted granted Critical
Publication of CN113473215B publication Critical patent/CN113473215B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334: Recording operations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307: Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43072: Synchronising the rendering of multiple content streams on the same device
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85: Assembly of content; Generation of multimedia applications
    • H04N21/854: Content authoring
    • H04N21/8547: Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

The application relates to a screen recording method, device, terminal and storage medium, and belongs to the technical field of terminals. The method comprises the following steps: in the screen recording process, in response to a pause screen recording operation, recording a first timestamp and keeping the screen recording running, wherein the first timestamp is the trigger time of the pause screen recording operation; in response to a continue screen recording operation, recording a second timestamp, wherein the second timestamp is the trigger time of the continue screen recording operation; based on the first timestamp and the second timestamp, intercepting the first audio data and first video data obtained by the screen recording to obtain second audio data and second video data; and correcting the timestamps of the second video data and the second audio data respectively, and synthesizing multimedia data based on the second audio data and second video data with corrected timestamps. Through this scheme, pausing is realized during screen recording, which enriches the screen recording modes and allows screen recording to satisfy richer usage scenarios.

Description

Screen recording method, device, terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of terminals, in particular to a screen recording method, a screen recording device, a terminal and a storage medium.
Background
With the development of terminal technology, most terminals can provide a screen recording function. After the screen recording function is started, the terminal can collect audio data and record the pictures displayed on the screen to obtain video data, and then synthesize multimedia data based on the collected audio data and the recorded video data, so that the user can review the recorded multimedia data later.
Disclosure of Invention
The embodiment of the application provides a screen recording method, a device, a terminal and a storage medium, which can enrich the usage scenarios of screen recording. The technical scheme is as follows:
in one aspect, a screen recording method is provided, and the method includes:
in the screen recording process, responding to the suspension of the screen recording operation, recording a first time stamp and keeping the screen recording, wherein the first time stamp is the triggering time of the suspension of the screen recording operation;
responding to the continuous screen recording operation, and recording a second time stamp, wherein the second time stamp is the triggering time of the continuous screen recording operation;
based on the first timestamp and the second timestamp, intercepting the first audio data and the first video data obtained by screen recording to obtain second audio data and second video data;
and correcting the timestamps of the second video data and the second audio data respectively, and synthesizing multimedia data based on the second audio data and the second video data after the timestamps are corrected.
In another aspect, a screen recording device is provided, the device including:
the first screen recording module is used for responding to the suspension of the screen recording operation in the screen recording process, recording a first time stamp and keeping the screen recording, wherein the first time stamp is the triggering time of the suspension of the screen recording operation;
the second screen recording module is used for responding to the continuous screen recording operation and recording a second time stamp, wherein the second time stamp is the triggering time of the continuous screen recording operation;
the intercepting module is used for intercepting the first audio data and the first video data obtained by screen recording based on the first timestamp and the second timestamp to obtain second audio data and second video data;
and the synthesizing module is used for respectively correcting the time stamps of the second video data and the second audio data and synthesizing the multimedia data based on the second audio data and the second video data after the time stamps are corrected.
In another aspect, a terminal is provided that includes a processor and a memory; the memory stores at least one computer program for execution by the processor to implement the screen recording method of the above aspect.
In another aspect, a computer readable storage medium is provided, where at least one computer program is stored, where the at least one computer program is configured to be executed by a processor to implement the screen recording method according to the above aspect.
In another aspect, a computer program product is provided, which stores at least one computer program that is loaded and executed by a processor to implement the screen recording method of the above aspect.
In the embodiment of the application, the first timestamp at which the screen recording is paused and the second timestamp at which the screen recording is continued are recorded during the screen recording process. When the multimedia file obtained by screen recording is synthesized, the data between the first timestamp and the second timestamp can be cut out, the timestamps of the intercepted video data and audio data can be corrected, and the corrected audio data and video data can be combined into the multimedia data of the screen recording. In this way, pausing and continuing are realized during screen recording, which enriches the screen recording modes and allows screen recording to satisfy richer usage scenarios.
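The pause-and-cut scheme summarized above can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (not the patent's implementation): frames recorded at or after the resume point are shifted earlier by the pause duration, so that playback is continuous after the paused span has been removed.

```python
def correct_timestamps(pts_list, first_ts, second_ts):
    """Shift every presentation timestamp at or after the resume point
    (second_ts) earlier by the pause duration (second_ts - first_ts),
    so the stream plays back without a gap. Timestamps are in arbitrary
    time units; all parameter names are illustrative."""
    gap = second_ts - first_ts
    return [p - gap if p >= second_ts else p for p in pts_list]

# Frames at 0 and 50 precede the pause; frames at 300 and 350 follow the
# resume. With a pause from 100 to 300, the later frames move up by 200.
corrected = correct_timestamps([0, 50, 300, 350], first_ts=100, second_ts=300)
```

After correction the timeline is contiguous: the frame that was stamped 300 now follows the pre-pause frames directly.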
Drawings
FIG. 1 is a schematic diagram of a screen recording module according to an exemplary embodiment of the present application;
FIG. 2 illustrates a flowchart of a screen recording method shown in an exemplary embodiment of the present application;
FIG. 3 illustrates a schematic diagram of a screen recording interface shown in an exemplary embodiment of the present application;
FIG. 4 illustrates a schematic diagram of a screen recording interface shown in an exemplary embodiment of the present application;
FIG. 5 illustrates a schematic diagram of a screen recording interface shown in an exemplary embodiment of the present application;
FIG. 6 illustrates a schematic diagram of a screen recording interface shown in an exemplary embodiment of the present application;
FIG. 7 illustrates a schematic diagram of a screen recording interface shown in an exemplary embodiment of the present application;
FIG. 8 illustrates a flowchart of a screen recording method shown in an exemplary embodiment of the present application;
FIG. 9 illustrates a flowchart of a screen recording method shown in an exemplary embodiment of the present application;
FIG. 10 illustrates a flowchart of a screen recording method shown in an exemplary embodiment of the present application;
FIG. 11 shows a block diagram of a screen recording apparatus according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. In addition, the video data and the audio data referred to in the present application are data authorized by the user or fully authorized by each party.
The screen recording method is realized by a terminal with a screen recording function. The screen recording function is a system function of the terminal, or the screen recording function is a function of an application program installed in the terminal. In the embodiment of the present application, this is not particularly limited.
The terminal realizes the screen recording function through the screen recording module. Referring to fig. 1, a schematic structural diagram of a screen recording module according to an exemplary embodiment of the present application is shown. Referring to fig. 1, the recording module includes an Audio recorder (Audio Record), an Audio encoder (Audio Media Codec), a Renderer (Renderer), a Virtual Display (Virtual Display), a video encoder (Video Media Codec), and a multimedia synthesizer (Media Muxer).
The audio recorder is used for collecting audio data based on a preset audio sampling frequency in the screen recording process, and inputting the collected audio data to the audio encoder. The audio encoder is used for receiving the audio data input by the audio recorder, encoding the received audio data and marking a time stamp, and inputting the encoded audio data into the multimedia synthesizer. The renderer is used for rendering the page data in the display interface of the terminal and inputting the rendered page data into the virtual display. The virtual display is used for recording page data rendered by the renderer in the screen recording process, marking a time stamp for a video frame corresponding to the page data to obtain video data, and inputting the video data into the video encoder. The video encoder is used for encoding the video data and transmitting the encoded video data to the multimedia synthesizer. The multimedia synthesizer is used for synthesizing multimedia data based on the encoded video data and audio data.
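The data flow described above (recorder to encoder to multimedia synthesizer) can be sketched schematically. The following is a hypothetical Python simulation for illustration only; the actual components named in the text are Android framework classes (Audio Record, Media Codec, Virtual Display, Media Muxer), and all function and field names here are stand-ins.

```python
def record_audio(samples, start_us, sample_interval_us):
    """Audio recorder stand-in: attach a capture timestamp (microseconds)
    to each raw audio frame at the preset sampling interval."""
    return [{"pts": start_us + i * sample_interval_us, "pcm": s}
            for i, s in enumerate(samples)]

def encode(frames, codec_name):
    """Encoder stand-in: wrap each timestamped frame as an encoded packet."""
    return [{"pts": f["pts"], "codec": codec_name} for f in frames]

def mux(audio_packets, video_packets):
    """Multimedia synthesizer stand-in: interleave encoded audio and video
    packets by presentation timestamp."""
    return sorted(audio_packets + video_packets, key=lambda p: p["pts"])

# Two audio frames ~23.2 ms apart and two video frames ~33.3 ms apart.
audio = encode(record_audio([b"a0", b"a1"], start_us=0,
                            sample_interval_us=23_220), "aac")
video = encode([{"pts": 0}, {"pts": 33_333}], "h264")
container = mux(audio, video)
```

The point of the sketch is the contract between stages: every packet carries a timestamp from capture onward, and the synthesizer orders the merged streams by those timestamps.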
During screen recording, situations often arise in which the recording needs to be paused. For example, when a video conference is held through the terminal, the conference content can be captured by recording the terminal screen. When the conference runs long, it is usually interrupted so that the participants can rest; the screen recording can be paused during the break and continued after the break ends, so that the complete conference content is finally obtained. For another example, when a user records the screen through a mobile phone, the user may not want certain private information to appear in the recording; the user can pause the screen recording, fill in the private information, switch to an interface that does not display it, and then continue recording. Similarly, when a message is received during screen recording, the recording can be paused first and, after the user replies to the message, resumed upon returning to the recorded interface. Or, when a password needs to be entered during screen recording, the recording can be paused first and continued after the password is input. Therefore, the embodiment of the application provides a screen recording method that realizes pausing and resuming during screen recording.
Referring to fig. 2, a flowchart of a screen recording method according to an exemplary embodiment of the present application is shown. The execution body in the embodiment of the present application may be a terminal, or may be a processor in the terminal or an operating system in the terminal, and this embodiment is described taking the execution body as an example of the terminal. The method comprises the following steps:
step S21: in the screen recording process, responding to the suspension of the screen recording operation, the terminal records a first time stamp and keeps the screen recording, wherein the first time stamp is the triggering time of the suspension of the screen recording operation.
During the terminal screen recording process, the terminal starts listening for the pause screen recording operation; after receiving the pause screen recording operation, the terminal keeps the screen recording running and records the first timestamp at which the pause screen recording operation was triggered.
In some embodiments, the pause screen recording operation is an operation triggered by a voice command. Correspondingly, during the screen recording process, the terminal starts listening for voice instructions, receives a first voice signal input by the user, and performs semantic analysis on the received first voice signal to obtain a semantic analysis result; if the semantic analysis result indicates pausing the screen recording, the terminal determines that the pause screen recording operation has been received.
In some embodiments, the pause screen recording operation is an operation triggered by a pause screen recording control displayed in the terminal. Correspondingly, in the screen recording process, the pause screen recording control is displayed. The pause screen recording control is used for pausing the screen recording.
And responding to the screen recording operation, and starting the screen recording and displaying the pause screen recording control by the terminal. The pause screen recording control is displayed on any display interface of the terminal. For example, the pause screen recording control is presented at a setup interface of the terminal. The setting interface is an interface for setting the recording screen. Or the pause screen recording control is displayed on a screen recording interface of the terminal. The screen recording interface is any interface of the terminal, for example, the screen recording interface is a main interface, a screen locking interface, other display interfaces or display interfaces of application programs of the terminal, and the like. In the embodiment of the present application, the interface where the pause screen recording control is located is not specifically limited.
The pause screen recording control is displayed at any position of a display interface of the terminal. For example, the pause screen recording control is presented in the middle or left side of the display interface, etc. In addition, the position of the pause screen recording control can be set according to the requirement. And, the position of the pause screen recording control is fixed or changeable. Under the condition that the position of the pause screen recording control can be changed, the terminal changes the position of the pause screen recording control through the dragging operation aiming at the pause screen recording control. For example, in response to a drag operation on the pause screen recording control, the pause screen recording control moves in the display interface following the drag operation until the drag operation ends, and the end point of the drag operation is determined as the final position of the pause screen recording control which is moved this time.
It should be noted that, the terminal collects audio data and video data during the recording process. The terminal collects audio data through the audio recorder, marks a time stamp for the collected audio data, and obtains first audio data.
The audio data are audio data played by the terminal, or the audio data are audio data collected by the terminal through a microphone. In the embodiment of the present application, the type of audio data is not particularly limited.
Accordingly, in some embodiments, the audio data is audio data played in a terminal. In this step, audio data played by the terminal is collected from an audio data interface of the terminal through an audio recorder based on a preset audio sampling frequency. In some embodiments, the audio data is audio data collected by a microphone of the terminal. In this step, audio data collected by a microphone of the terminal is acquired through an audio recorder based on a preset audio sampling frequency.
After the terminal collects the audio data, the time stamp of each frame of audio data is marked according to the time of the terminal when the audio data are collected, and the first audio data are obtained. In some embodiments, the terminal uses the time of the terminal when the audio data is collected as a time stamp of the audio data, and the audio data marked with the time stamp is composed into first audio data. In some embodiments, the terminal uses the time of the terminal when the first frame of audio data is acquired as the time stamp of the first frame of audio data, and the time stamp of other audio data after the first frame of audio data is determined according to the audio sampling frequency. The terminal determines a sampling interval for collecting audio data based on the audio sampling frequency, and marks the time stamp of the audio data after the audio data of the first frame as the sum of the time stamp of the audio data of the previous frame and the sampling interval.
In this implementation, the timestamp of each frame of audio data is determined based on the terminal time at which the audio data is collected while the terminal continuously collects audio data, which ensures the accuracy of the timestamps of the first audio data.
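The timestamp rule described above can be stated concretely: the first frame carries the terminal time at capture, and each later frame is the previous timestamp plus a sampling interval derived from the audio sampling frequency. A minimal sketch, assuming hypothetical parameter values (44.1 kHz sampling, 1024 samples per frame, as is common for AAC; the patent does not specify these numbers):

```python
def audio_timestamps(first_frame_time_us, num_frames, sample_rate_hz,
                     samples_per_frame):
    """First frame: terminal time at capture. Every subsequent frame:
    previous timestamp plus the sampling interval, where the interval is
    the frame duration implied by the sampling frequency."""
    interval_us = samples_per_frame * 1_000_000 // sample_rate_hz
    return [first_frame_time_us + i * interval_us for i in range(num_frames)]

# Hypothetical example: recording starts at terminal time 1 s (in microseconds)
ts = audio_timestamps(first_frame_time_us=1_000_000, num_frames=3,
                      sample_rate_hz=44_100, samples_per_frame=1024)
```

With these assumed parameters each audio frame spans roughly 23.2 ms, so consecutive timestamps differ by that interval.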
The video data of the screen recording is captured by the created virtual display and video encoder. The terminal marks a timestamp on each acquired video frame through the virtual display, inputs the timestamped video frames into the video encoder, and encodes them through the video encoder to obtain the first video data.
The terminal acquires page data rendered by the renderer through the virtual display, displays the page data in the virtual display, records a display picture displayed by the virtual display, and obtains a recorded video frame. And the terminal marks the time stamp of each video frame according to the time of the terminal when the virtual display acquires the display picture, and first video data are obtained.
In this implementation, video frames are obtained through the virtual display and the renderer and marked with timestamps to obtain the first video data, which prevents video frames from being lost during screen recording and ensures the integrity of the first video data obtained by the screen recording.
It should be noted that, in response to the screen recording operation, the terminal records the audio data and the video data simultaneously, so as to obtain the first audio data and the first video data.
Step S22: and responding to the continuous screen recording operation, and recording a second time stamp by the terminal, wherein the second time stamp is the trigger time of the continuous screen recording operation.
During screen recording, in response to the pause screen recording operation, the terminal starts listening for the continue screen recording operation; after receiving the continue screen recording operation, the terminal records the second timestamp at which the continue screen recording operation was triggered.
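Steps S21 and S22 together amount to simple bookkeeping: capture keeps running, and the terminal only records the pause trigger time (first timestamp) and the resume trigger time (second timestamp) for later interception. A minimal sketch of that state machine (hypothetical class and method names, not from the patent):

```python
class PauseTracker:
    """Records (first timestamp, second timestamp) pairs while recording
    continues uninterrupted. One pair per completed pause/resume cycle."""

    def __init__(self):
        self.gaps = []        # completed (first_ts, second_ts) pairs
        self._pending = None  # first timestamp of a pause not yet resumed

    def pause(self, now_us):
        if self._pending is None:
            self._pending = now_us  # first timestamp: pause trigger time

    def resume(self, now_us):
        if self._pending is not None:
            self.gaps.append((self._pending, now_us))  # second timestamp
            self._pending = None

tracker = PauseTracker()
tracker.pause(5_000_000)   # user pauses at terminal time 5 s
tracker.resume(8_000_000)  # user resumes at terminal time 8 s
```

Each recorded pair later tells the interception step exactly which span of audio and video data to discard.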
In some embodiments, the continue screen recording operation is an operation triggered by a voice command. Correspondingly, the terminal keeps listening for voice instructions while the screen recording is paused, receives a second voice signal input by the user, and performs semantic analysis on the received second voice signal to obtain a semantic analysis result; if the semantic analysis result indicates continuing the screen recording, the terminal determines that the continue screen recording operation has been received.
In some embodiments, the continuing screen recording operation is an operation triggered by a continuing screen recording control presented in the terminal. Correspondingly, in the process of suspending the screen recording, the continuous screen recording control is displayed. The continuous screen recording control is used for continuous screen recording.
In some embodiments, in response to suspending the screen recording operation, the terminal suspends the screen recording and exposes a continue screen recording control. The display mode of the continuous screen recording control is similar to the display mode of the pause screen recording control, and will not be described again.
It should be noted that the pause screen recording control and the continue screen recording control can be displayed through a suspension control. The display mode of the suspension control is similar to that of the pause screen recording control, and will not be described herein. In some embodiments, the pause screen recording control and the continue screen recording control are displayed in the suspension control simultaneously. Referring to fig. 3, the pause screen recording control and the continue screen recording control are displayed in the same suspension control. Alternatively, referring to fig. 4, the pause screen recording control and the continue screen recording control are displayed in different suspension controls. In that case, the positions of the suspension control where the pause screen recording control is located and the suspension control where the continue screen recording control is located are not specifically limited. In some embodiments, referring to fig. 5, the terminal switches between the pause screen recording control and the continue screen recording control in the suspension control according to the current screen recording state. Correspondingly, in response to the screen recording operation, the terminal displays the suspension control in the display interface and displays the pause screen recording control in the suspension control; in response to the pause screen recording control being triggered, the terminal switches the suspension control to display the continue screen recording control instead of the pause screen recording control; and in response to the continue screen recording control being triggered, the terminal switches the suspension control to display the pause screen recording control instead of the continue screen recording control.
In the implementation mode, the terminal switches and displays the pause screen recording control and the continuous screen recording control in the suspension control according to the current screen recording state, so that the content displayed in the suspension control is reduced, the area of a display area occupied by the suspension control is reduced, the suspension control is prevented from occupying a larger display area, and the user experience is optimized.
In some embodiments, referring to fig. 6, the suspension control further includes a stop screen recording control, where the stop screen recording control is used to end the screen recording. In some embodiments, referring to fig. 7, the suspension control further includes a screen recording setting control, where the screen recording setting control is used to set screen recording parameters; for example, in response to the screen recording setting control being triggered, the terminal displays a screen recording setting window, and screen recording parameter setting buttons are displayed in the screen recording setting window. The screen recording parameters include: picture quality parameters, sound recording options, and the like. For example, referring to fig. 7, the current picture quality is 1080×720, and the sound recording option is in an on state.
The screen recording operation is an operation triggered based on the screen recording control. The screen recording control corresponds to an icon displayed in the main interface of the terminal. Or, the screen recording control is displayed in a pull-down interface of the terminal. Or, the screen recording control is a hidden control; correspondingly, the terminal receives a target gesture operation and, based on the gesture operation, displays a floating window that contains the hidden screen recording control.
Step S23: and the terminal intercepts the first audio data and the first video data obtained by screen recording based on the first timestamp and the second timestamp to obtain second audio data and second video data.
In this step, the terminal intercepts audio data and video data between a first timestamp and a second timestamp in the recorded first audio data and first video data, and obtains second audio data and second video data.
Based on the audio data collection principle in step S21, the timestamp of the first audio data obtained by recording the screen corresponds to the time of the terminal. Therefore, in this step, the terminal intercepts the first audio data, and the process of obtaining the second audio data is as follows: the terminal deletes first target audio data in the first audio data to obtain the second audio data, wherein the first target audio data is audio data between the first time stamp and the second time stamp.
Wherein the timestamp of any frame of the first target audio data is between the first timestamp and the second timestamp. The process of determining the first target audio data by the terminal is as follows: the terminal determines, from the first audio data, two frames of audio data whose timestamps match the first timestamp and the second timestamp respectively, and determines the audio data between these two frames as the first target audio data. A timestamp matching the first timestamp or the second timestamp means that the timestamp is the same as the first timestamp or the second timestamp, or that the difference between the timestamp and the first timestamp or the second timestamp is less than a preset threshold.
In this implementation, the terminal deletes the first target audio data from the first audio data based on the timestamps of the first audio data, the first timestamp, and the second timestamp to obtain the second audio data, so that the audio data corresponding to the actual recording periods is determined through the pause screen recording control and the continue screen recording control, which enriches the application scenarios of the screen recording function.
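The audio interception described above reduces to a filter over frame timestamps. A minimal sketch, assuming hypothetical frame and parameter names; the tolerance parameter mirrors the text's preset threshold for approximate timestamp matches:

```python
def intercept_audio(frames, first_ts, second_ts, threshold_us=0):
    """Drop the first target audio data: every frame whose timestamp falls
    between the pause timestamp (first_ts) and the resume timestamp
    (second_ts), widened by an optional matching tolerance. Returns the
    second audio data."""
    lo = first_ts - threshold_us
    hi = second_ts + threshold_us
    return [f for f in frames if not (lo <= f["pts"] <= hi)]

# Frames every 100 us; the pause spans timestamps 100 through 300.
frames = [{"pts": t} for t in (0, 100, 200, 300, 400)]
kept = intercept_audio(frames, first_ts=100, second_ts=300)
```

Because audio timestamps track the terminal clock directly (per step S21), a plain range test is sufficient here; the video side needs the similarity comparison described next.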
Based on the video data collection principle in step S21, the video data is obtained by rendering the page data produced by the renderer into the virtual display, collecting the display picture rendered in the virtual display to obtain a video frame, and marking the collected video frame with the terminal time at collection as its timestamp. Therefore, there is a time difference between the timestamp of a collected video frame and the time at which the corresponding display picture was actually displayed by the terminal during screen recording. In order to ensure the accuracy of the intercepted video data, in the embodiment of the present application, the first target video data is determined from the first video data through similarity comparison.
Referring to fig. 8, the process is implemented by the following steps S23-1 to S23-4, including:
step S23-1: the terminal determines a first time difference between the first time stamp and the second time stamp.
In this step, the terminal determines the difference between the first timestamp and the second timestamp as the time difference between the pause screen recording instruction and the continue screen recording instruction.
Step S23-2: the terminal determines the time difference between the time stamps of any two video frames from the first video data to obtain a plurality of second time differences.
In this step, the terminal determines a plurality of groups of video frames from the plurality of video frames of the first video data, each group of video frames including two video frames. The terminal groups the video frames in the first video data pairwise and determines the time difference between the timestamps of each group of video frames.
Or the terminal determines, according to the first timestamp, a video frame corresponding to the first timestamp from the first video data, takes that video frame as the starting point, groups it with each of the video frames after it, and determines the time difference between the timestamps of each group of video frames.
Or the terminal determines a first video frame with a timestamp matched with the first timestamp from the first video data; determining a plurality of second video frames having time stamps after the first video frame, respectively; a second time difference between each second video frame and the first video frame is determined.
According to the video data collection principle in step S21, there is a delay between the timestamp of the video data collected by the terminal and the actual time at which the corresponding display picture is shown on the terminal. Determining the second time differences starting from the video frame corresponding to the first timestamp therefore ensures that the valid second time differences fall within the relevant range of video frames.
In this implementation manner, determining the second time differences starting from the video frame corresponding to the first timestamp reduces the amount of data to be processed and improves the operation speed.
The video frame corresponding to the first timestamp is the display picture whose timestamp matches the first timestamp. A timestamp matching the first timestamp means that the timestamp is the same as the first timestamp, or that, in the first video data, the timestamp is adjacent to the first timestamp; this is not specifically limited in the embodiments of the present application.
Step S23-3: for any second time difference, in response to the second time difference matching the first time difference, the terminal determines video data between two video frames corresponding to the second time difference as first target video data.
In this step, the terminal compares the similarities between the plurality of second time differences and the first time difference. The terminal determines the similarity between the first time difference and the second time difference based on the ratio or the difference between the first time difference and the second time difference.
The second time difference is matched with the first time difference, which means that the difference between the second time difference and the first time difference is smaller than a preset difference. Or, the ratio of the second time difference to the first time difference is not smaller than a preset ratio.
For example, the terminal determines that the second time difference matches the first time difference by a ratio between the second time difference and the first time difference. Correspondingly, if the second time difference and the first time difference meet the condition shown in the formula I, the second time difference is determined to be matched with the first time difference.
Equation one:

Δc / Δa ≥ s

wherein Δc is the second time difference, Δa is the first time difference, and s is the preset ratio (similarity). Wherein s takes any value from 0.74 to 1.
It should be noted that, when a plurality of second time differences match the first time difference, the second time difference with the highest similarity to the first time difference is selected from them. Namely, the terminal determines the second time difference with the smallest difference from the first time difference; alternatively, the second time difference whose ratio to the first time difference is largest. Since the video frames in the first video data are obtained according to the refreshing of the screen, and the refresh rate of the terminal is generally high, the time interval between adjacent video frames is small, so the error between the first target video data determined through similarity and the video data that actually needs to be deleted is within a reasonable range.
In this implementation manner, the first target video data in the first video data is determined through similarity comparison, which prevents the situation in which, after the screen recording is paused, the intercepted video data does not match the actual pause operation because the timestamp of the video data lags the timestamp of the actually displayed picture, thereby improving the accuracy of the screen recording.
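The similarity comparison of steps S23-1 to S23-3 can be sketched as follows. This is a hypothetical illustration, not the patented implementation: it assumes the min/max ratio as the similarity measure and s = 0.9 as the preset ratio, and all names are illustrative:

```python
def find_target_segment(video_ts, t_pause, t_resume, s=0.9):
    """Find the pair of video frames whose timestamp gap best matches the
    pause duration (first time difference), per equation one.

    video_ts: sorted list of video-frame timestamps.
    Returns (i, j), the indices delimiting the first target video data,
    or None if no gap reaches the preset similarity s.
    """
    delta_a = t_resume - t_pause                     # first time difference
    # start from the frame nearest the first timestamp to cut the search down
    start = min(range(len(video_ts)), key=lambda i: abs(video_ts[i] - t_pause))
    best, best_sim = None, s
    for i in range(start, len(video_ts) - 1):
        for j in range(i + 1, len(video_ts)):
            delta_c = video_ts[j] - video_ts[i]      # second time difference
            sim = min(delta_c, delta_a) / max(delta_c, delta_a)
            if sim >= best_sim:                      # keep the closest match
                best, best_sim = (i, j), sim
    return best
```

Starting the search at the frame nearest the first timestamp mirrors the reduced-processing variant described above.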
Step S23-4: the terminal deletes the first target video data to obtain the second video data; or, modifying the data content of the first target video data to obtain the second video data.
Wherein the first target video data is video data having a timestamp between the first timestamp and the second timestamp. In the implementation manner, the terminal deletes the first target video data to intercept the first video data, so that the video data reserved in the intercepted second video data can be coherent, and the problem of blocking during subsequent playing of the video data is prevented.
Alternatively, the terminal modifies the data content of the first target video data in the first video data to obtain the second video data. The data content includes the content of the video frames, the tags of the video data, and the like. Correspondingly, the terminal adds a blank frame tag to the first target video data to obtain the second video data, wherein the blank frame tag is used for indicating that the corresponding video data is to be discarded when the multimedia data is synthesized.
When the terminal synthesizes the audio data and the video data, the data content of each video frame in the video data is read, and if the data content comprises a blank frame label, the terminal discards the video data in the video frame.
Or the terminal sets the video frame of the first target video data as a target video frame to obtain the second video data, wherein the target video frame is discarded when the multimedia data is synthesized.
Wherein the target video frame is a video frame of a specified form, and the video frame of the specified form is discarded when synthesizing the multimedia data. Correspondingly, when the terminal reads the video frame, determining the form of the video frame, if the video frame is the video frame in the appointed form, determining the video frame as a target video frame, and discarding the target video frame.
In the implementation manner, by changing the data content of the video data, when the multimedia data is synthesized, the corresponding video frames can be discarded according to the data content of the video data, so that the first video data can be intercepted, the second video data can be obtained, and the accuracy of intercepting the first video data is improved.
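A minimal sketch of the blank-frame-tag approach, with a sentinel object standing in for the tag and a toy muxer that skips tagged frames (all names are illustrative assumptions, not the patented implementation):

```python
BLANK_FRAME = object()  # sentinel standing in for the blank frame tag

def tag_target_frames(frames, i, j):
    """Instead of deleting them, tag frames i..j (the first target video
    data) so the synthesizer can discard them later."""
    return [(ts, BLANK_FRAME) if i <= k <= j else (ts, data)
            for k, (ts, data) in enumerate(frames)]

def mux(frames):
    """Toy multimedia synthesizer: drops frames carrying the blank tag."""
    return [f for f in frames if f[1] is not BLANK_FRAME]
```

Tagging rather than deleting leaves the frame list intact until synthesis time, when the synthesizer reads each frame's data content and discards tagged frames, as described above.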
Step S24: the terminal corrects the time stamp of the second video data and the time stamp of the second audio data respectively, and synthesizes the multimedia data based on the corrected time stamp of the second audio data and the second video data.
In this step, the terminal inputs the second video data and the second audio data into a second multimedia synthesizer, and synthesizes the second video data and the second audio data into multimedia data through the multimedia synthesizer.
In some embodiments, the terminal synthesizes the second video data and the second audio data into multimedia data based on the time stamps of the second video data and the second audio data. In some embodiments, the terminal may need to correct the time stamps of the second audio data and the second video data before inputting the second audio data and the second video data to the multimedia synthesizer. Accordingly, referring to fig. 9, the process of the terminal correcting the time stamp of the second audio data is implemented by the following steps (1-1) - (1-4), including:
(1-1) the terminal determining a sampling interval of the first audio data based on an audio sampling frequency of the first audio data.
Since the audio data is obtained by sampling, the time stamps of the audio data are uniform. Referring to fig. 10, time stamps between each frame of audio data in the audio data are uniform. The terminal corrects the time stamp of the second target audio data based on the sampling interval.
In the implementation manner, the terminal corrects the time stamp of the second target audio data based on the audio sampling frequency when the audio data is collected, so that the continuity of the time stamp of the second audio data before the multimedia data is synthesized is ensured, and the problem of interruption of the audio data caused by the time stamp is prevented.
The process of the terminal correcting the time stamp of the audio signal based on the sampling frequency is as follows: a sampling interval of the first audio data is determined based on the audio sampling frequency.
(1-2) the terminal obtaining a first target time stamp of last frame of audio data in second target audio data, the second target audio data being audio data preceding the first time stamp in the first audio data.
The terminal determines second target audio data before the first time stamp from the second audio data, determines the last frame of audio data of the second target audio data, and acquires the first target time stamp marked by the last frame of audio data. Note that the first target time stamp is the same as the first time stamp, or the first target time stamp is a time stamp preceding the first time stamp, or the like. In the embodiment of the present application, this is not particularly limited.
(1-3) the terminal modifying a time stamp of a first frame of audio data in third target audio data, which is audio data after the second time stamp in the first audio data, to a sum of the first target time stamp and the sampling interval.
In this step, the terminal determines the third target audio data after the second timestamp from the second audio data, and corrects the timestamp of the first frame of audio data of the third target audio data so that the timestamps of the third target audio data are consecutive with those of the second target audio data. Since the first audio data is uniformly sampled based on the sampling frequency, the time interval between each frame of audio data in the second audio data should be the same, so the timestamp of the first frame of audio data in the third target audio data is determined as the sum of the first target timestamp and the sampling interval.
(1-4) for any first target frame audio data following the first frame audio data in the third target audio data, the terminal determining the time stamp of the first target frame audio data based on the sum of the time stamp of the previous frame audio data of the first target frame audio data and the sampling interval.
In this step, the terminal corrects the timestamps of the audio data after the first frame of audio data. The terminal corrects the timestamp of each frame of audio data in the third target audio data to be the sum of the timestamp of the previous frame of audio data and the sampling interval.
In the implementation manner, since the time stamps of the collected first audio data are uniform, the time stamps of the second audio data can be corrected through the time interval corresponding to the sampling period, so that the accuracy of the time stamps of the audio data is ensured.
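Steps (1-1) to (1-4) can be sketched as follows, assuming frames are (timestamp, payload) tuples and the sampling interval is given in the same timebase as the stamps (names are illustrative):

```python
def correct_audio_timestamps(frames, t_resume, sample_interval):
    """Re-stamp the second audio data so it is continuous across the cut.

    Frames at or after the resume (second) timestamp are restamped to the
    previous frame's stamp plus the sampling interval, as in steps
    (1-1) to (1-4); earlier frames keep their stamps.
    """
    out, prev = [], None
    for ts, data in frames:
        if prev is not None and ts >= t_resume:
            ts = prev + sample_interval
        out.append((ts, data))
        prev = ts
    return out
```

Because each restamped frame becomes the new reference for the next one, the whole third target audio data shifts back uniformly, preserving the constant sampling interval.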
It should be noted that, before the second audio data is input to the multimedia synthesizer, the terminal further modifies the timestamp of the first frame of audio data of the second audio data to a first preset timestamp, and modifies the timestamps of the audio data after the first frame of audio data according to the sampling interval. Namely, the terminal determines the timestamp of the first frame of audio data in the second audio data as the first preset timestamp; for any second target frame of audio data subsequent to the first frame of audio data, the timestamp of the second target frame of audio data is determined based on the sum of the timestamp of the previous frame of audio data and the sampling interval. The first preset timestamp is set as required, and is not specifically limited in this embodiment of the present application.
It should be noted that the terminal performs the step of modifying the timestamp of the first frame of audio data of the second audio data to the first preset timestamp after modifying the timestamps of the third target audio data, or the terminal directly modifies the timestamps of the third target audio data while modifying the timestamp of the first frame of audio data of the second audio data to the first preset timestamp, which is not specifically limited in this embodiment of the present application.
For the second video data, since the timestamps of the video data are related to the refresh mechanism of the terminal screen, which is typically dynamic refresh, the timestamps of the video frames in the first video data are non-uniform. Referring to fig. 10, the timestamps between video frames are not uniform. In order to make the correction of the second video data more accurate, the terminal corrects the timestamp of the second video data based on the timestamps of the first video data. The process is as follows: the terminal determines the average time difference corresponding to the first video data based on the time differences between the timestamps of adjacent video frames in the first video data, and corrects the timestamp of the second video data based on the average time difference.
In the implementation manner, the terminal corrects the time stamp of the second video data based on the time stamp of each video frame in the acquired first video data, so that the continuity of the time stamp of the second video data before the second video data is synthesized into the multimedia data is ensured, and the problem of interruption of the video data caused by the time stamp is prevented.
Wherein the terminal determines an average difference value based on the differences between the timestamps of adjacent video frames, and corrects the second target video data based on the average difference value.
The terminal determines an average difference value of the time stamps of the video frames based on the recursive average difference, namely, determines the average difference value through a formula II.
Formula II:

Δt_n = ((n − 2) · Δt_{n−1} + (t_n − t_{n−1})) / (n − 1), with Δt_2 = t_2 − t_1

where n represents the current frame number of the video data, t_n represents the timestamp of the nth video frame, and Δt_n represents the average time difference corresponding to the nth video frame. The recursion expands to the closed form Δt_n = (t_n − t_1) / (n − 1).
In this implementation manner, a relatively stable time difference is obtained by averaging the differences between timestamps, which improves the accuracy of correcting the timestamp of the second target video data.
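A sketch of the average time difference of formula II, computed here in its closed form as the mean gap between consecutive timestamps (the function name is an assumption for illustration):

```python
def average_frame_interval(timestamps):
    """Average gap between consecutive video-frame timestamps (formula II,
    closed form): (t_n - t_1) / (n - 1)."""
    n = len(timestamps)
    if n < 2:
        raise ValueError("need at least two video frames")
    return (timestamps[-1] - timestamps[0]) / (n - 1)
```

The inner gaps telescope away, so only the first and last stamps are needed; the recursive form of formula II yields the same value frame by frame.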
The process of the terminal correcting the time stamp of the second video data is realized by the following steps (2-1) - (2-3), comprising:
(2-1) the terminal obtaining a second target time stamp of a last frame of video frames in second target video data, the second target video data being video data preceding the first time stamp in the first video data.
This step is similar to step (1-2) and will not be described again here.
(2-2) the terminal modifying a time stamp of a first video frame in third target video data, which is video data after the second time stamp in the first video data, to a sum of the second target time stamp and the average time difference.
This step is similar to step (1-3) and will not be described again here.
(2-3) for any first target video frame following the first video frame in the third target video data, the terminal determining the timestamp of the first target video frame based on the sum of the timestamp of the previous video frame to the first target video frame and the average time difference.
This step is similar to step (1-4) and will not be described again here.
It should be noted that, before the second video data is input to the multimedia synthesizer, the terminal also modifies the timestamp of the first video frame of the second video data to a second preset timestamp, and modifies the timestamps of the video frames after the first video frame according to the average time difference. Namely, the terminal determines the timestamp of the first video frame in the second video data as the second preset timestamp; for any second target video frame subsequent to the first video frame, the timestamp of the second target video frame is determined based on the sum of the timestamp of the previous video frame of the second target video frame and the average time difference. The second preset timestamp is set as required, and is the same as or different from the first preset timestamp, which is not specifically limited in this embodiment of the present application.
It should be noted that, the terminal performs the step of modifying the timestamp of the first video frame of the second video data to the second preset timestamp after modifying the timestamp of the third target video segment, or the terminal directly modifies the timestamp of the third target video data by modifying the timestamp of the first video frame of the second video data to the second preset timestamp, which is not specifically limited in this embodiment of the present application.
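Steps (2-1) to (2-3), plus the preset-timestamp rebasing just noted, can be sketched as follows (illustrative names; the average time difference would come from formula II):

```python
def correct_video_timestamps(frames, t_resume, avg_dt):
    """Re-stamp video frames after the cut using the average time
    difference, mirroring steps (2-1) to (2-3)."""
    out, prev = [], None
    for ts, data in frames:
        if prev is not None and ts >= t_resume:
            ts = prev + avg_dt
        out.append((ts, data))
        prev = ts
    return out

def rebase(frames, preset=0):
    """Shift every stamp so the first frame lands on a preset timestamp,
    as in the rebasing note above."""
    offset = preset - frames[0][0]
    return [(ts + offset, data) for ts, data in frames]
```

The video path is structurally the same as the audio path; the only difference is that the step size is the averaged (dynamic-refresh) interval instead of the fixed sampling interval.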
The terminal inputs the corrected second audio data and second video data to a multimedia synthesizer, and synthesizes the audio data and the video data into multimedia data based on the time stamps of the second audio data and the second video data through the multimedia synthesizer.
In this implementation manner, the terminal deletes the first target audio data and the first target video data corresponding to the first timestamp and the second timestamp from the first audio data and the first video data respectively to obtain the second audio data and the second video data, and synthesizes the second audio data and the second video data into the multimedia data. This realizes pausing the screen recording during the recording process, ensures that the video data and the audio data after the pause are matched, reduces the video data delay caused by pausing the screen recording, and solves the problem that the video picture and the audio in the recorded multimedia data are not synchronized.
It should be noted that, when receiving the screen recording end operation, the terminal synthesizes the second video data and the second audio data into the multimedia data. Alternatively, during the screen recording process, the terminal synthesizes the second video data and the second audio data into the multimedia data; this is not specifically limited in the embodiments of the present application.
Another point to be described is that the terminal can also stop collecting audio data and video data after detecting the triggering operation of the pause screen recording control. Correspondingly, responding to the trigger of the pause screen recording control, the terminal generates a pause screen recording instruction, and the pause screen recording instruction is used for indicating the terminal to stop collecting the audio signal and the video data. In the implementation manner, the terminal acquires the video source data rendered by the renderer so as to ensure that the delay between the video data and the display picture actually displayed is within an allowable range.
In the embodiment of the application, the first timestamp of suspending the screen recording and the second timestamp of continuing the screen recording are recorded in the screen recording process, so that when the multimedia file obtained by the screen recording is synthesized, the first timestamp and the second timestamp can be intercepted, the time stamps of the intercepted video data and audio data can be corrected, the corrected audio data and video data form the multimedia data of the screen recording, the screen recording is suspended and the screen recording is continued in the screen recording process, the screen recording mode is enriched, and the screen recording can meet richer use scenes.
Referring to fig. 11, a block diagram of a screen recording device according to an embodiment of the present application is shown. The recording device may be implemented as all or part of the processor by software, hardware, or a combination of both. The device comprises:
the first screen recording module 1101 is configured to record a first timestamp in response to a pause screen recording operation and keep a screen recording during a screen recording process, where the first timestamp is a trigger time of the pause screen recording operation;
the second recording module 1102 is configured to record a second timestamp in response to the continuous recording operation, where the second timestamp is a trigger time of the continuous recording operation;
the intercepting module 1103 is configured to intercept the first audio data and the first video data obtained by recording based on the first timestamp and the second timestamp, to obtain second audio data and second video data;
and the synthesizing module is used for respectively correcting the time stamps of the second video data and the second audio data and synthesizing the multimedia data based on the second audio data and the second video data after the time stamps are corrected.
In some embodiments, the intercept module 1103 includes:
a first determining unit configured to determine a first time difference between the first time stamp and the second time stamp;
A second determining unit, configured to determine a time difference between time stamps of any two video frames from the first video data, to obtain a plurality of second time differences;
a third determining unit, configured to determine, for any second time difference, video data between two video frames corresponding to the second time difference as first target video data in response to the second time difference matching the first time difference;
and the intercepting unit is used for deleting the first target video data to obtain the second video data, or modifying the data content of the first target video data to obtain the second video data.
In some embodiments, the second determining unit is configured to determine, from the first video data, a first video frame whose timestamp matches the first timestamp; determining a plurality of second video frames having time stamps after the first video frame, respectively; a second time difference between each second video frame and the first video frame is determined.
In some embodiments, the intercepting unit is configured to add a null frame tag to the first target video data to obtain the second video data, where the null frame tag is used to indicate that the corresponding video data is discarded when the multimedia data is synthesized; or,
The intercepting unit is configured to set a video frame of the first target video data as a target video frame, obtain the second video data, and discard the target video frame when synthesizing the multimedia data.
In some embodiments, the intercept module 1103 includes:
and the deleting unit is used for deleting first target audio data in the first audio data to obtain the second audio data, wherein the first target audio data is the audio data between the first time stamp and the second time stamp.
In some embodiments, the intercept module 1103 includes:
a fourth determining unit configured to determine a sampling interval of the first audio data according to an audio sampling frequency of the first audio data;
the acquisition unit is used for acquiring a first target time stamp of the last frame of audio data in second target audio data, wherein the second target audio data is the audio data before the first time stamp in the first audio data;
a first modifying unit, configured to modify a timestamp of a first frame of audio data in third target audio data to be a sum of the first target timestamp and the sampling interval, where the third target audio data is audio data after the second timestamp in the first audio data; the method comprises the steps of,
And the second modification unit is used for determining the time stamp of any first target frame audio data after the first frame audio data in the third target audio data according to the sum of the time stamp of the previous frame audio data of the first target frame audio data and the sampling interval.
In some embodiments, the intercept module 1103 includes:
a fifth determining unit, configured to determine an average time difference corresponding to the first video data based on a time difference between time stamps of two adjacent video frames in the first video data;
and the correction unit is used for correcting the time stamp of the second video based on the average time difference.
In some embodiments, the correction unit is configured to obtain a second target timestamp of a last frame of video frames in second target video data, where the second target video data is video data before the first timestamp in the first video data; modifying a timestamp of a first video frame in third target video data to be a sum of the second target timestamp and the average time difference, wherein the third target video data is video data after the second timestamp in the first video data; and for any first target video frame after the first video frame in the third target video data, determining the time stamp of the first target video frame according to the sum of the time stamp of the previous video frame of the first target video frame and the average time difference.
In some embodiments, the apparatus further comprises:
the display module is used for displaying the suspension control, the suspension control comprises a pause screen recording control and a continuous screen recording control, the pause screen recording control is used for triggering the pause screen recording operation, and the continuous screen recording control is used for triggering the continuous execution of the screen recording operation.
In some embodiments, the display module is configured to display the suspension control in a display interface, and display the pause screen recording control in the suspension control; responding to the trigger of the pause screen recording control, and switching the pause screen recording control into the continuous screen recording control; and responding to the trigger of the continuous screen recording control, and switching the continuous screen recording control into the pause screen recording control.
In the embodiment of the application, the first timestamp of suspending the screen recording and the second timestamp of continuing the screen recording are recorded in the screen recording process, so that when the multimedia file obtained by the screen recording is synthesized, the first timestamp and the second timestamp can be intercepted, the time stamps of the intercepted video data and audio data can be corrected, the corrected audio data and video data form the multimedia data of the screen recording, the screen recording is suspended and the screen recording is continued in the screen recording process, the screen recording mode is enriched, and the screen recording can meet richer use scenes.
Referring to fig. 12, a block diagram illustrating a structure of a terminal 1200 according to an exemplary embodiment of the present application is shown. Terminal 1200 may be a smart phone, tablet computer, or other terminal with a screen recording function. Terminal 1200 in the present application may include one or more of the following: processor 1210, memory 1220, screen recording module 1230.
Processor 1210 may include one or more processing cores. The processor 1210 connects various parts within the entire terminal 1200 using various interfaces and lines, performs various functions of the terminal 1200 and processes data by running or executing computer programs stored in the memory 1220, and calling data stored in the memory 1220. Optionally, the processor 1210 is implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), programmable logic array (Programmable Logic Array, PLA). The processor 1210 may integrate one or a combination of several of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a neural network processing unit (Neural-network Processing Unit, NPU), a modem, etc. The CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing the content required to be displayed by the display screen; the NPU is used to implement artificial intelligence (Artificial Intelligence, AI) functionality; the modem is used to handle wireless communications. It will be appreciated that the modem may not be integrated into the processor 1210 and may be implemented by a single chip.
The Memory 1220 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (ROM). Optionally, the memory 1220 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 1220 may be used to store computer programs. The memory 1220 may include a stored program area and a stored data area, wherein the stored program area may store program code for implementing an operating system, program code for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), program code for implementing the various method embodiments described below, etc.; the storage data area may store data (e.g., audio data, phonebook) created according to the use of the terminal 1200, etc.
The recording module 1230 includes an Audio recorder (Audio Record), an Audio encoder (Audio Media Codec), a Renderer (Renderer), a Virtual Display (Virtual Display), a Video encoder (Video MediaCodec), and a multimedia synthesizer (Media Muxer).
In some embodiments, terminal 1200 also includes a display screen. Wherein the display screen is a display component for displaying a user interface. Optionally, the display screen is a display screen with a touch function, and through the touch function, a user can perform touch operation on the display screen by using any suitable object such as a finger, a touch pen, and the like. The display screen is typically provided at the front panel of the terminal 1200. The display screen may be designed as a full screen, a curved screen, a contoured screen, a double-sided screen, or a folded screen. The display screen can also be designed into a combination of a full screen and a curved screen, a combination of a special-shaped screen and a curved screen, and the like, which is not limited in this embodiment.
In addition, those skilled in the art will appreciate that the structure of the terminal 1200 shown in the above figures does not constitute a limitation on the terminal 1200; the terminal 1200 may include more or fewer components than illustrated, may combine certain components, or may have a different arrangement of components. For example, the terminal 1200 may further include a microphone, a speaker, a radio frequency circuit, an input unit, a sensor, an audio circuit, a power supply, a Bluetooth module, and the like, which are not described here.
Embodiments of the present application also provide a computer-readable storage medium storing at least one computer program that is loaded and executed by a processor to implement the screen recording method shown in the above embodiments.
Embodiments of the present application also provide a computer program product containing at least one computer program that is loaded and executed by a processor to implement the screen recording method shown in the above embodiments.
In some embodiments, the computer program of the embodiments of the present application may be deployed to execute on a single computer device, on multiple computer devices at one site, or on multiple computer devices distributed across multiple sites and interconnected by a communication network; the computer devices distributed across multiple sites and interconnected by a communication network may constitute a blockchain system.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on, or transmitted as one or more computer programs over, a computer-readable medium. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
The foregoing description is merely a preferred embodiment of the present application and is not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within its protection scope.

Claims (13)

1. A screen recording method, the method comprising:
during screen recording, in response to a pause screen recording operation, recording a first timestamp and continuing to record the screen, wherein the first timestamp is the trigger time of the pause screen recording operation;
in response to a resume screen recording operation, recording a second timestamp, wherein the second timestamp is the trigger time of the resume screen recording operation;
intercepting, based on the first timestamp and the second timestamp, first audio data and first video data obtained by the screen recording, to obtain second audio data and second video data;
and correcting the timestamps of the second video data and the second audio data respectively, and synthesizing multimedia data based on the corrected second audio data and second video data.
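The four steps above can be sketched in a few lines (a minimal Python illustration, not the patent's actual implementation; the `(timestamp, payload)` frame representation and the fixed `interval` are assumptions for clarity):

```python
def resume_and_join(frames, t_pause, t_resume, interval):
    """Sketch of claim 1: drop the frames captured while paused, then
    rewrite the timestamps of the resumed frames so they follow the
    pre-pause frames at a fixed interval."""
    pre = [(ts, d) for ts, d in frames if ts < t_pause]     # before the pause trigger
    post = [(ts, d) for ts, d in frames if ts >= t_resume]  # after the resume trigger
    prev_ts = pre[-1][0] if pre else -interval
    joined = list(pre)
    for _, d in post:  # correct each resumed frame's timestamp
        prev_ts += interval
        joined.append((prev_ts, d))
    return joined
```

For frames stamped 0, 10, 20 before the pause and 100, 110 after the resume, the output is stamped 0, 10, 20, 30, 40, so playback is continuous across the cut.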
2. The method of claim 1, wherein intercepting the first video data based on the first timestamp and the second timestamp comprises:
determining a first time difference between the first timestamp and the second timestamp;
determining time differences between the timestamps of any two video frames in the first video data to obtain a plurality of second time differences;
for any second time difference, in response to the second time difference matching the first time difference, determining the video data between the two video frames corresponding to the second time difference as first target video data;
and deleting the first target video data to obtain the second video data, or modifying the data content of the first target video data to obtain the second video data.
3. The method of claim 2, wherein determining the time differences between the timestamps of any two video frames in the first video data to obtain the plurality of second time differences comprises:
determining, from the first video data, a first video frame whose timestamp matches the first timestamp;
determining a plurality of second video frames whose timestamps are subsequent to that of the first video frame;
and determining a second time difference between each second video frame and the first video frame.
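Claims 2 and 3 locate the paused segment by comparing time differences; a hedged sketch (the frame-selection rule and the matching tolerance are illustrative assumptions, not specified by the claims):

```python
def second_time_differences(video_ts, t_pause):
    """Sketch of claim 3: pick the video frame whose timestamp matches
    the pause trigger time, then compute the difference between each
    later frame's timestamp and that frame's timestamp."""
    first = max(ts for ts in video_ts if ts <= t_pause)  # frame at or just before the pause
    return first, [ts - first for ts in video_ts if ts > first]

def matches(second_diff, first_diff, tolerance):
    """Claim 2 keeps, as first target video data, the segment whose
    second time difference matches the pause-to-resume (first) difference."""
    return abs(second_diff - first_diff) <= tolerance
```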
4. The method of claim 2, wherein modifying the data content of the first target video data to obtain the second video data comprises:
adding a blank frame tag to the first target video data to obtain the second video data, wherein the blank frame tag indicates that the corresponding video data is to be discarded when the multimedia data is synthesized; or,
setting the video frames of the first target video data as target video frames to obtain the second video data, wherein the target video frames are discarded when the multimedia data is synthesized.
5. The method of claim 1, wherein intercepting the first audio data based on the first timestamp and the second timestamp comprises:
deleting first target audio data from the first audio data to obtain the second audio data, wherein the first target audio data is the audio data between the first timestamp and the second timestamp.
6. The method of claim 1, wherein correcting the timestamp of the second audio data comprises:
determining a sampling interval of the first audio data according to an audio sampling frequency of the first audio data;
acquiring a first target timestamp of the last frame of audio data in second target audio data, wherein the second target audio data is the audio data before the first timestamp in the first audio data;
modifying the timestamp of the first frame of audio data in third target audio data to be the sum of the first target timestamp and the sampling interval, wherein the third target audio data is the audio data after the second timestamp in the first audio data; and
for any first target frame of audio data after the first frame of audio data in the third target audio data, determining the timestamp of the first target frame of audio data as the sum of the timestamp of its previous frame of audio data and the sampling interval.
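The audio correction of claim 6 follows directly from the sampling parameters; a Python sketch (the 1024-samples-per-frame figure and microsecond units are assumptions for illustration):

```python
def audio_frame_interval_us(sample_rate_hz, samples_per_frame):
    """Claim 6 derives the inter-frame sampling interval from the audio
    sampling frequency; e.g. 1024-sample frames at 48 kHz are ~21333 us apart."""
    return samples_per_frame * 1_000_000 // sample_rate_hz

def correct_audio_timestamps(last_pre_pause_ts, post_resume_frames, interval):
    """First post-resume frame: last pre-pause timestamp + interval;
    each following frame: previous corrected timestamp + interval."""
    corrected, ts = [], last_pre_pause_ts
    for frame in post_resume_frames:
        ts += interval
        corrected.append((ts, frame))
    return corrected
```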
7. The method of claim 1, wherein correcting the timestamp of the second video data comprises:
determining an average time difference corresponding to the first video data based on the time differences between the timestamps of adjacent video frames in the first video data;
and correcting the timestamp of the second video data based on the average time difference.
8. The method of claim 7, wherein correcting the timestamp of the second video data based on the average time difference comprises:
acquiring a second target timestamp of the last video frame in second target video data, wherein the second target video data is the video data before the first timestamp in the first video data;
modifying the timestamp of the first video frame in third target video data to be the sum of the second target timestamp and the average time difference, wherein the third target video data is the video data after the second timestamp in the first video data; and
for any first target video frame after the first video frame in the third target video data, determining the timestamp of the first target video frame as the sum of the timestamp of its previous video frame and the average time difference.
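The video correction of claims 7 and 8 can be sketched the same way (the floating-point average and the list-of-timestamps representation are assumptions):

```python
def average_frame_interval(video_ts):
    """Claim 7: average of the time differences between adjacent
    video-frame timestamps in the first video data."""
    diffs = [b - a for a, b in zip(video_ts, video_ts[1:])]
    return sum(diffs) / len(diffs)

def correct_video_timestamps(last_pre_pause_ts, n_post_frames, avg_interval):
    """Claim 8: the first post-resume frame gets the last pre-pause
    timestamp plus the average interval; each later frame adds it again."""
    return [last_pre_pause_ts + avg_interval * (i + 1) for i in range(n_post_frames)]
```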
9. The method of claim 1, further comprising:
displaying a floating control, wherein the floating control comprises a pause screen recording control and a resume screen recording control, the pause screen recording control is used to trigger the pause screen recording operation, and the resume screen recording control is used to trigger the resume screen recording operation.
10. The method of claim 9, wherein displaying the floating control comprises:
displaying the floating control in a display interface, and displaying the pause screen recording control in the floating control;
in response to the pause screen recording control being triggered, switching the pause screen recording control to the resume screen recording control;
and in response to the resume screen recording control being triggered, switching the resume screen recording control to the pause screen recording control.
11. A screen recording apparatus, comprising:
a first screen recording module, configured to, during screen recording, record a first timestamp and continue to record the screen in response to a pause screen recording operation, wherein the first timestamp is the trigger time of the pause screen recording operation;
a second screen recording module, configured to record a second timestamp in response to a resume screen recording operation, wherein the second timestamp is the trigger time of the resume screen recording operation;
an intercepting module, configured to intercept, based on the first timestamp and the second timestamp, first audio data and first video data obtained by the screen recording, to obtain second audio data and second video data;
and a synthesizing module, configured to correct the timestamps of the second video data and the second audio data respectively, and synthesize multimedia data based on the corrected second audio data and second video data.
12. A terminal, comprising a processor and a memory, wherein the memory stores at least one computer program that is executed by the processor to implement the screen recording method of any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores at least one computer program that is executed by a processor to implement the screen recording method of any one of claims 1 to 10.
CN202110830451.8A 2021-07-22 2021-07-22 Screen recording method, device, terminal and storage medium Active CN113473215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110830451.8A CN113473215B (en) 2021-07-22 2021-07-22 Screen recording method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110830451.8A CN113473215B (en) 2021-07-22 2021-07-22 Screen recording method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113473215A CN113473215A (en) 2021-10-01
CN113473215B true CN113473215B (en) 2023-04-28

Family

ID=77882024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110830451.8A Active CN113473215B (en) 2021-07-22 2021-07-22 Screen recording method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113473215B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116048363B (en) * 2023-04-03 2023-08-25 数孪模型科技(北京)有限责任公司 Display method, system, equipment and medium of software interface based on artificial intelligence

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9269399B2 (en) * 2011-06-13 2016-02-23 Voxx International Corporation Capture, syncing and playback of audio data and image data
CN105704539A (en) * 2016-02-15 2016-06-22 努比亚技术有限公司 Video sharing device and video sharing method

Also Published As

Publication number Publication date
CN113473215A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
CN106254311B (en) Live broadcast method and device and live broadcast data stream display method and device
CN108737908B (en) Media playing method, device and storage medium
CN111970577B (en) Subtitle editing method and device and electronic equipment
US11587317B2 (en) Video processing method and terminal device
CN108769726B (en) Multimedia data pushing method and device, storage medium and equipment
US20220417417A1 (en) Content Operation Method and Device, Terminal, and Storage Medium
CN104639977B (en) The method and device that program plays
CN111367434B (en) Touch delay detection method and device, electronic equipment and storage medium
CN108737884B (en) Content recording method and equipment, storage medium and electronic equipment
CN108900855B (en) Live content recording method and device, computer readable storage medium and server
JP2016541214A (en) Video browsing method, apparatus, program and recording medium
CN110475140A (en) Barrage data processing method, device, computer readable storage medium and computer equipment
CN109617945A (en) Sending method, sending device, electronic equipment and the readable medium of file transmission
CN113473215B (en) Screen recording method, device, terminal and storage medium
CN107592556A (en) Control information synthetic method and controlled terminal and mobile terminal based on mobile terminal
CN111835739A (en) Video playing method and device and computer readable storage medium
CN109656463B (en) Method, device and system for generating individual expressions
CN114979785A (en) Video processing method and related device
US11600300B2 (en) Method and device for generating dynamic image
CA3102425A1 (en) Video processing method, device, terminal and storage medium
CN107710754B (en) Audio and video data synchronization method and device
CN111083506B (en) Management system based on 5G intelligent terminal
CN112328895A (en) User portrait generation method, device, server and storage medium
CN116723353A (en) Video monitoring area configuration method, system, device and readable storage medium
CN114697568B (en) Special effect video determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant