CN118075540A - Video recording method and electronic device

Video recording method and electronic device

Info

Publication number
CN118075540A
CN118075540A (application CN202211467870.0A)
Authority
CN
China
Prior art keywords
application
tag
video clip
recording
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211467870.0A
Other languages
Chinese (zh)
Inventor
张静
郑嘉琨
张沁峰
杨裕伟
孟三军
方雄飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211467870.0A
Publication of CN118075540A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides a video recording method and an electronic device, belonging to the field of electronic technologies. The video recording method is performed by the electronic device and comprises the following steps: displaying, in a screen of the electronic device, a first control, a first interface of a first application and a second interface of a second application, wherein the first application is an application for playing video; in response to a start-recording instruction input by a user through the first control, performing screen recording on a first area, the first area being the area of the screen in which the first application displays a video picture; during screen recording, in response to a tag-adding instruction input by the user through the first control, generating information of a first tag, wherein the information of the first tag has a first correspondence with the addition time of the first tag, the addition time being the recording duration of the screen recording when the tag-adding instruction is received; and displaying the information of the first tag in the second interface. The method can record video and improve user experience.

Description

Video recording method and electronic device
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a video recording method and an electronic device.
Background
A note application (APP) is an APP commonly used by users. Currently, many note APPs can record text content and audio recordings, but cannot record video. They therefore cannot meet users' needs in video-watching scenarios; for example, students taking online lessons cannot conveniently record the video content, which affects user experience.
Disclosure of Invention
The present application provides a video recording method and an electronic device, which can record video content and improve user experience.
In a first aspect, the present application provides a video recording method, the method being performed by an electronic device and comprising: displaying, in a screen of the electronic device, a first control, a first interface of a first application and a second interface of a second application, wherein the first application is an application for playing video; in response to a start-recording instruction input by a user through the first control, performing screen recording on a first area, the first area being the area of the screen in which the first application displays a video picture; during screen recording, in response to a tag-adding instruction input by the user through the first control, generating information of a first tag, wherein the information of the first tag has a first correspondence with the addition time of the first tag, the addition time being the recording duration of the screen recording when the tag-adding instruction is received; and displaying the information of the first tag in the second interface.
The first application may be, for example, any application capable of playing video, such as a video APP or an online class (net lesson) APP. The second application is an APP capable of recording content, for example a note APP or a memo APP. The second interface may be, for example, the note editing interface of a note APP. Optionally, the first control may be a control of a third application, or a control of the second application. The third application is an APP that implements the recording, for example a video clip APP.
The first control may include a start-recording control, and the user inputs the start-recording instruction by clicking it. In response to the user's click, screen recording of the first area starts. During screen recording, if the user inputs a tag-adding instruction through the first control, information of a first tag is generated and displayed in the second interface. The first tag is the name given to the tag added this time; the information of a tag is also referred to as the content of the tag. The first area is the video playing area.
Optionally, the first tag may be a plain text tag or a screenshot text tag. The first control may include a control for adding a plain text tag and a control for adding a screenshot text tag. A screenshot text tag is a tag whose information includes a screenshot; a plain text tag is a tag whose information does not. In the embodiment of the application, the information of a plain text tag may include, but is not limited to, the icon of the tag, the addition time of the tag, the text editing area of the tag, and the content edited by the user in that area. A screenshot text tag includes, in addition to the above, a screenshot: the image displayed in the first area when the electronic device receives the tag-adding instruction.
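For concreteness, the tag information described above can be modeled as one small record per tag, keyed to the tag's addition time. The following Kotlin sketch is purely illustrative; every field name is an assumption rather than the application's actual data schema.

```kotlin
import java.time.Duration

// Illustrative model of a tag's information. Field names are assumptions made
// for this sketch, not the application's actual schema.
data class TagInfo(
    val videoId: String,               // preset identifier of the clip being recorded
    val additionTime: Duration,        // recording duration when the tag was added (first correspondence)
    val iconRes: Int,                  // icon of the tag
    val editedText: String = "",       // content the user typed in the text editing area
    val screenshotUri: String? = null  // non-null only for a screenshot text tag
)
```

Keeping the addition time as a duration relative to the start of recording is what makes the playback-time comparisons described later trivial.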
The video obtained by the current screen recording of the first area may be referred to as the target video clip. It will be appreciated that, during screen recording, the steps of generating information of the first tag in response to a tag-adding instruction input through the first control and of displaying that information in the second interface may be performed repeatedly, generating and displaying information of multiple tags corresponding to the target video clip. Likewise, the whole sequence of performing screen recording on the first area in response to a start-recording instruction, generating tag information during recording, and displaying it in the second interface may be repeated to record multiple video clips and to generate and display the tag information corresponding to each clip.
In the method provided in the first aspect, when the screen includes the first interface of the first application, the first control and the second interface of the second application, the first area is screen-recorded under control of the first control. The user can thus record video at any time while watching it through the first application, which improves user experience. In addition, during screen recording the user can add tags, whose information is displayed in the second interface. Because the information of a tag has the first correspondence with the tag's addition time, bidirectional positioning between video clip playback and tag display becomes possible when the recorded clip is later played: when the clip plays to a certain playing duration, a tag whose addition time matches that duration is highlighted; conversely, when the user clicks the information of a tag in the note, the playback progress of the clip jumps to the playing duration matching that tag's addition time. Synchronizing clip playback with tag display in this way makes reviewing and cross-checking convenient and further improves user experience.
In one possible implementation, performing screen recording on the first area includes: identifying the first area; acquiring position information of the first area; and recording, according to the position information of the first area, the image displayed in the first area and the audio corresponding to the image.
Alternatively, the electronic device may include a screen recording service, and the steps in this implementation may be performed by that service.
In this implementation, the first area is identified and its position information acquired, and then only the content of the first area is recorded according to that position information. The recorded video clip therefore contains no content outside the first area, the recording effect is better, and user experience is improved.
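The "barrier-free service" referred to later in this document corresponds to Android's accessibility service, which can report the on-screen bounds of another application's views. A minimal sketch under that assumption, using real accessibility APIs but with a purely illustrative heuristic (matching a SurfaceView) for finding the video surface:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.graphics.Rect
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Minimal sketch: walk the active window's node tree and return the on-screen
// bounds of the first application's video surface (the first area).
class RegionLocatorService : AccessibilityService() {

    fun locateVideoRegion(targetPackage: String): Rect? {
        val root = rootInActiveWindow ?: return null
        val node = findVideoNode(root, targetPackage) ?: return null
        return Rect().also { node.getBoundsInScreen(it) }  // position information of the first area
    }

    private fun findVideoNode(node: AccessibilityNodeInfo, pkg: String): AccessibilityNodeInfo? {
        // Heuristic assumed for illustration: the video picture is rendered into a SurfaceView.
        if (node.packageName?.toString() == pkg &&
            node.className?.toString()?.contains("SurfaceView") == true
        ) return node
        for (i in 0 until node.childCount) {
            val child = node.getChild(i) ?: continue
            findVideoNode(child, pkg)?.let { return it }
        }
        return null
    }

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {}
    override fun onInterrupt() {}
}
```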
In one possible implementation, recording the image displayed in the first area and the audio corresponding to the image includes: acquiring all layer data in the first area at a first moment, the first moment being any moment during screen recording; filtering out the layer data of non-first applications from all the layer data to obtain the remaining layer data; synthesizing the remaining layer data to obtain a first image; and acquiring first audio at the first moment, the first audio corresponding to the first image.
The first image is any frame of the recorded video clip. In this implementation, the image is obtained by filtering out the layer data of non-first applications and synthesizing the remaining layer data. Content not belonging to the first application is thus removed from the image, preventing it from occluding the first application's video picture, so the recorded clip displays better and user experience is further improved.
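Layer filtering and synthesis happen inside the system compositor rather than in application code, so the following is only a conceptual sketch; Layer and the composite callback are hypothetical stand-ins for SurfaceFlinger internals:

```kotlin
// Conceptual sketch of the per-frame filtering described above; everything
// here is illustrative, not a SurfaceFlinger API.
data class Layer(val ownerPackage: String, val pixels: ByteArray)

fun composeFrame(
    layersInRegion: List<Layer>,           // all layer data in the first area at the first moment
    firstAppPackage: String,
    composite: (List<Layer>) -> ByteArray  // synthesis step performed by the compositor
): ByteArray {
    // Filter out layer data of non-first applications so that floating windows
    // or controls above the video picture are not recorded.
    val remaining = layersInRegion.filter { it.ownerPackage == firstAppPackage }
    return composite(remaining)            // the first image: one frame of the clip
}
```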
In one possible implementation, the tag-adding instruction indicates that a plain text tag is to be added, and the information of the first tag includes at least one of the addition time of the first tag, the icon of the first tag and the text editing area of the first tag, where the text editing area is used for editing text information.
Alternatively, the text editing area of the tag is the tag information input area. In this implementation, when the first tag is a plain text tag, its information may include at least one of the items above: the addition time lets the user know the recording duration of the clip at the moment the tag was added, the icon helps the user identify the tag, and the text editing area lets the user record text information related to the tag, each of which improves user experience.
In one possible implementation, the information of the first tag includes the text editing area of the first tag, and the method further includes: receiving first content input by the user in the text editing area of the first tag; storing the first content as information of the first tag; and displaying the first content in the text editing area of the first tag.
In this implementation, when the user inputs the first content in the text editing area, the first content is stored as information of the first tag. The first content and the other tag information thus form a whole that shares the first correspondence with the addition time of the first tag, which makes bidirectional positioning between the display of the first content and clip playback easy to realize when the clip is later played.
In one possible implementation, the tag-adding instruction indicates that a screenshot text tag is to be added, and the information of the first tag includes the screenshot together with at least one of the addition time of the first tag, the icon of the first tag and the text editing area of the first tag. The text editing area is used for editing text information, and the screenshot is the image displayed in the first area, captured when the tag-adding instruction is received.
In this implementation, when the first tag is a screenshot text tag, its information may include the screenshot and at least one of the items above: the addition time lets the user know the recording duration of the clip at the moment the tag was added, the icon helps the user identify the tag, and the text editing area lets the user record text information related to the tag. The screenshot lets the user intuitively capture a picture from the video, further helping the user record the video content and improving user experience.
In one possible implementation, the method further includes: in response to the start-recording instruction input by the user through the first control, displaying a first card in the second interface, the first card having a second correspondence with a preset identifier, where the preset identifier is the video identifier of the video clip that the current screen recording will finally produce. Displaying the information of the first tag in the second interface then includes: displaying the information of the first tag in the first card according to the preset identifier, based on the second correspondence.
Alternatively, the preset identifier may be generated in advance when screen recording starts; it represents the unique identity of the video clip generated by the current screen recording.
In this implementation, the first card is displayed in the second interface, and the information of the first tag is displayed on the first card based on the second correspondence between the first card and the preset identifier. Since the preset identifier is the video identifier of the clip the current recording will produce, the tag information is displayed in the card corresponding to that clip, letting the user intuitively see the relationship between a clip's card and its tags.
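A minimal sketch of how this second correspondence could be held in memory, reusing the illustrative TagInfo above; the class and method names are assumptions:

```kotlin
// Sketch of the second correspondence: each card in the note is keyed by the
// preset identifier (the video identifier of the clip being recorded), so tag
// information can be routed to the right card.
class NoteCards {
    private val cardsByVideoId = mutableMapOf<String, MutableList<TagInfo>>()

    // Called when recording starts: create the first card for the preset identifier.
    fun createCard(presetId: String) {
        cardsByVideoId.getOrPut(presetId) { mutableListOf() }
    }

    // Display a tag's information in the card matching the preset identifier.
    fun addTag(presetId: String, tag: TagInfo) {
        cardsByVideoId.getOrPut(presetId) { mutableListOf() }.add(tag)
    }
}
```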
In one possible implementation, the method further includes: stopping screen recording in response to an end-recording instruction input by the user; and generating and saving a target video clip, the preset identifier being the video identifier of the target video clip.
Alternatively, the target video clip may be saved to a database of the second application. The target video clip is the video generated by the current screen recording, so its video identifier is the preset identifier.
In one possible implementation, the end-recording instruction is: an instruction input through the first control indicating that screen recording is to stop; or an instruction indicating exit from the second interface; or an instruction indicating exit from the second application.
That is, stopping screen recording may be triggered by a stop instruction input through the first control; for example, the first control includes a stop-recording control and recording stops when it is clicked. It may also be triggered by an instruction to exit the second interface, for example the user clicking a back-to-upper-level control in the second interface or sliding inward from the edge of the screen. It may likewise be triggered by an instruction to exit the second application, for example the user clicking a close control in the second interface, or deleting the second application's task in the recent tasks window.
This implementation provides multiple ways to trigger the stop of screen recording, which makes operation convenient and improves user experience.
In one possible implementation, the first control is a control of the third application, and the end-recording instruction is an instruction input through the first control indicating exit from the third application.
In this implementation, screen recording stops when the user exits the third application. This is convenient to operate and does not interfere with the operation of the second application, further improving user experience.
In one possible implementation, the method further includes: during screen recording, stopping the recording if a change in the position information of the first area is detected; and generating and saving a target video clip, the preset identifier being the video identifier of the target video clip.
A change in the position information of the first area means a change in the position or size of the area in which the first application plays video, for example because the user moves or zooms the first application's video playing window.
In this implementation, the position information of the first area is monitored during screen recording, and a change in it triggers the recording to stop. This prevents recording from continuing at the original position after the user has moved the first application's video playing window, which improves the recording effect and, in turn, user experience.
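A sketch of the monitoring loop, assuming the recorder can periodically re-query the first area's bounds (for example via the accessibility sketch above); the polling wiring is an assumption:

```kotlin
import android.graphics.Rect

// Sketch: on each tick, compare the first area's current bounds with the last
// observed bounds and stop recording if they differ.
class RegionWatcher(
    private val currentBounds: () -> Rect?,   // e.g. RegionLocatorService::locateVideoRegion
    private val stopRecording: () -> Unit
) {
    private var lastBounds: Rect? = null

    fun onTick() {
        val bounds = currentBounds()
        if (lastBounds != null && bounds != lastBounds) {
            stopRecording()  // window moved or zoomed: end and save the target video clip
        }
        lastBounds = bounds
    }
}
```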
In one possible implementation, the method further includes: acquiring a first frame image, the first frame image being the image displayed in the first area captured at a second moment, the second moment being the moment the start-recording instruction is received. After the target video clip is generated and saved, the method further includes: displaying the first frame image and a second control in the second interface, the second control being a play control of the target video clip.
In this implementation, after the target video clip is recorded, its first frame image and the second control are displayed in the second interface. The user can thus see at a glance what the clip is about and, from the second interface, play the target video clip through the second control, which is convenient and improves user experience.
In one possible implementation, displaying the first frame image and the second control in the second interface includes: displaying the first frame image and the second control in the first card according to the preset identifier, based on the second correspondence.
In this implementation, the first frame image and the second control are displayed on the first card. As described above, the information of the first tag, and indeed of every tag generated during the recording, is also displayed on the first card, so the user can easily see the correspondence between the target video clip and its tag information, which improves user experience.
In one possible implementation, the second interface includes information of multiple tags, the multiple tags having a one-to-one third correspondence with multiple video identifiers; the multiple tags include the first tag, the multiple video identifiers include the preset identifier, the first tag corresponds to the preset identifier, and the information of the multiple tags is displayed in a first mode. The method further includes: playing the target video clip in response to a play instruction input by the user through the second control; determining, based on the third correspondence, at least one target tag corresponding to the preset identifier among the multiple tags; and, while the target video clip plays, displaying in a second mode, based on the first correspondence, the information of each target tag whose addition time is less than or equal to the current playing duration, the second mode being different from the first mode.
That is, the target video clip corresponds to at least one target tag. Alternatively, the first mode may be a grayed-out display and the second mode a highlighted display. It can be appreciated that, in this implementation, while the information of tags whose addition time is less than or equal to the current playing duration is displayed in the second mode, the information of tags whose addition time is greater than the current playing duration remains displayed in the first mode.
In this implementation, the at least one target tag corresponding to the target video clip is determined based on the third correspondence; then, while the target video clip plays, the information of each target tag whose addition time is less than or equal to the current playing duration is displayed in the second mode, based on the first correspondence. Bidirectional positioning between clip playback and tag display is thus realized, making it convenient for the user to watch the clip while checking its tags and improving user experience.
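One direction of this bidirectional positioning can be sketched as follows, reusing the illustrative TagInfo above; the render callback stands in for however the note APP actually redraws a tag:

```kotlin
import java.time.Duration

// Sketch: as the target video clip plays, tags whose addition time has been
// reached switch from the first mode (gray) to the second mode (highlighted).
fun updateTagDisplay(
    targetTags: List<TagInfo>,                        // tags matching the preset identifier
    currentPlayTime: Duration,
    render: (tag: TagInfo, highlighted: Boolean) -> Unit
) {
    for (tag in targetTags) {
        // First correspondence: tag information <-> addition time.
        render(tag, tag.additionTime <= currentPlayTime)
    }
}
```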
In one possible implementation, the second control includes a progress bar for displaying the playing duration, and the method further includes: in response to a play instruction input by the user through the second control, displaying at least one tag anchor on the progress bar, the at least one tag anchor corresponding one-to-one to the at least one target tag, where the first playing duration represented by the distance between a first tag anchor and the start of the progress bar equals the addition time of the tag corresponding to that anchor, the first tag anchor being any one of the at least one tag anchor.
In this implementation, tag anchors are displayed on the progress bar, so the user can intuitively see the relationship between a tag's addition time and the playing duration, and can quickly jump the playback progress to the corresponding playing duration by clicking an anchor, which improves user experience.
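The anchor placement itself is simple proportionality; a sketch, again reusing the illustrative TagInfo:

```kotlin
import java.time.Duration

// Sketch: horizontal offset of a tag anchor on the progress bar, proportional
// to the tag's addition time within the clip's total duration.
fun anchorOffsetPx(tag: TagInfo, totalDuration: Duration, barWidthPx: Int): Int =
    (barWidthPx * tag.additionTime.toMillis() / totalDuration.toMillis()).toInt()
```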
In one possible implementation, the method further includes: in response to the user clicking the first tag anchor, or dragging the playback progress to the first tag anchor, starting to play the target video clip from the first playing duration.
This implementation realizes jumping of the target video clip's playback progress. Moreover, combined with the step above of displaying, in the second mode and based on the first correspondence, the information of each target tag whose addition time is less than or equal to the current playing duration, the information of the corresponding tags is updated to the second mode as the progress jumps; that is, bidirectional positioning is realized in the progress-jump scenario.
In one possible implementation, the method further includes: in response to the user clicking the information of the first tag, starting to play the target video clip from a second playing duration, the second playing duration being equal to the addition time of the first tag.
In this implementation, clicking a tag's information jumps the playback progress. Moreover, combined with the step above of displaying, in the second mode and based on the first correspondence, the information of each target tag whose addition time is less than or equal to the current playing duration, the information of the corresponding tags is displayed in the second mode as the progress jumps; that is, bidirectional positioning is realized in the tag-click scenario.
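The opposite direction of the bidirectional positioning can be sketched as follows; the seek callback stands in for a player API such as MediaPlayer.seekTo, and the wiring is assumed:

```kotlin
// Sketch: clicking a tag's information (or its anchor) seeks playback to the
// tag's addition time, after which updateTagDisplay above re-renders the tags.
fun onTagClicked(tag: TagInfo, seekToMs: (Long) -> Unit) {
    seekToMs(tag.additionTime.toMillis())  // second playing duration = addition time of the tag
}
```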
In one possible implementation, the first control is a control of the third application, and displaying the first control, the first interface of the first application and the second interface of the second application in the screen of the electronic device includes: while the first interface is displayed in the screen, recognizing a preset gesture performed by the user on the screen; in response to the preset gesture, starting the second application and the third application if the first application is determined to be one of the preset applications; and displaying the first control and the second interface.
Alternatively, the preset gesture may be, for example, a three-finger slide-down-and-hover gesture. The preset applications may be recorded in the form of a preset whitelist.
In this implementation, the user can start the second application and the third application, and display the first control and the second interface (that is, start the video clip function), with a single preset gesture, which is convenient and improves user experience. Moreover, after the preset gesture is recognized, the video clip function starts only if the first application is one of the preset applications. Restricting the start of the video clip function to preset scenarios prevents accidental starts and resolves conflicts over the preset gesture between different scenarios, further improving user experience.
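A sketch of the whitelist gating described here; the package names and launch helpers are assumptions:

```kotlin
// Sketch: the preset gesture only takes effect when the foreground app is in
// the preset whitelist, preventing false starts of the video clip function.
val presetApps = setOf("com.example.videoapp", "com.example.netlesson")

fun onPresetGesture(
    foregroundPackage: String,
    launchSecondApp: () -> Unit,  // note APP, showing the second interface
    launchThirdApp: () -> Unit    // video clip APP, showing the first control
) {
    if (foregroundPackage in presetApps) {
        launchSecondApp()
        launchThirdApp()
    }  // otherwise ignore the gesture
}
```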
In one possible implementation, the first control is a control of the third application, and displaying the first control, the first interface of the first application and the second interface of the second application in the screen of the electronic device includes: while the first interface and a third control are displayed in the screen, receiving an instruction input by the user through the third control to start the video clip function; in response to that instruction, starting the second application and the third application if the first application is determined to be one of the preset applications; and displaying the first control and the second interface.
Alternatively, the third control may be, for example, a video clip option control provided by a stylus floating ball.
In this implementation, the user can quickly and conveniently start the video clip function through the third control, which simplifies operation and improves user experience. Moreover, the video clip function starts only when the first application is determined to be one of the preset applications, so its start is restricted to preset scenarios and accidental starts are prevented.
In one possible implementation, displaying the first control, the first interface of the first application and the second interface of the second application in the screen of the electronic device includes: while the second interface is displayed in the screen, receiving an instruction input by the user through the second interface to start the video clip function; in response to that instruction, displaying a fourth control in the second interface, the fourth control including the icon of at least one preset application, the at least one preset application including the first application; in response to the user clicking the icon of the first application, starting the first application and displaying the first interface; and displaying the first control.
In this implementation, the first application and the third application are started with one tap from the second interface of the second application, which is convenient and improves user experience.
In one possible implementation, the first control is a control of the third application, and the electronic device includes a barrier-free (accessibility) service, a screen recording service and the layer compositor SurfaceFlinger. Performing screen recording on the first area in response to the start-recording instruction input by the user through the first control then includes: the third application, in response to the start-recording instruction input through the first control, sends a first request message to the barrier-free service, the first request message requesting the position information of the first area; the barrier-free service, in response to the first request message, identifies the first area and determines its position information; the barrier-free service sends the position information of the first area to the third application; the third application sends a start-screen-recording instruction carrying the position information of the first area to the screen recording service; the screen recording service, in response to that instruction, acquires all layer data in the first area at a first moment according to the position information, the first moment being any moment during screen recording; the screen recording service sends all the layer data and a layer filtering mark to SurfaceFlinger, the layer filtering mark indicating that the layer data of non-first applications is to be filtered out; SurfaceFlinger filters out the layer data of non-first applications according to the layer filtering mark to obtain the remaining layer data, and synthesizes the remaining layer data to obtain a first image; SurfaceFlinger sends the first image to the screen recording service; the screen recording service stores the first image and acquires first audio at the first moment, the first audio corresponding to the first image; the third application sends a preset identifier to the second application, the preset identifier being the video identifier of the video clip that the current screen recording will finally produce; and the second application displays the first card in the second interface, the first card having the second correspondence with the preset identifier.
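The message sequence above can be compressed into the following conceptual sketch. Both interfaces are hypothetical stand-ins for the barrier-free service and the screen recording service (which in turn drives SurfaceFlinger); real cross-process plumbing, the layer filtering mark and encoding are elided:

```kotlin
import android.graphics.Rect

interface BarrierFreeService { fun locateFirstArea(): Rect }  // answers the first request message
interface ScreenRecordingService {
    fun captureFilteredFrame(region: Rect, keepPackage: String): ByteArray  // via SurfaceFlinger
    fun captureAudioFrame(): ByteArray
}

// Sketch of the recording loop: locate the first area once, then for each
// "first moment" capture a filtered, composited frame and matching audio.
fun startRecording(
    barrierFree: BarrierFreeService,
    recorder: ScreenRecordingService,
    firstAppPackage: String,
    isRecording: () -> Boolean,
    store: (image: ByteArray, audio: ByteArray) -> Unit
) {
    val firstArea = barrierFree.locateFirstArea()  // position information of the first area
    while (isRecording()) {
        val image = recorder.captureFilteredFrame(firstArea, firstAppPackage)  // first image
        val audio = recorder.captureAudioFrame()                               // first audio
        store(image, audio)
    }
}
```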
In one possible implementation, generating information of the first tag in response to the tag-adding instruction input by the user through the first control includes: the third application, in response to the tag-adding instruction input through the first control, determines the addition time of the first tag; the third application sends the addition time of the first tag and the preset identifier to the second application; and the second application generates the information of the first tag, the information including at least one of the addition time of the first tag, the icon of the first tag and the text editing area of the first tag.
In one possible implementation, displaying the information of the first tag in the second interface includes: the second application determines, based on the second correspondence, that the card corresponding to the preset identifier is the first card; and the second application displays the information of the first tag in the first card.
In one possible implementation, the information of the first tag includes the text editing area of the first tag, and the method further includes: the second application receives first content input by the user in the text editing area of the first tag and stores the first content as information of the first tag; and the second application displays the first content in the text editing area of the first tag.
In one possible implementation, the tag-adding instruction indicates that a screenshot text tag is to be added, and the method further includes: the third application, in response to the tag-adding instruction input through the first control, sends a screenshot instruction carrying the layer filtering mark to SurfaceFlinger; SurfaceFlinger, in response to the screenshot instruction, acquires the screenshot synthesized at a third moment, the third moment being the moment the third application receives the tag-adding instruction; SurfaceFlinger sends the screenshot to the third application; the third application stores the screenshot in its own database under the screenshot's uniform resource identifier (URI); the third application sends the URI of the screenshot to the second application; the second application obtains the screenshot from the third application's database according to the URI and stores it in its own database. The information of the first tag then also includes the screenshot.
In one possible implementation, the method further includes: the third application sends the task identifier of the task that added the first tag to the second application. After the second application has saved the screenshot to its database, the method further includes: the second application returns a processing result carrying the task identifier to the third application; and the third application deletes the screenshot from its database according to the task identifier.
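Assuming the third application exposes the screenshot through a content URI, the handoff into the second application's database might look like the following sketch; the task-identifier round trip described above is reduced here to the final delete step:

```kotlin
import android.content.ContentResolver
import android.net.Uri

// Sketch of the screenshot handoff between the two APPs' databases.
fun transferScreenshot(
    resolver: ContentResolver,
    screenshotUri: Uri,                      // URI sent by the third application
    saveToNoteDatabase: (ByteArray) -> Unit  // second application stores its own copy
) {
    resolver.openInputStream(screenshotUri)?.use { input ->
        saveToNoteDatabase(input.readBytes())
    }
    // After the second application reports success (carrying the task
    // identifier), the third application's copy can be deleted.
    resolver.delete(screenshotUri, null, null)
}
```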
In a second aspect, the present application provides an apparatus, which is included in an electronic device, the apparatus having a function of implementing the electronic device behavior in the first aspect and possible implementations of the first aspect. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above. Such as a receiving module or unit, a processing module or unit, etc.
In a third aspect, the present application provides an electronic device, including: a processor, a memory, and an interface; the processor, the memory and the interface cooperate with each other such that the electronic device performs any one of the methods of the technical solutions of the first aspect.
In a fourth aspect, the present application provides a chip comprising a processor. The processor is configured to read and execute a computer program stored in the memory to perform the method of the first aspect and any possible implementation thereof.
Optionally, the chip further comprises a memory, and the memory is connected with the processor through a circuit or a wire.
Further optionally, the chip further comprises a communication interface.
In a fifth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, which when executed by a processor causes the processor to perform any one of the methods of the first aspect.
In a sixth aspect, the application provides a computer program product comprising: computer program code which, when run on an electronic device, causes the electronic device to perform any one of the methods of the solutions of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application;
FIG. 2 is a block diagram illustrating a software architecture of an exemplary electronic device 100 according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an exemplary interface change for enabling video clip functionality according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating an exemplary method for enabling a video clip function according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another interface change for a process of initiating a video clip function provided by an embodiment of the present application;
FIG. 6 is a flow chart of another method for enabling video clip functionality provided by an embodiment of the present application;
FIG. 7-1 is a schematic illustration of another interface change for enabling video clip functionality according to an embodiment of the present application;
FIG. 7-2 is a diagram illustrating a second interface change for enabling video clip functionality according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating an interface change of a video clip recording process according to an embodiment of the present application;
FIG. 9 is a flowchart illustrating a method for recording a video clip according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating an interface change of a label adding process according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an interface change of another label adding process according to an embodiment of the present application;
FIG. 12 is a flowchart of a method for an exemplary tagging process according to an embodiment of the present application;
FIG. 13 is a schematic diagram illustrating an interface change for ending a recording process according to an embodiment of the present application;
FIG. 14 is a flowchart illustrating an example of a method for ending a recording process according to an embodiment of the present application;
FIG. 15 is a schematic diagram illustrating another interface change for ending a recording process according to an embodiment of the present application;
FIG. 16 is a schematic diagram illustrating an interface change for ending a recording process according to an embodiment of the present application;
FIG. 17 is a flowchart of another method for ending a recording process according to an embodiment of the present application;
FIG. 18 is a schematic diagram illustrating an interface change for ending a recording process according to an embodiment of the present application;
FIG. 19 is a schematic diagram showing an interface change for ending a recording process according to another embodiment of the present application;
FIG. 20 is a flowchart illustrating a method for ending a recording process according to an embodiment of the present application;
FIG. 21 is a schematic diagram illustrating an interface change for ending a recording process according to an embodiment of the present application;
FIG. 22 is a schematic diagram illustrating an interface change for ending a recording process according to an embodiment of the present application;
FIG. 23 is a schematic diagram showing an interface change of a note order presentation process according to an embodiment of the present application;
FIG. 24 is a flowchart of a method for providing an exemplary note order presentation process according to an embodiment of the present application;
FIG. 25 is a schematic diagram illustrating an interface change of a skip of playing progress according to an embodiment of the present application;
Fig. 26 is a schematic diagram of interface change of a label jump according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first," "second," "third," and the like, are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", or a third "may explicitly or implicitly include one or more such feature.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
For ease of understanding, the terms and concepts involved in the embodiments of the present application will be first described.
1. Video playing area (first area)
The video play area refers to an area of the interface for displaying video pictures.
2. Recording duration
The recording duration, also called the recorded duration, refers to the length of time from the start of recording a video to the current recording moment.
3. Duration of play
The playing time length refers to the time length from the time when playing a certain video to the current playing time. It can be understood that, for the same video, when the playing time length is equal to the recording time length, the corresponding frame images in the video are the same. For example, when a certain video is recorded, the recording duration is 1 second, and the corresponding frame image is image a, and when the video is played, the playing duration is 1 second, and the corresponding frame image is also image a.
4. Video excerpt
In general, "excerpt" as a verb means selecting and recording a required part of some material, and as a noun means the content obtained by such recording. In the embodiment of the application, "video excerpt" as a noun refers to the video clip obtained by recording a video, and as a verb refers to selecting required content from a video and recording it.
With the rapid development of electronic devices and networks, learning through online courses (net lessons for short) has become an important mode of learning. Net lessons are usually presented in video form. During class, a user can take notes through a note APP. However, current note APPs can only record text content and audio recordings, not video. Doubts raised in class, or important blackboard writing, therefore cannot be recorded into the notes in video form, which is inconvenient for users and affects user experience.
The problem is not limited to net lessons: in other video-playing scenarios users likewise want to record the screen at any time and capture key segments of the video, and face similar problems.
In view of this, an embodiment of the present application provides a video recording method in which a note APP (the second application) can record the content of the video playing area of the screen through a video clip APP (the third application) to generate a video clip; that is, the note APP gains a video excerpt function. The user can thus record video at any time while watching it, which improves user experience. In addition, while a video clip is being recorded, the user can add tags to the clip in the note, and the recording duration at the moment each tag is added (also called the tag's addition time, hereinafter the recording duration corresponding to the tag) is obtained through the video clip APP. Later, when the clip is played through the note APP, bidirectional positioning between video playback and tag content display can be realized: when the clip plays to a certain playing duration, the content of a tag whose corresponding recording duration matches that playing duration is highlighted; conversely, when the user clicks the content of a tag in the note, the playback progress of the clip jumps to the playing duration matching the recording duration corresponding to that tag. Synchronizing clip playback with tag content display in this way makes reviewing and cross-checking convenient and further improves user experience.
The video recording method provided by the embodiment of the application can be applied to any electronic device on which applications can be installed, such as a mobile phone, tablet computer, wearable device, vehicle-mounted device, augmented reality (AR)/virtual reality (VR) device, notebook computer, ultra-mobile personal computer (UMPC), netbook or personal digital assistant (PDA); the embodiment of the application does not limit the specific type of the electronic device.
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it may be called directly from memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194. In an embodiment of the present application, the touch sensor 180K may be configured to detect a touch operation performed by a user on a screen, and transmit the detected touch operation to the application processor, so as to determine a gesture or operation performed by the user, for example, determine a gesture performed by the user to slide down and hover with three fingers, or determine an operation performed by the user to click on a start recording control, or the like.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
In the embodiment of the present application, the software system of the electronic device 100 may be an Android system, a Windows system, an iOS system, or the like, which is not limited in the present application. The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, an Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100.
Fig. 2 is a software structure block diagram of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: an application layer, a capability service layer, an application framework layer, the Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application package may include a note APP (second application), a video clip APP (third application), a video APP (first application), a net lesson APP (first application), a handwriting pen APP, and the like. Of course, the application package may also include applications (not shown in fig. 2) for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, short messages, browsers, etc.
In the embodiment of the application, the note APP is used to generate notes and present them. The content recorded in a note may include text content, handwritten content, pictures, recordings, video clips, labels, and the like. Specifically, the note APP can obtain a video clip by recording, through the video clip APP, the content of a video playing area in the screen. During recording of the video clip, the user can trigger, through the video clip APP, the addition of a label to the video clip in the note. Optionally, according to the user's selection, the added label may be a plain text label or a screenshot text label. The content of the label is displayed in the interface of the note APP. For a plain text label, the displayed content may include a label icon, the recording duration corresponding to the label, a label information input area, and the like; the user can input text information in the label information input area. For a screenshot text label, the displayed content may include a label icon, the recording duration corresponding to the label, a screenshot, a label information input area, and the like. The screenshot is an image of the video playing area captured by the video clip APP when the user triggers the addition of the screenshot text label. The label information input area of a screenshot text label is likewise used for the user to input label information. The process by which the note APP and the video clip APP generate video clips and labels will be described in detail in the following embodiments.
That is, in the embodiment of the present application, the note APP can record a video clip by calling the functions of the video clip APP. Optionally, the video clip APP may or may not display an icon on the desktop of the electronic device. In other words, the video clip APP may or may not be allowed to be started from a desktop icon; in the latter case, it can only be started through an entry such as the note APP.
Note that in this embodiment, the note APP and the video clip APP are two independent APPs, and the note APP records the video clip by calling the video clip APP. In some other embodiments, the video clip APP may also be part of the note APP, i.e. the note APP includes functional modules of the video clip APP. In summary, the present application does not limit the setting forms of the note APP and the video clip APP, as long as the note APP can directly or indirectly implement the video clip function.
The video APP is used for playing video. The net lesson APP refers to an APP for online lesson learning. The handwriting pen APP is used to support the functions of the stylus provided with the electronic device. After the handwriting pen APP is started, the user can input information on the electronic device through the stylus instead of a finger, such as gesture input, clicking, and writing. Optionally, after the handwriting pen APP is started, it may be displayed on the interface in the form of a floating ball, hereinafter referred to as the stylus hover ball. The stylus hover ball may be configured with quick start entries, through which the user can quickly start a preset APP or a related function in an APP.
It should be noted that the note APP, video clip APP, video APP, net lesson APP, handwriting pen APP, and the like described in this embodiment all refer to APPs capable of implementing the corresponding functions, and do not limit specific APP products. For example, the net lesson APP may be any of various commercial net lesson or conferencing applications.
The capability service layer is located between the application layer and the application framework layer and is used for providing services for the application program of the application layer so as to realize the functions of the application program. For example, in this embodiment, the capability service layer is configured to provide relevant services to the note APP and the video clip APP of the application layer, so as to implement the video clip function.
As shown in fig. 2, in the present embodiment, the capability service layer may include a barrier-free service (accessibility service), a note gesture service, a video clip service, a screen recording service, and the like. The note gesture service is used to recognize a preset gesture and, when the current gesture is the preset gesture and the APP displayed in the current interface is an APP in a preset white list, instruct the video clip service to start the video clip function. In the embodiment of the present application, starting the video clip function refers to starting the note APP and the video clip APP, which will not be repeated below.
The video clip service is used to start the video clip function in response to an indication from the note gesture service.
The barrier-free service is used for identifying a video playing area in a screen and acquiring position information of the video playing area. The location information of the video playing area is used to characterize the extent and location of the video playing area in the screen. The barrier-free service transmits the position information of the video playing area to the video clip APP, so that the video clip APP records the video clip according to the position information of the video playing area.
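For illustration only, the following Java sketch shows one way such a barrier-free (accessibility) service could locate a video playing area and obtain its on-screen bounds. The class name and the heuristic of matching nodes whose class name contains "VideoView" are assumptions, not the embodiment's actual rule.

```java
import android.accessibilityservice.AccessibilityService;
import android.graphics.Rect;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;

// Illustrative sketch of a barrier-free (accessibility) service locating a video playing area.
public class VideoRegionService extends AccessibilityService {

    /** Depth-first search for a node that looks like a video surface. */
    static Rect findVideoRegion(AccessibilityNodeInfo node) {
        if (node == null) return null;
        CharSequence cls = node.getClassName();
        if (cls != null && cls.toString().contains("VideoView")) { // assumed heuristic
            Rect bounds = new Rect();
            node.getBoundsInScreen(bounds); // range and position of the area in the screen
            return bounds;
        }
        for (int i = 0; i < node.getChildCount(); i++) {
            Rect found = findVideoRegion(node.getChild(i));
            if (found != null) return found;
        }
        return null;
    }

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        Rect region = findVideoRegion(getRootInActiveWindow());
        // The region would then be passed to the video clip APP (omitted here).
    }

    @Override
    public void onInterrupt() { }
}
```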
The screen recording service is used for providing screen recording service for the video clip APP. Alternatively, the screen recording service may implement its functionality by calling the relevant modules of the application framework layer.
It will be appreciated that in some other embodiments, these services in the capability service layer may also be located in the application framework layer.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 2, the application framework layer may include a page management service (activity manager service, AMS), a window management service (window manager service, WMS), a system user interface (UI) module (hereinafter, system UI module), a layer compositor (SurfaceFlinger), a view system, a content provider, a phone manager, a resource manager, a notification manager, and the like.
The AMS, also called the activity management service, is mainly responsible for runtime management: it controls the four major components of the Android system, controls the start and exit of application processes, and controls process priorities. The four major components of the Android system are Activity, Service, Broadcast Receiver, and Content Provider. In the embodiment of the application, the AMS can be used to start, exit, and manage each application program in the application layer.
The WMS is also known as the window manager and is used to manage window programs. The WMS may obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on. In the embodiment of the application, the note APP can be displayed in a floating window through the WMS, or the note APP and the net lesson APP can be displayed in split screen, and the like.
The system UI module is responsible for the initialization and control of some components in the interface, such as the navigation bar, the status bar, and the lock screen. In the embodiment of the application, the system UI module can also be called by the video clip APP to display the clip control bar of the video clip APP, and the like.
SurfaceFlinger is used to process and composite the drawn layers. In this embodiment, SurfaceFlinger is configured to identify and filter out the layer data of non-target APPs from the layer data, and to composite the remaining layer data to obtain an image. A target APP refers to an APP currently displayed in the interface that plays video, such as the video APP or the net lesson APP.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a note APP icon may include a view displaying text and a view displaying a picture.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay without user interaction, for example to notify that a download is complete or to give a message alert. The notification manager may also present notifications in the top system status bar in the form of charts or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the function libraries that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
For easy understanding, the following embodiments of the present application will take an electronic device having a structure shown in fig. 1 and fig. 2 as an example, and specifically describe a video recording method provided by the embodiments of the present application with reference to the accompanying drawings and application scenarios.
According to the method provided by the embodiment of the application, a video clip can be recorded into a note through the note APP and the video clip APP, and a label can be added to the video clip in the note. That is, in the embodiment of the present application, the note APP has a video clip function and can generate a note including a video clip and a label. After the note is generated, the note APP can also present the note and realize bidirectional positioning.
Taking a scenario in which the user learns through the net lesson APP as an example, the specific implementation of the above functions will be described below in sequence, in combination with timing flowcharts, according to the following procedures:
1. A video clip function starting process;
2. A video clip recording process;
3. A label adding process;
4. An ending recording process;
5. A note order presentation process;
6. A play progress jumping process;
7. A label jumping process.
1. Video clip function initiation procedure
Starting the video clip function means starting the note APP and the video clip APP. Optionally, the video clip function may be triggered and started in several ways:
(1) Triggering and starting a video clip function through a preset gesture.
Fig. 3 is a schematic diagram illustrating an interface change for starting a video clip function according to an embodiment of the present application. As shown in fig. 3 (a), the current electronic device (for example, a folding screen mobile phone) has opened a net lesson APP, and an interface of the net lesson APP is displayed in a screen, wherein a picture of a video is displayed in a video playing area 301.
Alternatively, the user may initiate the video clip function by performing a three-finger slide-down-and-hover gesture. Specifically, as shown in fig. 3 (a), the user slides down from the top of the screen with three fingers simultaneously and, after a certain distance, stops sliding. If the sliding distance of the user's fingers exceeds a preset distance threshold and the dwell time of the fingers on the screen after they stop sliding reaches a preset time threshold, a start clip prompt message 302 is displayed in the interface, as shown in fig. 3 (b). The start clip prompt message 302 prompts the user that lifting the fingers now will start the video clip function. Optionally, the start clip prompt message 302 may disappear after being displayed for a preset period.
When the user lifts the finger, the mobile phone starts the note APP, and automatically creates a note. Note APP is shown in the interface in the form of a floating window 303, as shown in fig. 3 (c). An edit interface 308 of the newly created note is displayed in the floating window 303, and includes a content edit area 309 therein. The content editing area 309 is used for a user to input text, insert pictures, and display video clips and tags, etc.
At the same time, the mobile phone starts the video clip APP, and the clip control bar 304 of the video clip APP is displayed in suspension in the interface, as shown in fig. 3 (c). Optionally, the clip control bar (first control) 304 is also referred to as a clip floating bar, a clip function menu bar, or the like. As one implementation, the clip control bar may be capsule-shaped. The clip control bar 304 may include a start recording control 305, a screenshot control 306, a clip close control 307, and the like. The start recording control 305 is used to trigger recording of a video clip. The screenshot control 306 is used to trigger insertion of a screenshot into a note. The clip close control 307 is used to trigger closing of the video clip APP.
In fig. 3, the process of starting the video clip function is illustrated by taking the three-finger slide-down-and-hover gesture as an example. In some other embodiments, the preset gesture may be another gesture, designed according to actual requirements, which is not limited in the embodiments of the present application.
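For illustration, the following Java sketch shows one way the two conditions above (slide distance exceeding a preset distance threshold, then a dwell reaching a preset time threshold) could be checked. The class name, threshold values, and the way pointer positions are sampled are assumptions, not the embodiment's actual implementation.

```java
// Illustrative sketch of recognizing a three-finger slide-down-and-hover gesture.
public class ThreeFingerHoverDetector {
    private static final float MIN_SLIDE_PX = 300f;  // assumed preset distance threshold
    private static final long MIN_HOVER_MS = 500L;   // assumed preset time threshold

    private float startY = -1f;
    private float lastY = -1f;
    private long hoverStartMs = -1L;

    /** Feed the average Y of the three pointers each frame; returns true once the gesture holds. */
    public boolean onSample(int pointerCount, float avgY, long nowMs) {
        if (pointerCount != 3) { reset(); return false; }
        if (startY < 0f) { startY = avgY; lastY = avgY; }
        boolean slidFarEnough = (avgY - startY) >= MIN_SLIDE_PX;
        boolean stationary = Math.abs(avgY - lastY) < 2f; // fingers have stopped sliding
        lastY = avgY;
        if (!slidFarEnough || !stationary) { hoverStartMs = -1L; return false; }
        if (hoverStartMs < 0L) hoverStartMs = nowMs;       // hover timing starts here
        return (nowMs - hoverStartMs) >= MIN_HOVER_MS;     // dwell time reached
    }

    public void reset() { startY = -1f; lastY = -1f; hoverStartMs = -1L; }
}
```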
Corresponding to the interface change of fig. 3, in this embodiment, a specific method for starting the video clip function may be as shown in fig. 4, including:
S101, when the net lesson APP is open, in response to the user performing a three-finger slide-down-and-hover gesture, the note gesture service recognizes the gesture as a three-finger slide-down-and-hover gesture.
In this step, the net lesson APP is opened, but the video in the video playing area of the net lesson APP may be in a playing state or may be in a playing pause state. In addition, when the video is in a playing state, the played video can be a local video or a video in a network, or can be a live video, etc., and the application is not limited in any way.
S102, determining whether a net lesson APP is an APP in a preset white list by the note gesture service; if yes, step S103 is executed.
Specifically, a white list may be preconfigured in the electronic device, where the white list includes all preset APPs capable of triggering the video clip function through a three-finger swipe and hover gesture. When the note gesture service recognizes that the user performs a three-finger swipe and hover gesture, determining whether the APP currently displayed in the interface is the APP in the whitelist. If yes, determining that the video clip function can be started, and executing steps S103 to S107; if not, the note gesture service determines that the video clip function cannot be started, and does not perform any operation.
Optionally, the note gesture service may determine information of the APP currently displayed in the interface through modules such as AMS and/or WMS of the application framework layer.
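As an illustrative sketch of the white list check in step S102 (the package names and the lookup structure are assumptions, and obtaining the foreground package through the AMS/WMS is abstracted away):

```java
import java.util.Set;

// Illustrative sketch of the preset white list check.
public class ClipWhitelist {
    private final Set<String> whitelisted;

    public ClipWhitelist(Set<String> whitelisted) { this.whitelisted = whitelisted; }

    /** True if the APP currently displayed in the interface may trigger the video clip function. */
    public boolean mayStartVideoClip(String foregroundPackage) {
        return foregroundPackage != null && whitelisted.contains(foregroundPackage);
    }
}
// Usage: new ClipWhitelist(Set.of("com.example.netlesson")).mayStartVideoClip(pkg)
```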
S103, the note gesture service sends a video clip function starting instruction to the video clip service.
The video clip function start instruction is used to instruct starting the video clip function, i.e., starting the note APP and the video clip APP.
S104, the video clip service responds to the video clip function starting instruction, starts the note APP, and sends an instruction 1 to the note APP. The instruction 1 is used for indicating the note APP to create a note and displaying an editing interface of the note.
Specifically, the video clip service may create a new Activity through the AMS to launch the note APP.
S105, the note APP responds to the instruction 1, a note is newly built, and an editing interface of the note is displayed, wherein the editing interface of the note is displayed in a floating window mode.
Specifically, after receiving the instruction 1, the note APP creates a note, and displays an edit page of the created note through modules such as WMS and a view system. The edit page of the newly created note may be as shown at 308 in fig. 3 (c).
S106, simultaneously, the video clip service starts the video clip APP, and sends an instruction 2 to the video clip APP, wherein the instruction 2 is used for indicating the video clip APP to display a clip control bar.
Specifically, the video clip service may create an Activity by AMS to launch the video clip APP.
S107, the video clip APP responds to the instruction 2, and a clip control bar is displayed.
Specifically, after receiving instruction 2, the video clip APP may display the clip control bar through modules such as the WMS and the view system. The clip control bar may be as shown at 304 in fig. 3 (c).
Thus, the starting of the video clip function is completed.
In the implementation mode, the user can start the video clip function through a preset gesture, so that the user operation is facilitated, and the user experience is improved. Moreover, the note gesture service sends a video clip function start instruction to the video clip service only when a preset gesture is recognized and the APP currently displayed in the interface is the APP in the preset whitelist. Therefore, the starting of the video clip function is limited to the preset scene, the false starting of the video clip function can be prevented, the use conflict of the preset gesture in different scenes can be solved, and the user experience is further improved.
(2) The video clip function is triggered and started by the stylus hover ball.
Fig. 5 is a schematic diagram illustrating another interface change of a process for starting the video clip function according to an embodiment of the present application. As shown in fig. 5 (a), the user has started the net lesson APP, and the handwriting pen APP is displayed on the interface in the form of a stylus hover ball 501.
The user clicks the stylus hover ball 501, and a shortcut function card 502 is displayed in the interface, as shown in fig. 5 (b). The shortcut function card may include a video clip option 503. The user clicks the video clip option 503, and the note APP is launched and displayed in the interface in the form of a floating window 303. Moreover, the note APP automatically creates a note, and the editing interface 308 of the created note is displayed in the floating window 303; at the same time, the video clip APP is launched, and the clip control bar 304 of the video clip APP is displayed in suspension in the interface, as shown in fig. 5 (c). The specific contents of the editing interface 308 and the clip control bar 304 are the same as those in fig. 3 (c) and will not be repeated.
Corresponding to the interface change of fig. 5, in this embodiment, a specific method for starting the video clip function may be as shown in fig. 6, including:
S201, when the net lesson APP is open, in response to the user clicking the stylus hover ball, the handwriting pen APP displays the shortcut function card.
Optionally, the handwriting pen APP may display the shortcut function card through modules such as the AMS and the view system of the application framework layer. The shortcut function card is shown as 502 in fig. 5 (b) and includes a video clip option.
S202, in response to the user clicking the video clip option in the shortcut function card, the handwriting pen APP determines whether the net lesson APP is an APP in the preset white list; if yes, step S203 is performed.
This step is similar to step S102 in the above embodiment, and will not be described again.
S203, the handwriting pen APP sends a video clip function start instruction to the video clip service.
S204, the video clip service responds to the video clip function starting instruction, starts the note APP, and sends an instruction 1 to the note APP. The instruction 1 is used for indicating the note APP to create a note and displaying an editing interface of the note.
S205, the note APP responds to the instruction 1, a note is newly built, and an editing interface of the note is displayed, wherein the editing interface of the note is displayed in a floating window mode.
S206, simultaneously, the video clip service starts the video clip APP, and sends an instruction 2 to the video clip APP, wherein the instruction 2 is used for indicating the video clip APP to display a clip control bar.
S207, responding to an instruction 2 by the video clip APP, and displaying a clip control column.
Steps S203 to S207 are the same as steps S103 to S107 in the above embodiment, and will not be described again.
In this implementation, the user can quickly and conveniently start the video clip function through the stylus hover ball, which simplifies user operation and improves user experience. Moreover, the handwriting pen APP sends the video clip function start instruction to the video clip service only when the APP currently displayed in the interface is an APP in the preset white list. Thus, activation of the video clip function is limited to preset scenarios, and erroneous activation can be prevented.
(3) And triggering and starting a video clip function through a control in the note interface.
Fig. 7-1 and fig. 7-2 are schematic diagrams illustrating interface changes in a process of starting the video clip function according to an embodiment of the present application. As shown in fig. 7-1 (a), the desktop of the mobile phone includes an icon of the note APP; when the user clicks the icon, the note APP is opened and the interface shown in fig. 7-1 (b) is entered. The interface shown in fig. 7-1 (b) includes a new note control 701. When the user clicks the new note control 701, a note is created, and the note editing interface 707 shown in fig. 7-1 (c) is entered. This interface includes an insert control 702.
The user clicks the insert control 702, and an insert content tab 703 is displayed in the interface, as shown in fig. 7-1 (d). The insert content tab 703 includes a video clip option 704. The user clicks the video clip option 704, and an APP selection card 705 is displayed in the interface, as shown in fig. 7-2 (e). Icons of all the APPs in the preset white list are displayed in the APP selection card 705. The user selects one APP in the APP selection card 705, and that APP is started and displayed on the interface in split screen with the note APP; at the same time, the video clip APP is started. Taking the user clicking the icon 706 of the net lesson APP as an example, the interface after the net lesson APP and the video clip APP are started may be as shown in fig. 7-2 (f).
This implementation is equivalent to adding, in the note APP, start entries for the video clip APP and all the APPs in the preset white list. When the user clicks the video clip option in the note APP interface, the note APP displays the icons of all the APPs in the white list in response to the click. When the user selects an APP, the note APP starts that APP and the video clip APP in response to the selection. Thus, the user can start the video clip APP and the selected APP with one tap in the note APP, which is convenient and improves user experience.
It should be noted that, where the electronic device supports it, the user may also open the video clip function in other ways, not limited to the above three. In addition, the controls in the interfaces and the interface changes in the above embodiments are merely examples and do not limit the present disclosure; the same applies to the other embodiments below.
2. Video clip recording process
For convenience of explanation, in the following embodiments, description will be given taking an example in which a user records a video clip in a note titled "net lesson note".
Fig. 8 is a schematic diagram illustrating an interface change of a video clip recording process according to an embodiment of the present application. As shown in fig. 8 (a), after the note APP and the video clip APP are started, the user clicks the start recording control 305 in the clip control bar 304, and the video clip APP starts recording the video clip. At this time, as shown in fig. 8 (b), the start recording control 305 in the clip control bar 304 is switched to a stop recording control 801, and the remaining recordable duration 802 is displayed below the stop recording control 801. In addition, a graffiti control 803 and an add plain text label control 804 are also displayed in the clip control bar 304.
The stop recording control 801 is used to trigger stopping the recording of the video clip. In addition, the video clip APP may set an upper limit on the recording duration of a video clip; in the embodiment of the application, an upper limit of 5 minutes (05:00) is taken as an example. The remaining recordable duration 802 indicates the remaining duration that can currently be recorded. The graffiti control 803 is used to trigger a graffiti function for the video screen: after clicking the graffiti control 803, the user may doodle in the video playing area. The add plain text label control 804 is used to trigger adding a plain text label to the video clip in the note. In addition, after recording of the video clip begins, the screenshot control 306 in the clip control bar 304 is used to trigger adding a screenshot text label to the video clip in the note, as described in detail in the following embodiments.
Note that in the stop recording control 801 shown in fig. 8 (b), the dot is shown as white; in an actual interface, the dot may be red. Of course, the stop recording control 801 may also be a control of another shape or color, which is not limited in the embodiment of the present application.
Alternatively, the remaining recordable duration 802 may be displayed at another location in the interface, such as the upper left corner of the video playing area. The interface may also display the actual recording duration instead of the remaining recordable duration 802, which can be set as needed; the present application is not limited in this respect.
With continued reference to fig. 8 (b), the editing interface 308 of the note changes as the controls in the clip control bar 304 change. Specifically, after recording of the video clip begins, a card 806 of the video clip is newly added to the content editing area 309 in the editing interface 308 of the note, and the words "video clip" may be displayed in the card 806 to indicate that the card is the display area of the video clip. The card 806 of the video clip is used to display information about the video clip being recorded, including the recording status of the video clip, inserted labels, the play control of the video clip, and the like. As shown in fig. 8 (b), the current video clip is being recorded, and a prompt such as "recording" may be displayed on the card 806 to indicate that the video clip is currently being recorded. In some embodiments, after recording begins, the card 806 may also display information such as the first frame image of the video clip, which is not limited in this disclosure.
Corresponding to the interface variation of fig. 8, in this embodiment, a specific method for implementing the process of recording the video clip may be as shown in fig. 9, including:
S301, the user clicks the start recording control in the clip control bar.
In response to the user clicking the start recording control, step S302 is performed.
S302, the video clip APP obtains the position information of the video playing area in the current interface from the barrier-free service.
Specifically, the video clip APP may send a request message 1 to the barrier-free service, where the request message 1 is used to request to obtain location information of a video playing area currently displayed in the screen.
S303, the barrier-free service identifies a video playing area currently displayed in the screen, and determines position information of the identified video playing area.
S304, the barrier-free service returns the position information of the video playing area to the video clip APP.
The video playing area is generally rectangular. Therefore, as one possible implementation, the position information of the video playing area may be characterized by several pieces of information: region vertex coordinates, region width, and region height. The region vertex coordinates may be the coordinates of one of the four vertices of the video playing region, for example, the coordinates of the upper-left vertex. The units of the region width and region height may be consistent with the units of the coordinates. The position information of the video playing area can be expressed, for example, as: (50, 80), 1000, 800, where (50, 80) denotes the coordinates of the upper-left vertex of the video playing area, 1000 denotes the region width, and 800 denotes the region height.
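For illustration, this position information can be modeled as a small value type; the class and method names below are assumptions.

```java
// Illustrative sketch of the position information of a video playing area.
public final class VideoRegion {
    public final int left;   // x coordinate of the upper-left vertex
    public final int top;    // y coordinate of the upper-left vertex
    public final int width;  // region width, in the same unit as the coordinates
    public final int height; // region height, in the same unit as the coordinates

    public VideoRegion(int left, int top, int width, int height) {
        this.left = left;
        this.top = top;
        this.width = width;
        this.height = height;
    }

    /** The example from the text: (50, 80), 1000, 800. */
    public static VideoRegion example() {
        return new VideoRegion(50, 80, 1000, 800);
    }
}
```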
S305, sending a screen recording starting instruction to the screen recording service by the video clip APP, wherein the screen recording starting instruction carries the position information of the video playing area.
S306, the screen recording service responds to the screen recording starting instruction, and all layer data in the video playing area are obtained according to the position information of the video playing area.
S307, the screen recording service sends all the layer data in the video playing area and a layer filtering mark to SurfaceFlinger, where the layer filtering mark is used to indicate the layer data of non-net lesson APPs among all the layer data in the video playing area.
Alternatively, the layer filtering marks may be predefined and set as required, for example, may be a numerical value or a keyword. In a specific embodiment, the layer filtering indicia may be VideoNoteCapture.
S308, SurfaceFlinger filters the layer data of non-net lesson APPs out of all the layer data in the video playing area, and composites the remaining layer data to obtain image 1.
Specifically, SurfaceFlinger learns from the layer filtering mark that layer filtering needs to be performed on the layer data sent by the screen recording service, and therefore performs step S308.
It can be understood that the image 1 is the picture that the net lesson APP plays currently in the video playing area.
S309, SurfaceFlinger returns image 1 to the screen recording service.
S310, the screen recording service writes the image 1 into a database of the video clip APP according to the URI of the currently recorded video clip (recorded as video clip 1).
URI is the abbreviation of uniform resource identifier. The URI is used to characterize the storage path of data; according to the URI, the corresponding data can be found in the electronic device.
It should be noted that, the storage path of the video clip represented by the URI is only used as an example and is not limited thereto. In some other embodiments, the storage path of the video clip may also be characterized by other information, such as a uniform resource locator (uniform resource locator, URL) or uniform resource naming (uniform resource name, URN), etc. The storage paths for the images are similar and will not be described in detail.
Optionally, the URI of the video clip 1 may be obtained by a screen recording service to a system, and the system allocates uniformly; the video clip APP can also be acquired from the system and then sent to the screen recording service along with the screen recording starting instruction, and the application is not limited in any way.
S311, recording the currently played audio by the screen recording service, and writing the audio into a database of the video clip APP according to the URI of the video clip 1.
Optionally, the recording service may implement recording of the currently played audio by calling a related module of the application architecture layer.
It is understood that a video consists of multiple frames of images and continuous audio. After steps S306 to S311 are performed for the first time, the obtained image 1 is the first frame image of video clip 1, together with the audio corresponding to the first frame image. Thereafter, steps S306 to S311 may be performed repeatedly to record each frame of image and the corresponding audio. The resulting images and audio may then be encoded and encapsulated to obtain the final video clip.
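The repeated loop over steps S306 to S311 can be sketched as follows. The placeholder types and capture helpers are assumptions standing in for the screen recording service and SurfaceFlinger, so this is a structural sketch rather than a working recorder.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the per-frame capture loop (steps S306-S311).
public class ClipRecorderLoop {
    record Frame(byte[] pixels) { }       // hypothetical composited frame
    record AudioChunk(byte[] samples) { } // hypothetical audio segment

    private volatile boolean recording = true;
    private final List<Frame> frames = new ArrayList<>();
    private final List<AudioChunk> audio = new ArrayList<>();

    public void run() {
        while (recording) {
            frames.add(captureFilteredFrame()); // non-target layer data already filtered out
            audio.add(captureAudioChunk());     // audio aligned with that frame
        }
        encodeAndMux(frames, audio);            // final encoding and encapsulation into the clip
    }

    public void stop() { recording = false; }

    private Frame captureFilteredFrame() { return new Frame(new byte[0]); }        // placeholder
    private AudioChunk captureAudioChunk() { return new AudioChunk(new byte[0]); } // placeholder
    private void encodeAndMux(List<Frame> f, List<AudioChunk> a) { }               // placeholder
}
```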
In addition, after step S301, the video clip APP may also perform step S312. It should be noted that, step S312 may be performed before or after any one of steps S302 to S311, or may be performed simultaneously with any one of steps S302 to S311, which is not limited in this embodiment.
S312, the video clip APP refreshes the controls in the clip control bar: the start recording control in the original clip control bar is removed, and the stop recording control, the remaining recordable duration, the graffiti control, the add plain text label control, and the like are displayed.
Specifically, reference may be made to fig. 8 (b), and details thereof are omitted.
In addition, after step S304, steps S313 to S320 may be performed. Alternatively, the electronic device may perform step S313 simultaneously with step S305; specifically, the electronic device may trigger step S305 and step S313 simultaneously through two different sub-processes. In fig. 9, steps S313 to S320 are placed after step S312 for convenience of drawing, which does not represent the actual execution order.
S313, the video clip APP sends a screenshot instruction to SurfaceFlinger, wherein the screenshot instruction carries a layer filtering mark.
The screenshot instruction is used to instruct SurfaceFlinger to provide the image composited at the current time, that is, the image currently displayed in the video playing area. The layer filtering mark is the same as that in step S307 and will not be described again.
S314, SurfaceFlinger, in response to the screenshot instruction, sends image 1 (i.e., the first frame image) to the video clip APP.
It will be appreciated that in step S308 described above, surfaceFlinger has obtained image 1, and thus in this step SurfaceFlinger may send image 1 directly to the video clip APP. For ease of understanding and distinction, the following embodiments will refer to image 1 as the first frame image.
S315, the video clip APP writes the first frame image into a database of the video clip APP according to the URI of the first frame image.
S316, the video clip APP sends, to the note APP, the video identity number (video ID) of the video clip (i.e., video clip 1) to which the first frame image belongs, the URI of the first frame image, the task identifier (task ID) corresponding to the task of capturing the first frame image, and the like.
The video ID is used to characterize the unique identity of a video. During the management of video clips, the video clip APP and the note APP can distinguish different video clips by video ID. In a specific embodiment, the video ID may be in the form of a universally unique identifier (UUID); for example, the video ID of a video clip may be expressed as: ce9ae22e-7d6d-4093-81ed-a77c1dc25a6. Of course, in other embodiments, video clips may be marked by other identification information, as long as different video clips can be distinguished.
It will be appreciated that each time the video clip APP performs a task (also referred to as an operation), a task ID for that task may be generated. The task ID is the identification information of the task. Optionally, the task ID may be generated based on the time at which the task was performed. Meanwhile, each task corresponds to a task execution result, so each task ID has a unique correspondence with a task execution result. For example, steps S313 to S315 complete the task of capturing the first frame image; the execution result of that task is the first frame image, so the task ID of the task corresponds uniquely to the first frame image. The video clip APP can find the first frame image in its database through the task ID corresponding to the task of capturing the first frame image.
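For illustration, the following sketch pairs a UUID-form video ID with a time-based task ID and a one-to-one map from task ID to execution result (here, the URI of a captured image). All names are assumptions.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of video IDs and task IDs as described in the text.
public class ClipIdentifiers {
    private final Map<String, String> resultUriByTaskId = new ConcurrentHashMap<>();

    /** UUID-form video ID, matching the format shown in the text. */
    public String newVideoId() {
        return UUID.randomUUID().toString();
    }

    /** Task ID generated from the time at which the task is performed. */
    public String newTaskId() {
        return "task-" + System.currentTimeMillis() + "-" + System.nanoTime();
    }

    public void recordResult(String taskId, String resultUri) {
        resultUriByTaskId.put(taskId, resultUri);
    }

    /** Finds the image a task produced, e.g., to delete it after the note APP copies it. */
    public String resultOf(String taskId) {
        return resultUriByTaskId.get(taskId);
    }
}
```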
Optionally, the note APP may include a note interface, and the video clip APP may send the video ID of video clip 1, the URI of the first frame image, and the task ID corresponding to the task of capturing the first frame image to the note APP through the note interface. Of course, the note interface may also be used to transmit other data between the video clip APP and the note APP, which will not be repeated below.
S317, copying the first frame image from a database of the video clip APP according to the URI of the first frame image by the note APP, and storing the first frame image in the database of the note APP.
S318, refreshing an editing interface of the note by the note APP, displaying a card 1 of the video clip 1 in a content editing area, displaying a prompt message of recording in the card 1, and establishing a corresponding relation between the card 1 and a video ID of the video clip 1.
It will be appreciated that if the user inserts several video clips into the note, several video clip cards are displayed in the editing interface of the note. The note APP establishes the correspondence between a video clip's card and its video ID, so that content, such as a label, can be inserted into the corresponding card according to the video ID of the video clip, which improves the accuracy of video clip management.
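A minimal sketch of this card-to-video-ID correspondence follows; the card identifier type is an assumption.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the correspondence the note APP keeps between
// video IDs and the cards that display them, so that later content
// (e.g., labels) carrying a video ID lands in the right card.
public class CardIndex {
    private final Map<String, Integer> cardIdByVideoId = new HashMap<>();

    public void bind(String videoId, int cardId) {
        cardIdByVideoId.put(videoId, cardId);
    }

    /** Returns the card displaying the clip with this video ID, or null if unknown. */
    public Integer cardFor(String videoId) {
        return cardIdByVideoId.get(videoId);
    }
}
```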
The refreshed note editing interface can be referred to the (b) diagram in fig. 8, and will not be described again.
S319, the note APP returns a processing result to the video clip APP, wherein the processing result carries a task ID corresponding to the task of intercepting the first frame image.
Optionally, the note APP may return a processing result and a task ID corresponding to a task of capturing the first frame image to the video clip APP by calling a preset interface of the video clip APP.
S320, the video clip APP deletes the first frame image from its database according to the task ID corresponding to the task of capturing the first frame image.
After the note APP saves the first frame image, the video clip APP deletes the first frame image stored in the database, so that occupation of the video clip APP database space is reduced.
In this implementation, when the video clip is recorded, the layer data of non-net lesson APPs is identified and filtered out through SurfaceFlinger, and the remaining layer data is composited to obtain each frame image of the video clip. Thus, the content of non-net lesson APPs is removed from the image, preventing occlusion of the net lesson APP's video picture in the video playing area, so the recorded video clip displays better and user experience is further improved.
3. Label adding process
During recording of a video clip, the user can click the add plain text label control in the clip control bar of the video clip APP to add a plain text label to the video clip in the note.
Fig. 10 is an exemplary schematic diagram of interface changes in a label adding process according to an embodiment of the present application. Continuing from fig. 8 (b), as shown in fig. 10 (a), the user clicks the add plain text label control 804 in the clip control bar 304 when the remaining recordable duration is 4 minutes 30 seconds (04:30), i.e., the recording duration is 30 seconds (00:30). In response to the user's click, a plain text label 1001 is displayed in the card 806 of the currently recorded video clip, as shown in fig. 10 (b). The content of the plain text label 1001 includes a label icon 1002, a recording duration 1003 corresponding to the label, and a label information input area (also referred to as the label's text editing area) 1004. The label information input area 1004 is used for the user to input text information. When the user has not input anything, a prompt such as "click here to add a word" may be displayed in the label information input area 1004 to indicate that text information can be entered there.
When the user clicks the label information input area 1004 and inputs text, for example the words "modern sense", the note APP displays the content input by the user, as shown in fig. 10 (c). It should be noted that, for both plain text labels and screenshot text labels, the label information input area is created whenever the label is created. The user can input text in the label information input area at any time as needed, for example during recording of the video clip, or after recording has finished.
In addition, during recording of the video clip, the user can click the screenshot control in the clip control bar of the video clip APP to add a screenshot text label to the video clip in the note.
Fig. 11 is an interface change schematic diagram of another label adding process according to an embodiment of the present application. Continuing from fig. 10 (c), as shown in fig. 11 (a), the user clicks the screenshot control 306 in the clip control bar 304 when the remaining recordable duration is 4 minutes (04:00), i.e., the recording duration is 60 seconds (01:00). In response to the user's click, a screenshot text label 1101 is displayed in the card 806 of the currently recorded video clip, as shown in fig. 11 (b). The content of the screenshot text label 1101 includes a label icon 1002, a screenshot 1102, a recording duration 1103 corresponding to the label, and a label information input area 1104. The screenshot 1102 is the image displayed in the video playing area when the recording duration is 60 seconds.
It should be noted that, the label icon, the recording duration corresponding to the label, and the label information input area of the screenshot text label shown in this embodiment are identical to those of the plain text label shown in fig. 10. In some other embodiments, the form of this information may be different to distinguish between plain text labels and screenshot text labels.
When the user clicks the label information input area 1104 and inputs, for example, the words "machine art", the note APP displays the content input by the user, as shown in fig. 11 (c).
Corresponding to the interface changes of fig. 10 and 11, in this embodiment, a specific method for implementing the process of adding a tag may be as shown in fig. 12, including:
S401, during recording of video clip 1, in response to the user clicking the add plain text label control in the clip control bar, the video clip APP calculates the recording duration corresponding to the added plain text label (denoted as tag 1).
Optionally, the recording duration corresponding to the tag is the timestamp corresponding to the tag; here the timestamp is a duration stamp rather than an absolute time.
Optionally, the video clip APP may calculate the recording duration corresponding to the tag from the tag adding time and the recording start time, both of which are absolute times. The tag adding time is the current time, for example 08:00 on October 18, 2022. The recording start time is the time at which recording of the current video clip started.
In a specific embodiment, the current time and the recording start time are expressed in milliseconds (ms) and the recording duration is accurate to seconds (s): recording duration corresponding to the tag = (current time - recording start time)/1000.
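The formula can be sketched as follows, assuming both absolute times are in milliseconds and the result is truncated to whole seconds:

```java
// Illustrative sketch of the duration formula in S401.
public final class TagTimestamp {
    private TagTimestamp() { }

    /** Recording duration for a tag = (current time - recording start time) / 1000. */
    public static long recordingDurationSeconds(long tagAddTimeMs, long recordStartTimeMs) {
        return (tagAddTimeMs - recordStartTimeMs) / 1000L;
    }
}
// Usage: TagTimestamp.recordingDurationSeconds(System.currentTimeMillis(), startMs)
```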
S402, the video clip APP sends tag data of the tag 1 to the note APP, wherein the tag data of the tag 1 comprises a video ID of a video clip (the video clip 1) to which the tag 1 belongs, a recording time length corresponding to the tag 1, a task ID corresponding to a task of adding the tag 1, and the like.
In this step, the task ID corresponding to the task of adding tag 1 may alternatively not be included in the tag data of tag 1. In the embodiment of the application, the task ID is mainly used so that, when the video clip APP receives a screenshot processing result returned by the note APP, it can delete the corresponding image or video clip from its database according to the task ID. Here, when adding a plain text label, the video clip APP also sends the task ID corresponding to the task of adding the tag to the note APP, so that the tag data of plain text labels and screenshot text labels are more consistent, which improves the data processing efficiency of the video clip APP and also makes receiving easier for the note APP.
S403, the note APP stores the tag data of the tag 1 into a database of the note APP.
S404, the note APP determines, according to the video ID of video clip 1, that the card of video clip 1 is card 1, and displays the content of tag 1 in card 1, where the content of tag 1 includes the tag icon, the recording duration corresponding to tag 1, the tag information input area of tag 1, and the like.
Specifically, based on the correspondence established in step S318 in the above embodiment, the note APP determines, according to the video ID of video clip 1, that the card corresponding to that video ID is card 1, and then displays the content of tag 1 in card 1.
S405, in response to the user inputting text content 1 in the tag information input area of tag 1, the note APP saves text content 1 as tag data of tag 1 and displays text content 1 in the tag information input area of tag 1.
The text content 1 may be, for example, "modern sense" in the above-described (c) diagram in fig. 10.
As described above, the user can input text information in the tag information input area of tag 1 at any time. Whenever the user inputs information, the note APP saves the input as tag data of tag 1. In other words, the information in the tag information input area of tag 1 is bound together with the other tag data as the tag data of tag 1. In this way, all the tag data of tag 1 can be regarded as a whole with a unique recording duration.
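For illustration, tag data treated as one unit with a unique recording duration might look like the following; the field names are assumptions inferred from the data listed in steps S402 and S410.

```java
// Illustrative sketch of tag data bound together as one unit.
public class TagData {
    public final String videoId;       // video ID of the clip the tag belongs to
    public final long durationSeconds; // recording duration when the tag was added
    public final String taskId;        // task ID of the add-tag task
    public String text;                // user-entered text, updatable at any time
    public String screenshotUri;       // null for a plain text label

    public TagData(String videoId, long durationSeconds, String taskId) {
        this.videoId = videoId;
        this.durationSeconds = durationSeconds;
        this.taskId = taskId;
    }
}
```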
S406, in response to the user clicking the screenshot control in the clip control bar, the video clip APP calculates the recording duration corresponding to the added screenshot text label (denoted as tag 2).
S407, the video clip APP sends a screenshot instruction to SurfaceFlinger, wherein the screenshot instruction carries a layer filtering mark.
S408, SurfaceFlinger, in response to the screenshot instruction, sends the image composited at the current time (denoted as image 2) to the video clip APP.
As described in the above embodiment, during recording of the video clip, SurfaceFlinger performs layer data filtering and synthesis in real time according to step S308; therefore, after receiving the screenshot instruction, SurfaceFlinger can directly send the image 2 composited at the current time to the video clip APP.
S409, the video clip APP writes the image 2 into a database of the video clip APP according to the URI of the image 2.
S410, the video clip APP sends the tag data of tag 2 to the note APP, where the tag data of tag 2 includes the video ID of the video clip (video clip 1) to which tag 2 belongs, the URI of image 2, the recording duration corresponding to tag 2, the task ID corresponding to the task of adding tag 2, and the like.
S411, the note APP copies image 2 from the database of the video clip APP according to the URI of image 2, stores the copied image 2 in the database of the note APP, and stores the video ID of video clip 1, the recording duration corresponding to tag 2, and the task ID corresponding to the task of adding tag 2 into the database of the note APP.
That is, the note APP acquires image 2 and, taking image 2 as tag data of tag 2, stores all the tag data of tag 2 into the database of the note APP.
S412, the note APP determines, according to the video ID of video clip 1, that the card of video clip 1 is card 1, and displays the content of tag 2 in card 1, where the content of tag 2 includes the tag icon, the recording duration corresponding to tag 2, image 2, the tag information input area of tag 2, and the like.
This step is similar to step S404 described above, except that in this step, tag 2 is a screenshot text tag, and thus the content of tag 2 also includes image 2.
S413, in response to the user inputting the text content 2 in the tag information input area of the tag 2, the note APP saves the text content 2 as tag data of the tag 2, while the note APP displays the text content 2 in the tag information input area of the tag 2.
The text content 2 may be, for example, "machine art" in the above-described (c) diagram in fig. 11.
Like the plain text label, whenever the user inputs information in the tag information input area of tag 2, the note APP saves the input as tag data of tag 2. In other words, the information in the tag information input area of tag 2 is bound together with the other tag data as the tag data of tag 2. In this way, all the tag data of tag 2 can be regarded as a whole with a unique recording duration.
S414, the note APP returns a processing result to the video clip APP, wherein the processing result carries a task ID corresponding to the task added with the tag 2.
S415, the video clip APP deletes image 2 from its database according to the task ID corresponding to the task of adding tag 2.
Similar to the deletion of the first frame image by the video clip APP, in this step, the deletion of image 2 in the database of the video clip APP by the video clip APP can also reduce the occupation of the video clip APP database space.
4. Ending the recording process
The ending recording of the video clip may be triggered in a number of ways, each as described below in connection with the accompanying drawings.
(1) The stop recording control in the clip control bar triggers the end of recording.
Fig. 13 is an exemplary schematic diagram of interface changes for ending a recording process according to an embodiment of the present application. Continuing from fig. 11 (c), as shown in fig. 13 (a), when the remaining recordable duration is 3 minutes (03:00), i.e., the recording duration is 2 minutes (02:00), the user clicks the stop recording control 801 in the clip control bar 304. In response to the user's click, the video clip APP stops recording the video clip, and the interface switches to that shown in fig. 13 (b). At this point, the stop recording control 801 in the clip control bar 304 switches back to the start recording control 305, and the remaining recordable duration 802, the graffiti control 803, and the add plain text label control 804 in the original clip control bar 304 are no longer displayed.
As shown in fig. 13 (b), while the clip control bar 304 changes, the "recording" prompt in the video clip card 806 in the note editing interface disappears, and the first frame image 1301 of the recorded video clip and a play control 1302 are displayed. The play control 1302 is used to control playback of the video clip, display playback progress, and the like. The play control 1302 includes a start play control 1303, a progress bar 1304, the total duration 1305 of the video clip, the current play duration 1306, a full-screen display control 1307, and the like.
Corresponding to the interface change of fig. 13, in this embodiment, a specific implementation method for ending the recording process may be as shown in fig. 14, including:
S501, in response to the user clicking the stop recording control in the clip control bar, the video clip APP refreshes the controls in the clip control bar, removing the stop recording control, the remaining recordable duration, the graffiti control, the add plain text label control, and the like from the original clip control bar, and displaying the start recording control.
S502, the video clip APP sets the value of the exit note APP flag bit to false.
The exit note APP flag bit is used to mark whether the note APP is exited. Optionally, a flag bit value of false indicates that the video clip function is not exited, and a value of true indicates that the video clip function is exited. Of course, the exit note APP flag bit may also be defined with other values, such as 1 and 0.
In a specific embodiment, the exit note APP flag bit may be denoted by isQuitSave.
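As a minimal sketch of how this flag might be used (assuming a plain boolean named isQuitSave, as above; the handler names are hypothetical):

```kotlin
// Sketch of the exit note APP flag bit.
// false: the video clip function is not exited; true: it is exited.
object ClipState {
    @Volatile
    var isQuitSave: Boolean = false
}

fun onStopRecordingClicked() {    // corresponds to S501/S502: recording stops,
    ClipState.isQuitSave = false  // but the video clip function stays active
}

fun onNoteAppExited() {           // corresponds to S704 below: the whole
    ClipState.isQuitSave = true   // video clip function is exited
}
```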
S503, the video clip APP sends a screen recording stopping instruction to the video clip service, where the screen recording stopping instruction is used to instruct to stop recording the screen.
S504, the video clip service sends a screen recording stopping instruction to the screen recording service.
S505, the screen recording service responds to the screen recording stopping instruction and sends an ending frame image to the video clip APP.
Specifically, after receiving the screen recording stopping instruction, the screen recording service stops sending the layer data and the layer filtering marks of the video playing area to SurfaceFlinger, stops recording the audio, and sends the last acquired frame image to the video clip APP as an end frame image. Optionally, the end frame image may carry an end frame identification to characterize it as the end frame image. It can be understood that after the screen recording service sends the end frame image, the screen recording has stopped and the screen is no longer recorded.
S506, the video clip APP writes the ending frame image into a database of the video clip APP, and acquires the audio at the current moment.
The step is the same as the step of writing other frame images and corresponding audio in the above embodiment, and will not be described again.
S507, after sending the screen recording stopping instruction, the video clip APP periodically checks whether the end frame image exists in its database, and if so, executes step S508.
S508, the video clip APP generates a video clip 1.
Specifically, as described in the above embodiment, the video clip APP encodes, encapsulates, etc. all the obtained frame images and audio to obtain the video clip 1.
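A minimal sketch of the poll-then-generate logic of S507 and S508 follows; hasEndFrame and generateClip are hypothetical placeholders for the database check and the encode-and-encapsulate step.

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Sketch of S507-S508: after the screen recording stopping instruction is sent,
// poll the database until the end frame image appears, then assemble the clip.
fun awaitEndFrameAndGenerate(hasEndFrame: () -> Boolean, generateClip: () -> Unit) {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    scheduler.scheduleWithFixedDelay({
        if (hasEndFrame()) {
            generateClip()        // encode and encapsulate all frame images and audio
            scheduler.shutdown()  // stop polling once the clip has been generated
        }
    }, 0L, 200L, TimeUnit.MILLISECONDS)
}
```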
S509, the video clip APP sends the information of the video clip 1 and the value of the exit note APP flag bit to the note APP. The information of video clip 1 includes: video ID of video clip 1, URI of video clip 1, task ID corresponding to task of stopping recording video clip 1, etc.
S510, the note APP copies the video clip 1 from the database of the video clip APP according to the URI of the video clip 1, and stores the video clip 1 into the database of the note APP.
S511, the note APP determines that the card of the video clip 1 is card 1 according to the video ID of the video clip 1, and the first frame image and the play control of the video clip 1 are displayed in the card 1.
S512, the note APP returns a processing result to the video clip APP, wherein the processing result carries a task ID corresponding to the task of stopping recording the video clip 1.
S513, the video clip APP deletes the video clip 1 from its database according to the task ID corresponding to the task of stopping recording the video clip 1.
As with the deletion of images, deleting the video clip 1 stored in the database of the video clip APP reduces the occupation of the database space of the video clip APP.
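On Android, the copy-and-acknowledge handshake of S510 to S513 could look roughly like the sketch below, assuming the clip is exposed through a ContentResolver-resolvable URI; notifyProcessed stands in for the processing-result message and is hypothetical.

```kotlin
import android.content.Context
import android.net.Uri
import java.io.File

// Sketch of S510-S513: the note APP copies video clip 1 out of the video clip
// APP's database via its URI, then returns the task ID so the video clip APP
// can delete its own copy and free the database space.
fun copyClipAndAcknowledge(
    context: Context,
    clipUri: Uri,                               // URI of video clip 1
    taskId: String,                             // task ID of stopping the recording
    notifyProcessed: (taskId: String) -> Unit   // hypothetical callback to the video clip APP
) {
    val dest = File(context.filesDir, "clips/$taskId.mp4")
    dest.parentFile?.mkdirs()
    context.contentResolver.openInputStream(clipUri)?.use { input ->
        dest.outputStream().use { output -> input.copyTo(output) }
    }
    notifyProcessed(taskId)                     // the video clip APP then deletes clip 1
}
```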
(2) The save control of the note editing interface triggers the end of recording.
Fig. 15 is a schematic diagram of another interface change for ending the recording process according to an embodiment of the present application. Continuing with fig. 11 (c), as shown in fig. 15 (a), a save control 1501 is included in the note editing interface. When the remaining recordable time period is 3 minutes (03:00), i.e., the recording time period is 2 minutes (02:00), the user clicks the save control 1501. In response to a click operation by the user, the video clip APP stops recording the video clip, and the note APP saves the recorded video clip and all the contents of the current edit, and the interface switches to the interface shown in fig. 15 (b). The interface of fig. 15 (b) is similar to that of fig. 13 (b), and will not be described again.
Corresponding to the interface change of fig. 15, in this embodiment, the specific implementation method for ending the recording process is similar to the method shown in fig. 14, except that:
Before step S501, the following steps are performed: in response to the user clicking the save control in the note editing interface, the note APP sends a save message to the video clip service, where the save message is used to instruct to save the currently recorded video clip; and in response to the save message, the video clip service sends a clip stop instruction to the video clip APP, where the clip stop instruction is used to instruct to stop recording the video clip;
Step S501 is replaced with: in response to the clip stop instruction, the video clip APP refreshes the clip control bar, removing the stop recording control, the remaining recordable duration, the graffiti control, the add plain text label control, and the like from the original clip control bar, and displaying the start recording control;
In step S510, in addition to the video clip 1, other content edited by the user, for example, text edited by the user in the content editing area, etc., is also saved.
The rest of the steps are the same as those in fig. 14, and will not be described again.
(3) Exiting the editing interface of the note triggers the end of recording.
Exiting the editing interface of the note to trigger the end of recording is divided into two cases: the editing interface of the note is opened for the first time, or it is not opened for the first time. A first-opened editing interface means that the currently displayed note editing interface is opened for the first time after the note is created. An editing interface that is not first opened means that the currently displayed note editing interface is not opened for the first time after the note is created. Each case is described below.
A. The editing interface of the note is an editing interface that is opened for the first time.
Fig. 16 is a schematic diagram illustrating an interface change for ending a recording process according to another embodiment of the present application. Continuing from fig. 11 (c), as shown in fig. 16 (a), the currently displayed note editing interface is a first-opened editing interface. The note editing interface includes a return to previous layer control 1601, which is used to trigger returning to the previous-layer interface of the note APP, that is, exiting the current note editing interface. When the remaining recordable duration is 3 minutes (03:00), that is, the recording duration is 2 minutes (02:00), the user clicks the return to previous layer control 1601 in the note editing interface. In response to the user operation, the video clip APP stops recording and refreshes the clip control bar 304; the refreshed clip control bar 304 is shown in fig. 16 (b).
At the same time, the note APP automatically saves the content recorded in the note editing interface (including the recorded video clip), and displays the previous-layer interface, that is, the note main interface 1602, over the note editing interface, as shown in fig. 16 (b).
When the user clicks "net lesson note" 1603 in the note main interface 1602 in fig. 16 (b), the interface jumps to the interface shown in fig. 16 (c). The interface includes a first frame image 1301 of the recorded video clip and a play control 1302. It follows that the video clips are automatically saved.
Of course, the user may also trigger exiting the current note editing interface through other operations, for example, sliding inward from the edge of the screen, clicking the return control of the three-key navigation bar, and the like; the embodiments of the present application do not limit this.
Corresponding to the interface change of fig. 16, in this embodiment, a specific implementation method for ending the recording process may be as shown in fig. 17, including:
S601, in response to the user performing the operation of exiting the current note editing interface, the note APP sends a save message to the video clip service, where the save message is used to instruct to save the currently recorded video clip.
S602, the video clip service responds to the storage message and sends a clip stop instruction to the video clip APP, wherein the clip stop instruction is used for indicating to stop recording the video clip.
S603, in response to the clip stop instruction, the video clip APP refreshes the clip control bar, removing the stop recording control, the remaining recordable duration, the graffiti control, the add plain text label control, and the like from the original clip control bar, and displaying the start recording control.
S604, the video clip APP sets the value of the exit note APP flag bit to false.
S605, the video clip APP sends a screen recording stopping instruction to the video clip service, wherein the screen recording stopping instruction is used for indicating to stop recording a screen.
S606, the video clip service sends a screen recording stopping instruction to the screen recording service.
S607, the screen recording service responds to the screen recording stopping instruction and sends an ending frame image to the video clip APP.
S608, the video clip APP writes the end frame image into its database and acquires the audio at the current moment.
S609, after the video clip APP sends the screen recording stopping instruction, whether an end frame image exists in a database of the video clip APP is periodically judged, and if so, step S610 is executed.
S610, generating a video clip 1 by the video clip APP.
S611, the video clip APP sends information of the video clip 1 and a value of an exit note APP flag bit to the note APP. The information of video clip 1 includes: video ID of video clip 1, URI of video clip 1, task ID corresponding to task of stopping recording video clip 1, etc.
S612, the note APP copies the video clip 1 from the database of the video clip APP according to the URI of the video clip 1, and stores the video clip 1 into the database of the note APP.
S613, the note APP stores other contents edited by the user on the note editing interface into a note APP database.
Other contents of the note editing interface refer to contents other than the video clip 1, for example, text information input by the user, inserted pictures, and the like.
S614, the note APP returns a processing result to the video clip APP, and the processing result carries a task ID corresponding to the task of stopping recording the video clip 1.
S615, the video clip APP deletes the video clip 1 from its database according to the task ID corresponding to the task of stopping recording the video clip 1.
S616, the note APP displays the interface of the previous layer (note main interface).
B. The editing interface of the note is an editing interface which is not opened for the first time.
Fig. 18 is a schematic diagram illustrating an interface change for ending a recording process according to another embodiment of the present application. Continuing from fig. 11 (c), as shown in fig. 18 (a), the currently displayed note editing interface is an editing interface that is not first opened. The note editing interface includes the return to previous layer control 1601. When the remaining recordable duration is 3 minutes (03:00), that is, the recording duration is 2 minutes (02:00), the user clicks the return to previous layer control 1601 in the note editing interface. In response to the user operation, the video clip APP stops recording, and the note APP and the video clip APP each refresh their interface displays; the refreshed interface is shown in fig. 18 (b). Fig. 18 (b) is similar to fig. 13 (b), except that in fig. 18 (b) the note editing interface includes a popup 1801 asking whether to save the editing contents. The popup 1801 includes a save control 1802.
The user clicks the "save" control 1802 and the interface jumps to the interface shown in fig. 18 (c). The interface is the same as that shown in the (b) diagram of fig. 16, and will not be described again.
Corresponding to the interface change of fig. 18, in this embodiment, the specific implementation method for ending the recording process is similar to the process shown in fig. 17, except that the following steps are further performed after step S612 and before step S613:
the note APP determines that a card of the video clip 1 is a card 1 according to the video ID of the video clip 1, and a first frame image and a play control of the video clip 1 are displayed in the card 1; the note APP displays a popup asking whether to store the editing content; in response to the user clicking on the "save" control, step S613 is performed.
In addition, if the user clicks the "not save" control, the note APP deletes the video clip 1 in the note APP database, and step S613 is not performed.
(4) Exiting the note APP triggers the end of recording.
Fig. 19 is a schematic diagram illustrating an interface change for ending a recording process according to another embodiment of the present application. Continuing from fig. 11 (c), as shown in fig. 19 (a), the top of the floating window of the note APP includes a toolbar 1901. When the remaining recordable duration is 3 minutes (03:00), that is, the recording duration is 2 minutes (02:00), the user clicks the toolbar 1901, and the window management capsule 1902 is displayed from the top of the floating window of the note APP, as shown in fig. 19 (b). The window management capsule 1902 includes a close note option 1903, which is used to trigger closing the note APP.
The user clicks the close note option 1903. In response to the click operation, the video clip APP stops recording the video clip, the note APP saves the video clip and all the contents edited this time, and both the video clip APP and the note APP exit; the interface is displayed as shown in fig. 19 (c).
Of course, the user may also trigger exiting the note APP through other operations, for example, deleting the task of the note APP in the recent tasks window; the embodiments of the present application do not limit this.
Corresponding to the interface change of fig. 19, in this embodiment, a specific implementation method for ending the recording process may be as shown in fig. 20, including:
S701, in response to the user performing the operation of exiting the note APP, the note APP exits to run in the background.
That the note APP exits to the background means that the note APP no longer displays an interface and runs in the background.
S702, the video clip service monitors that the interface of the note APP is invisible.
Specifically, the video clip service may monitor the interface condition of the note APP continuously or periodically, and when the interface of the note APP is not visible, step S703 is performed.
S703, the video clip service sends an exit video clip APP instruction to the video clip APP, where the exit video clip APP instruction is used to instruct the video clip APP to exit.
S704, in response to the exit video clip APP instruction, the video clip APP removes the clip control bar from the interface, exits to run in the background, and sets the value of the exit note APP flag bit to true.
Removing the clip control bar from the interface means canceling the display of the clip control bar.
S705, the video clip service sends a screen recording service ending instruction to the screen recording service, where the screen recording service ending instruction is used to instruct to end the screen recording service.
S706, the video clip service stops its service.
That is, when the video clip service monitors that the interface of the note APP is invisible, it determines that the video clip function needs to be exited; after sending the corresponding instructions to the video clip APP and the screen recording service, the video clip service stops its service, that is, it no longer provides services related to the video clip function to the video clip APP and the note APP.
S707, in response to the screen recording service ending instruction, the screen recording service sends an end frame image to the video clip APP.
S708, the screen recording service ends its service.
Unlike the screen recording stopping instruction in the above embodiment, after receiving the screen recording service ending instruction, the screen recording service on one hand stops recording and sends the end frame image to the video clip APP; on the other hand, after the end frame image is sent, the screen recording service stops its service, that is, it no longer provides the screen recording service to the video clip APP.
S709, the video clip APP writes the end frame image into its database and acquires the audio at the current moment.
S710, after receiving the exit video clip APP instruction, the video clip APP periodically checks whether the end frame image exists in its database, and if so, executes step S711.
S711, video clip APP generates video clip 1.
S712, the video clip APP sends the information of the video clip 1 and the value of the exit note APP flag bit to the note APP. The information of video clip 1 includes: video ID of video clip 1, URI of video clip 1, task ID corresponding to task of stopping recording video clip 1, etc.
S713, the note APP copies the video clip 1 from the database of the video clip APP according to the URI of the video clip 1, and stores the video clip 1 into the database of the note APP.
S714, the note APP stores the other contents edited by the user on the note editing interface into its database.
S715, the note APP returns a processing result to the video clip APP, wherein the processing result carries a task ID corresponding to the task of stopping recording the video clip 1.
S716, the note APP determines that the value of the exit note APP flag bit is true, and exits.
S717, the video clip APP deletes the video clip 1 in the database of the video clip APP according to the task ID corresponding to the task of stopping recording the video clip 1.
S718, the video clip APP exits operation.
(5) Exiting the video clip APP triggers the end of recording.
Fig. 21 is a schematic diagram illustrating an interface change for ending a recording process according to another embodiment of the present application. Continuing from fig. 11 (c), as shown in fig. 21 (a), when the remaining recordable duration is 3 minutes (03:00), that is, the recording duration is 2 minutes (02:00), the user clicks the clip close control 307 in the clip control bar 304 of the video clip APP. In response to the click operation, the video clip APP stops recording the video clip and exits, the clip control bar is no longer displayed, and the note APP saves and displays the video clip and the tags; the interface display is shown in fig. 21 (b).
Of course, the user may also trigger exiting the video clip APP through other operations, for example, deleting the task of the video clip APP in the recent tasks window; the embodiments of the present application do not limit this.
Corresponding to the interface change of fig. 21, in this embodiment, the specific implementation method for ending the recording process is similar to the method shown in fig. 20, except that:
Steps S701 to S703 and S714 are not performed;
Step S704 is replaced with the following step: in response to the user clicking the clip close control, the video clip APP removes the clip control bar from the interface, exits to run in the background, and sets the value of the exit note APP flag bit to false;
Step S716 is replaced with the following steps: the note APP determines that the card of the video clip 1 is card 1 according to the video ID of the video clip 1, and the first frame image and the play control of the video clip 1 are displayed in the card 1.
(6) A change in the video playing area triggers the end of recording.
It will be appreciated that a user may zoom or move a window in which video is played while viewing the video. If a video clip is currently being recorded, the corresponding video play area changes. In this case, the video clip APP stops recording and the recorded video clip may be saved.
Fig. 22 is a schematic flowchart illustrating a recording ending process according to another embodiment of the present application. Referring to fig. 9 and 22 together, after step S303 in fig. 9, the following steps S801 to S803 and the above steps S502 to S513 may be further included:
S801, the barrier-free service monitors whether the video playing area changes.
Optionally, the change in the video playing area may include a change in a position of the video playing area and/or a change in a size of the video playing area.
S802, when the video playing area changes, the barrier-free service sends area change information to the video clip APP, wherein the area change information is used for representing that the video playing area changes.
S803, in response to the area change information, the video clip APP refreshes the clip control bar, removing the stop recording control, the remaining recordable duration, the graffiti control, the add plain text label control, and the like from the original clip control bar, and displaying the start recording control.
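A sketch of how the barrier-free (accessibility) service might detect such a change, assuming it can locate the video playing area's node in the window tree; findVideoNode and onRegionChanged are hypothetical.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.graphics.Rect
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Sketch of S801-S802: watch the video playing area and report a change in
// its position and/or size to the video clip APP.
class RegionWatcher : AccessibilityService() {
    private var lastBounds: Rect? = null

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        val node = findVideoNode() ?: return
        val bounds = Rect()
        node.getBoundsInScreen(bounds)          // current on-screen area
        if (lastBounds != null && bounds != lastBounds) {
            onRegionChanged(bounds)             // area change information (S802)
        }
        lastBounds = bounds
    }

    override fun onInterrupt() {}

    // Hypothetical: locate the first application's video surface in the window tree.
    private fun findVideoNode(): AccessibilityNodeInfo? = null

    // Hypothetical: deliver the area change information to the video clip APP.
    private fun onRegionChanged(bounds: Rect) {}
}
```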
Six scenarios for triggering the end of recording of a video clip are provided above. It can be understood that these are merely examples; in some other embodiments, the end of recording may be triggered in other ways, and the embodiments of the present application are not limited in this respect. In addition, although the steps in the above embodiments are numbered, the numbering does not limit the execution order of the steps; in practical application, the order of the steps may be adjusted, or steps may be executed simultaneously, as long as the execution logic is satisfied. The same applies to the remaining embodiments, and details are not described again.
According to the methods provided by the above embodiments, the note APP realizes the recording and saving of video clips by calling the functions of the video clip APP, and generates a note including the video clip. Therefore, the user can record video through the note APP at any time while watching a video, which improves the user experience. In addition, during video clip recording, a tag can be added to the video clip in the note; the tag includes the recording duration of the video clip at the moment the tag is inserted, as well as a tag information input area, so that the user can take targeted notes on the video content at a certain recording duration, further improving the user experience. Moreover, the tag information input in the tag information input area is saved at any time as tag data of the tag and bound with the other data in the tag data so as to uniquely correspond to the recording duration; therefore, the time at which the user inputs the tag information is not limited, which facilitates the user's operation and further improves the user experience.
5. Note sequential presentation process
After generating the note including the video clip and the tag according to the process of the above embodiment, the user may present the generated note through the note APP. In this embodiment, the note sequential presentation means that the video clips in the note are played according to a default sequence, and the tags in the cards of the video clips are displayed in synchronization with the playing progress of the video clips.
Fig. 23 is an exemplary schematic diagram illustrating interface changes of the note sequential presentation process according to an embodiment of the present application. As shown in fig. 23 (a), when the user clicks the note named "net lesson note" in the note main interface of the note APP, the note presentation interface 2301 shown in fig. 23 (b) is entered. The interface includes the card 806 of the video clip. The card 806 includes the first frame image 1301 of the video clip and the play control 1302, and the play control 1302 includes the start play control 1303. Moreover, when the video clip is not being played, the tags in the interface are all grayed out.
When the user clicks the start play control 1303 in the play control, the video clip starts to play, the start play control in the play control 1302 switches to a pause play control 2302, tag anchors 2303 are displayed on the progress bar 1304, and the current play duration 1306 is refreshed along with the play progress. Further, a progress point 2304 is displayed on the progress bar 1304 and moves with the play progress.
A tag anchor 2303 is an anchor placed on the progress bar according to the recording duration corresponding to a tag, so that the play duration indicated by the position of the tag anchor 2303 on the progress bar is consistent with the recording duration corresponding to the tag. For example, if the recording duration corresponding to the first tag is 30s (00:30), then when the video clip is played to the first tag anchor, the play duration is 30s (00:30). That is, one tag anchor corresponds to one tag.
In addition, when the video clip is played to a certain play duration, the contents of all tags whose recording durations are less than or equal to that play duration are highlighted. Therefore, whenever the video clip is played to a tag anchor, the content of the corresponding tag is highlighted, achieving the effect that, as the video clip plays on, the tag contents corresponding to the tag anchors are highlighted one by one.
For example, as shown in fig. 23 (c), when the play duration of the video clip is 30s (00:30), the progress point 2304 on the progress bar 1304 moves to the first tag anchor (which is covered by the progress point 2304 and is not visible in the figure), and the content of the tag with the corresponding recording duration of 30s is highlighted, including the tag icon, the recording duration corresponding to the tag, and the tag information input in the tag information input area.
In addition, for a screenshot text tag, the screenshot in the tag is also highlighted. For example, as shown in fig. 23 (d), when the video clip is played to the 1st minute (01:00), the progress point 2304 on the progress bar 1304 moves to the second tag anchor at the 1-minute mark, the contents of the tags with recording durations of 30s (00:30) and 1 minute (01:00) are both highlighted, and the picture of the tag with the recording duration of 1 minute (01:00) is highlighted as well.
In some embodiments, while the video clip is being played, speech-to-text may also be performed on the audio stream of the video clip, and the speech-to-text result is displayed below the video clip. Optionally, the speech-to-text result may be displayed in a scrolling manner to save interface space.
It will be appreciated that the above gray display, highlighting, etc. are examples, and other display modes may be set according to actual requirements, for example, different colors, different fonts, different transparency, etc. may be displayed. In addition, after the video clip is played, all the contents in the card of the video clip can be restored to the default state, that is, to the state shown in (b) of fig. 23.
Corresponding to the interface change of fig. 23, in this embodiment, a specific method for implementing the note sequential presentation process may be as shown in fig. 24, and the process includes:
S901, in response to the user's operation of opening a note, the note APP acquires the data of the note from its database.
The data of the note includes, but is not limited to, video clips, tag data of tags, and the like. Of course, when data such as text content, handwriting content, pictures, and recording is recorded in the note, the note APP needs to acquire the data. The embodiment of the application mainly describes the functions related to the video clip and the tag, and does not represent that the note APP does not comprise other functions.
S902, the note APP determines the position of the tag anchor corresponding to each tag on the progress bar according to the total duration of the video clip (taking video clip 1 as an example) and the recording duration corresponding to each tag of the video clip 1.
Specifically, for any tag a in the video clip 1, the first ratio of the tag a is equal to the second ratio of the tag anchor a corresponding to the tag a. The first ratio of the tag a is the ratio of the recording duration corresponding to the tag a to the total duration of the video clip 1. The second ratio of the tag anchor a is the ratio of the distance of the tag anchor a from the start point of the progress bar to the total length of the progress bar.
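The ratio equality in S902 reduces to a simple proportion; as a minimal sketch (the function name is hypothetical):

```kotlin
// Sketch of S902: place the tag anchor so that
// recordingDuration / totalDuration == anchorOffset / progressBarLength.
fun anchorOffsetPx(
    recordingDurationMs: Long,   // recording duration corresponding to tag a
    totalDurationMs: Long,       // total duration of video clip 1
    progressBarLengthPx: Int     // total length of the progress bar, in pixels
): Int {
    require(totalDurationMs > 0) { "total duration must be positive" }
    return (progressBarLengthPx * recordingDurationMs / totalDurationMs).toInt()
}
```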
S903, the note APP displays the first frame image of the video clip 1, and displays the progress bar, the tag anchors, and other controls in the play control according to the positions of the tag anchors on the progress bar.
S904, the note APP grays out the contents of all tags of the video clip 1.
S905, in response to the user clicking the start play control in the play control, the note APP plays the video clip 1 and refreshes the play control.
S906, the note APP highlights the contents of all tags whose recording durations are less than or equal to the current play duration, and grays out the contents of the remaining tags.
Specifically, the note APP determines, for each tag, whether its corresponding recording duration is less than or equal to the current play duration; if yes, all the content of the tag is highlighted; if not, the content of the tag is grayed out.
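The highlight rule of S906 can be sketched as follows, with highlight and grayOut as hypothetical rendering hooks:

```kotlin
// Sketch of S906: highlight the tags whose recording durations are less than
// or equal to the current play duration, and gray out the rest.
fun refreshTagDisplay(
    tagDurationsMs: List<Long>,   // recording durations of the tags of video clip 1
    currentPlayMs: Long,
    highlight: (Long) -> Unit,    // hypothetical rendering hooks
    grayOut: (Long) -> Unit
) {
    for (durationMs in tagDurationsMs) {
        if (durationMs <= currentPlayMs) highlight(durationMs) else grayOut(durationMs)
    }
}
```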
In this embodiment, during playback of the video clip, the content of a tag is highlighted once its corresponding recording duration is less than or equal to the play duration, so that the play progress of the video clip and the highlighting progress of the tag contents are synchronized. This makes it convenient for the user to match the tag contents with the content in the video clip, improving the user experience.
6. Play progress jump process
The play progress jump process locates the display progress of the tag contents according to the play progress of the video clip. Fig. 25 is an exemplary schematic diagram of interface changes of a play progress jump according to an embodiment of the present application. As shown in fig. 25 (a), when the video clip of the "net lesson note" is played to 20s (00:20), the user jumps the play progress to the 1st minute (01:00) by dragging the progress bar or clicking a tag anchor 2303 (the figure takes the user clicking the second tag anchor from left to right as an example). In response to the user operation, in addition to the change of the play control, the contents of the tags in the note interface whose recording durations are less than or equal to 1 minute are highlighted, that is, the contents of the tags with the corresponding recording durations of 00:30 and 01:00 in the figure are highlighted, as shown in fig. 25 (b).
Corresponding to the interface change of fig. 25, in this embodiment, the method for the play progress jump is similar to that of fig. 24: according to the current play duration, the note APP highlights the contents of all tags whose recording durations are less than or equal to the current play duration, and grays out the contents of the remaining tags; details are not described again.
The method provided by this embodiment locates the tag display through the play progress of the video clip, which makes it convenient for the user to match the tag contents with the content in the video clip when the video clip is played non-sequentially, improving the user experience.
7. Tag jump process
The tag jump process locates the play progress of the video clip by clicking the content of a certain tag. Fig. 26 is an exemplary schematic diagram of interface changes of a tag jump according to an embodiment of the present application. As shown in fig. 26 (a), when the video clip of the note "net lesson note" is played to the 2nd minute (02:00) and all tags of the video clip are highlighted, the user clicks the content of the tag with the recording duration of "00:30", for example, clicks the recording duration "00:30" corresponding to the tag. In response to the click operation, the note APP highlights the contents of all tags whose recording durations are less than or equal to 30s and grays out the remaining tags, that is, the content of the tag with the corresponding recording duration of 00:30 is highlighted, and the content of the tag with the corresponding recording duration of 01:00 is grayed out. In the play control, the progress point on the progress bar moves to the position of the first tag anchor, and the current play duration is refreshed to 00:30, as shown in fig. 26 (b). In this way, the play progress of the video clip is located by clicking the tag content.
Corresponding to the interface change of fig. 26, in this embodiment, the method for implementing the tag jump mainly includes: in response to the user clicking the content of a tag (taking tag a as an example), the note APP determines the recording duration a corresponding to the tag a; according to the recording duration a, the note APP adjusts the current play duration of the video clip to a, plays the video clip from the a-th second, and refreshes the play control. Meanwhile, according to the current play duration, the note APP highlights the contents of all tags whose recording durations are less than or equal to the current play duration, and grays out the contents of the remaining tags.
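Assuming the clip is rendered in a standard Android VideoView, the jump itself might look like the following minimal sketch (VideoView.seekTo takes milliseconds as an Int):

```kotlin
import android.widget.VideoView

// Sketch of the tag jump: clicking tag a seeks the clip to tag a's recording
// duration and resumes playback from there; the tag highlight refresh then
// follows the same rule as in S906.
fun onTagClicked(video: VideoView, recordingDurationMs: Long) {
    video.seekTo(recordingDurationMs.toInt())   // position the clip at duration a
    video.start()                               // play from that point
    // Refresh the tag display here with currentPlayMs = recordingDurationMs.
}
```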
The method provided by this embodiment locates the play progress of the video clip through the tag content, which makes it convenient for the user to match the tag contents with the content in the video clip when the tag contents are displayed non-sequentially, improving the user experience.
Examples of the video recording method provided by the embodiment of the present application are described in detail above. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the electronic device according to the method example, for example, each function can be divided into each functional module, for example, a detection unit, a processing unit, a display unit, and the like, and two or more functions can be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
It should be noted that, for all relevant contents of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not described herein again.
The electronic device provided in this embodiment is configured to execute the video recording method, so that the same effects as those of the implementation method can be achieved.
In case an integrated unit is employed, the electronic device may further comprise a processing module, a storage module and a communication module. The processing module can be used for controlling and managing the actions of the electronic equipment. The memory module may be used to support the electronic device to execute stored program code, data, etc. And the communication module can be used for supporting the communication between the electronic device and other devices.
The processing module may be a processor or a controller, and may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. A processor may also be a combination that implements computing functions, for example, a combination including one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, or a Wi-Fi chip.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 1.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor, causes the processor to execute the video recording method of any of the above embodiments.
The embodiment of the present application also provides a computer program product, which when run on a computer causes the computer to perform the above-mentioned related steps to implement the video recording method in the above-mentioned embodiment.
In addition, embodiments of the present application also provide an apparatus, which may be embodied as a chip, component or module, which may include a processor and a memory coupled to each other; the memory is configured to store computer-executable instructions, and when the device is running, the processor may execute the computer-executable instructions stored in the memory, so that the chip executes the video recording method in the above method embodiments.
The electronic device, the computer readable storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (28)

1. A video recording method performed by an electronic device, the method comprising:
Displaying a first control, a first interface of a first application and a second interface of a second application in a screen of the electronic equipment, wherein the first application is an application for playing video;
Responding to a recording starting instruction input by a user through the first control, and recording a screen of the first area; the first area is an area of the screen, in which the first application displays a video picture;
In the process of screen recording, responding to a label adding instruction input by a user through the first control, generating information of a first label, wherein the information of the first label and the adding time of the first label have a first corresponding relation, and the adding time of the first label is the recording time of screen recording when the label adding instruction is received;
And displaying the information of the first label in the second interface.
2. The method of claim 1, wherein the screen recording of the first region comprises:
identifying the first region;
acquiring the position information of the first area;
And recording the image displayed in the first area and the audio corresponding to the image according to the position information of the first area.
3. The method of claim 2, wherein the recording the image displayed in the first area and the audio corresponding to the image comprises:
Acquiring all layer data in a first area at a first moment, wherein the first moment is any moment in the screen recording process;
filtering out, from all the layer data, layer data that is not the layer data of the first application, to obtain remaining layer data;
synthesizing the remaining layer data to obtain a first image;
and acquiring first audio at the first moment, wherein the first audio is corresponding to the first image.
4. A method according to any one of claims 1 to 3, wherein the add tag instruction is for indicating that a plain text tag is added, and the information of the first tag includes at least one of an adding time of the first tag, an icon of the first tag, and a text editing area of the first tag for editing text information.
5. The method of claim 4, wherein the information of the first tag includes a text editing area of the first tag, the method further comprising:
Receiving first content input by a user in a text editing area of the first tag;
storing the first content as information of the first tag;
And displaying the first content in a text editing area of the first tag.
6. A method according to any one of claims 1 to 3, wherein the add tag instruction is for indicating to add a screenshot tag, and the information of the first tag includes at least one of an adding time of the first tag, an icon of the first tag, and a text editing area of the first tag, and the screenshot; the text editing area of the first tag is used for editing text information, and the screenshot is an image displayed in the first area and captured when the add tag instruction is received.
7. The method according to any one of claims 1 to 6, further comprising:
Responding to a start recording instruction input by a user through the first control, displaying a first card in the second interface, wherein the first card has a second corresponding relation with a preset identifier, and the preset identifier is a video identifier of a video clip finally obtained by recording on a current screen;
The displaying the information of the first label in the second interface includes:
And displaying the information of the first tag in the first card according to the preset identifier based on the second corresponding relation.
8. The method of claim 7, wherein the method further comprises:
Responding to a recording ending instruction input by a user, and stopping screen recording;
generating and saving a target video clip; the preset identifier is a video identifier of the target video clip.
9. The method of claim 8, wherein the end recording instruction is:
an instruction that is input through the first control and indicates to stop screen recording;
or an instruction indicating to exit the second interface;
or an instruction indicating to exit the second application.
10. The method of claim 8, wherein the first control is a control of a third application, and the end recording instruction is an instruction entered through the first control indicating to exit the third application.
11. The method of claim 7, wherein the method further comprises:
In the process of screen recording, if the position information of the first area is monitored to change, stopping screen recording;
generating and saving a target video clip; the preset identifier is a video identifier of the target video clip.
12. The method according to any one of claims 8 to 11, further comprising:
Acquiring a first frame image, wherein the first frame image is an image displayed in the first area and intercepted at a second moment, and the second moment is the moment when the recording starting instruction is received;
After the generating and saving of the target video snippet, the method further includes:
And displaying the first frame image and a second control in the second interface, wherein the second control is a play control of the target video clip.
13. The method of claim 12, wherein the displaying the first frame image and second control in the second interface comprises:
and displaying the first frame image and the second control in the first card according to the preset identification based on the second corresponding relation.
14. The method according to claim 12 or 13, wherein the second interface includes information of a plurality of labels, the plurality of labels having a third correspondence with a plurality of video identifications, the plurality of labels including the first label, the plurality of video identifications including the preset identification, the first label corresponding to the preset identification, the information of the plurality of labels being displayed in a first manner, the method further comprising:
Responding to a playing instruction input by a user through the second control, and playing the target video clip;
Determining at least one target label corresponding to the preset identifier in the plurality of labels based on the third corresponding relation;
And in the process of playing the target video clip, displaying, based on the first corresponding relation, the information of the labels in the at least one target label whose adding times are less than or equal to the current playing duration in a second manner, wherein the second manner is different from the first manner.
15. The method of any one of claims 12 to 14, wherein the second control includes a progress bar, the progress bar being used to display a play duration, the method further comprising:
And responding to a play instruction input by a user through the second control, displaying at least one label anchor point on the progress bar, wherein the at least one label anchor point corresponds to the at least one label one by one, a first play duration represented by the distance between a first label anchor point and the starting position of the progress bar is equal to the adding time of the label corresponding to the first label anchor point, and the first label anchor point is any one of the at least one label anchor point.
16. The method of claim 15, wherein the method further comprises:
And responding to the operation that a user clicks the first label anchor or drags the playing progress to the first label anchor, and starting to play the target video clip from the first playing duration.
17. The method according to any one of claims 14 to 16, further comprising:
and responding to the user clicking the information of the first label, starting to play the target video clip from a second playing duration, wherein the second playing duration is equal to the adding time of the first label.
18. The method of any of claims 1-17, wherein the first control is a control of a third application, the displaying the first control, a first interface of the first application, and a second interface of the second application in a screen of the electronic device, comprising:
when the first interface is displayed in the screen, recognizing a preset gesture executed by a user on the screen;
In response to the preset gesture, starting the second application and the third application under the condition that the first application is determined to be one of preset applications;
and displaying the first control and the second interface.
19. The method of any of claims 1-17, wherein the first control is a control of a third application, the displaying the first control, a first interface of the first application, and a second interface of the second application in a screen of the electronic device, comprising:
When the first interface and the third control are displayed in the screen, receiving an instruction which is input by a user through the third control and used for starting a video clip function;
In response to the instruction for starting the video clip function, starting the second application and the third application under the condition that the first application is determined to be one of preset applications;
and displaying the first control and the second interface.
20. The method of any of claims 1-17, wherein the displaying a first control, a first interface of a first application, and a second interface of a second application in a screen of the electronic device comprises:
When the second interface is displayed in the screen, receiving an instruction which is input by a user through the second interface and used for starting a video clip function;
Responding to the instruction for starting the video clip function, displaying a fourth control in the second interface, wherein the fourth control comprises an icon of at least one preset application, and the at least one preset application comprises the first application;
responding to the operation of clicking the icon of the first application by a user, starting the first application, and displaying the first interface;
And displaying the first control.
21. The method of any one of claims 1 to 20, wherein the first control is a control of a third application, the electronic device includes a barrier-free service, a screen recording service, and a layer compositor SurfaceFlinger, and the performing screen recording on the first area in response to the start recording instruction input by the user through the first control includes:
The third application responds to the recording starting instruction input by a user through the first control, and sends a first request message to the barrier-free service, wherein the first request message is used for requesting to acquire the position information of the first area;
The barrier-free service identifying the first region in response to the first request message and determining location information of the first region;
the barrier-free service transmits the position information of the first area to the third application;
the third application sends a screen recording starting instruction to the screen recording service, wherein the screen recording starting instruction carries the position information of the first area;
the screen recording service responds to the screen recording starting instruction, and obtains all layer data in the first area at a first moment according to the position information of the first area, wherein the first moment is any moment in the screen recording process;
the screen recording service sends all the layer data and a layer filtering mark to the SurfaceFlinger, wherein the layer filtering mark indicates to filter out, from all the layer data, layer data that is not the layer data of the first application;
the SurfaceFlinger filters out, from all the layer data, the layer data that is not the layer data of the first application according to the layer filtering mark to obtain remaining layer data, and synthesizes the remaining layer data to obtain a first image;
The SurfaceFlinger sends the first image to the screen recording service;
The screen recording service stores the first image, and acquires first audio at the first moment, wherein the first audio is corresponding to the first image;
the third application sends a preset identifier to the second application, wherein the preset identifier is a video identifier of a video clip finally obtained by recording on a current screen;
the second application displays a first card in the second interface, and the first card and the preset mark have a second corresponding relation.
22. The method of claim 21, wherein generating the information of the first tag in response to the tag-added instruction input by the user through the first control comprises:
the third application responds to the label adding instruction input by a user through the first control, and determines the adding time of the first label;
The third application sends the adding time of the first label and the preset identifier to the second application;
the second application generates information of the first tag, wherein the information of the first tag comprises at least one of adding time of the first tag, an icon of the first tag and a text editing area of the first tag.
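A minimal sketch of the claim-22 tag generation, assuming the adding time is simply the elapsed recording duration when the instruction arrives; TagInfo, TagRecorder, and their fields are illustrative names only.

    import android.os.SystemClock

    // Information of the first tag: adding time, icon, and text editing area.
    data class TagInfo(
        val addTimeMs: Long,       // recording duration when the add-tag instruction arrived
        val iconRes: Int,          // icon of the first tag
        var noteText: String = ""  // content of the tag's text editing area
    )

    class TagRecorder(private val recordingStartUptimeMs: Long = SystemClock.uptimeMillis()) {
        val tags = mutableListOf<TagInfo>()

        // Third application: on the tag adding instruction, compute the adding
        // time; the tag info plus preset identifier then goes to the second app.
        fun addTag(iconRes: Int): TagInfo {
            val elapsed = SystemClock.uptimeMillis() - recordingStartUptimeMs
            return TagInfo(addTimeMs = elapsed, iconRes = iconRes).also { tags += it }
        }
    }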
23. The method of claim 22, wherein displaying information of the first tag in the second interface comprises:
the second application determines, based on the second corresponding relation, that the card corresponding to the preset identifier is the first card;
The second application displays information of the first tag in the first card.
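The second corresponding relation in claim 23 behaves like a map from the preset identifier to the card in the second interface. A sketch, reusing TagInfo from the previous example; NoteCard and CardRegistry are assumed names.

    interface NoteCard {
        fun appendTag(tag: TagInfo)  // render the tag information inside the card
    }

    class CardRegistry {
        // Second corresponding relation: preset identifier -> first card.
        private val cardsByVideoId = mutableMapOf<String, NoteCard>()

        fun register(videoId: String, card: NoteCard) {
            cardsByVideoId[videoId] = card
        }

        // Second application: look up the card bound to this recording's
        // preset identifier and display the tag information in it.
        fun showTag(videoId: String, tag: TagInfo) {
            cardsByVideoId[videoId]?.appendTag(tag)
        }
    }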
24. The method of claim 22 or 23, wherein the information of the first tag includes a text editing area of the first tag, the method further comprising:
The second application receives first content input by a user in a text editing area of the first tag, and stores the first content as information of the first tag;
The second application displays the first content in a text editing area of the first tag.
25. The method of any of claims 22 to 24, wherein the tag adding instruction instructs adding a screenshot tag, the method further comprising:
the third application, in response to the tag adding instruction input by the user through the first control, sends a screenshot instruction to the SurfaceFlinger, wherein the screenshot instruction carries the layer filtering mark;
the SurfaceFlinger, in response to the screenshot instruction, acquires a screenshot synthesized at a third moment, wherein the third moment is the moment when the third application receives the tag adding instruction;
the SurfaceFlinger sends the screenshot to the third application;
the third application stores the screenshot into a database of the third application according to the uniform resource identifier (URI) of the screenshot;
The third application sends the URI of the screenshot to the second application;
the second application obtains the screenshot from a database of the third application according to the URI of the screenshot;
the second application saves the screenshot to a database of the second application;
the screenshot is further included in the information of the first tag.
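Claim 25 moves the screenshot between applications by URI. On Android this is commonly done with a content URI plus a read grant, as in the sketch below; the FileProvider authority, file layout, and function names are illustrative assumptions, not the claimed mechanism.

    import android.content.Context
    import android.content.Intent
    import android.graphics.Bitmap
    import android.graphics.BitmapFactory
    import android.net.Uri
    import androidx.core.content.FileProvider
    import java.io.File
    import java.io.FileOutputStream

    // Third application: persist the composed screenshot, grant the second
    // application read access, and return the URI of the screenshot.
    fun publishScreenshot(context: Context, screenshot: Bitmap, noteAppPackage: String): Uri {
        val file = File(context.filesDir, "screenshots/${System.currentTimeMillis()}.png")
        file.parentFile?.mkdirs()
        FileOutputStream(file).use { out ->
            screenshot.compress(Bitmap.CompressFormat.PNG, 100, out)
        }
        val uri = FileProvider.getUriForFile(context, "com.example.clip.fileprovider", file)
        context.grantUriPermission(noteAppPackage, uri, Intent.FLAG_GRANT_READ_URI_PERMISSION)
        return uri
    }

    // Second application: fetch the screenshot through the URI and keep its
    // own copy, after which the source copy becomes redundant (see claim 26).
    fun importScreenshot(context: Context, uri: Uri): Bitmap? =
        context.contentResolver.openInputStream(uri)?.use(BitmapFactory::decodeStream)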
26. The method of claim 25, wherein the method further comprises:
the third application sends a task identifier corresponding to the task added with the first label to the second application;
After the second application saves the screenshot to a database of the second application, the method further includes:
the second application returns a processing result to the third application, wherein the processing result carries the task identifier;
the third application deletes the screenshot from the database of the third application according to the task identifier.
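The claim-26 handshake amounts to keying each screenshot job by task identifier and deleting the third application's local copy once the second application returns its processing result; a sketch with assumed names:

    import java.io.File

    class ScreenshotJobs {
        // task identifier -> the third application's local screenshot copy
        private val pending = mutableMapOf<String, File>()

        // Record the job when the screenshot URI is sent to the second app.
        fun track(taskId: String, screenshot: File) {
            pending[taskId] = screenshot
        }

        // The processing result from the second application carries the task
        // identifier; delete the now-redundant local copy.
        fun onProcessingResult(taskId: String) {
            pending.remove(taskId)?.delete()
        }
    }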
27. An electronic device, comprising: a processor, a memory, and an interface;
The processor, the memory and the interface cooperate to cause the electronic device to perform the method of any one of claims 1 to 26.
28. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, causes the processor to perform the method of any of claims 1 to 26.
CN202211467870.0A 2022-11-22 2022-11-22 Video recording method and electronic device Pending CN118075540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211467870.0A CN118075540A (en) 2022-11-22 2022-11-22 Video recording method and electronic device

Publications (1)

Publication Number Publication Date
CN118075540A 2024-05-24

Family

ID=91099569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211467870.0A Pending CN118075540A (en) 2022-11-22 2022-11-22 Video recording method and electronic device

Country Status (1)

Country Link
CN (1) CN118075540A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination