CN116708888A - Video recording method and related device

Video recording method and related device

Info

Publication number
CN116708888A
Authority
CN
China
Prior art keywords
application
interface
terminal device
recording
video
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211466787.1A
Other languages
Chinese (zh)
Inventor
高伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Honor Device Co Ltd
Priority to CN202211466787.1A
Publication of CN116708888A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307: Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334: Recording operations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81: Monomedia components thereof
    • H04N 21/816: Monomedia components thereof involving special video data, e.g. 3D video
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81: Monomedia components thereof
    • H04N 21/8166: Monomedia components thereof involving executable data, e.g. software
    • H04N 21/8193: Monomedia components thereof involving executable data, e.g. software dedicated tools, e.g. video decoder software or IPMP tool

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides a video recording method and a related device, relating to the field of terminal technologies. The method includes: at a first moment, displaying a floating window and an interface of a first application, where the floating window includes an interface of a second application, and the interface of the second application includes a first control and a content input area; when a trigger for the first control is received, recording the content displayed by the terminal device; ending the recording at a second moment, and displaying the recorded video at a first position of the content input area, where the second moment is later than the first moment; at a third moment, receiving first text information input by a user at a second position before the first position, and displaying the first text information at the second position, where the third moment is later than the second moment; and at a fourth moment, when second text information input by the user is received at a third position after the first position, displaying the second text information at the third position, where the fourth moment is later than the third moment. In this way, information recording modes are enriched, and user experience is improved.

Description

Video recording method and related device
Technical Field
The application relates to the technical field of terminals, in particular to a video recording method and a related device.
Background
With the development of terminal devices, more and more terminal devices have an information recording function, which can provide convenience for users' life and work.
In some implementations, when the terminal device displays an interface of an application used for recording information, the user may record information by inputting text or inserting pictures in the interface. Inserting a picture may proceed, for example, as follows: a function button for inserting pictures is provided in the interface of the application; when the user triggers the function button, the gallery where the pictures are located is opened; and when the user triggers a picture in the gallery, the terminal device inserts that picture from the gallery application into the application used for recording information.
However, in the above implementation, the information recording modes are relatively limited, and the user experience is poor.
Disclosure of Invention
An embodiment of the present application provides a video recording method and a related device. A terminal device can record a video through a second application, display the recorded video in a content input area of the second application, and display text information input by a user before and/or after the position of the recorded video. This enriches information recording modes and improves user experience.
In a first aspect, an embodiment of the present application provides a video recording method, including:
at a first moment, displaying a floating window and an interface of a first application, where the floating window floats above the interface of the first application, the floating window includes an interface of a second application, and the interface of the second application includes a first control and a content input area; when a trigger for the first control is received, recording the content displayed by the terminal device; ending the recording at a second moment, and displaying the recorded video at a first position of the content input area, where the second moment is later than the first moment; at a third moment, receiving first text information input by a user at a second position before the first position, and displaying the first text information at the second position, where the second position is located in the content input area, and the third moment is later than the second moment; and at a fourth moment, when second text information input by the user is received at a third position after the first position, displaying the second text information at the third position, where the third position is located in the content input area, and the fourth moment is later than the third moment.
In this way, the terminal device can record the content it displays through the screen recording function of the second application and display the recorded video in the content input area of the second application, and the terminal device can also display text information input by the user in the content input area. Both video and text can thus be displayed in the content input area of the second application, which enriches information recording modes, makes recording more convenient, and improves user experience.
In one possible implementation, the recorded video includes the interface of the first application and does not include the floating window. Since the floating window is not included in the recorded video, the occlusion of the interface of the first application by the floating window in the recorded video is mitigated, improving user experience.
In one possible implementation, when receiving the trigger for the first control, the terminal device further modifies the layer name of the second application from a first layer name to a second layer name; and in the process of recording the content displayed by the terminal device, when the recorded video is composited, the layer of the second application is filtered out according to the second layer name. In this way, by modifying the layer name of the second application, the layer of the second application can be filtered out of the recorded video, so that the recorded video does not include the interface of the second application.
In one possible implementation, when the recording ends, the terminal device further modifies the layer name of the second application from the second layer name back to the first layer name. In this way, the layer name of the second application is restored when the screen recording ends, so that in subsequent scenarios where the layer of the second application does not need to be filtered, the terminal device does not filter it out during layer composition.
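As an illustration of the layer filtering described above, the following is a minimal sketch of name-based layer selection during composition. The class, method, and marker names are hypothetical rather than actual SurfaceFlinger APIs, and the sketch assumes the second layer name is formed by appending a marker to the first layer name.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of name-based layer filtering during screen-recording
// composition; illustrative names only, not actual SurfaceFlinger APIs.
final class RecordingCompositor {
    // Marker assumed to be appended to the second application's layer name
    // when recording starts, forming the "second layer name".
    static final String SKIP_MARKER = "#no-record";

    static final class Layer {
        final String name;
        Layer(String name) { this.name = name; }
    }

    // Returns the layers that should be composited into the recorded video.
    List<Layer> selectLayersForRecording(List<Layer> visibleLayers) {
        List<Layer> selected = new ArrayList<>();
        for (Layer layer : visibleLayers) {
            // Layers carrying the second layer name are filtered out, so the
            // floating window of the second application never reaches the video.
            if (!layer.name.contains(SKIP_MARKER)) {
                selected.add(layer);
            }
        }
        return selected;
    }
}

In this sketch, only the layers whose names do not carry the marker are handed to the video encoder, while on-screen composition continues to include all layers.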
In one possible implementation, after the trigger for the first control is received and before the recording ends, the terminal device displays a preview of the recorded video at the first position; the first position is the position of the cursor in the floating window at the first moment. In this way, during video recording, displaying the preview of the recorded video prompts the user where the recorded video will be located after the recording ends.
In one possible implementation, the interface of the second application further includes a second control, and the method includes: at a fifth moment, receiving a trigger for the second control, and capturing a screenshot of the content displayed by the terminal device to obtain a screenshot image; and displaying the screenshot image at a fourth position of the content input area, where the fourth position is the position of the cursor in the floating window at the fifth moment. In this way, not only the recorded video and text information but also screenshot images can be displayed in the content input area of the interface of the second application, which enriches information recording modes and improves user experience.
In one possible implementation, the screenshot image includes the interface of the first application and does not include the floating window. In this way, the occlusion of the interface of the first application by the floating window in the screenshot image is mitigated, improving user experience.
In one possible implementation, the interface of the second application further includes a third control, and the method includes: at a sixth moment, receiving a trigger for the third control, and displaying text corresponding to the audio in the first application at a fifth position of the content input area, where the fifth position is the position of the cursor in the floating window at the sixth moment. In this way, the text corresponding to the audio in the first application is displayed in the content input area without requiring manual input by the user, which improves the convenience of the second application and improves user experience.
In one possible implementation, the method further includes: receiving a play operation on the recorded video displayed at the first position; and in response to the play operation, displaying the playing recorded video in the floating window without displaying the content that was displayed in the floating window before the play operation was received. In this way, when the recorded video is played, display content with low relevance to the playback is not displayed, which mitigates the occlusion of the playing recorded video by the content previously displayed in the floating window and improves the user experience of watching the recorded video.
In one possible implementation, when the playing recorded video is displayed in the floating window, a fourth control is also displayed; the method further includes: when a trigger for the fourth control is received, the terminal device displays the interface of the first application and the interface of the second application in split screen. In this way, the interface of the second application can be switched from the floating-window display to a split-screen display, which enlarges the display area of the interface of the second application and improves the user experience of watching the recorded video.
In a second aspect, an embodiment of the present application provides a video recording apparatus, where the video recording apparatus may be a terminal device, or may be a chip or a chip system in the terminal device. The apparatus for video recording may include a processing unit and a display unit. The processing unit is configured to implement the first aspect or any method related to processing in any possible implementation manner of the first aspect. The display unit may be a display screen or the like, and the display unit may implement the first aspect or any step related to display in any one of the possible implementations of the first aspect based on the control of the processing unit. When the means for video recording is a terminal device, the processing unit may be a processor. The apparatus for video recording may further comprise a storage unit, which may be a memory. The storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the terminal device implements a method described in the first aspect or any one of possible implementation manners of the first aspect. When the means for video recording is a chip or a system of chips in the terminal device, the processing unit may be a processor. The processing unit executes instructions stored by the storage unit to cause the terminal device to implement a method as described in the first aspect or any one of the possible implementations of the first aspect. The memory unit may be a memory unit (e.g., a register, a cache, etc.) in the chip, or a memory unit (e.g., a read-only memory, a random access memory, etc.) located outside the chip in the terminal device.
The display unit is configured to display, at a first moment, a floating window and an interface of a first application, where the floating window floats above the interface of the first application, the floating window includes an interface of a second application, and the interface of the second application includes a first control and a content input area; the processing unit is configured to record the content displayed by the terminal device when a trigger for the first control is received; the display unit is further configured to end the recording at a second moment and display the recorded video at a first position of the content input area, where the second moment is later than the first moment; the processing unit is further configured to receive, at a third moment, first text information input by a user at a second position before the first position; the display unit is further configured to display the first text information at the second position, where the second position is located in the content input area, and the third moment is later than the second moment; the processing unit is further configured to receive, at a fourth moment, second text information input by the user at a third position after the first position; and the display unit is further configured to display the second text information at the third position, where the third position is located in the content input area, and the fourth moment is later than the third moment.
In one possible implementation, the recorded video includes an interface of the first application and does not include the floating window.
In a possible implementation manner, the processing unit is further configured to, when receiving a trigger for the first control, modify, by the terminal device, a layer name of the second application from a first layer name to a second layer name; and in the process of recording the content displayed by the terminal equipment, filtering out the layers of the second application according to the names of the second layers when the recorded video is synthesized.
In a possible implementation manner, the processing unit is further configured to, when the recording is finished, modify, by the terminal device, a layer name of the second application from the second layer name to the first layer name.
In a possible implementation manner, the display unit is further configured to display a preview of the recorded video at the first position after the trigger for the first control is received and before the recording ends, where the first position is the position of the cursor in the floating window at the first moment.
In a possible implementation manner, the processing unit is further configured to receive, at a fifth moment, a trigger for the second control, and capture a screenshot of the content displayed by the terminal device to obtain a screenshot image; and the display unit is further configured to display the screenshot image at a fourth position of the content input area, where the fourth position is the position of the cursor in the floating window at the fifth moment.
In one possible implementation, the screenshot image includes an interface of the first application and does not include the floating window.
In a possible implementation manner, the processing unit is further configured to receive, at a sixth moment, a trigger for the third control; and the display unit is further used for displaying characters corresponding to the audio in the first application at a fifth position of the content input area, wherein the fifth position is the position of the cursor in the floating window at the sixth moment.
In a possible implementation manner, the processing unit is further configured to receive a play operation of the recorded video displayed at the first location; and the display unit is also used for responding to the playing operation, displaying the recorded video in the playing state in the floating window, and not displaying the content displayed in the floating window before the playing operation is received.
In a possible implementation manner, the display unit is further configured to display, when receiving a trigger to the fourth control, an interface of the first application and an interface of the second application on a split screen by the terminal device.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, the memory being configured to store code instructions, the processor being configured to execute the code instructions to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein a computer program or instructions which, when run on a computer, cause the computer to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method described in the first aspect or any one of the possible implementations of the first aspect.
In a sixth aspect, the present application provides a chip or chip system comprising at least one processor and a communication interface, the communication interface and the at least one processor being interconnected by wires, the at least one processor being adapted to execute a computer program or instructions to perform the method described in the first aspect or any one of the possible implementations of the first aspect. The communication interface in the chip can be an input/output interface, a pin, a circuit or the like.
In one possible implementation, the chip or chip system described above further includes at least one memory, where the at least one memory stores instructions. The memory may be a storage unit within the chip, such as a register or a cache, or may be a storage unit located outside the chip in the terminal device (e.g., a read-only memory, a random access memory, etc.).
It should be understood that the second to sixth aspects of the present application correspond to the technical solution of the first aspect of the present application, and the beneficial effects obtained by each aspect and its corresponding possible implementations are similar; details are not repeated here.
Drawings
FIG. 1 is a schematic diagram of an interactive interface for information recording by a user using a note application according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic software structure of a terminal device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an interaction flow between modules according to an embodiment of the present application;
fig. 5 is an interface interaction schematic diagram of a recorded video according to an embodiment of the present application;
FIG. 6 is a schematic diagram of interface interaction for inserting text when recording video according to an embodiment of the present application;
fig. 7 is a schematic diagram of interface interaction for playing a recorded video according to an embodiment of the present application;
fig. 8 is a second interface interaction diagram for playing a recorded video according to an embodiment of the present application;
FIG. 9 is a schematic diagram of interface interaction for inserting screenshot images in a note application according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an interactive interface for AI identification by an AI identification control according to an embodiment of the application;
FIG. 11 is a schematic diagram of interface interaction for collapsing a function selection box through a zoom-out control according to an embodiment of the present application;
FIG. 12 is a schematic diagram of interface interactions for creating note A according to an embodiment of the present application;
FIG. 13 is a schematic diagram of interface interactions for creating note B according to an embodiment of the present application;
fig. 14 is a flowchart of a method for recording video according to an embodiment of the present application;
fig. 15 is a schematic flow chart of another video recording method according to an embodiment of the present application;
fig. 16 is a schematic hardware structure of another terminal device according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
To facilitate a clear description of the technical solutions of the embodiments of the present application, the following briefly describes some terms and techniques involved in the embodiments of the present application:
1. Layer: a component of an image. An image may include one or more layers, and one layer may correspond to the interface of one application. For example, when the interface of application A is displayed on the screen of the terminal device and a floating window of application B is displayed on the interface of application A, the interface image of the terminal device may include a layer for the interface of application A and a layer for the floating window of application B.
2. Other terms
In the embodiments of the present application, the words "first," "second," and the like are used to distinguish between identical or similar items that have substantially the same function and effect. For example, a first chip and a second chip are merely distinguished as different chips, without limiting their order. Those skilled in the art will appreciate that the words "first," "second," and the like do not limit the quantity or the order of execution, and do not necessarily indicate a difference.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
3. Terminal equipment
The terminal device in the embodiments of the present application may be an electronic device in any form; for example, the electronic device may include a handheld device with an image processing function, a vehicle-mounted device, and the like. For example, some electronic devices are: a mobile phone, a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (mobile internet device, MID), a wearable device, a virtual reality (virtual reality, VR) device, an augmented reality (augmented reality, AR) device, a wireless terminal in industrial control (industrial control), a wireless terminal in self driving (self driving), a wireless terminal in remote medical surgery (remote medical surgery), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), a cellular phone, a cordless phone, a session initiation protocol (session initiation protocol, SIP) phone, a wireless local loop (wireless local loop, WLL) station, a personal digital assistant (personal digital assistant, PDA), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future communication network, or a terminal device in a future evolved public land mobile network (public land mobile network, PLMN), etc., which is not limited in the present application.
By way of example and not limitation, in the embodiments of the present application, the electronic device may also be a wearable device. A wearable device, also called a wearable smart device, is a general term for devices that apply wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not merely a hardware device; it can also implement powerful functions through software support, data interaction, and cloud interaction. Generalized wearable smart devices include devices that are full-featured and large-sized and can implement complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used in combination with other devices such as smartphones, for example, various smart bracelets and smart jewelry for physical sign monitoring.
In addition, in the embodiments of the present application, the electronic device may also be a terminal device in an internet of things (internet of things, IoT) system. IoT is an important component of future information technology development; its main technical feature is connecting things to a network through communication technology, thereby realizing an intelligent network of human-machine interconnection and interconnected things.
The electronic device in the embodiment of the application may also be referred to as: a terminal device, a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, a user equipment, or the like.
In the embodiments of the present application, the electronic device or each network device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system layer. The hardware layer includes hardware such as a central processing unit (central processing unit, CPU), a memory management unit (memory management unit, MMU), and a memory (also referred to as a main memory). The operating system may be any one or more computer operating systems that implement business processing through processes (processes), such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software.
In some implementations, applications for recording information, such as a memo application or a note application, may be installed in the terminal device. For example, a user can record, in text form, experiences or easily forgotten matters at a given moment in such an application, which can facilitate the user's life and work.
By way of example, FIG. 1 illustrates a schematic diagram of an interactive interface for information recording by a user using a notes application in one implementation.
For example, when the terminal device plays video through a certain video play application, the terminal device may display a video play interface as shown by a in fig. 1. When the user wants to make some notes, the notes application may be opened. For example, the user may perform a sliding operation as shown by a in fig. 1, the terminal device receives the sliding operation, and displays an interface as shown by b in fig. 1 in response to the sliding operation.
For example, when the user slides from the right side edge of the terminal device display interface to the middle area of the terminal device display interface, the terminal device may display the application menu icon 101 on the right side of the terminal device interface along with the sliding operation.
An application selection box 102 may be included in the interface shown in fig. 1 b, where the application selection box 102 is displayed on the right side of the display interface of the terminal device, and an icon of a music application, an icon of a gallery application, an icon of a camera application, an icon of a note application, and an icon of a browser application may be included in the application selection box 102.
It will be appreciated that the user may slide down within the application selection box 102 and that the terminal device may display other applications installed in the terminal device under the browser application as the user slides.
As shown in fig. 1 b, when the user clicks an icon of the note application in the application selection box 102, the terminal device may receive a click operation of the user on the icon of the note application and display an interface as shown in fig. 1 c in response to the click operation.
The interface shown in fig. 1 c may include the video playing interface, and a floating window 103 is displayed above the video playing interface, where the floating window 103 includes an interface of the note application. The interface of the note application includes note content 1, note content 2, note content 3, note content 4, note content 5, note content 6, a control for creating a new note, and the like.
As shown in fig. 1 c, when the user clicks the control for creating a new note, the terminal device may receive the user's click operation on the control and, in response to the click operation, display an interface as shown in fig. 1 d.
In the interface shown in fig. 1 d, the floating window 103 includes an interface of a new note, where the interface includes a title input area of the new note, a time of creating the note, a content input area of the new note, a list control, a style control, a picture control, a voice control, and a handwriting control, and a cursor is displayed in the content input area.
The list control can assist the user in inputting text in a list format in the content input area; the style control can set the size, font, color, and the like of the text the user inputs in the content input area; the picture control can insert pictures into the content input area by photographing or by selecting from the gallery; the voice control can convert voice input by the user into text and insert it into the content input area; and the handwriting control can insert handwritten text, symbols, and other content into the content input area.
In the interface shown in fig. 1 d, the user can record information in the newly created note by inputting text and inserting pictures while watching the video. For example, when the user wants to insert a picture in the interface shown in fig. 1 d, the terminal device may receive a click operation of the user on the picture control and, in response, display a gallery control and a photographing control. The terminal device may receive a trigger operation of the user on the gallery control, display the pictures in the gallery, and, when the user selects a picture in the gallery, insert the selected picture into the note application. Alternatively, the terminal device may receive a trigger operation of the user on the photographing control, take a picture through the photographing function of the terminal device, and insert the taken picture into the note application.
It will be appreciated that when the terminal device displays the video playing interface, the application menu icon or the application selection box may also be displayed on the left side of the display interface of the terminal device. When the terminal device displays the video playing interface as shown in fig. 1 a, the user may also slide from the left edge of the display interface to the middle area of the display interface, and the terminal device may, along with the sliding operation, display an application menu icon on the left side of the display interface and then display the application selection box on the left side of the terminal device. The embodiments of the present application do not specifically limit the display position of the application selection box.
However, the form of information recorded by the application in the above implementation is relatively limited, resulting in poor user experience.
In view of this, an embodiment of the present application provides a video recording method in which a screen recording function is provided in a note application used for recording information. When the terminal device plays video content of a video application, the user can, in the note application displayed in a floating window, both record text content and record video content based on the screen recording function of the note application, and the recorded content can be inserted into a note. This enriches and facilitates information recording modes and improves user experience.
In some possible implementations, when the note application records video content, the layer of the floating window where the note application is located may also be filtered out, so that the recorded video content does not include the content in the floating window. It can be understood that an ordinary screen recording application cannot take notes and records all content of the terminal device interface, including the floating window, so the content under the floating window is occluded by the floating window in the recorded video. Compared with an ordinary screen recording application, the screen recording provided in the embodiments of the present application mitigates occlusion in the recorded video obtained during screen recording.
In order to better understand the embodiments of the present application, the following describes the structure of the electronic device according to the embodiments of the present application: by way of example, fig. 2 shows a schematic structural diagram of an electronic device.
The electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the application, the electronic device may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be disposed in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device. In other embodiments of the present application, the electronic device may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The electronic device implements display functions via a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device may implement shooting functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phone book) created during use of the electronic device, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS). The processor 110 performs various functional applications of the electronic device and data processing by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory disposed in the processor. For example, the video recording method of the embodiments of the present application may be performed.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The microphone 170C, also referred to as a "mike" or a "mic", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. At least one microphone 170C may be disposed in the electronic device 100. In other embodiments, two microphones 170C may be disposed in the electronic device 100 to implement a noise reduction function in addition to collecting sound signals. In still other embodiments, three, four, or more microphones 170C may be disposed in the electronic device 100 to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of a conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touch position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may alternatively be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 3 is a software configuration block diagram of a terminal device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, the Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 3, the application package may include applications for cameras, calendars, phones, maps, games, and the like. For example, in the embodiment of the present application, the display of the interface and the interface interaction of the user may be implemented at the application layer.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. For example, in the embodiment of the application, the system side can provide the bottom layer implementation of the shortcut application card, including related operations of creating, managing, removing and the like of the stack of the application program.
As shown in fig. 3, the application framework layer may include a window manager, a resource manager, a notification manager, a content provider, a window management service (window manager service, WMS), a screen capture service (MediaProjectionManager), a view system, a layer composition system (SurfaceFlinger), and the like.
The window manager is used for managing window programs. The window manager may obtain the display screen size, determine if there is a status bar, lock screen, touch screen, drag screen, intercept screen, etc.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and the like. The notification manager may also present notifications in the form of charts or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications in the form of a dialog window on the screen. For example, text information is prompted in the status bar, a prompt tone is made, the terminal device vibrates, or an indicator light blinks.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The window management service is used to determine the title of an application window and synchronize the name of the application window to the layer composition system.
The screen capture service is used to manage the call interfaces for screen capture. The call interfaces managed by the screen capture service may include the screen recording interface MediaProjection and the codec tool interface MediaRecorder. MediaProjection may be used to create a virtual display (VirtualDisplay). The VirtualDisplay in the layer composition system serves as a virtual display, in memory, of the physical screen of the electronic device; it obtains the content of the interface displayed by the terminal device and generates corresponding interface images, thereby realizing a mirror copy of the displayed interface. MediaRecorder may be used to create a Surface container that receives the interface images sent by the virtual display. MediaRecorder may also be used to integrate the frame images in the Surface container with audio data to generate a video.
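The following is a minimal sketch of this capture path using the public Android APIs MediaProjection, VirtualDisplay, and MediaRecorder. The projection permission flow and error handling are omitted, and the display name and parameters are illustrative; this shows the general mechanism rather than the exact implementation of this application.

import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.MediaRecorder;
import android.media.projection.MediaProjection;

final class ScreenRecorder {
    private VirtualDisplay virtualDisplay;
    private MediaRecorder mediaRecorder;

    void start(MediaProjection projection, int width, int height, int dpi,
               String outputPath) throws Exception {
        mediaRecorder = new MediaRecorder();
        // Frames are received through a Surface rather than a camera.
        mediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        mediaRecorder.setVideoSize(width, height);
        mediaRecorder.setOutputFile(outputPath);
        mediaRecorder.prepare();

        // The virtual display mirrors the screen into the recorder's Surface
        // container; composited frames are then encoded into the video file.
        virtualDisplay = projection.createVirtualDisplay(
                "note-recording",                        // illustrative name
                width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                mediaRecorder.getSurface(),              // frames land here
                null, null);
        mediaRecorder.start();
    }

    void stop() {
        mediaRecorder.stop();
        mediaRecorder.release();
        virtualDisplay.release();
    }
}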
The layer composition system is used to render and composite images. The layer composition system may include multiple virtual displays.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part includes the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), graphics processing Libraries (e.g., openGL ES), graphics engines (e.g., SGL), graphics composition, etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording in a variety of commonly used audio and video formats, as well as still image files, and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer may include display drivers, camera drivers, audio drivers, sensor drivers, and the like.
The modules in the software structure shown in fig. 3 cooperate with one another, so that the content displayed by the terminal device can be recorded through the screen recording function provided in the note application. The following describes the interactions between the modules in the video recording method according to the embodiment of the present application with reference to the software structure shown in fig. 3. Fig. 4 shows a schematic diagram of the interaction flow between modules in an embodiment of the present application.
For example, the scenario corresponding to the interaction flow shown in fig. 4 may be: the terminal device displays the video playing interface shown as a in fig. 1, and a floating window is displayed on the video playing interface, where the floating window includes the content of the note application, as shown as d in fig. 1. Unlike the interface content of the note application shown as d in fig. 1, the interface of the second application may include a screen recording control; the interface of the note application may be as shown as b in fig. 5, which is not described herein. When the user wants to record the played video content through the screen recording function of the note application, the video recording method of the terminal device may include the following steps:
S401, the note application receives a trigger for the first control, modifies the layer name of the note application, and synchronizes the modified new layer name to the window management service.
In the embodiment of the application, the note application is an application capable of recording information in the forms of screen recording, screenshot, text input, picture insertion and the like. The interface of the note application can be referred to as the interface displayed in the floating window 501 among the interfaces shown in fig. 5 described below. The first control may be a screen recording control that may be used to trigger a screen recording function of the note application, and the first control may be displayed in an interface of the note application, for example, a screen recording control in the function selection box 502 shown in fig. 5 b.
For example, the layer name corresponding to the note application may include the window name of the note application. For example, if the window name of the note application is aTitle, the layer name corresponding to the interface of the note application may be 1-aTitle, 2-aTitle, or the like. The window name of the note application is not particularly limited in the embodiment of the present application. In the embodiment of the present application, the note application may modify its layer name by modifying its window name. The note application modifying the window name may include: the note application modifies some or all of the characters in the window name to other characters, for example, the window name of the note application may be modified from mTitle to nTitle; or, the note application adds a preset identifier to the window name; or, the note application deletes part of the characters in the window name. The preset identifier is not limited in detail in the embodiment of the present application.
For example, when the note application receives the user's trigger for the first control, it may first determine whether the interface of the note application is displayed through a floating window; when it determines that its interface is displayed through the floating window, the note application may modify its window name and synchronize the modified window name to the window management service.
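A minimal sketch of this renaming step, assuming the floating window was added through WindowManager and that the compositor derives layer names from the window title; the marker string and helper names are hypothetical:

```java
import android.view.View;
import android.view.WindowManager;

/** Sketch: toggle a preset identifier in the floating window's title. */
final class LayerNameMarker {
    // Hypothetical marker; the compositor-side filter looks for this substring.
    static final String RECORDING_MARKER = "ScreenRecorder";

    /** Appends the marker before recording starts. */
    static void markForRecording(WindowManager wm, View floatingRoot,
                                 WindowManager.LayoutParams lp, CharSequence originalTitle) {
        lp.setTitle(originalTitle + RECORDING_MARKER);  // layer name follows the window title
        wm.updateViewLayout(floatingRoot, lp);          // sync the new name via the window manager
    }

    /** Restores the original title after recording ends. */
    static void restoreTitle(WindowManager wm, View floatingRoot,
                             WindowManager.LayoutParams lp, CharSequence originalTitle) {
        lp.setTitle(originalTitle);
        wm.updateViewLayout(floatingRoot, lp);
    }
}
```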
For example, when the terminal device receives a trigger for the first control, a function selection box 503 in the interface shown in c in fig. 5 may be displayed, or the color or shape of the first control may be changed, which is not limited by the embodiment of the present application.
S402, the window management service synchronizes the received new layer name to the layer composition system.
It will be appreciated that the note application may synchronize its new layer name to the layer composition system through the window management service.
S403, the note application invokes the screen capture service.
For example, the note application may invoke the screen capture service in response to receiving an operation for the first control. Alternatively, the note application may invoke the screen capture service after modifying the window name.
S404, the screen capture service creates a virtual display in the layer composition system.
For example, the screen capture service may create a virtual display (VirtualDisplay) in the layer composition system through the screen recording interface MediaProjection.
S405, the layer composition system acquires the content displayed by the terminal device, traverses the layers corresponding to the display interface through the virtual display in the layer composition system, filters out the layers whose names contain the modified layer name of the note application, and composes the remaining layers to generate a frame image.
In the embodiment of the present application, the interface of the note application may correspond to one or more layers, and the video playing interface may correspond to one or more layers. The content displayed by the terminal device and acquired by the layer composition system at least includes the content of the layers corresponding to the video playing interface and the content of the layers corresponding to the floating window.
For example, when layer composition is performed by the virtual display, the virtual display may traverse the layers in collectVisibleLayers of the composition engine, and in the process of traversing the layers, the layers with the modified layer name may be filtered out.
In this way, the layers corresponding to the interface of the note application are filtered out by the virtual display, so that the composed frame image does not include the floating window, and occlusion of the video playing interface by the floating window in the frame image can be avoided.
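The traversal itself runs inside the system's composition engine rather than in application code; the following sketch only illustrates the filtering predicate described above, with a hypothetical marker and a plain list of layer names:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the filtering idea: skip layers whose name carries the marker. */
final class LayerFilter {
    static final String RECORDING_MARKER = "ScreenRecorder";  // hypothetical marker

    /** Returns the layer names the virtual display should compose into the frame image. */
    static List<String> layersToCompose(List<String> visibleLayerNames) {
        List<String> composed = new ArrayList<>();
        for (String layerName : visibleLayerNames) {
            if (layerName.contains(RECORDING_MARKER)) {
                continue;  // the note application's layers are left out of the frame
            }
            composed.add(layerName);
        }
        return composed;
    }
}
```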
S406, the screen capture service records the audio data through MediaRecorder and creates a Surface container.
In the embodiment of the present application, the audio data is obtained by the terminal device recording the system sound, and the system sound is the sound corresponding to the video played on the video playing interface.
Illustratively, after the note application invokes the screen capture service, the screen capture service may record audio data through the codec tool interface MediaRecorder and create a Surface container through MediaRecorder.
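A sketch of a plausible MediaRecorder configuration for this step (encoder settings are illustrative assumptions; note that capturing system playback audio, rather than the microphone used here for simplicity, would require a mechanism such as AudioPlaybackCapture on API 29+ or a privileged audio source):

```java
import android.media.MediaRecorder;
import android.view.Surface;
import java.io.IOException;

/** Sketch: configure MediaRecorder so video frames come from a Surface. */
final class RecorderFactory {

    static MediaRecorder create(String outputPath, int width, int height) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        // Simplification: microphone audio; capturing system playback would need
        // AudioPlaybackCapture (API 29+) or a privileged audio source.
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoSize(width, height);
        recorder.setVideoFrameRate(30);                    // illustrative values
        recorder.setVideoEncodingBitRate(8_000_000);
        recorder.setOutputFile(outputPath);
        recorder.prepare();                                // must precede getSurface()
        return recorder;
    }

    /** The Surface "container" that receives frame images from the virtual display. */
    static Surface inputSurfaceOf(MediaRecorder recorder) {
        return recorder.getSurface();                      // valid between prepare() and stop()
    }
}
```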
S407, the layer composition system sends the frame image to the Surface container.
For example, the layer composition system may send the frame images in the virtual display to the Surface container.
S408, the screen capture service inserts the received first frame image into the content input area of the note application interface.
For example, the screen capture service may insert the first frame image received by the Surface container into a display interface of the note application, and the interface for displaying the first frame image on the interface of the note application may be the interface shown as c in fig. 5 described below.
S409, the note application receives an operation for the stop screen recording control, ends the recording, restores the modified layer name of the note application to the original layer name, and synchronizes the restored layer name to the window management service.
In one possible implementation, the stop screen recording control may be the stop screen recording control in the screen recording function selection box 503 in the interface shown as c in fig. 5, and the operation received by the note application for the stop screen recording control may correspond to the interface shown as c in fig. 5.
In another possible implementation, the stop screen recording control may be the first control after its color or shape has been changed, and the operation received by the note application for the stop screen recording control may be an operation for the color-changed or shape-changed first control.
The manner in which the note application restores the layer name is the reverse of the manner of modifying the layer name described above, and will not be described in detail herein.
In this way, after the screen recording is finished, the window name of the note application is restored to the original name, so that in scenarios where the layers corresponding to the interface of the note application do not need to be filtered, the images generated by the terminal device include the content of the note application's layers.
S410, the window management service synchronizes the received restored layer name to the layer composition system.
The window management service may synchronize the restored window name of the note application to the layer composition system, so that when the layer composition system performs layer composition, the layers corresponding to the interface of the note application no longer need to be filtered out.
S411, the screen capture service composes the frame images and the audio into a recorded video through the codec tool interface MediaRecorder.
Illustratively, MediaRecorder may integrate the frame images in the Surface container with the audio to compose the recorded video.
It can be understood that MediaRecorder may compose the recorded video in real time, or may compose it after the terminal device stops recording, which is not limited in the embodiment of the present application.
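A brief sketch of the corresponding lifecycle, assuming the recorder and virtual display were created as in the earlier sketches (class and method names are hypothetical); with MediaRecorder, muxing happens continuously while recording, and stop() finalizes the file:

```java
import android.hardware.display.VirtualDisplay;
import android.media.MediaRecorder;

/** Sketch: recording lifecycle around the user's start/stop triggers. */
final class RecordingSession {
    private final MediaRecorder recorder;
    private final VirtualDisplay display;

    RecordingSession(MediaRecorder recorder, VirtualDisplay display) {
        this.recorder = recorder;
        this.display = display;
    }

    void start() {
        recorder.start();      // frames and audio are muxed into the file as they arrive
    }

    void stop() {
        recorder.stop();       // finalizes the MP4 container
        recorder.release();
        display.release();     // tear down the virtual mirror display
    }
}
```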
S412, the screen capture service receives a call from the note application, and inserts the recorded video obtained by recording into the content input area of the note application interface.
For example, when the note application receives the user's operation on the stop screen recording control, it may invoke the screen capture service, so that the screen capture service inserts the recorded video into the note application based on the call, and the recorded video is displayed on the interface of the note application.
It will be appreciated that when the screen capture service inserts the recorded video into the note application, the recorded video may be used to replace the first frame image that has been inserted into the note application such that the recorded video is displayed in the location of the first frame image.
When inserting the recorded video into the note application, the screen capture service may also send the generated recorded video to a database corresponding to the note application for storage, so that when the user views the video, the note application can retrieve the recorded video from the database and play it on the interface of the terminal device.
In the embodiment of the present application, when the terminal device displays the video playing interface, video recording can be performed through the screen recording function in the note application, and the generated recorded video does not include the interface content of the note application. Therefore, occlusion of the video playing interface by the interface of the note application in the recorded video can be avoided, improving the user experience.
It can be understood that the note application may also provide a screenshot function. When the screenshot function is used, the layers of the note application may be filtered in a manner similar to steps S401-S405 in fig. 4, so that the screenshot image in the note application does not include the floating window in which the note application is located, and occlusion of the video playing interface by the floating window in the screenshot image is avoided. The implementation is similar to that of the screen recording function in the note application and will not be described in detail.
In the following, the possible interface interactions during video recording and during screenshot in the embodiment of the present application are described in detail with reference to user interfaces. The process of recording the content displayed by the terminal device may correspond to fig. 5 to fig. 6, playing the recorded video obtained by recording may correspond to fig. 7 to fig. 8, and the screenshot process may correspond to fig. 9.
It is to be understood that the application for recording information in the embodiment of the present application may be the note application, a memo application, or the like shown in fig. 1; the application used for recording information is not specifically limited in the embodiment of the present application. For convenience of description, the following user interfaces are described by taking the note application shown in fig. 1 as an example.
Fig. 5 is an interface interaction schematic diagram of a recorded video according to an embodiment of the present application.
For example, as shown in a of fig. 5, the terminal device displays a video playing interface, and a floating window is displayed on the video playing interface, where the floating window includes contents of the note application. The floating window is the same as the content of the floating window 103 shown in c in fig. 1, and the manner of displaying the interface shown in a in fig. 5 by the terminal device can be referred to the manner of displaying the floating window 103 shown in fig. 1, which is not described herein.
As shown in a of fig. 5, when the user clicks a control of a new note, the terminal device may receive a click operation for the new note control and display an interface as shown in b of fig. 5 in response to the click operation.
The interface shown as b in fig. 5 may include a video playing interface and a floating window 501. The floating window 501 includes a content input area and a function selection box 502, and a cursor is displayed in the content input area. The function selection box 502 includes a screen recording control, a screenshot control, an AI (artificial intelligence) recognition control, and a zoom-out control. The floating window 501 may further include the content of the floating window 103 shown as c in fig. 1, which is not described herein.
It will be appreciated that the user may drag the function selection box 502 to any area of the floating window 501. The embodiment of the present application is described only by taking the case where the function selection box 502 is located at the left edge of the floating window 501 as an example, and does not limit the position of the function selection box 502.
As shown in b of fig. 5, when the user wants to record the video played by the terminal device into a note, the terminal device may receive a click operation for the screen recording control, and record the content displayed by the terminal device in response to the click operation.
For example, when the terminal device records the displayed content, the generated first frame image may be inserted as a preview image of the recorded video into the position of the cursor in the floating window 501, and an interface as shown in c in fig. 5 may be displayed. The method for generating the first frame image and inserting the first frame image into the note application by the terminal device may be referred to the related description in fig. 4, and will not be described herein.
It will be appreciated that when the terminal device displays the preview, the cursor may be displayed below the preview, with the cursor obscured by the function selection box in the interface shown as c in fig. 5.
For example, during the recording process, the terminal device may adjust the display position of the preview image in the content input area according to a received operation for adjusting the position of the preview image. For example, the terminal device may receive a drag operation of the user on the preview image and adjust the display position of the preview image in the content input area accordingly. Alternatively, the terminal device may receive the user inserting a blank line above the preview image, inputting text information, or the like, and adjust the display position of the preview image in the content input area. In this way, during screen recording, the user can know in advance where the recorded video to be generated will be placed, which makes it convenient for the user to adjust the position of the recorded video during recording and improves the user experience.
In the interface shown as c in fig. 5, the preview inserted in the floating window includes the video playing interface, and the preview does not include the content in the floating window. Thus, occlusion of the video playing interface by the floating window in the preview image can be avoided.
The interface shown in c in fig. 5 may further include a screen recording function selection box 503, where the screen recording function selection box 503 may include a switch of a microphone, a stop screen recording control, and a screen recording duration. The state of the switch of the microphone may be on or off. In the interface shown as c in fig. 5, the switch of the microphone is in an off state. When the switch of the microphone is in an on state, the terminal equipment can record the external audio while recording the picture and the audio of the played video, and when the switch of the microphone is in an off state, the terminal equipment cannot record the external audio.
It should be understood that the screen recording function selection box 503 may be displayed in the area of the video playing interface or in the area of the floating window, and the embodiment of the present application is not limited to this, and the interface shown in c in fig. 5 is illustrated by taking the area of the screen recording function selection box 503 displayed in the video playing interface as an example.
As shown in c of fig. 5, when the user clicks the stop screen recording control, the terminal device receives a clicking operation for the stop screen recording control, and in response to the clicking operation, stops recording the content displayed by the terminal device, and generates a recorded video. The terminal device may insert the generated recorded video into the floating window and display an interface as shown by d in fig. 5.
In the interface shown as d in fig. 5, the recorded video is displayed at the position where the cursor is located in the floating window in the interface shown as b in fig. 5, i.e., at the position of the preview. A cursor may be displayed below the recorded video, the cursor being obscured by the function selection box in an interface as shown by d in fig. 5.
Any frame of image in the recorded video generated by the terminal device does not include the floating window 501, and the process of generating the recorded video by the terminal device can be shown in fig. 4, which is not repeated herein.
It can be understood that after the terminal device finishes recording and displays the recorded video in the floating window, the terminal device may receive an operation for adjusting the position of the recorded video, and adjust the display position of the recorded video in the content input area. The method for adjusting the display position of the recorded video in the content input area by the terminal device is similar to the method for adjusting the display position of the preview image in the content input area, and will not be described herein.
For example, when the terminal device performs video recording through the screen recording function of the note application, the floating window in the interface shown as c in fig. 5 may alternatively not include the preview; in that case, the generated recorded video is inserted at the position of the cursor in the floating window when the video recording ends. The embodiment of the present application is described only by taking the insertion of the preview image into the floating window as an example, and is not limited thereto.
For example, as shown in fig. 5 b, when the terminal device receives a click operation of the user on the screen recording control, the terminal device may change the color of the screen recording control, or change the shape of the screen recording control, so as to prompt the user that the terminal device is recording video. Thus, the terminal device need not display the screen recording function selection box 503 in the interface as shown in c in fig. 5 at the time of screen recording.
It can be understood that when the color or shape of the screen recording control has been changed and the user clicks the color-changed or shape-changed screen recording control, the terminal device receives the user's operation and, in response to the operation, stops recording the content displayed by the terminal device and generates the recorded video.
As shown in the interface interaction schematic diagram in fig. 5, the terminal device may record the played video content through the screen recording function of the note application and insert the generated recorded video into the note application, which enriches the forms of information recording through the note application, makes recording convenient, and improves the user experience.
For example, as shown in fig. 5, in the process of recording the video content, the user may perform text recording in the new note. Fig. 6 shows a schematic diagram of an interface interaction for inserting text when recording video.
As shown in the interface a in fig. 6, while the terminal device records the played video content and the preview image of the video playing interface is inserted in the floating window, when the terminal device receives a trigger operation on the blank area above the preview image, a cursor (not shown in the figure) may be displayed above the preview image. The terminal device may then receive the text "this video is a slow motion guiding video" input by the user and insert the text at the position of the cursor above the preview image. After the terminal device inserts the text "this video is a slow motion guiding video" above the preview, an interface as shown as b in fig. 6 may be displayed.
It will be appreciated that the user may insert text behind the preview, and the interactive interface is similar to that of fig. 6, and will not be described again here.
In this way, the terminal device can record the played video together with the input text, which enriches the information recording modes and improves the user experience.
For example, after the terminal device inserts the recorded video and the user-input text "this video is a slow motion guiding video" into the floating window, the terminal device may play the recorded video in response to a trigger operation. Fig. 7 shows a schematic diagram of an interface interaction for playing a recorded video.
As shown as a in fig. 7, the video of the video playing interface is in a playing state. When the user clicks the recorded video playing control, the terminal device may receive a click operation for the recorded video playing control and, in response to the click operation, play the recorded video of the note application in the floating window, and may display an interface as shown as b in fig. 7 or an interface as shown as c in fig. 7.
In the interface shown as b in fig. 7, the video of the video playing interface is in a paused state, and the recorded video is in a playing state. In this way, when the terminal device plays the recorded video, the video on the video playing interface is paused, which can reduce interference of the picture and audio of the playing video with the recorded video in the floating window and improves the user experience.
Of course, in the interface shown in b in fig. 7, the terminal device may receive a triggering operation of the video playing control for controlling video playing or pausing, and control the video of the video playing interface to enter a playing state.
In the interface shown as c in fig. 7, the video of the video playing interface is in a playing state. The terminal device may receive a trigger operation on the video playing control for controlling video playing or pausing, control the video of the video playing interface to enter a paused state, and display an interface as shown as d in fig. 7. In this way, putting the video played on the video playing interface into a paused state improves the user's experience of watching the recorded video.
For example, as shown in b or c of fig. 7, the terminal device plays the recorded video in the floating window, and the viewing experience of the user may be affected due to the smaller display area of the floating window. In order to enhance the user's experience of watching recorded video, the display area of the floating window may be adjusted. Fig. 8 shows a second interface interaction diagram for playing a recorded video.
As shown in a of fig. 8, when the user clicks the control 801 above the floating window, the terminal device may receive a clicking operation for the control 801 and display an interface as shown in b of fig. 8 in response to the operation.
In the interface shown in b in fig. 8, a display mode selection box 802 may be displayed above the floating window, where the controls from left to right in the display mode selection box 802 are a full-screen display control, a half-screen display control, a floating window display control, a minimize control, and a close control, respectively. The floating window display control is in the selected state in the interface shown as b in fig. 8.
It may be appreciated that, when the terminal device does not receive a user operation on the controls in the display mode selection box 802 within a preset duration, the display mode selection box 802 may enter a hidden state and no longer be displayed on the interface of the terminal device, and the terminal device resumes displaying the interface shown as a in fig. 8. For example, when the terminal device does not receive a user operation on the controls in the display mode selection box 802 within 2 seconds, the display mode selection box 802 may enter the hidden state.
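One plausible way to implement such an auto-hide behavior (the helper class and the two-second delay are assumptions matching the example above):

```java
import android.os.Handler;
import android.os.Looper;
import android.view.View;

/** Sketch: hide the display-mode selection box after a preset idle time. */
final class AutoHideHelper {
    private static final long HIDE_DELAY_MS = 2000;    // preset duration (example: 2 s)
    private final Handler handler = new Handler(Looper.getMainLooper());

    void scheduleHide(View selectionBox) {
        handler.removeCallbacksAndMessages(null);       // restart the countdown on interaction
        handler.postDelayed(() -> selectionBox.setVisibility(View.GONE), HIDE_DELAY_MS);
    }
}
```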
As shown in b of fig. 8, when the user clicks the half-screen display control in the display mode selection box 802, the terminal device may receive a click operation for the half-screen display control in the display mode selection box 802 and display an interface as shown in c of fig. 8 in response to the click operation.
In the interface shown in c in fig. 8, the terminal device displays the recorded video playing interface and the video playing interface in the floating window in a split screen mode, and the recorded video playing interface and the video playing interface respectively occupy one half of the screen of the terminal device.
As shown in c in fig. 8, the user may click on the full-screen display control in the display mode selection box 802, and the terminal device receives a click operation for the full-screen display control in the display mode selection box 802 and displays an interface as shown in d in fig. 8 in response to the click operation.
In the interface shown as d in fig. 8, the terminal device displays the recorded video playing interface in full screen.
In this way, the size of the display area of the floating window, that is, the size of the recorded video playing interface, can be flexibly adjusted, which improves richness and convenience of use and improves the user experience.
Figs. 5-8 above illustrate video recording using the screen recording control in the function selection box 502 and the interactive interfaces for playing the recorded video. As shown as b in fig. 5, the function selection box 502 further includes a screenshot control, an AI recognition control, and a zoom-out control. The functions corresponding to the screenshot control, the AI recognition control, and the zoom-out control are described below. Fig. 9 illustrates an interface interaction diagram for inserting a screenshot image in the note application.
As shown in a of fig. 9, when the user clicks the screenshot control, the terminal device may receive a clicking operation for the screenshot control, and in response to the clicking operation, acquire content displayed on a screen of the terminal device, and generate a screenshot image, and display an interface as shown in b of fig. 9. The image displayed in the lower left corner of the interface as shown in b in fig. 9 is a screenshot image generated by the terminal device.
As shown in b in fig. 9, the screenshot image generated by the terminal device includes a video playing interface, and does not include the content of the floating window.
Further, the terminal device may insert the generated screenshot image into the position of the cursor in the floating window, and display an interface as shown in c in fig. 9. The method for generating the screenshot image and inserting the screenshot image into the floating window by the terminal device is similar to the method for generating the first frame image and inserting the first frame image into the note application by the terminal device shown in fig. 4 and will not be described herein.
Alternatively, as shown in a of fig. 9, the terminal device may receive a click operation for the screenshot control, and in response to the operation, display an interface as shown in c of fig. 9, instead of displaying an interface as shown in b of fig. 9. The embodiment of the present application is described by taking the interface shown in b of fig. 9 as an example only, and is not limited in any way.
Therefore, the terminal equipment can perform screenshot on the video playing interface displayed by the terminal equipment through the screenshot function corresponding to the screenshot control, so that the information recording mode is enriched, and the user experience is improved.
In a possible implementation, when the terminal device displays the document, the screenshot function corresponding to the screenshot control can be used for screenshot the interface of the document, so that the content displayed by the terminal device can be rapidly recorded, the convenience of information recording is improved, and the user experience is improved.
By way of example, FIG. 10 illustrates a schematic diagram of an interactive interface for AI identification via an AI identification control.
As shown as a in fig. 10, the user may click the AI recognition control; the terminal device receives the click operation for the AI recognition control and, in response, obtains in real time the audio corresponding to the video played by the terminal device and recognizes the audio as corresponding text using the AI recognition function. As the terminal device recognizes the audio as text, the recognized text is displayed at the position of the cursor in the floating window; after the audio recognition ends, the terminal device may display an interface as shown as b in fig. 10.
The interface shown as b in fig. 10 includes the text corresponding to the audio and a cursor, and the cursor is displayed after the text.
In this way, the terminal device can convert the audio corresponding to the played video into text and insert it into the note application in the floating window; the user does not need to manually input the text corresponding to the audio, which improves convenience of use and the user experience.
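The embodiment does not specify the recognition engine. As a stand-in sketch only, the code below uses Android's SpeechRecognizer, which listens to the microphone; recognizing the playback audio itself, as described above, would instead require routing captured system audio into a recognition engine, which this sketch does not cover:

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

/** Sketch: stream recognized text into the note as results arrive. */
final class AiRecognitionHelper {

    interface TextSink { void appendAtCursor(String text); }

    static SpeechRecognizer start(Context context, TextSink sink) {
        SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onPartialResults(Bundle partial) {
                // Insert text at the cursor position as recognition progresses.
                ArrayList<String> texts =
                        partial.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (texts != null && !texts.isEmpty()) sink.appendAtCursor(texts.get(0));
            }
            @Override public void onResults(Bundle results) { /* final text, handled as above */ }
            // No-op callbacks required by the interface:
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
        recognizer.startListening(intent);
        return recognizer;
    }
}
```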
For example, when the function selection box obscures content in the floating window, the function selection box may be collapsed by the zoom-out control. FIG. 11 illustrates an interface interaction diagram for collapsing function selection boxes through a zoom-out control.
As shown in a of fig. 11, when the user clicks the zoom-out control, the terminal device may receive a click operation for the zoom-out control and display an interface as shown in b of fig. 11 in response to the operation.
In the interface shown as b in fig. 11, the function selection box is in a collapsed state.
It will be appreciated that when the terminal device displays the interface shown as b in fig. 11 and the user wants to use a function in the function selection box, the user can click the collapsed function selection box; the terminal device receives this operation and expands the function selection box so that the user can operate the controls in the function selection box.
Fig. 5-11 above illustrate interface interaction situations after each control in the function selection box is triggered. Taking creating the note a as an example, a possible interface interaction in the process of creating the note a will be described in conjunction with functions corresponding to the controls in the function selection box. FIG. 12 shows an interface interaction diagram for creating note A.
The interface shown as a in fig. 12 may include a video playing interface, on which a floating window is displayed, and the floating window may include the content of note a. In the content input area of note a, the cursor is behind the text "skip action guidance video".
By way of example, an interface as shown in a in fig. 12 may be displayed by: when the terminal device displays an interface as shown in b in fig. 5, the terminal device may receive a note title "note a" input by the user and display the "note a" in an area of the note title in the floating window. The terminal device may also receive a text introduction "skip action guide video" of the video played by the terminal device, which is input by the user, and display the "skip action guide video" in the content input area in the floating window.
As shown in a of fig. 12, when the user clicks the screen recording control, the terminal device may receive a clicking operation for the screen recording control, and in response to the operation, record content displayed by the terminal device, and generate a first frame image. The terminal device may insert the generated first frame image as a preview of the recorded video into the floating window. When the terminal device inserts the preview into the floating window, an interface as shown in b in fig. 12 may be displayed.
The interface shown in b in fig. 12 may include a screen recording function selection box, where the recording duration of the screen display content in the screen recording function selection box is 5 seconds. The interface shown in b in fig. 12 also includes a preview image and a cursor that is displayed below the preview image (obscured by the function selection box).
As shown as b in fig. 12, the terminal device may receive the text "the text version corresponding to the action guidance video is as follows:" input by the user, and insert the text input by the user at the position of the cursor. After the terminal device inserts this text below the preview, an interface as shown as c in fig. 12 may be displayed, in which the text input by the user is displayed below the preview.
As shown in c of fig. 12, when the user wants to record the audio content played by the terminal device, the user may click on the AI identification control, and the terminal device receives the click operation of the user on the AI identification control, and in response to the click operation, identifies the audio corresponding to the played video as text, and inserts the identified text into the position of the cursor in the floating window. When the terminal device inserts the text corresponding to the audio, an interface as shown by d in fig. 12 may be displayed. The cursor is displayed after the text "audio content … …" in the interface shown as d in fig. 12.
In this way, both the recorded video and the text corresponding to the video's audio can be inserted into note A; when the user is in a scenario where it is inconvenient to play the recorded video, the text corresponding to the video's audio can be viewed instead, which improves the user experience.
As shown in d of fig. 12, the terminal device may receive the text "highlight moment" input by the user, and insert the text input by the user into the position where the cursor is located. After the terminal device inserts the word "highlight instant" into the position where the cursor is located, an interface as shown by e in fig. 12 may be displayed. In the interface shown as e in fig. 12, the cursor is displayed after the word "highlight instant".
As shown in e in fig. 12, when the user clicks the screenshot control, the terminal device may receive a click operation of the user on the screenshot control, and in response to the operation, acquire content displayed by the terminal device, and generate a screenshot image. The terminal device may insert the generated screenshot image into the floating window where the cursor is located. After the terminal device inserts the generated screenshot image into the floating window, an interface as shown in f in fig. 12 may be displayed. In the interface shown as f in fig. 12, a cursor is displayed below the screenshot image (not shown in the interface shown as f in fig. 12).
As shown as f in fig. 12, when the user clicks the stop screen recording control, the terminal device may receive the user's click operation on the stop screen recording control and, in response to the operation, stop recording the content displayed by the terminal device and generate the recorded video. The terminal device may also insert the generated recorded video at the position of the preview in note A of the floating window. After the terminal device inserts the recorded video into note A of the floating window, an interface as shown as g in fig. 12 may be displayed.
As shown in g of fig. 12, when the user clicks the note completion control, the terminal device may receive a clicking operation of the user with respect to the note completion control, and display an interface as shown by h of fig. 12 in response to the operation. In the interface shown as h in fig. 12, note a is at the top of the plurality of notes displayed in the interface.
For example, in the interface shown as h in fig. 12, when the user clicks the region of note A, the terminal device may receive the user's operation and, in response, display the content of note A in the floating window. The content of note A displayed by the terminal device is similar to the content of the floating window shown as g in fig. 12, and the display mode of the content of note A is not specifically limited in the embodiment of the present application.
By way of example, FIG. 13 shows an interface interaction diagram for creating note B.
The manner in which the terminal device displays the interface shown as a in fig. 13 is similar to the manner in which the interface shown as a in fig. 12 is displayed, and will not be described again.
As shown in a of fig. 13, the terminal device may receive a trigger operation for the screen recording control, record the content displayed by the terminal device, and display an interface as shown in b of fig. 13.
In the interface shown as b in fig. 13, a cursor is displayed below the preview of the recorded video, and the function selection box is in a collapsed state.
It can be understood that the process in which the terminal device goes from the interface shown as a in fig. 13 to the interface shown as b in fig. 13 may include the interface interaction of collapsing the function selection box shown in fig. 11, which is not described herein.
As shown in b in fig. 13, when the recording duration is 12 seconds, the terminal device may receive a trigger operation for stopping the screen recording control, and the terminal device ends video recording to obtain a recorded video, and may display an interface shown as c in fig. 13.
As shown as c in fig. 13, the recorded video is displayed at the position where the preview image was located in the interface shown as b in fig. 13, and the cursor is displayed below the recorded video.
As shown as c in fig. 13, when the terminal device receives the user's click operation on the position before the recorded video, a cursor may be displayed at that position (not shown in the interface shown as c in fig. 13). The terminal device may receive the text "guidance video for jumping motion (second recording)" input by the user at the position before the recorded video, and display an interface as shown as d in fig. 13.
In the interface shown as d in fig. 13, the text "guidance video for jumping motion (second recording)" is displayed at the position before the recorded video, and the cursor is displayed after the text.
As shown as d in fig. 13, when the terminal device receives the user's click operation on the position after the recorded video, a cursor may be displayed at that position (not shown in the interface shown as d in fig. 13). The terminal device may receive the text "the difference from the first recording is … …" input by the user at the position after the recorded video, and display an interface as shown as e in fig. 13.
In the interface shown as e in fig. 13, the text "the difference from the first recording is … …" is displayed at the position after the recorded video, and the cursor is displayed after the text.
In this way, the content displayed by the terminal device can be recorded through the screen recording function of the note application, the recorded video obtained by recording is displayed on the interface of the note application, and text input by the user can be displayed both at the position before the recorded video and at the position after it, which enriches the forms of information recording and improves the user experience.
The following schematically describes the flow of a video recording method according to an embodiment of the present application. Note that "when …" in the embodiment of the present application may refer to the instant at which a situation occurs, or to a period of time after the situation occurs, which is not particularly limited. Fig. 14 is a flowchart of a video recording method according to an embodiment of the present application. As shown in fig. 14, the method includes:
S1401, at a first moment, the terminal device displays a floating window and an interface of a first application, where the floating window floats on top of the interface of the first application, the floating window includes an interface of a second application, and the interface of the second application includes a first control and a content input area.
The first application may be a video playing application for playing video, and the interface of the first application may be a video playing interface as shown in the foregoing embodiment. The second application may be a note application, a memo application, or the like for recording information, and the floating window may correspond to the floating window including the interface of the note application shown in the above-described embodiments, and the embodiment of the present application is not particularly limited to the first application and the second application.
In the embodiment of the present application, the interface displayed by the terminal device at the first time may correspond to the interface shown in b in fig. 5 or the interface shown in a in fig. 13.
S1402, when the terminal device receives a trigger for the first control, the terminal device records the content displayed by the terminal device.
The first control may be the screen recording control displayed on the interface of the note application.
For example, the terminal device receiving the trigger for the first control may be that the terminal device receives a click operation of a user for the screen recording control.
S1403, at the second moment, the recording is finished, and the recorded video obtained by recording is displayed at the first position of the content input area.
Wherein the second time is later than the first time.
In the embodiment of the present application, the first position may be a position where the cursor is located in the content input area of the second application. The terminal device may display the recorded video recorded at the first position of the content input area, which corresponds to the interface shown as d in fig. 5 or the interface shown as c in fig. 13.
At the third time, when the first text information input by the user is received at a second position before the first position, the first text information is displayed at the second position, and the second position is located in the content input area.
Wherein the third time is later than the second time.
In the embodiment of the present application, displaying the first text information at the second position by the terminal device may correspond to the interface shown by b in fig. 12 or the interface shown by d in fig. 13.
At the fourth time, when the second text information input by the user is received at the third position after the first position, the second text information is displayed at the third position, and the third position is located in the content input area.
Wherein the fourth time is later than the third time.
In the embodiment of the present application, displaying the second text information at the third position by the terminal device may correspond to the interface shown in c in fig. 12 or the interface shown in e in fig. 13.
In summary, in the embodiment of the present application, the terminal device may record the content displayed by the terminal device through the screen recording function of the second application and display the recorded video obtained by recording in the content input area of the second application. In addition, the terminal device may display the first text information at a position before the recorded video and the second text information at a position after the recorded video, so that both the recorded video and text information can be displayed on the interface of the second application, which enriches the information recording modes and improves the user experience.
In a possible implementation, the recorded video recorded by the terminal device may include an interface of the first application, and does not include a floating window. For example, a recorded video in the interface shown as d in fig. 5, or a recorded video in the interface shown as c in fig. 13.
In one possible implementation, after the terminal device receives the trigger for the first control and before the recording ends, a preview of the recorded video is displayed at the first position. The first position is the position of the cursor in the floating window at the first moment. The interface in which the preview of the recorded video is displayed at the first position may correspond to the interface shown by c in fig. 5 or the interface shown by b in fig. 13.
In a possible implementation, the interface of the second application further includes a second control, and the method includes: at a fifth moment, receiving a trigger for the second control, and taking a screenshot of the content displayed by the terminal device to obtain a screenshot image; and displaying the screenshot image at a fourth position in the content input area, where the fourth position is the position of the cursor in the floating window at the fifth moment.
In the embodiment of the present application, the second control may be a screenshot control shown in the foregoing embodiment. The interface of the terminal device displaying the screenshot image in the content input area may correspond to the interface shown by c in fig. 9 or the interface shown by f in fig. 12. It will be understood that, the fourth position of the cursor in the interface shown in c in fig. 9 is different from the fourth position of the cursor in the interface shown in f in fig. 12, and the fourth position of the cursor is not specifically limited in the embodiments of the present application.
In one possible implementation, the screenshot image includes an interface of the first application and does not include a floating window.
In a possible implementation, the interface of the second application further includes a third control, and the method includes: at a sixth moment, receiving a trigger for the third control, and displaying text corresponding to the audio of the first application at a fifth position in the content input area, where the fifth position is the position of the cursor in the floating window at the sixth moment.
In the embodiment of the present application, the third control may be an AI identification control shown in the foregoing embodiment, and the triggering for the third control may refer to a clicking operation for the AI identification control in an interface shown in a in fig. 10 or in an interface shown in c in fig. 12. The interface where the terminal device displays the text corresponding to the audio in the first application may correspond to the interface shown by b in fig. 10 or the interface shown by d in fig. 12.
In a possible implementation, the method further includes: receiving a play operation on the recorded video displayed at the first position; and in response to the play operation, displaying the recorded video in a playing state in the floating window, without displaying the content that was displayed in the floating window before the play operation was received.
In the embodiment of the present application, the play operation of the recorded video displayed at the first position may be a click operation for a recorded video play control in the interface shown in fig. 7 a. The interface of the terminal device displaying the recorded video of the play state in the floating window may correspond to the interface shown by b in fig. 7, c in fig. 7, or a in fig. 8.
In one possible implementation, when the recorded video in the playing state is displayed in the floating window, a fourth control is also displayed. The method further includes: when a trigger for the fourth control is received, the terminal device displays the interface of the first application and the interface of the second application in split screen.
In the embodiment of the present application, the fourth control may be a half-screen display control described in the foregoing embodiment. The terminal device receiving the trigger for the fourth control may receive a click operation for the half-screen display control as shown in b in fig. 8. The terminal device split screen display interface of the first application and the interface of the second application may correspond to the interface shown as c in fig. 8.
It may be appreciated that the terminal device may also receive a trigger for a full screen display control, to full screen display an interface of the second application, for example, a recorded video playing interface of the terminal device full screen display note application in the interface shown in c in fig. 8.
In a possible implementation, when receiving the trigger for the first control, the terminal device further modifies the layer name of the second application from a first layer name to a second layer name; and in the process of recording the content displayed by the terminal device, the layers of the second application are filtered out according to the second layer name when the recorded video is composed. The first layer name and the second layer name of the second application are not limited in the embodiment of the present application.
In the embodiment of the present application, the terminal device modifies the layer name in a manner similar to steps S401-S402 in fig. 4, and the method of filtering out the layers of the second application according to the second layer name during composition of the recorded video is similar to step S405 in fig. 4, which is not repeated here.
In a possible implementation, when recording is finished, the terminal device further modifies the layer name of the second application from the second layer name to the first layer name.
In the embodiment of the present application, the method for modifying the layer name of the second application from the second layer name to the first layer name is similar to the steps S409-S410 in fig. 4, and is not repeated here.
To facilitate understanding of the video recording method according to the embodiment of the present application, the internal implementation process of the terminal device during video recording is described below by taking the second application as the super note application NotePad as an example. Fig. 15 is a schematic flowchart of another video recording method according to an embodiment of the present application. As shown in fig. 15, the method includes the following steps:
S1501, the terminal device starts recording video.
In the embodiment of the present application, the triggering operation of the terminal device to start recording video may be shown in the above embodiment, and will not be described herein.
S1502, the terminal device modifies the window name of the super note application in the floating window.
For example, the terminal device may modify the window name of the super note application in the floating window to VideoNotePadScreenRecorder.
The internal interaction process when the terminal device modifies the window name of the super note application may be as shown in fig. 4, which is not described herein.
S1503, during composition of the screen recording video, the terminal device filters out the layers of the super note application in the floating window.
For example, the terminal device may traverse the layers in collectVisibleLayers of the composition engine and filter out the layers whose layer name (layerName) contains the modified window name (title); the specific implementation may be as shown in fig. 4 and is not described herein.
S1504, the terminal equipment finishes recording the video.
In the embodiment of the present application, the triggering operation of ending recording video by the terminal device may be shown in the above embodiment, and will not be described herein.
S1505, the terminal device restores the window name of the super note application in the floating window.
For example, the terminal device may restore the window name of the super note application in the floating window from VideoNotePadScreenRecorder to the name before modification.
That is, when video recording ends, the terminal device restores the window name of the super note application in the floating window to mTitle.
It can be understood that the embodiment of the present application is described only by taking the implementation of modifying the layer name of the super note application as an example. When the video recording method provided by the embodiment of the present application is implemented in other information recording applications, the modified window name may be set according to the actual situation, which is not specifically limited in the embodiment of the present application.
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of a method. To achieve the above functions, it includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the present application may be implemented in hardware or a combination of hardware and computer software, as the method steps of the examples described in connection with the embodiments disclosed herein. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the apparatus implementing the video recording method may be divided into functional modules according to the foregoing method examples; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of the modules in the embodiments of the present application is schematic and is merely a logical function division; other division manners may be used in actual implementation.
Fig. 16 is a schematic diagram of a hardware structure of another terminal device according to an embodiment of the present application. As shown in fig. 16, the terminal device includes a processor 1601, a communication line 1604, and at least one communication interface (a communication interface 1603 is used as an example for illustration in fig. 16).
The processor 1601 may be a general-purpose central processing unit (central processing unit, CPU), a microprocessor, an application-specific integrated circuit (application-specific integrated circuit, ASIC), or one or more integrated circuits configured to control execution of the programs in the solutions of the present application.
Communication line 1604 may include circuitry for communicating information between the components described above.
The communication interface 1603 uses any apparatus of a transceiver type to communicate with another device or a communication network, such as an Ethernet or a wireless local area network (wireless local area network, WLAN).
Possibly, the terminal device may also comprise a memory 1602.
The memory 1602 may be a read-only memory (read-only memory, ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (random access memory, RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), a compact disc read-only memory (compact disc read-only memory, CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the communication line 1604. The memory may also be integrated with the processor.
The memory 1602 is configured to store computer-executable instructions for executing the solutions of the present application, and the execution is controlled by the processor 1601. The processor 1601 is configured to execute the computer-executable instructions stored in the memory 1602, thereby implementing the video recording method provided by the embodiments of the present application.
Possibly, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not limited in particular.
In a particular implementation, as one embodiment, the processor 1601 may include one or more CPUs, such as CPU0 and CPU1 in fig. 16.
In a specific implementation, as an embodiment, the terminal device may include a plurality of processors, such as processor 1601 and processor 1605 in fig. 16. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
Fig. 17 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip 170 includes one or more (including two) processors 1701, communication lines 1702, communication interfaces 1703, and memory 1704.
In some implementations, the memory 1704 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
The methods described above in the embodiments of the present application may be applied to the processor 1701 or implemented by the processor 1701. The processor 1701 may be an integrated circuit chip with a signal processing capability. In an implementation process, the steps of the foregoing methods may be completed by an integrated logic circuit of hardware in the processor 1701 or by instructions in the form of software. The processor 1701 may be a general-purpose processor (for example, a microprocessor or a conventional processor), a digital signal processor (digital signal processing, DSP), an application-specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field-programmable gate array, FPGA) or another programmable logic device, a discrete gate, transistor logic, or a discrete hardware component. The processor 1701 may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application.
The steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable read-only memory (electrically erasable programmable read only memory, EEPROM). The storage medium is located in the memory 1704, and the processor 1701 reads information from the memory 1704 and completes the steps of the foregoing methods in combination with its hardware.
Communication between the processor 1701, the memory 1704 and the communication interface 1703 may be performed through a communication line 1702.
In the above embodiments, the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written in the memory in advance, or may be downloaded in the form of software and installed in the memory.
Embodiments of the present application also provide a computer program product including one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, a coaxial cable, an optical fiber, or a digital subscriber line (digital subscriber line, DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be, for example, a semiconductor medium (for example, a solid state disk (solid state disk, SSD)), or the like.
The embodiment of the application also provides a computer readable storage medium. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Computer readable media can include computer storage media and communication media and can include any medium that can transfer a computer program from one place to another. The storage media may be any target media that is accessible by a computer.
As one possible design, the computer-readable medium may include a compact disc read-only memory (compact disc read-only memory, CD-ROM), a RAM, a ROM, an EEPROM, or other optical disk storage; the computer-readable medium may include a magnetic disk memory or another magnetic disk storage device. Moreover, any connection is properly termed a computer-readable medium. For example, if software is transmitted from a website, a server, or another remote source by using a coaxial cable, an optical fiber cable, a twisted pair, a DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, the optical fiber cable, the twisted pair, the DSL, or the wireless technologies such as infrared, radio, and microwave are included in the definition of the medium. Disk and disc, as used herein, include a compact disc (compact disc, CD), a laser disc, an optical disc, a digital versatile disc (digital versatile disc, DVD), a floppy disk, and a Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that the user information (including but not limited to user equipment information, user personal information, and the like) and the data (including but not limited to data for analysis, stored data, displayed data, and the like) involved in the present application are information and data authorized by the user or fully authorized by all parties. The collection, use, and processing of the related data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose to authorize or refuse.

Claims (13)

1. A video recording method, applied to a terminal device, the method comprising:
at a first moment, displaying a floating window and an interface of a first application, wherein the floating window floats on an upper layer of the interface of the first application, the floating window comprises an interface of a second application, and the interface of the second application comprises a first control and a content input area;
when a trigger for the first control is received, recording content displayed by the terminal device;
ending the recording at a second moment, and displaying a recorded video obtained through the recording at a first position of the content input area, wherein the second moment is later than the first moment;
at a third moment, receiving first text information input by a user at a second position before the first position, and displaying the first text information at the second position, wherein the second position is located in the content input area, and the third moment is later than the second moment; and
at a fourth moment, receiving second text information input by the user at a third position after the first position, and displaying the second text information at the third position, wherein the third position is located in the content input area, and the fourth moment is later than the third moment.
2. The method of claim 1, wherein the recorded video includes the interface of the first application and does not include the floating window.
3. The method of claim 2, wherein when the trigger for the first control is received, the terminal device further modifies a layer name of the second application from a first layer name to a second layer name; and in the process of recording the content displayed by the terminal device, the layer of the second application is filtered out according to the second layer name when the recorded video is synthesized.
4. The method according to claim 3, wherein when the recording is finished, the terminal device further modifies the layer name of the second application from the second layer name back to the first layer name.
5. The method of any of claims 1-4, wherein after the trigger for the first control is received and before the recording is ended, the terminal device displays a preview of the recorded video at the first position; and the first position is a position of a cursor in the floating window at the first moment.
6. The method of any of claims 1-5, wherein the interface of the second application further comprises a second control, and the method comprises:
at a fifth moment, receiving a trigger for the second control, and capturing the content displayed by the terminal device to obtain a screenshot image;
and displaying the screenshot image at a fourth position of the content input area, wherein the fourth position is a position of the cursor in the floating window at the fifth moment.
7. The method of claim 6, wherein the screenshot image includes the interface of the first application and does not include the floating window.
8. The method of any of claims 1-7, wherein a third control is further included in the interface of the second application, the method comprising:
and at a sixth moment, receiving a trigger for the third control, and displaying text corresponding to the audio in the first application at a fifth position of the content input area, wherein the fifth position is a position of the cursor in the floating window at the sixth moment.
9. The method according to any one of claims 1-8, further comprising:
receiving a play operation of the recorded video displayed at the first position;
and in response to the playing operation, displaying the recorded video in a playing state in the floating window, and no longer displaying the content that was displayed in the floating window before the playing operation was received.
10. The method of claim 9, wherein when the recorded video in the playing state is displayed, a fourth control is further displayed in the floating window; and the method further comprises:
when a trigger for the fourth control is received, displaying, by the terminal device, the interface of the first application and the interface of the second application in a split-screen manner.
11. A terminal device, comprising: a memory for storing a computer program and a processor for executing the computer program to perform the method of any of claims 1-10.
12. A computer readable storage medium storing instructions that, when executed, cause a computer to perform the method of any one of claims 1-10.
13. A computer program product comprising a computer program which, when run, causes an electronic device to perform the method of any one of claims 1-10.
CN202211466787.1A 2022-11-22 2022-11-22 Video recording method and related device Pending CN116708888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211466787.1A CN116708888A (en) 2022-11-22 2022-11-22 Video recording method and related device

Publications (1)

Publication Number Publication Date
CN116708888A true CN116708888A (en) 2023-09-05

Family

ID=87832761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211466787.1A Pending CN116708888A (en) 2022-11-22 2022-11-22 Video recording method and related device

Country Status (1)

Country Link
CN (1) CN116708888A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101389103A (en) * 2008-11-04 2009-03-18 飞图科技(北京)有限公司 Method applied on multimedia notepad in mobile phone
CN110825289A (en) * 2019-10-31 2020-02-21 北京字节跳动网络技术有限公司 Method and device for operating user interface, electronic equipment and storage medium
CN111833917A (en) * 2020-06-30 2020-10-27 北京印象笔记科技有限公司 Information interaction method, readable storage medium and electronic device
CN112087657A (en) * 2020-09-21 2020-12-15 腾讯科技(深圳)有限公司 Data processing method and device
CN114115674A (en) * 2022-01-26 2022-03-01 荣耀终端有限公司 Method for positioning sound recording and document content, electronic equipment and storage medium
CN114283427A (en) * 2021-12-17 2022-04-05 合肥讯飞读写科技有限公司 Note mixed arranging method and device, electronic equipment and storage medium
CN114489422A (en) * 2022-01-26 2022-05-13 荣耀终端有限公司 Display method of sidebar and electronic equipment
CN114518824A (en) * 2022-01-26 2022-05-20 维沃移动通信有限公司 Note recording method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination