CN116661635B - Gesture processing method and electronic equipment

Publication number: CN116661635B
Authority: CN (China)
Prior art keywords: handwriting, view, gesture, state, content
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN202211467876.8A
Other languages: Chinese (zh)
Other versions: CN116661635A (en)
Inventors: 张静 (Zhang Jing), 范明超 (Fan Mingchao)
Current assignee: Honor Device Co Ltd
Original assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202410311135.3A (CN118151803A)
Priority to CN202211467876.8A (CN116661635B)
Publication of CN116661635A
Application granted
Publication of CN116661635B

Classifications

    • G06F 3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object
    • G06F 3/0485 — Scrolling or panning
    • G06F 3/04883 — Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes: Engineering & Computer Science; General Engineering & Computer Science; Theoretical Computer Science; Human Computer Interaction; Physics & Mathematics; General Physics & Mathematics; User Interface Of Digital Computer

Abstract

An embodiment of the application provides a gesture processing method and electronic equipment. The method includes: displaying a first interface of a first application; receiving a first gesture input by a user on the first interface; determining a running state of the first application, wherein the running state characterizes the running condition of the functions of the first application; and responding to the first gesture according to the running state. The method can resolve gesture conflicts and improve user experience.

Description

Gesture processing method and electronic equipment
Technical Field
The application relates to the technical field of electronics, in particular to a gesture processing method and electronic equipment.
Background
A note application (APP) is an APP commonly used by users. Currently, a note APP is mainly used to record handwritten content, text content, and pictures. To improve user experience, researchers have begun to work on extending the functionality of the note APP, for example by adding the ability to record video, audio, tables, and animated pictures in the graphics interchange format (GIF).
However, extending the functions of the note APP means that the gestures triggering different functions may be identical, which gives rise to a gesture conflict problem.
Disclosure of Invention
The application provides a gesture processing method and electronic equipment, which can solve the problem of gesture conflict of note APP.
In a first aspect, the present application provides a gesture processing method performed by an electronic device, the method comprising: displaying a first interface of a first application; receiving a first gesture input by a user on the first interface; determining a running state of the first application, wherein the running state characterizes the running condition of the functions of the first application; and responding to the first gesture according to the running state.
Optionally, the first application may be an APP for recording content, such as a note APP, and the first interface may be an editing interface of a note. Optionally, the running state may be used to characterize whether a function is running, and may include the running state of one function or the running states of a plurality of functions.
In this implementation, when a first gesture input by the user on the first interface is received, the running state of the first application is determined, and the first gesture is responded to based on the running state. Because the running state characterizes the running condition of the functions of the first application, it reflects the user's current usage scenario of the first application. Responding according to the running state therefore allows the gesture response to match that scenario, which solves the gesture conflict problem, improves the accuracy of gesture responses, and further improves user experience.
In one possible implementation, the running state includes at least one of an input state and a streaming media state. The input state characterizes the running condition of the handwriting input function of the first application, and the streaming media state characterizes the running condition of the streaming media playback function of the first application. The input state is one of a handwriting state and a non-handwriting state, and the streaming media state is one of a recording playing state and a non-recording playing state.
The handwriting state indicates that the handwriting input function of the first application is running, and the non-handwriting state indicates that it is not running. The recording playing state indicates that the recording playback function of the first application is running, and the non-recording playing state indicates that it is not running.
Optionally, in some other embodiments, the streaming media state may further include a video playing state and a non-video playing state. The video playing state indicates that the video playback function of the first application is running, and the non-video playing state indicates that it is not running.
In this implementation, the input state and the streaming media state characterize the running conditions of the handwriting input function and the streaming media playback function of the first application. Therefore, according to the input state and the streaming media state, it can be determined whether the user's current scenario for the first application is a handwriting input scenario or a recording playback scenario, so that a corresponding response strategy can be selected, which solves the gesture conflict problem, improves the accuracy of gesture responses, and further improves user experience.
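As a purely illustrative sketch (not part of the claimed method), the running state described above could be modelled as two enumerations and a small container object; all names below are assumptions:

```kotlin
// Hypothetical model of the "running state": the input state and the streaming media state.
enum class InputState { HANDWRITING, NON_HANDWRITING }

enum class StreamingMediaState { RECORDING_PLAYING, NOT_RECORDING_PLAYING }

// The running state bundles the operating condition of the relevant functions.
data class RunningState(
    val inputState: InputState,
    val streamingMediaState: StreamingMediaState
)
```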
In one possible implementation, before responding to the first gesture according to the running state, the method further includes: identifying a gesture type of the first gesture, where the gesture type is one of a sliding gesture, a click gesture and a long-press gesture. Responding to the first gesture according to the running state then includes: responding to the first gesture according to the running state and the gesture type.
In an application scenario corresponding to the same function of the first application, different gestures may correspond to different response results. Therefore, in this implementation, by considering both the running state and the gesture type, the response results of different gestures in the same application scenario can be distinguished, which further solves the gesture conflict problem, improves the accuracy of gesture responses, and further improves user experience.
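One common way to distinguish the three gesture types is by the displacement and duration between the down event and the up event. The sketch below is a hedged illustration of that idea, not the patent's recognizer; the thresholds and all identifiers are assumptions:

```kotlin
import kotlin.math.hypot

// Illustrative classification of a gesture into slide / click / long-press.
enum class GestureType { SLIDE, CLICK, LONG_PRESS }

data class TouchPoint(val x: Float, val y: Float, val timeMs: Long)

fun classifyGesture(
    down: TouchPoint,
    up: TouchPoint,
    slideSlopPx: Float = 24f,          // assumed touch slop
    longPressTimeoutMs: Long = 500L    // assumed long-press timeout
): GestureType {
    val moved = hypot(up.x - down.x, up.y - down.y) > slideSlopPx
    return when {
        moved -> GestureType.SLIDE
        up.timeMs - down.timeMs >= longPressTimeoutMs -> GestureType.LONG_PRESS
        else -> GestureType.CLICK
    }
}
```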
In one possible implementation, responding to the first gesture according to the running state and the gesture type includes: if the gesture type is a sliding gesture and the input state is a non-handwriting state, scrolling the first interface.
In this implementation, when the gesture type is a sliding gesture and the input state is a non-handwriting state, it can be determined that the user's specific scenario is sliding to scroll the interface. Thus, in response to the first gesture, the first interface is scrolled. This implementation accurately identifies the slide-to-scroll scenario and distinguishes the sliding gesture used for scrolling from a sliding gesture made in the handwriting state, which resolves the sliding-gesture conflict, allows the first gesture to be responded to accurately, and improves user experience.
In one possible implementation, responding to the first gesture according to the running state and the gesture type includes: responding to the first gesture according to the running state, the drop point position of the first gesture and the gesture type; the drop point position refers to the position of the pressing event corresponding to the first gesture.
In this implementation, the drop point position of the first gesture is also taken into account when responding to the first gesture. The objects located at different drop point positions may differ, and different objects correspond to different application scenarios that require different responses. By responding to the first gesture according to the running state, the drop point position and the gesture type, this implementation can distinguish the application scenarios corresponding to different gesture targets and respond accurately, which further solves the gesture conflict problem and improves user experience.
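A hedged sketch of how the content at the drop point might be looked up is given below; the content categories, the bounds index and every identifier are assumptions made for illustration only:

```kotlin
// Illustrative hit test: what content lies at the drop point (the down-event position).
data class Bounds(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(x: Float, y: Float) = x in left..right && y in top..bottom
}

sealed interface ContentAtPoint {
    data class Handwriting(val recordingMs: Long?) : ContentAtPoint  // may carry a recording duration
    data class WebText(val recordingMs: Long?) : ContentAtPoint
    data class Picture(val id: String) : ContentAtPoint
    object VideoArea : ContentAtPoint
    object Empty : ContentAtPoint
}

fun contentAt(
    dropX: Float,
    dropY: Float,
    index: List<Pair<Bounds, ContentAtPoint>>
): ContentAtPoint =
    index.firstOrNull { (bounds, _) -> bounds.contains(dropX, dropY) }?.second
        ?: ContentAtPoint.Empty
```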
In one possible implementation, responding to the first gesture according to the running state, the drop point position of the first gesture, and the gesture type includes: if the streaming media state is a recording playing state, first handwritten content exists at the drop point position, the first handwritten content has a corresponding first recording duration, and the gesture type is a click gesture or a long-press gesture, displaying the first handwritten content in a preset manner and jumping the voice playing progress according to the first recording duration;
if the streaming media state is a recording playing state, the input state is a non-handwriting state, first webpage content exists at the drop point position, the first webpage content has a corresponding second recording duration, and the gesture type is a click gesture, displaying the first webpage content in a preset manner and jumping the voice playing progress according to the second recording duration.
In this implementation, if the streaming media state is a recording playing state, first handwritten content exists at the drop point position, the first handwritten content has a corresponding first recording duration, and the gesture type is a click gesture or a long-press gesture, it can be determined that the user's specific scenario is locating the recording playback progress through handwritten content. Thus, in response to the first gesture, the first handwritten content is displayed in a preset manner and the voice playing progress is jumped according to the first recording duration. This implementation accurately identifies the scenario of locating the recording playback progress through handwritten content and responds accurately to the gesture in that scenario, which solves the gesture conflict problem and improves user experience.
Meanwhile, in this implementation, if the streaming media state is a recording playing state, the input state is a non-handwriting state, first webpage content exists at the drop point position, the first webpage content has a corresponding second recording duration, and the gesture type is a click gesture, it can be determined that the user's specific scenario is locating the recording playback progress through webpage content. Thus, in response to the first gesture, the first webpage content is displayed in a preset manner and the voice playing progress is jumped according to the second recording duration. This implementation accurately identifies the scenario of locating the recording playback progress through webpage content and responds accurately to the gesture in that scenario, which solves the gesture conflict problem and improves user experience.
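The seeking behaviour described above can be pictured as follows. This is a hedged sketch under the assumption that each piece of content written during playback carries the recording position at which it was created; the player interface, highlight() and all other names are hypothetical:

```kotlin
// Illustrative sketch of "locating the recording playback progress through content".
interface RecordingPlayer { fun seekTo(positionMs: Long) }

fun jumpToContent(recordingMs: Long?, player: RecordingPlayer, highlight: () -> Unit) {
    if (recordingMs == null) return   // content has no associated recording duration
    highlight()                       // display the content in the preset (e.g. highlighted) manner
    player.seekTo(recordingMs)        // jump the voice playback progress to that position
}
```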
In one possible implementation, responding to the first gesture according to the running state, the drop point position of the first gesture, and the gesture type includes: if the input state is a non-handwriting state, the drop point position is positioned in the video area, and the gesture type is a long-press gesture, a first control is displayed in a first interface; the video area is used for displaying videos and content related to the videos, and the first control is used for controlling deletion of the videos.
Optionally, the video area may be the area of a video clip. The video area may be used to display the first frame image of the video clip, the play control of the video clip, the tag corresponding to the video clip, and so on.
In this implementation, if the input state is a non-handwriting state, the drop point position is located in the video area, and the gesture type is a long-press gesture, it can be determined that the user's specific scenario is long-pressing the video area. Thus, in response to the first gesture, the first control is displayed in the first interface. This implementation accurately identifies the long-press-video-area scenario and solves the gesture conflict between that scenario and scenarios such as long-pressing a picture or long-pressing text content, improving user experience.
In one possible implementation, responding to the first gesture according to the running state, the drop point position of the first gesture, and the gesture type includes: if the input state is a non-handwriting state, the streaming media state is a non-recording playing state, a first picture exists at a drop point position, and the gesture type is a click gesture, displaying an original image corresponding to the first picture in a first interface;
if the input state is a non-handwriting state, a first picture exists at the drop point position, and the gesture type is a long-press gesture, a second control is displayed in the first interface, and the second control is used for selecting a processing mode of the first picture.
In this implementation, if the input state is a non-handwriting state, the streaming media state is a non-recording playing state, a first picture exists at the drop point position, and the gesture type is a click gesture, it can be determined that the user's specific scenario is operating on a picture, and since the gesture is a click, the actual scenario is viewing the picture. Thus, in response to the first gesture, the original image corresponding to the first picture is displayed in the first interface. This implementation accurately identifies the picture-viewing scenario and resolves the conflict between its click gesture and the click gestures of scenarios such as locating the recording playback progress through webpage content or entering text editing, improving user experience.
Meanwhile, in this implementation, if the input state is a non-handwriting state, a first picture exists at the drop point position, and the gesture type is a long-press gesture, it can be determined that the user's specific scenario is long-pressing a picture. Thus, in response to the first gesture, the second control is displayed in the first interface. This implementation accurately identifies the long-press-picture scenario and solves the gesture conflict between that scenario and scenarios such as long-pressing the video area or long-pressing text content, improving user experience.
In one possible implementation, responding to the first gesture according to the running state, the drop point position of the first gesture, and the gesture type includes: if the input state is a non-handwriting state, the streaming media state is a recording and playing state, the first text content exists at the drop point position, and the gesture type is a long-press gesture, first prompt information is displayed in the first interface, and the first prompt information is used for prompting to pause recording and playing;
if the input state is a non-handwriting state, the streaming media state is a non-recording playing state, the first text content exists at the drop point position, and the gesture type is a long-press gesture, a third control is displayed in the first interface, and the third control is used for selecting a processing mode of the first text content.
In this implementation, if the input state is a non-handwriting state, the streaming media state is a recording playing state, first text content exists at the drop point position, and the gesture type is a long-press gesture, it can be determined that the user's specific scenario is long-pressing text content while a recording is playing. Thus, in response to the first gesture, the first prompt information is displayed in the first interface. This implementation accurately identifies the long-press-text-content scenario and solves the gesture conflict between that scenario and scenarios such as long-pressing the video area or long-pressing a picture. In addition, because the streaming media state is also identified, long-pressing text content in the recording playing state is distinguished from long-pressing text content in the non-recording playing state, which solves the gesture conflict within the long-press-text-content scenario and improves user experience.
In one possible implementation, responding to the first gesture according to the running state, the drop point position of the first gesture, and the gesture type includes:
if the input state is a non-handwriting state, the streaming media state is a non-recording playing state, a preset object exists at the drop point position, and the gesture type is a click gesture, executing the function of the preset object and displaying, in the first interface, an interface corresponding to the function of the preset object; the preset object is one of a preset control or content in a preset format.
Optionally, the preset control may be, for example, an insert-PDF-file control, a manifest entry mark control, or the like. Content in the preset format includes, for example, a web page link, an email address, and a telephone number.
In this implementation, if the input state is a non-handwriting state, the streaming media state is a non-recording playing state, a preset object exists at the drop point position, and the gesture type is a click gesture, it can be determined that the user's specific scenario is clicking a preset object. Thus, in response to the first gesture, the function of the preset object is executed and an interface corresponding to that function is displayed in the first interface. This implementation accurately identifies the click-preset-object scenario and distinguishes its click gesture from the click gestures of scenarios such as viewing a picture or entering text editing, which solves the click-gesture conflict, allows the first gesture to be responded to accurately, and improves user experience.
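To tie the preceding implementations together, a hedged sketch of the kind of dispatch they describe is given below, reusing the illustrative RunningState, GestureType and ContentAtPoint types from the earlier sketches. The Response type, the branch selection and all names are assumptions, and only a few of the scenarios above are shown:

```kotlin
// Illustrative dispatch of the first gesture from running state, drop-point content and gesture type.
sealed interface Response {
    object ScrollInterface : Response
    data class SeekRecording(val toMs: Long) : Response
    object ShowDeleteVideoControl : Response
    data class ShowPictureOriginal(val id: String) : Response
    object ShowPauseRecordingHint : Response
    object Ignore : Response
}

fun respond(state: RunningState, at: ContentAtPoint, gesture: GestureType): Response = when {
    // Sliding in the non-handwriting state scrolls the first interface.
    state.inputState == InputState.NON_HANDWRITING && gesture == GestureType.SLIDE ->
        Response.ScrollInterface

    // Handwritten content with an associated recording duration locates the playback progress.
    state.streamingMediaState == StreamingMediaState.RECORDING_PLAYING &&
        at is ContentAtPoint.Handwriting && at.recordingMs != null &&
        (gesture == GestureType.CLICK || gesture == GestureType.LONG_PRESS) ->
        Response.SeekRecording(at.recordingMs!!)

    // Long-pressing the video area shows the control for deleting the video.
    state.inputState == InputState.NON_HANDWRITING &&
        at is ContentAtPoint.VideoArea && gesture == GestureType.LONG_PRESS ->
        Response.ShowDeleteVideoControl

    // Clicking a picture while no recording is playing shows the original image.
    state.inputState == InputState.NON_HANDWRITING &&
        state.streamingMediaState == StreamingMediaState.NOT_RECORDING_PLAYING &&
        at is ContentAtPoint.Picture && gesture == GestureType.CLICK ->
        Response.ShowPictureOriginal(at.id)

    // Long-pressing text while a recording is playing prompts the user to pause playback.
    state.inputState == InputState.NON_HANDWRITING &&
        state.streamingMediaState == StreamingMediaState.RECORDING_PLAYING &&
        at is ContentAtPoint.WebText && gesture == GestureType.LONG_PRESS ->
        Response.ShowPauseRecordingHint

    else -> Response.Ignore
}
```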
In one possible implementation, responding to the first gesture according to the running state and the gesture type includes: responding to the first gesture according to the running state, the drop point position of the first gesture, the gesture input mode and the gesture type of the first gesture.
In this implementation, a gesture input mode of the first gesture is added when responding to the first gesture. Optionally, the gesture input mode may include finger input and stylus input.
In some embodiments, the note APP may also have the following function: in the non-handwriting state, when a click gesture is input with a stylus, the note APP automatically switches the input state to the handwriting state. Therefore, for a click gesture, combining the gesture input mode allows the "stylus click" scenario to be distinguished from other scenarios, so that the gesture response can be made accurately, which further solves the gesture conflict problem and improves user experience.
In one possible implementation, responding to the first gesture according to the running state, the drop point position of the first gesture, the gesture input mode of the first gesture, and the gesture type includes: if the input state is a non-handwriting state, the gesture input mode is stylus input, no preset object exists at the drop point position, and the gesture type is a click gesture, switching the input state to the handwriting state and displaying the track corresponding to the first gesture as handwritten content on the first interface; the preset object is one of a preset control or content in a preset format.
In this implementation, if the input state is a non-handwriting state, the gesture input mode is stylus input, no preset object exists at the drop point position, and the gesture type is a click gesture, it can be determined that the user's specific scenario is the "stylus click" scenario. Thus, in response to the first gesture, the input state is switched to the handwriting state, and the track corresponding to the first gesture is displayed on the first interface as handwritten content. This implementation accurately identifies the "stylus click" scenario and distinguishes its click gesture from the click gestures of scenarios such as viewing a picture or entering text editing, which solves the click-gesture conflict, allows the first gesture to be responded to accurately, and improves user experience.
In one possible implementation, responding to the first gesture according to the running state includes: if the input state is the handwriting state, displaying the track of the first gesture as handwritten content on the first interface.
In this implementation, as long as the input state is determined to be the handwriting state, the user's specific scenario can be determined to be inputting a gesture in the handwriting state, and the track of the first gesture is displayed as handwritten content on the first interface. This implementation simply and directly identifies the scenario of inputting a gesture in the handwriting state, and resolves the conflicts of click, slide and long-press gestures between the handwriting state and the non-handwriting state, improving user experience.
In one possible implementation, the first interface includes a first handwriting view, a first webpage view and a scroll view; the first handwriting view is overlaid on top of the first webpage view, and the scroll view wraps the first handwriting view and the first webpage view. The first handwriting view is used to display handwritten content, the first webpage view is used to display webpage content, and the scroll view is used to scroll the first handwriting view and the first webpage view.
Optionally, the first handwriting view and the first webpage view may form a view group that is wrapped by the scroll view.
Optionally, the webpage content may include text, still pictures, GIF animated pictures, video, audio, and the like.
In this implementation, the first interface includes a first handwriting view and a first webpage view, with the first handwriting view overlaid on top of the first webpage view. The first handwriting view can display handwritten content, and the first webpage view can display various types of content such as text, still pictures, GIF animated pictures, video and audio. Therefore, the first application can record and display various kinds of content, improving user experience. Moreover, because the first handwriting view and the first webpage view are wrapped by the scroll view, both views can be scrolled through the scroll view, which solves the problem of scrolling the interface when the webpage view and the handwriting view are taller than the visible area.
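On Android, one plausible (but purely illustrative) way to assemble this three-layer structure is shown below. HandwritingView is a hypothetical custom view; the sketch is an assumption about an implementation style, not the patent's actual code:

```kotlin
import android.content.Context
import android.view.View
import android.webkit.WebView
import android.widget.FrameLayout
import android.widget.ScrollView

// Hypothetical custom view that would draw handwritten strokes in onDraw(); left as a stub here.
class HandwritingView(context: Context) : View(context)

fun buildEditorView(context: Context): ScrollView {
    val webPageView = WebView(context)               // displays web page content (text, pictures, video, ...)
    val handwritingView = HandwritingView(context)   // displays handwritten strokes
    val group = FrameLayout(context).apply {         // later children are drawn on top of earlier ones,
        addView(webPageView)                         // so the handwriting view covers the web page view
        addView(handwritingView)
    }
    // The scroll view wraps the view group and scrolls both layers together.
    return ScrollView(context).apply { addView(group) }
}
```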
In one possible implementation, the scroll view is located in a preset area of the screen of the electronic device, and displaying the first interface of the first application includes: loading an initialized webpage view and an initialized handwriting view in response to a first operation of the user, where the first operation instructs opening of the first interface; acquiring all webpage content and all handwritten content, where all the webpage content is all the webpage content to be displayed in the preset area, and all the handwritten content is all the handwritten content to be displayed in the preset area; rendering all the webpage content into the initialized webpage view, and expanding the initialized webpage view to obtain a second webpage view, where the second webpage view includes the first webpage view; writing first content to be displayed into the initialized handwriting view to obtain the first handwriting view, where the first content to be displayed is the handwritten content, among all the handwritten content, that is to be displayed in the preset area to form the first interface; determining the height of the second webpage view; calculating the height of a second handwriting view according to all the handwritten content, where the second handwriting view is the handwriting view containing all the handwritten content; and initializing the scroll view according to the height of the second webpage view and the height of the second handwriting view.
The first content to be displayed may be the handwriting content 1 to be displayed in the specific embodiment.
Optionally, in the case where the first application is a note APP, the preset area may be a content editing area.
In this implementation, display of the first interface containing multiple layered views is initialized. After the initialized webpage view is loaded, all the webpage content is rendered in one pass, so that when the user later performs a scroll operation, the corresponding area of the second webpage view is displayed directly without further rendering, which reduces stuttering while the interface scrolls and improves user experience.
Moreover, not all of the handwritten content is written at once; only the content to be displayed currently is written, and the rest is written later as the interface scrolls. That is, handwritten content is written into the handwriting view step by step along with scrolling. Because writing handwritten content consumes more resources, the method of this implementation saves the resources of the electronic device and improves its operating efficiency.
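A hedged sketch of this initialization strategy, assuming an Android WebView-based implementation, is given below. The Stroke model, the HandwritingSurface abstraction and all other names are illustrative assumptions:

```kotlin
import android.webkit.WebView

// Illustrative handwritten-stroke model and a minimal handwriting surface abstraction.
data class Stroke(val topPx: Int, val bottomPx: Int /*, points ... */)

interface HandwritingSurface {
    fun drawStrokes(strokes: List<Stroke>)
}

fun initEditor(
    webView: WebView,
    handwriting: HandwritingSurface,
    allHtml: String,
    allStrokes: List<Stroke>,
    visibleHeightPx: Int
) {
    // Render all web page content in one pass; later scrolling only reveals already-rendered areas.
    webView.loadDataWithBaseURL(null, allHtml, "text/html", "utf-8", null)

    // Write only the strokes that fall inside the initially visible (preset) area; the rest
    // are written incrementally as the user scrolls.
    handwriting.drawStrokes(allStrokes.filter { it.topPx < visibleHeightPx })
}
```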
In one possible implementation, the method further includes: in response to a scroll operation by the user, acquiring second content to be displayed, where the second content to be displayed is the handwritten content that, on the basis of the first handwriting view, is to be additionally displayed in the preset area to form the scrolled interface; expanding the height of the first handwriting view according to the second content to be displayed; writing the second content to be displayed into the expanded first handwriting view to obtain a third handwriting view; and scrolling the second webpage view and the third handwriting view.
The second content to be displayed may be handwriting content 2 to be displayed in a specific embodiment.
In this implementation, the second content to be displayed is acquired and written in response to the user's scroll operation; that is, handwritten content is written into the handwriting view step by step as the interface scrolls, which saves the resources of the electronic device and improves its operating efficiency.
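The incremental writing can be pictured as follows. This is an assumed sketch only; the types are redeclared here so the example stands alone, and extendHeightTo() is a hypothetical method of the illustrative surface:

```kotlin
// Illustrative sketch: as the interface scrolls, the handwriting view is extended and only
// the newly revealed strokes are written.
data class Stroke(val topPx: Int, val bottomPx: Int)

interface HandwritingSurface {
    fun extendHeightTo(heightPx: Int)
    fun drawStrokes(strokes: List<Stroke>)
}

fun onInterfaceScrolled(
    handwriting: HandwritingSurface,
    allStrokes: List<Stroke>,
    writtenBottomPx: Int,     // bottom edge of the handwriting already written
    newVisibleBottomPx: Int   // bottom edge of the area visible after scrolling
): Int {
    if (newVisibleBottomPx <= writtenBottomPx) return writtenBottomPx
    handwriting.extendHeightTo(newVisibleBottomPx)
    handwriting.drawStrokes(allStrokes.filter { it.topPx in writtenBottomPx until newVisibleBottomPx })
    return newVisibleBottomPx
}
```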
In one possible implementation, the method further includes: receiving second handwriting content input by a user under the condition that the input state of the first application is handwriting state; expanding the third handwriting view according to the input position of the second handwriting content, and writing the second handwriting content into the expanded third handwriting view to obtain a fourth handwriting view; determining a target view height according to the height of the fourth handwriting view in response to the operation of the user exiting the handwriting state; if the height of the target view is greater than or equal to the height of the second webpage view, rendering a line-feed symbol in the second webpage view, and expanding the height of the second webpage view along with rendering to obtain a third webpage view; the third web page view has a height equal to the target view height.
In this implementation, when the user exits the handwriting state and the webpage view is shorter than the handwriting view, line-feed characters are automatically rendered into the webpage view so that its height expands automatically. As a result, when the user later edits webpage content in the webpage view, the cursor can be displayed directly at the last edited position and the user can continue the previous editing, which is convenient and improves user experience.
In one possible implementation, determining the target view height from the height of the fourth handwriting view includes: if the height of the fourth handwriting view is an integer multiple of the preset height, determining the height of the fourth handwriting view as the target view height; if the height of the fourth handwriting view is not an integer multiple of the preset height, expanding the height of the fourth handwriting view to obtain a fifth handwriting view, and determining the height of the fifth handwriting view as the target view height. The height of the fifth handwriting view is the minimum of the candidate heights, where a candidate height is larger than the height of the fourth handwriting view and is an integer multiple of the preset height.
Optionally, the preset height may be the preset height of one page of the handwriting view. In a specific embodiment, the height of one page of the handwriting view may be the height of the scroll view. When the height of the handwriting view is an integer multiple of the preset height, the handwriting view is an integer-page handwriting view.
In this implementation, when the handwriting view is not an integer-page handwriting view, its height is expanded so that it becomes an integer-page handwriting view, and the webpage view is then expanded according to the expanded handwriting view. Thus, both the handwriting view and the webpage view are integer-page views, which reduces the number of view expansions and saves the power consumption of the electronic device.
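The integer-page rule above amounts to rounding up to the nearest multiple of the page height. The sketch below is an illustrative reading of that rule, with all names assumed:

```kotlin
// Hedged sketch of the integer-page rule: the target view height is the handwriting view
// height rounded up to the nearest multiple of the preset page height.
fun targetViewHeight(handwritingHeightPx: Int, pageHeightPx: Int): Int =
    if (handwritingHeightPx % pageHeightPx == 0) handwritingHeightPx
    else (handwritingHeightPx / pageHeightPx + 1) * pageHeightPx
```

For example, with a preset page height of 1000 px, a 2300 px handwriting view would be expanded to 3000 px before the webpage view is compared against it.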
In one possible implementation, determining the target view height from the height of the fourth handwriting view includes: the height of the fourth handwriting view is determined as the target view height.
In this implementation, no integer-page check is performed on the handwriting view; the current handwriting view is used directly as the final handwriting view. Therefore, the user's editing position is displayed at the very bottom of the scroll view, which makes it easy for the user to know where the edited content ends and improves user experience.
In one possible implementation, the method further includes: responding to a second operation of a user, and if the height of the third handwriting view is smaller than that of the second webpage view, expanding the height of the third handwriting view to obtain a sixth handwriting view; the height of the sixth handwriting view is equal to the height of the second webpage view; the second operation is for instructing the first application to enter a handwriting state.
In this implementation, the height of the current handwriting view is checked on entering the handwriting state, and if it is smaller than the height of the current webpage view, the handwriting view is expanded to the height of the webpage view. Therefore, when the handwriting state is entered, the handwriting editing position is automatically set at the last edited position and the user does not need to scroll the screen manually, improving user experience.
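A minimal sketch of this behaviour, as an assumption about one way to express it rather than the patent's implementation:

```kotlin
// On entering the handwriting state, a handwriting view shorter than the web page view is
// expanded to the web page view's height, so handwriting can continue at the last edited position.
fun handwritingHeightOnEnterHandwriting(handwritingHeightPx: Int, webPageHeightPx: Int): Int =
    maxOf(handwritingHeightPx, webPageHeightPx)
```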
In one possible implementation, the first application includes an interface module, a handwriting view component, a webpage view component, a scroll view component, a JavaScript (JS) interface rendering module, and a storage module.
The handwriting view component is used for realizing the display of the handwriting view; the webpage view component is used for realizing the display of the webpage view. The JS interface rendering module is used for rendering data to the webpage view so as to display webpage contents in the webpage view. The scrolling view component is for enabling scrolling of sub-views of the scrolling view.
In one possible implementation, displaying a first interface of a first application includes:
in response to a first operation by the user, the interface module sends a first load instruction to the scroll view component, where the first load instruction instructs loading of the views of the first interface. The scroll view component, in response to the first load instruction, sends a webpage view load instruction to the webpage view component, where the webpage view load instruction instructs loading of the first webpage view. The webpage view component, in response to the webpage view load instruction, initializes a webpage view to obtain the initialized webpage view. The scroll view component sends a first handwriting view load instruction to the handwriting view component, where the first handwriting view load instruction instructs loading of the first handwriting view. The handwriting view component, in response to the first handwriting view load instruction, initializes a handwriting view to obtain the initialized handwriting view. The interface module queries the storage module for all webpage content and all handwritten content corresponding to the preset area. The storage module sends all the webpage content corresponding to the preset area to the interface module. The interface module sends all the webpage content to the JS interface rendering module, and the JS interface rendering module renders all the webpage content into the initialized webpage view. As rendering proceeds, the webpage view component expands the height of the initialized webpage view to obtain the second webpage view, and sends the height of the second webpage view to the scroll view component. The interface module sends all the handwritten content to the handwriting view component, which reads all the handwritten content into memory. The handwriting view component determines the first content to be displayed from all the handwritten content, and writes the first content to be displayed into the initialized handwriting view to obtain the first handwriting view. The handwriting view component calculates the height of the second handwriting view from all the handwritten content and sends that height to the scroll view component. The scroll view component initializes the scroll view based on the height of the second webpage view and the height of the second handwriting view.
In one possible implementation, the method further includes: in response to operation of the user scrolling interface, the interface module sends a scroll instruction to the scroll view component. The scroll view component sends a second handwriting view load instruction to the handwriting view component in response to the scroll instruction. The second handwriting view loading instruction is used for indicating loading of a third handwriting view, and the third handwriting view comprises handwriting content in the first handwriting view and second content to be displayed. And the handwriting view component responds to the second handwriting view loading instruction, expands the height of the first handwriting view, writes second content to be displayed into the expanded first handwriting view, and obtains a third handwriting view. The scroll view component scrolls the second web page view and the third handwriting view.
In a second aspect, the present application provides an apparatus, which is included in an electronic device, and which has a function of implementing the electronic device behavior in the first aspect and possible implementations of the first aspect. The functions may be realized by hardware, or may be realized by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above. Such as a receiving module or unit, a processing module or unit, etc.
In one embodiment, the apparatus comprises: the device comprises a display module, a receiving module, a determining module and a response module;
the display module is used for displaying a first interface of a first application; the receiving module is used for receiving a first gesture input by a user on a first interface; the determining module is used for determining the running state of the first application, and the running state is used for representing the running condition of the functions of the first application; the response module is used for responding to the first gesture according to the running state.
In a third aspect, the present application provides an electronic device, the electronic device comprising: a processor, a memory, and an interface; the processor, the memory and the interface cooperate with each other such that the electronic device performs any one of the methods of the technical solutions of the first aspect.
In a fourth aspect, the present application provides a chip comprising a processor. The processor is configured to read and execute a computer program stored in the memory to perform the method of the first aspect and any possible implementation thereof.
Optionally, the chip further comprises a memory, and the memory is connected with the processor through a circuit or a wire.
Further optionally, the chip further comprises a communication interface.
In a fifth aspect, the present application provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor causes the processor to perform any one of the methods of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising: computer program code which, when run on an electronic device, causes the electronic device to perform any one of the methods of the solutions of the first aspect.
Drawings
FIG. 1 is a schematic structural diagram of an example note editing interface according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an example note editing interface according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an example electronic device 100 according to an embodiment of the present application;
FIG. 4 is a block diagram of the software architecture of an example electronic device 100 according to an embodiment of the present application;
FIG. 5 is a schematic diagram of interface changes when opening a note according to an embodiment of the present application;
FIG. 6 is a flowchart of an example interface display method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another example note editing interface according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a further example note editing interface according to an embodiment of the present application;
FIG. 9 is a schematic diagram of interface changes when locating the recording playback progress through content according to an embodiment of the present application;
FIG. 10 is a schematic diagram of interface changes when a stylus inputs a gesture in the non-handwriting state according to an embodiment of the present application;
FIG. 11 is a schematic diagram of interface changes when a gesture is input in the handwriting state according to an embodiment of the present application;
FIG. 12 is a schematic diagram of interface changes when operating content in a video clip according to an embodiment of the present application;
FIG. 13 is a schematic diagram of interface changes when operating a picture according to an embodiment of the present application;
FIG. 14 is a schematic diagram of interface changes when operating a preset control according to an embodiment of the present application;
FIG. 15 is a schematic diagram of interface changes when operating content in a preset format according to an embodiment of the present application;
FIG. 16 is a schematic diagram of interface changes when entering text editing according to an embodiment of the present application;
FIG. 17-1 is a flowchart of an example gesture processing method according to an embodiment of the present application;
FIG. 17-2 is a flowchart of another gesture processing method according to an embodiment of the present application;
FIG. 18 is a flowchart of another gesture processing method according to an embodiment of the present application;
FIG. 19 is a flowchart of another interface display method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more.
The terms "first," "second," "third," and the like are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first," "second," or "third" may explicitly or implicitly include one or more such features.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more, but not all, embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
For ease of understanding, the terms and concepts involved in the embodiments of the application will first be described.
1. Note editing interface
The note editing interface, also referred to simply as the editing interface, is an interface in which note content can be edited. The interface presented to a user after opening a note through the note APP, the interface presented while the user edits content, and the interface presented after the user finishes editing and taps the save control to store the content are all called note editing interfaces. For convenience of description, the interface in which the user has not performed any operation after opening a note is referred to as the initial editing interface in the embodiments of the present application.
2. Content editing area
The note editing interface includes a content editing area. The content editing area refers to an area for editing and recording note content. In the embodiment of the application, the content editing area can be used for editing and recording handwritten content, text content, still pictures, video, audio, GIF dynamic pictures, tables and the like.
3. Handwritten state and non-handwritten state
In the embodiment of the present application, the input state of the note APP can be divided into a handwriting state and a non-handwriting state. The handwriting state indicates that the handwriting input function of the note APP is enabled, and the user can write characters or draw figures in the content editing area with a finger or a stylus. The non-handwriting state indicates that the handwriting input function is not enabled, and the user cannot write characters or draw figures in the content editing area with a finger or stylus. In the non-handwriting state, text content can be typed into the content editing area through a soft keyboard, and content such as pictures, videos, audio or tables can be inserted into the content editing area through an insert control.
4. Handwritten content and web page content
The handwriting content refers to content input when the note APP is in a handwriting state. The handwritten content mainly comprises handwritten strokes. The web page content refers to content input when the note APP is in a non-handwriting state. Web page content may include text content, pictures, video, audio, tables, and the like.
5. Drop point location of gesture
It will be appreciated that a user may input gestures on the screen with a finger or a stylus. One gesture corresponds to a set of (multiple) touch events. The set of touch events includes a press event (down event) and a lift event (up event), and optionally at least one move event. The drop point position of a gesture refers to the position of the press event corresponding to the gesture. In this embodiment, the drop point position of a gesture may also be referred to simply as the drop point position.
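On Android, the drop point and the input mode (finger or stylus) can both be read from the down event of the touch stream. The sketch below is an illustrative assumption about how that could be done; the DropPoint type and the surrounding logic are not from the patent:

```kotlin
import android.view.MotionEvent

// The drop point is the position of the ACTION_DOWN event; the tool type of that event
// distinguishes finger input from stylus input.
data class DropPoint(val x: Float, val y: Float, val isStylus: Boolean)

fun onTouchEvent(event: MotionEvent, current: DropPoint?): DropPoint? =
    if (event.actionMasked == MotionEvent.ACTION_DOWN)
        DropPoint(event.x, event.y, event.getToolType(0) == MotionEvent.TOOL_TYPE_STYLUS)
    else current  // move/up events do not change the drop point of the gesture
```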
Currently, a note APP is mainly used to record handwritten content, text content, and pictures, where the pictures are still pictures. However, as users' needs grow, the functions of the note APP need to be extended so that it can record more diverse content, for example video, audio, tables, GIF animated pictures, and the like. The video recorded by the note APP may be an edited video segment, and is therefore also referred to as a video clip.
For the extension of the note APP's functions, on the one hand, entry points for acquiring the new types of content need to be added to the note APP; on the other hand, it must be considered how to display the various types of content on the interface at the same time, that is, how to display handwritten content, text content, still pictures, video, audio, GIF animated pictures, tables and the like on the note editing interface.
For the second of these problems, an embodiment of the present application provides an interface display method that divides the content input by the user into two types, handwritten content and webpage content; the handwritten content is displayed through a handwriting view, and the webpage content is displayed through a web page view (webview). The webpage view is compatible with various kinds of webpage content, which solves the display problem of the note editing interface. In addition, the handwriting view and the webpage view are wrapped by a scroll view (scrollview), which solves the problem of scrolling the interface during display.
Exemplary, fig. 1 is a schematic structural diagram of an exemplary note editing interface according to an embodiment of the present application. As shown in fig. 1, in the embodiment of the present application, the editing interface of the note includes a handwriting view 101, a web page view 102, and a scroll view 103. Wherein the handwriting view 101 is overlaid on top of the web page view 102. The handwriting view 101 and the web page view 102 constitute a view group (view group) which is wrapped by the scroll view 103 as a child view of the scroll view 103. It will be appreciated that handwriting view 101 and web page view 102 are layered structures, and thus in some embodiments handwriting view is also referred to as handwriting layer, and web page view is also referred to as web page view layer. The scrolling view has the role of a container and is thus also referred to as a scrolling view container or the like.
In this application, the names are merely examples, and are not limiting. In addition, for convenience of distinction, in the drawings of the present application, handwriting views are illustrated as filled diamonds or rectangles, web page views and scrolling views are illustrated as unfilled diamonds or rectangles, and the description thereof will not be repeated.
The handwriting view 101 is used to display handwritten content. The web page view 102 is used to display webpage content. Optionally, the web page view 102 may be used to display content edited via the hypertext markup language (HTML), including but not limited to text, still pictures, animated pictures, audio, video, tables, and the like. Optionally, the HTML language may be, for example, the 5th-generation HTML (also referred to as HTML5.0, or H5 for short).
The scroll view 103 is used to implement scrolling of its sub-views. In this embodiment, the scroll view 103 is used to scroll the handwriting view 101 and the web page view 102. Optionally, the scroll view 103 may be placed in the content editing area of the note editing interface. Of course, the scroll view 103 may also correspond to all the content in the note editing interface, that is, the scroll view may be the outermost view of the note editing interface; the embodiments of the present application do not limit this. The following embodiments are described taking as an example a scroll view placed in the content editing area of the note editing interface.
For example, FIG. 2 is a schematic diagram of an example note editing interface according to an embodiment of the present application. An actual note editing interface may be as shown in FIG. 2 (a), which includes a web page view 201 and a handwriting view 202. It can be seen that the web page view 201 and the handwriting view 202 are taller than the screen and cannot be displayed in full, so both views are scrolled through the scroll view. FIG. 2 (b) shows the note editing interface visible to the user. As shown in FIG. 2 (b), the screen border is 204, the view group formed by the web page view 201 and the handwriting view 202 is wrapped by the scroll view 203, and the visible part of the actual note editing interface is the part inside the scroll view 203. When the user needs to view other parts of the view group, the user performs a scroll operation (for example, sliding, or dragging a scroll bar), and the scroll view 203 scrolls the web page view 201 and the handwriting view 202 together so that the other parts become visible, as shown in FIG. 2 (c).
By the above method, various note contents can be displayed in the interface and the views can be scrolled. However, the added view structures and the richer note functions may cause gesture conflicts when editing and viewing note contents. For example, the gesture that typically triggers interface scrolling is a swipe gesture, i.e., the user touches the screen with a finger or stylus, slides a certain distance, and then lifts up. Meanwhile, when editing handwritten content, the user also performs sliding operations as part of writing. These two gestures therefore conflict. Hence, when the interface display is implemented according to the above method, the gesture conflict problem needs to be solved.
In view of this, the method provided by the embodiment of the application can also identify the application scene according to the information of the gesture input by the user, the current state of the related function of the note APP and the like, and then make a corresponding response according to the gesture and the application scene, so as to solve the problem of gesture conflict and improve the user experience.
The method provided in the embodiment of the present application is described below. An electronic device to which the method is applied will be described first.
The gesture processing method provided by the embodiment of the application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (augmented reality, AR)/Virtual Reality (VR) devices, notebook computers, ultra-mobile personal computer (UMPC), netbooks, personal digital assistants (personal digital assistant, PDA) and the like, and application programs can be installed on the electronic devices, and the specific types of the electronic devices are not limited.
Fig. 3 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it may be called directly from memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a capacitive pressure sensor comprising at least two parallel plates with conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the touch operation intensity according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: and executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon. And executing an instruction for newly creating the short message when the touch operation with the touch operation intensity being greater than or equal to the first pressure threshold acts on the short message application icon.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of a vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the pulse of the human body to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset, combined into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone of the vocal part obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure pulsation signals acquired by the bone conduction sensor 180M, so as to realize a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
In this embodiment of the present application, the software system of the electronic device 100 may be an Android system, a Windows system, an IOS system, or the like, which is not limited in this application. The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 4 is a software structure block diagram of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and system libraries layer, and a kernel layer. The application layer may include a series of application packages.
As shown in fig. 4, the application package may include a note APP. Of course, the application package may also include applications (not shown in fig. 4) for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
In this embodiment, the note APP may include a User Interface (UI) layer, a logic control layer, and an application kernel layer. Wherein the UI layer may include an interface module. The interface module is used for displaying an interface in the operation process of the note APP and receiving instructions or information input by a user through the note interface. Alternatively, the interface module may implement its functionality by invoking related modules of the application kernel layer and/or the application architecture layer.
The logic control layer is used for realizing the relevant functions of notes through logic control, such as a recording function, a recording and playing function, a bidirectional positioning function of recording and playing and content displaying, a video clip function, a video clip playing function, a bidirectional positioning function of video clip playing and label displaying, a document inserting function and the like. The above functions will be described in the following embodiments with reference to the drawings.
Optionally, the logic control layer may include a recording and playing control module, a clip and play control module, and the like, which are not listed one by one in this application. The recording and playing control module is used for controlling related modules in the application framework layer to record audio, and is also used for realizing playing of the recording and the like. The clip and play control module is used for controlling other modules or APPs to record video clips, and is also used for realizing the playing of video clips and the like. In a specific embodiment, the clip and play control module may record a video clip by controlling the video clip APP.
The application kernel layer may include a scroll view component, a handwriting view component, a web page view component, a Java Script (JS) interface rendering module, a storage module, and the like. The scrolling view component is for enabling scrolling of sub-views of the scrolling view. The handwriting view component is for enabling display of handwriting views. The webpage view component is used for realizing the display of the webpage view. The JS interface rendering module is used for rendering data to the webpage view so as to display webpage contents in the webpage view. The storage module is used for storing data, such as web page content, handwriting content and the like. Of course, the storage module may also be used for storing other data, such as a recording file recorded by the recording and playing control module, a video clip recorded by the clip and playing control module, text content, pictures or handwriting content input by a user, and the like.
In addition, the application kernel layer may also include an HTML editor and a handwriting editor. An HTML editor is an editor that performs editing based on the HTML language. Alternatively, the HTML editor may be an H5 editor. The HTML editor is used for editing the content input by the user in the non-handwriting state to generate webpage content.
And the handwriting editor is used for editing the content input by the user in the handwriting state and generating the handwriting content. The data and the content generated by the HTML editor can be rendered to the webpage view through the JS interface rendering module.
It should be noted that, the scroll view component, the handwriting view component, the web page view component, the JS interface rendering module, etc. may also be partially or fully disposed in the application framework layer or the system library. The embodiments of the present application are not limited in any way.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following embodiments of the present application will take an electronic device having a structure shown in fig. 3 and fig. 4 as an example, and specifically describe a gesture processing method provided by the embodiments of the present application in conjunction with the accompanying drawings and application scenarios.
First, an initialization display process of the note editing interface will be described.
Exemplary, fig. 5 is an interface change schematic diagram of an example of an open note according to an embodiment of the present application. Taking the electronic device as a mobile phone for example, as shown in the (a) diagram of fig. 5, an icon of a note APP is included on a desktop of the mobile phone, when a user clicks the icon of the note APP, the note APP is opened, and an interface shown in the (b) diagram of fig. 5 is entered. The title of the plurality of notes is displayed in the interface. When the user clicks on the title name "note 1", the initial editing interface 501 of note 1 is entered, as shown in fig. 5 (c). The initial editing interface 501 includes a content editing area 502. The view structure corresponding to the content editing area 502 may be as shown in fig. 2, and will not be described herein.
In the initial editing interface 501, the contents of the first page of the web page view and the contents of the first page of the handwriting view are displayed in the content editing area 502. It will be appreciated that the size and location of the scroll view is fixed, and the web page view and the handwriting view scroll within the scroll view. For ease of computation and display, the web page view and the handwriting view may be divided into pages (or referred to as multi-screens) according to the height of the scrolling view. The height of a page (also called a screen) is equal to the height of the scrolling view. The page count order increases from top to bottom. For the initial editing page, the user does not perform the scroll interface operation, and thus both the web page view and the handwriting view display the first page content. As shown in fig. 5 (c), the contents of the first page of the web page view include text contents 503 and pictures 504. The content of the first page of the handwriting view includes text handwriting content 505. Additionally, a scroll bar 506 is included in the initial editing interface 501.
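As a hedged illustration of the paging convention just described (one page equals one scroll-view height, pages counted from the top and increasing downward), the page index of a vertical coordinate could be computed as sketched below; the method names are hypothetical and not part of the embodiment.

```java
// Sketch of the paging convention: a "page" is one scroll-view height,
// pages are 1-based and increase from top to bottom.
final class PagingSketch {
    /** 1-based page index of a vertical coordinate inside the web page / handwriting view. */
    static int pageOf(float yInView, int scrollViewHeight) {
        return (int) (yInView / scrollViewHeight) + 1;
    }

    /** Top coordinate of page n (n >= 1); the page covers [top, top + scrollViewHeight). */
    static float topOfPage(int n, int scrollViewHeight) {
        return (n - 1) * (float) scrollViewHeight;
    }
}
```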
Thereafter, the user performs an up scroll interface operation within the content editing area 502 in the initial editing interface 501, and the web page view and the handwriting view scroll up together. As shown in fig. 5 (d), the scrolled interface includes text content 507 of the second page of the web page view and handwritten content 508 of the second page of the handwritten view.
Alternatively, the scroll-up interface operation may be an operation of inputting a slide-up gesture on the screen by the user, or an operation of dragging the scroll bar 506 downward, which is not limited in this embodiment of the present application.
Corresponding to the interface changing process of the diagrams (b) to (d) in fig. 5, the process of implementing interface display by the method provided in the embodiment of the present application may be as shown in fig. 6, including:
S101, in response to the operation of the user opening note 1, the interface module of the UI layer sends a loading instruction 1 to the scrolling view component of the kernel layer. The loading instruction 1 is used to instruct loading of the views of the content editing area in the initial editing interface of note 1.
Optionally, the interface module may implement the step S101 and other steps executed by the interface module through the note editing process (note edit activity), which will not be described in detail later.
Alternatively, in the case where the scroll view is set in the content editing area of the note editing interface, the editing interface of the note may include other views in addition to the scroll view, the web page view, and the handwriting view. In a specific embodiment, the editing interface of the note also includes a content view (content view). The content view serves as the parent view of the scroll view and wraps the scroll view. Correspondingly, the kernel layer of the note APP may also include a content view component. The content view component is used for enabling display of the content view.
When the content view is included in the interface, in response to a user opening note 1, the interface module may send a load instruction 2 to the content view component, the load instruction 2 being used to instruct loading of the initial editing interface. Thereafter, the content view component sends load instruction 1 to the scroll view component.
In the case where the scrolling view is the outermost view of the note editing interface, the interface module may send load instruction 1 directly to the scrolling view component.
That is, the interface module may send load instruction 1 directly or indirectly to the scroll-view component. In the following embodiments, the same is true for other instructions sent by the interface module to the scrolling view, and will not be described in detail.
S102, the scrolling view component responds to the loading instruction 1 and sends a webpage view loading instruction to the webpage view component. The webpage view loading instruction is used for indicating loading of the webpage view 1. The web page view 1 contains all web page contents corresponding to the content editing area.
The content corresponding to the content editing area is web page content that can be displayed in the content editing area. In other words, web page view 1 contains web page content for all pages, namely: the web page view 1 includes both web page contents to be displayed in the initial editing interface and visible to the user (i.e., web page contents of the first page) and web page contents not to be displayed in the initial editing interface and not visible to the user (i.e., web page contents of other pages than the first page).
S103, responding to the webpage view loading instruction by the webpage view component, and initializing the webpage view.
The height of the initialized webpage view is preset height 1. Alternatively, the preset height 1 may be equal to the height of the scroll view, that is, the height of one page.
S104, the scrolling view component sends a handwriting view loading instruction 1 to the handwriting view component. Handwriting view loading instruction 1 is used to instruct loading of handwriting view 1. Only the handwritten content 1 to be displayed is included in the handwritten view 1.
The handwritten content to be displayed refers to the handwritten content to be displayed in the content editing area. For the initial editing interface, the handwritten content 1 to be displayed is the first page of handwritten content in the handwritten view. That is, the handwriting view 1 includes the handwriting content 1 to be displayed in the initial editing interface and visible to the user (i.e., the handwriting content of the first page), but does not include the handwriting content not to be displayed in the initial editing interface and invisible to the user (i.e., the handwriting content of the other pages than the first page).
Alternatively, this step may be performed simultaneously with step S102, or may be performed after or before step S102, which is not limited in any way in the embodiments of the present application. In addition, in other embodiments of the present application, the execution sequence of the steps is not limited, so long as the steps conform to logic, and will not be described in detail later.
S105, the handwriting view component responds to the handwriting view loading instruction 1 to initialize the handwriting view.
The height of the initialized handwriting view is a preset height 2. Alternatively, the preset height 2 may be equal to the height of the scroll view, that is, the height of one page. The preset height 2 is equal to the preset height 1, or may be unequal.
S106, the interface module inquires all webpage contents and all handwriting contents corresponding to the content editing area from the storage module of the kernel layer.
And S107, the storage module sends all the webpage contents corresponding to the content editing area to the interface module.
S108, the interface module sends all the webpage contents (or all the webpage contents for short) corresponding to the content editing area to the JS interface rendering module.
And S109, the JS interface rendering module renders all webpage contents to the initialized webpage view.
S110, the webpage view component follows the rendering, expands the height of the initialized webpage view, and obtains the webpage view 1.
That is, after initialization, the JS interface rendering module renders all the web page contents corresponding to the content editing area, the height of the web page view is gradually expanded during rendering, and rendering is finally completed to obtain the web page view 1. Therefore, when the user subsequently performs a scroll interface operation, the corresponding area of the web page view 1 is displayed directly without further rendering, which reduces stutter during interface scrolling and improves the user experience.
Optionally, in the rendering process of step S109, the scrolling function of the scrolling view component may be disabled, so as to prevent the web content outside the first page from being visible to the user, and improve the user experience.
S111, the webpage view component sends the height of the webpage view 1 to the scroll view component.
And S112, the interface module sends all the handwriting contents (or all the handwriting contents for short) corresponding to the content editing area to the handwriting view component.
S113, the handwriting view component reads all handwriting contents to the memory.
S114, determining the to-be-displayed handwritten content 1 in all the handwritten contents by the handwritten view component, and writing the to-be-displayed handwritten content 1 into the initialized handwritten view to obtain the handwritten view 1.
That is, for the handwritten content corresponding to the content editing area, not all of it is written at once; only the handwritten content 1 that is currently to be displayed is written, and the remaining handwritten content is written later, when it is to be displayed (see step S120). In other words, the handwritten content is written stepwise as it is displayed. Because writing handwritten content consumes relatively many resources, the method provided by this embodiment can save the resources of the electronic device and improve the running efficiency of the device.
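A minimal sketch of this stepwise writing is given below; HandwritingStroke and its fields are assumptions introduced only for the illustration and are not part of the embodiment.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: select only the handwritten strokes that fall inside the region currently
// to be displayed; the rest stays in memory and is written later (step S120).
final class StepwiseWritingSketch {
    static class HandwritingStroke {
        float topY;     // topmost y coordinate of the stroke in handwriting view 2
        float bottomY;  // bottommost y coordinate of the stroke in handwriting view 2
    }

    static List<HandwritingStroke> selectToDisplay(List<HandwritingStroke> all,
                                                   float regionTop, float regionBottom) {
        List<HandwritingStroke> toDisplay = new ArrayList<>();
        for (HandwritingStroke stroke : all) {
            // keep a stroke if any part of it overlaps the region to be displayed
            if (stroke.bottomY >= regionTop && stroke.topY <= regionBottom) {
                toDisplay.add(stroke);
            }
        }
        return toDisplay;
    }
}
```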
Fig. 7 is a schematic structural diagram of another interface view according to an embodiment of the present application. In a specific embodiment, after step S114 is performed, the relationship of web page view 1 (701 shown in the figure), handwriting view 1 (702 shown in the figure), and scrolling view (703 shown in the figure) may be as shown in fig. 7. It can be seen that the web page content of web page view 1 has been fully rendered, and that handwritten view 1 has been written with only handwritten content 505 of the first page.
S115, the handwriting view component calculates the height of the handwriting view 2 according to all the handwriting contents.
Handwriting view 2 refers to a handwriting view that includes all handwriting content. In other words, the handwriting view 2 includes both the handwriting content 1 to be displayed (i.e., the handwriting content of the first page) and the handwriting content other than the handwriting content 1 to be displayed (i.e., the handwriting content of the other pages other than the first page).
It should be noted that, at this point, no handwritten content is actually written, the initialized handwriting view is not expanded, and handwriting view 2 is not actually obtained; only the height of handwriting view 2 is calculated from all the handwritten content.
Specifically, the handwritten content is characterized by information about a plurality of handwriting touch points input by the user. The handwritten content may include the location of each handwriting touch point, and the position of a handwriting touch point may be represented by its coordinates in handwriting view 2. The height of handwriting view 2 is then determined according to the coordinates of the topmost handwriting touch point and the coordinates of the bottommost handwriting touch point among all the handwritten content.
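A minimal sketch of this height calculation is given below, assuming coordinates grow downward from the top of handwriting view 2; TouchPoint and the minimum-height bound are assumptions of the sketch.

```java
import java.util.List;

// Sketch: compute the height of handwriting view 2 from the handwriting touch point
// coordinates, as described above.
final class HandwritingHeightSketch {
    static class TouchPoint { float x; float y; }

    static int computeHandwritingView2Height(List<TouchPoint> allPoints, int minimumHeight) {
        float bottomMost = 0f;
        for (TouchPoint p : allPoints) {
            bottomMost = Math.max(bottomMost, p.y);   // track the lowest touch point
        }
        // The view must reach at least the bottommost touch point and never be
        // smaller than the preset minimum (for example, one page).
        return Math.max(minimumHeight, (int) Math.ceil(bottomMost));
    }
}
```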
And S116, the handwriting view component sends the height of the handwriting view 2 to the scrolling view component.
S117, initializing a scrolling view by the scrolling view component according to the height of the webpage view 1 and the height of the handwriting view 2.
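The embodiment does not state how the two heights are combined when the scroll view is initialized; one natural choice, sketched below purely as an assumption, is to size the scrollable child group to the larger of the two heights so that both sub-views fit.

```java
import android.view.ViewGroup;

// Sketch (assumption): size the scrollable child group to the larger of the
// web page view 1 height and the handwriting view 2 height.
final class ScrollInitSketch {
    static void initScrollContentHeight(ViewGroup childGroup,
                                        int webPageView1Height, int handwritingView2Height) {
        int contentHeight = Math.max(webPageView1Height, handwritingView2Height);
        ViewGroup.LayoutParams lp = childGroup.getLayoutParams();
        if (lp != null) {
            lp.height = contentHeight;
            childGroup.setLayoutParams(lp);
        }
    }
}
```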
It will be appreciated that after the above steps S101 to S117 are completed, the respective views are presented so that an initial editing interface is displayed, as shown in fig. 5 (c).
S118, responding to the upward scrolling interface operation of the user, and sending an upward scrolling instruction to the scrolling view component by the interface module.
S119, the scroll view component responds to the upward scroll instruction and sends a handwriting view loading instruction 2 to the handwriting view component. Handwriting view loading instruction 2 is used to instruct loading of handwriting view 3. The handwriting view 3 contains the handwriting content in the handwriting view 1 and the handwriting content 2 to be displayed.
The handwritten content 2 to be displayed is the handwritten content that is newly to be displayed in the content editing area after the scroll-up interface operation is performed.
And S120, the handwriting view component responds to the handwriting view loading instruction 2, expands the height of the handwriting view 1, and writes the handwriting content 2 to be displayed in the expanded handwriting view 1 to obtain the handwriting view 3.
Optionally, the handwriting view component may expand the height of the handwriting view 1 according to the position of the handwriting content 2 to be displayed, until the handwriting content 2 can be written. The handwriting view component may also extend the height of the handwriting view 1 in whole pages, for example, one page at a time.
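The two expansion strategies just mentioned could be sketched as follows; both are illustrations of how the height of handwriting view 1 might be grown, not the only possible implementations.

```java
// Sketch of the two expansion strategies: grow exactly to the content, or grow
// in whole pages (pageHeight is one scroll-view height and must be positive).
final class HandwritingExpansionSketch {
    /** Grow just far enough that handwritten content 2 (ending at contentBottom) fits. */
    static int expandToContent(int currentHeight, float contentBottom) {
        return Math.max(currentHeight, (int) Math.ceil(contentBottom));
    }

    /** Grow in whole pages, one page at a time. */
    static int expandByWholePages(int currentHeight, float contentBottom, int pageHeight) {
        int height = currentHeight;
        while (height < contentBottom) {
            height += pageHeight;
        }
        return height;
    }
}
```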
S121, the handwriting view component sends the height of the handwriting view 3 to the scrolling view component.
S122, the scrolling view component scrolls the webpage view 1 and the handwriting view 3.
Fig. 8 is a schematic structural diagram of another interface view according to an embodiment of the present application. In a specific embodiment, after step S122 is performed, the relationship of web page view 1 (701 shown in the figure), handwriting view 3 (801 shown in the figure), and scrolling view (703 shown in the figure) may be as shown in fig. 8. It can be seen that handwriting view 3 is higher than handwriting view 1 in fig. 7, handwriting view 3 contains the handwritten content 508, and both web page view 1 and handwriting view 3 are shifted up, i.e., scrolled up.
After scrolling web page view 1 and handwriting view 3, the user-viewable interface is shown in fig. 5 (d).
In this embodiment, in the editing interface of the note, the interface of the content editing area includes a handwriting view and a web page view. The handwriting view is overlaid on top of the web page view. The handwriting view can display handwriting content, and the webpage view can display various types of content such as text, static picture, GIF dynamic picture, video, audio and the like. Therefore, the note APP can record and display various contents, and user experience is improved. Moreover, the handwriting view and the webpage view are taken as a view group, and the scroll view is wrapped, so that scrolling of the handwriting view and the webpage view can be realized through the scroll view, and the interface scrolling problem under the condition that the webpage view and the handwriting view are higher is solved.
In the following, a process of solving the gesture conflict problem in the gesture processing method provided in the embodiment of the present application is described. Gestures referred to in this application mainly include: tap gestures, swipe gestures, and long press gestures. For ease of understanding, the application scenarios, gestures, and gesture response results involved in the method are described first.
In a specific embodiment, the correspondence between the application scenario, the gesture, and the response result of the gesture may be as shown in the following table 1:
TABLE 1
The interface diagrams of the respective scenes in table 1 and the interface diagrams after responding to gestures are described below with reference to the accompanying drawings.
1. Scroll interface
When the user inputs a slide gesture in a content editing area in the note editing interface in a case where the note APP is in a non-handwriting state, the interface in the area scrolls with a user scroll operation. The procedure of scrolling the interface is shown in fig. 5 (c) and (d), and will not be described here.
2. Recording and playing progress through content positioning
In the embodiment of the application, the note APP can realize bidirectional positioning between recording playback and content display. Bidirectional positioning means the following: while the recording is played, the content that was entered at the corresponding moment of recording is highlighted synchronously; when the user adjusts the playing progress of the recording to a certain playing duration, the content corresponding to that playing progress is highlighted, and this process is called locating content through the recording playing progress, also called a forward positioning operation; conversely, when the user clicks content at a certain position in the interface, the recording playing progress jumps to the playing progress corresponding to that content, and this process is called locating the recording playing progress through content, also called a reverse positioning operation. In addition, the content entered by the user during recording can be handwritten content or web page content. The web page content may include text content, pictures, tables, and the like.
Fig. 9 is an exemplary schematic diagram of an interface change of locating the recording playing progress through content according to an embodiment of the present application. Taking a mobile phone as the electronic device for example, as shown in fig. 9 (a), the note editing interface includes a recording control 900, and the recording corresponding to the recording control 900 is in a playing state, where the current playing duration is 20 seconds (00:00:20). Further, within the content editing area 502, handwritten content 901, text content 902, and a picture 903 are displayed. These contents were entered by the user while the recording was being made. The word "now" in the handwritten content 901 was entered when the recording duration was 1 minute (00:01:00), the "middle of twentieth century" in the text content 902 was entered when the recording duration was 3 minutes (00:03:00), and the picture 903 was entered when the recording duration was 4 minutes and 20 seconds (00:04:20). The current playing duration is 20 seconds, so the handwritten content 901, the text content 902, and the picture 903 are all displayed in gray.
The recording, the content, and the recording of the corresponding relationship between the recording and the content may be optionally implemented by a recording and playing control module of the note APP, an HTML editor, a handwriting editor, and other modules, which are not described in detail in the embodiment of the present application.
When the user clicks "now" in the handwritten content 901, the "now" word is highlighted, and the play progress jumps to the play duration corresponding to "now", i.e. 1 minute (00:01:00), as shown in fig. 9 (b).
When the user clicks on "middle of twentieth century", and all the contents before in the text content 902 are highlighted, and the play progress jumps to the play duration corresponding to "middle of twentieth century", i.e., 3 minutes (00:03:00), as shown in (c) of fig. 9. Of course, in some embodiments, when the user clicks "middle of twentieth" in the text content 902, only "middle of twentieth" may be highlighted, and the content before "middle of twentieth" may not be highlighted, which is not limited in any way by the embodiments of the present application.
When the user clicks on the picture 903, the picture 903 and all the previous contents are highlighted, and the playing progress jumps to the playing duration corresponding to the picture 903, that is, 4 minutes and 20 seconds (00:04:20), as shown in the (d) diagram in fig. 9.
It should be noted that the above gray display and highlighting are only examples, and are not limiting, and in some other embodiments, the display manner of the content corresponding to the audio recording may be other.
Fig. 9 illustrates an example in which the user selects content with a tap gesture. For handwritten content that corresponds to a recording, the user may also input a long-press gesture while the recording is playing to select that handwritten content, which likewise realizes the function of locating the recording playing progress through content. Moreover, whether the gesture is a click gesture or a long-press gesture, the response locates the selected content through the drop point position of the gesture, so as long as the drop point positions of the two gestures are the same, their response results are the same.
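The reverse positioning operation just described amounts to a lookup from the tapped content to the recording duration at which it was entered, followed by a seek of the player. A minimal sketch of this flow is given below, assuming an Android environment; ContentId, the correspondence map, and the use of MediaPlayer are illustrative assumptions, not components of the embodiment.

```java
import android.media.MediaPlayer;
import java.util.Map;

// Sketch of a reverse positioning operation: the drop point selects a piece of content,
// a correspondence table maps that content to the recording duration at which it was
// entered, and playback jumps there.
final class ReverseLocateSketch {
    interface ContentId {}

    static void reverseLocate(ContentId tappedContent,
                              Map<ContentId, Integer> contentToRecordingMillis,
                              MediaPlayer recordingPlayer) {
        Integer positionMillis = contentToRecordingMillis.get(tappedContent);
        if (positionMillis != null) {
            recordingPlayer.seekTo(positionMillis);  // jump to the corresponding play progress
            // highlighting of the selected content would be triggered here as well
        }
    }
}
```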
3. In a non-handwriting state, gestures are input through a handwriting pen
It will be appreciated that a user may enter gestures on the screen via a finger, or may enter gestures on the screen via a stylus. In the embodiment of the application, when the user inputs the click gesture through the handwriting pen under the condition that the note APP is in the non-handwriting state, the note APP is switched to the handwriting state, the handwriting operation interface is displayed, and the track of the gesture is written as handwriting content. Therefore, the user can conveniently and directly input the handwriting content, the user does not need to manually switch the state, and the user experience is improved.
Fig. 10 is a schematic diagram illustrating an interface change of a gesture input by a handwriting pen in a non-handwriting state according to an embodiment of the disclosure. As shown in fig. 10 (a), the current note APP is in a non-handwriting state, and an input method interface 1001 is displayed in the interface. Of course, in some embodiments, in the non-handwriting state, the input method interface 1001 may not be displayed in the interface, for example, an interface shown in fig. 5 (c) or (d).
As shown in fig. 10 (a), when the user inputs a tap gesture in the content editing area 502 through the handwriting pen, the state of the note APP is switched to the handwriting state, a handwriting operation interface 1002 is displayed in the interface, and a locus 1003 of the gesture input by the user is written as handwriting content in the handwriting view, as shown in fig. 10 (b).
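A minimal sketch of how the input tool could be recognized is given below, assuming the Android MotionEvent tool type is available; the embodiment itself does not name this API, it only requires that a stylus tap in the non-handwriting state triggers the switch to the handwriting state.

```java
import android.view.MotionEvent;

// Sketch: distinguishing a stylus touch from a finger touch before deciding whether
// to switch the note APP to the handwriting state.
final class StylusDetectionSketch {
    static boolean isStylusTouch(MotionEvent event) {
        return event.getToolType(0) == MotionEvent.TOOL_TYPE_STYLUS;
    }
}
```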
4. Input gesture in handwriting state
When the note APP is in the handwriting state, and the user inputs a click, long press or slide gesture, corresponding content is input according to the gesture track of the user.
Fig. 11 is a schematic diagram illustrating interface changes of an input gesture in a handwriting state according to an embodiment of the present application. As shown in fig. 11 (a), the current note APP is in a handwriting state, and a handwriting operation interface 1002 is displayed. When the user inputs a click, long press, or swipe gesture in the content editing area 502, the note APP writes the trajectory of the gesture input by the user as handwritten content into the handwriting view. For example, if the user writes two words by finger, the two words are displayed in the interface as shown in 1101 in fig. 11 (b).
5. Manipulating content in video clip cards
In this embodiment of the present application, the note APP may be used for recording a video clip, and in the process of recording the video clip, a tag may also be added to the note, where the tag corresponds to a tag time, and the tag time is a recording duration when the tag is inserted in the process of recording the video clip. After the video clip is recorded, the first frame image of the video clip and a play control of the video clip can be displayed in the interface. The play of the video clip can be controlled by a play control. And the labels are synchronously displayed in the playing process of the video clips, and the bidirectional positioning of the playing of the video clips and the display of the labels can be realized. The bidirectional positioning is similar to the bidirectional positioning of the recording and playing, and is not repeated. The above functions related to video clips may be implemented by a clip and play control module, an HTML editor, and a video clip APP, etc., which are not described in detail herein.
Alternatively, the first frame image, the play control, and the label of the video clip may be provided on the same card, which is displayed in the content editing area in the editing interface of the note. When the user operates the content in the video clip, the note APP responds differently according to the different gestures. Optionally, when the user inputs a click gesture in the video clip, the clip and play control module controls the display according to preset logic, including but not limited to implementing the bi-directional positioning related display described above. When a user inputs a long-press gesture in the video clip, a video deletion suspension control is displayed on the video clip, wherein the video deletion suspension control comprises controls such as deletion, deletion cancellation and the like.
Fig. 12 is a schematic diagram illustrating an interface change of content in an operation video clip according to an embodiment of the present application. Taking a folding-screen mobile phone as an example, as shown in fig. 12 (a), a video clip 1201 is displayed in the content editing area 502 in the editing interface of the note. The video clip 1201 includes a first frame image 1202 of the video clip, a play control 1203, a tag 1204, a tag 1205, and the like. Where label 1204 is a plain text label and label 1205 is a screenshot text label. The screenshot text tag comprises a screenshot.
When the user inputs a click gesture in the video clip 1201, the clip and play control module controls the display according to preset logic. For example, as shown in fig. 12 (a), when the video clip is in a playing state, if the user clicks on the content of the tag 1204, the video clip playing progress jumps to the tag time 30 seconds (00:30) corresponding to the tag 1204, as shown in fig. 12 (b). In addition, when the user inputs a click gesture in the video clip 1201, the clip and play control module may also respond to other responses, which may specifically be set according to actual requirements, which is not limited in any way in the embodiments of the present application.
As shown in fig. 12 (c), when the user inputs a long press gesture in the video clip 1201, a video deletion hover control 1206 is displayed on the video clip, and the deletion control 1207, the cancel control 1208, and the like are included in the video deletion hover control 1206, as shown in fig. 12 (d).
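The distinction between the click gesture and the long-press gesture on the video clip card could, for illustration, be made with a gesture detector as sketched below; GestureDetector is an assumption of this sketch, not an API prescribed by the embodiment.

```java
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;

// Sketch: telling a click apart from a long press on the video clip card.
final class ClipCardGestureSketch {
    static GestureDetector buildDetector(Context context) {
        return new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onSingleTapUp(MotionEvent e) {
                // click gesture: the clip and play control module responds according to
                // its preset logic (for example, jumping to the tag time)
                return true;
            }

            @Override
            public void onLongPress(MotionEvent e) {
                // long-press gesture: display the video deletion hover control
            }
        });
    }
}
```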
6. Operation picture
A picture (still picture or GIF moving picture) may be included in the content editing area in the note editing interface, and in general, the picture displayed in the interface may be a thumbnail or a compressed image. If the user clicks the picture, the note APP displays the picture details, namely, the thumbnail or the original image corresponding to the compressed image is displayed on the interface, and the effect of amplifying the picture is displayed on the visual effect. If the user presses the picture for a long time, a picture operation suspension control is displayed on the picture, wherein the picture operation suspension control comprises the options of copying, saving, deleting, extracting characters, sharing and the like.
Fig. 13 is an interface change schematic diagram of an example of an operation picture according to an embodiment of the present application. As shown in fig. 13 (a), a picture 1301 is displayed in the content editing area 502 in the note editing interface. When the user clicks on this picture 1301, the note APP displays details of this picture 1301, as shown by 1302 in (b) of fig. 13.
When the user presses the picture 1301 for a long time, a picture operation suspension control 1303 is displayed on the picture 1301, and the picture operation suspension control 1303 includes a copy option 1304, a delete option 1305, a save picture option 1306, a extract text option 1307, a share option 1308, and the like.
7. Operating preset controls or content in preset format
In some embodiments, some preset controls or content in a preset format may be displayed within a content editing area in the note editing interface. Preset controls include, for example: insert PDF file controls, manifest entry mark controls, etc. The content of the preset format includes, for example, a web page link, a mailbox, a telephone number, and the like. When the user clicks the preset controls or the content in the preset format, the corresponding functions are executed, and the corresponding interfaces are displayed.
Specifically, when the user clicks the control for inserting the PDF file, the note APP invokes the function of inserting the PDF file, and a card for selecting a PDF insertion path is displayed in the interface. When the user clicks the list item marking control, the note APP changes the state of the control and the corresponding list item, for example, before the user clicks, the control and the corresponding list item are in an unselected state, and after the user clicks, the control and the corresponding list item are in a selected state, and vice versa. When the user clicks the webpage link, the note APP pops up a pop-up frame for selecting a webpage link processing mode in the interface, and options of jumping to a website corresponding to the webpage link, copying the webpage link, adding to a bookmark and the like can be included in the pop-up frame. When the user clicks the mailbox, the note APP pops up a popup window for the user to select a mailbox processing mode, and the popup window can comprise options of sending mails, copying the mailbox, editing and the like. When the user clicks the telephone number, the note APP pops up a pop-up window in the interface for the user to select a telephone number processing mode, and the pop-up window can include options of dialing a telephone, sending information, copying the telephone number, saving the telephone number and the like.
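As a hedged illustration, the content in a preset format at the drop point of the click could be recognized with standard patterns as sketched below; the embodiment does not specify a detection mechanism, so android.util.Patterns is used here only as an example.

```java
import android.util.Patterns;

// Sketch: classifying content in a preset format at the drop point of a click.
final class PresetFormatSketch {
    static String classify(String textAtDropPoint) {
        if (Patterns.WEB_URL.matcher(textAtDropPoint).matches()) {
            return "web page link";      // pop up the link-handling box
        }
        if (Patterns.EMAIL_ADDRESS.matcher(textAtDropPoint).matches()) {
            return "mailbox";            // pop up the mailbox-handling window
        }
        if (Patterns.PHONE.matcher(textAtDropPoint).matches()) {
            return "telephone number";   // pop up the number-handling window
        }
        return "none";
    }
}
```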
In the following, description will be given by taking the list entry mark control and the phone number as examples, and other preset controls and content in preset formats are similar to those described above, and will not be repeated.
Fig. 14 is an exemplary schematic diagram of interface change of an operation preset control according to an embodiment of the present application. As shown in fig. 14 (a), a list 1401 inserted by the user is displayed in the content editing area 502 in the note editing interface, and the list 1401 includes a plurality of list entries 1402, and each list entry 1402 corresponds to one list entry mark control 1403. Inventory item marking control 1403 is used to mark the status of inventory items. In fig. 14 (a), each of the list entries 1402 is in an unselected state.
When the user clicks on the list item mark control 1403 corresponding to the first list item 1402 (ham), the list item 1402 (ham) is selected, and the display state is changed, and the list item mark control 1403 also changes the display state to show that the list item 1402 (ham) is in the selected state, as shown in (b) of fig. 14.
Exemplary, fig. 15 is an interface change schematic diagram of an example of content in an operation preset format according to an embodiment of the present application. As shown in fig. 15 (a), a telephone number 1501 is included in the content editing area 502 in the note editing interface. When the user clicks on the telephone number 1501, a pop-up 1502 pops up in the interface, and the pop-up 1502 is used for the user to select a mode of processing the telephone number. Optionally, the pop-up 1502 may include a call making option 1503, a send information option 1504, a copy to clipboard option 1505, a new contact option 1506, a save to existing contact option 1507, an edit option 1508, a cancel option 1509, and the like.
8. Long press text
When the user long-presses text in the content editing area of the note editing interface: if the current note APP is in the non-handwriting state and a recording is being played, a prompt for pausing the recording playback is displayed in the interface, for example, text prompting the user to pause the recording playback first; if the current note APP is in the non-handwriting state and no recording is being played, a text operation suspension control is displayed on the text, where the text operation suspension control includes options such as copy, save, delete, and share.
In the embodiment of the present application, a long-term text is taken as an example for explanation. It will be appreciated that the scene is not limited to text, punctuation, and the like.
9. Entering text editing
When a click or long press gesture is input in a scene other than the scenes described in 1 to 8, the note APP enters text editing, an input method is started, and an input method interface is displayed.
Exemplarily, fig. 16 is a schematic diagram of an interface change for entering text editing according to an embodiment of the present application. As shown in fig. 16 (a), the current note APP is in a non-handwriting state. If the current note APP is not playing a recording, the user inputs a click, long-press, or slide gesture with a finger at position 1601 in the interface, and the content at 1601 is not content in a video clip, not a picture, not a preset control, and not content in a preset format, the note APP starts the input method and displays the input method interface 1001, as shown in fig. 16 (b).
In one embodiment, the judgment conditions corresponding to the application scenarios and the decision components are shown in table 2. The gesture in table 2 and the judgment condition of the application scenario jointly determine the response result of the gesture. The decision component is the view component that executes the judgment condition, recognizes the gesture, and determines the application scenario.
TABLE 2
In connection with table 2, in the method provided in the embodiment of the present application, the processing procedure for a gesture may be as shown in fig. 17-1 and fig. 17-2 and includes the following steps S201 to S235, where fig. 17-2 is a continuation of fig. 17-1.
S201, in response to a gesture input by the user, the scrolling view component determines whether the note APP is in a non-handwriting state and whether the gesture is a sliding gesture; if yes, go to step S202; if not, step S203 is performed.
Specifically, in response to the gesture input by the user, the interface module sends the gesture to the scrolling view component.
S202, the scrolling view component scrolls the handwriting view and the webpage view. After step S202 is executed, the flow ends.
S203, the scrolling view component determines whether the condition of positioning the record playing progress through the handwriting content is met; if yes, go to step S204; if not, step S206 is performed.
The "condition of locating the progress of recording and playing through handwriting content" means: the note APP is currently playing a recording, there is handwritten content at the drop point position of the gesture, and the handwritten content (taking handwritten content 3 as an example) has a corresponding recording duration.
In a specific embodiment, whether the handwritten content 3 has a corresponding recording duration may be determined directly by the scrolling view component, or may be determined by the handwriting editor, which then sends the determination result to the scrolling view component. In either case, the determination can be made based on a preset correspondence 1. Optionally, the preset correspondence 1 represents the correspondence between handwritten content and the recording duration at the time the handwritten content was written.
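As an illustrative aid only, the following Kotlin sketch shows one possible form of the preset correspondence 1 as a mapping from handwritten content to the recording duration at the time that content was written; the type and function names (HandwrittenContent, HandwritingEditor, lookupRecordingDuration) are hypothetical and are not part of this application.

```kotlin
// Hypothetical sketch: preset correspondence 1 kept as a map from a handwritten
// content identifier to the recording duration (in ms) at the time it was written.
data class HandwrittenContent(val id: Long)

class HandwritingEditor {
    // Preset correspondence 1: content id -> recording duration when the content was written.
    private val recordingDurationByContentId = mutableMapOf<Long, Long>()

    fun onContentWritten(content: HandwrittenContent, currentRecordingDurationMs: Long?) {
        // Only content written while a recording is active gets an entry.
        if (currentRecordingDurationMs != null) {
            recordingDurationByContentId[content.id] = currentRecordingDurationMs
        }
    }

    // Returns the recording duration corresponding to the handwritten content,
    // or null if the content has no associated recording duration.
    fun lookupRecordingDuration(content: HandwrittenContent): Long? =
        recordingDurationByContentId[content.id]
}
```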
S204, the scrolling view component sends a reverse positioning message to the handwriting editor, wherein the reverse positioning message carries the handwriting content 3 at the drop point position of the gesture. The reverse positioning message is used to instruct the reverse positioning operation to be performed.
S205, the handwriting editor responds to the reverse positioning message, controls the highlight display of the handwriting content 3, and determines the recording duration 1 corresponding to the handwriting content 3 according to the preset corresponding relation 1.
Alternatively, the handwriting editor may instruct the handwriting view component to highlight the handwritten content 3. That is, highlighting the handwritten content 3 may be accomplished by a handwriting view component.
S206, the handwriting editor sends the recording duration 1 to a recording and playing control module.
S207, according to the recording duration 1, the recording and playing control module jumps the playing progress of the recording to the position corresponding to the recording duration 1. After step S207 is executed, the flow ends.
Of course, according to different preset logic, the note APP may also implement the positioning of the recording playing progress through handwritten content in other manners and through other modules, which is not limited in the embodiment of the present application. In summary, when it is determined that the condition for positioning the recording playing progress through handwritten content is satisfied, the scrolling view component sends a reverse positioning message to the module in the note APP that controls bidirectional positioning, so that the module performs the reverse positioning operation in response to the current gesture.
S208, the scrolling view component determines whether the condition of inputting the gesture by the handwriting pen in the non-handwriting state is met; if yes, go to step S209; if not, step S211 is performed.
"condition for inputting a gesture by a handwriting pen in a non-handwriting state" means: the note APP is currently in a non-handwriting state, the drop point position of the gesture is not any of the preset controls or content in a preset format, the gesture is a handwriting pen input and the gesture is not a long-press gesture, and the drop point position of the gesture is located in the scrolling view.
Optionally, the interface module may monitor the state of the note APP, and set different flags for different states, so that each view component may learn, through the flag, the state of the note APP.
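As an illustrative aid only, the following Kotlin sketch shows one possible way of exposing such flags so that each view component can read the current state of the note APP; all names are hypothetical.

```kotlin
// Hypothetical sketch: the interface module maintains simple state flags that
// every view component can read to learn the state of the note APP.
enum class InputState { HANDWRITING, NON_HANDWRITING }
enum class StreamingState { PLAYING_RECORDING, NOT_PLAYING_RECORDING }

object NoteAppState {
    @Volatile var inputState: InputState = InputState.NON_HANDWRITING
    @Volatile var streamingState: StreamingState = StreamingState.NOT_PLAYING_RECORDING
}

// Example of a view component gating its behaviour on the flags (cf. step S201):
fun shouldScroll(isSlideGesture: Boolean): Boolean =
    NoteAppState.inputState == InputState.NON_HANDWRITING && isSlideGesture
```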
S209, the scrolling view component sends a handwriting state entering instruction to the interface module, wherein the handwriting state entering instruction is used for indicating that the state of the note APP is switched to a handwriting state.
S210, the interface module, in response to the instruction for entering the handwriting state, sets the state of the note APP to the handwriting state and controls the display of a handwriting operation interface. Step S211 is performed after step S210.
S211, the scrolling view component sends the gesture to the handwriting view component.
S212, the handwriting view component determines whether the note APP is currently in a handwriting state; if yes, go to step S213; if not, step S214 is performed.
It can be understood that, for a gesture that proceeds to step S211 through the yes branch of step S208, step S213 may be directly performed without performing step S212, because the note APP has just been switched to the handwriting state.
S213, the handwriting view component processes the gesture, and the track of the gesture is used as handwriting content to be written into the handwriting view. Step S213 ends the flow after execution.
S214, the handwriting view component sends the gesture to the webpage view component.
S215, the webpage view component determines whether the gesture is a long-press gesture, and the gesture drop point position is a video clip; if yes, go to step S216; if not, step S217 is performed.
S216, the webpage view component displays a video deletion suspension control on the video clip, wherein the video deletion suspension control comprises controls such as deletion, deletion cancellation and the like. S216 ends the flow after execution is completed.
S217, the webpage view component determines whether the gesture is a long-press gesture, and the gesture drop point position is a picture; if yes, go to step S218; if not, step S219 is executed.
Step S218 to step S235 refer to FIG. 17-2.
S218, the webpage view component displays a picture operation suspension control on the picture, wherein the picture operation suspension control comprises the options of copying, saving, deleting, extracting characters, sharing and the like. Step S218 ends the flow after execution.
S219, the webpage view component determines whether the gesture is a long-press gesture, and the gesture drop point position is a word; if yes, go to step S220; if not, step S223 is performed.
S220, the webpage view component determines whether the note APP is playing a recording; if yes, go to step S221; if not, step S222 is performed.
S221, displaying a pause record playing prompt by the webpage view component. Step S221 ends the flow after execution.
S222, displaying a text operation suspension control on the text by the webpage view component. Step S222 ends the flow after execution.
S223, the webpage view component determines whether the gesture is not a long press gesture, and the gesture drop point position is a video clip; if yes, go to step S224; if not, step S226 is performed.
S224, the webpage view component sends the information of the gesture to the video clip and play control module.
The information of the gesture may include, for example, a time and a position of a press event of the gesture, a time and a position of a lift event, and the like.
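As an illustrative aid only, the information of the gesture could be bundled as in the following Kotlin sketch; the names are hypothetical.

```kotlin
// Hypothetical sketch: gesture information forwarded to the video clip and play
// control module, bundling the press (down) event and the lift (up) event.
data class TouchPoint(val timeMs: Long, val x: Float, val y: Float)

data class GestureInfo(
    val pressEvent: TouchPoint,  // time and position of the press event
    val liftEvent: TouchPoint?   // time and position of the lift event, if any
)
```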
S225, the extract and play control module controls display according to preset logic according to the gesture information. Step S225 ends the flow after execution.
Specifically, the video clip and play control module may implement functions such as bidirectional positioning of video clip playing and tag display according to the gesture information, which is not limited here.
S226, the webpage view component determines whether the gesture is not a long-press gesture, the gesture drop point position is a picture, and the note APP is not playing a recording; if yes, go to step S227; if not, step S228 is performed.
S227, the webpage view component displays the picture details. Step S227 ends the flow after execution.
S228, the webpage view component determines whether a preset position operation condition is met; if yes, go to step S229; if not, step S230 is performed.
"preset position operating condition" means: the gesture is not a long-press gesture, the gesture drop point position is one of a preset control or content in a preset format, and the note APP does not play a record.
S229, the webpage view component sends an instruction to the corresponding module to instruct to execute the corresponding function of the preset control or the content in the preset format, and displays the interface corresponding to the content in the preset control or the preset format. Step S229 ends the flow after execution.
S230, the webpage view component determines whether the condition of positioning the record playing progress through webpage content is met; if yes, go to step S231; if not, step S235 is performed.
The term "condition of positioning the playing progress of the recording through the webpage content" means that: the gesture is not a long press gesture, the note APP is in a non-handwriting state, the note APP is currently playing a recording, webpage content exists at the drop point position of the gesture, and the webpage content (taking webpage content 1 as an example) has a corresponding recording duration.
This step is similar to step S203, except that the web page content is determined in this step. In this step, it is determined whether the web page content 1 has a corresponding recording duration, which may be directly determined by the web page view component, or may be determined by the HTML editor and then sent to the web page view component. Whether it is a web page view component or an HTML editor, can be determined based on the preset correspondence 2. Optionally, the preset corresponding relation 2 is used for representing the corresponding relation between the webpage content and the recording duration when the webpage content is written.
S231, the webpage view component sends a reverse positioning message to the HTML editor, wherein the reverse positioning message carries webpage content 1 at the drop point position of the gesture. The reverse positioning message is used to instruct the reverse positioning operation to be performed.
S232, the HTML editor responds to the reverse positioning message, controls the highlight display of the webpage content 1, and determines the recording duration 2 corresponding to the webpage content 1 according to the preset corresponding relation 2.
Alternatively, the HTML editor may instruct the web page view component to highlight web page content 1. That is, highlighting web page content 1 may be accomplished by a web page view component.
S233, the HTML editor sends the recording duration 2 to a recording and playing control module.
S234, according to the recording duration 2, the recording and playing control module jumps the playing progress of the recording to the position corresponding to the recording duration 2. After step S234 is executed, the flow ends.
Steps S231 to S234 are similar to steps S204 to S207, and will not be described again.
S235, the webpage view component displays an input method interface. Step S235 ends the flow after execution.
In this implementation, after the scrolling view component receives the gesture input by the user, it identifies the application scenario and, by combining the application scenario and the gesture, sends the gesture to the corresponding view component or module, which performs the corresponding processing. This solves the problem of gesture conflict under multi-layer view components, accurately identifies the user intention, accurately responds to the user gesture, and improves the user experience.
In a specific embodiment, each view component may recognize a gesture through relevant information of a touch event, and combine the logic of recognizing the gesture with the logic of recognizing the scene to implement the gesture response of table 2 above.
Specifically, referring to fig. 18, for any one touch event a, the processing procedure of each view component may include the following steps S301 to S341. The execution body of steps S301 to S315 is the scrolling view component, the execution body of steps S316 to S320 is the handwriting view component, and the execution body of steps S321 to S341 is the webpage view component.
S301, the scrolling view component receives a touch event a.
S302, the scrolling view component determines whether the note APP is in a non-handwriting state, and the gesture corresponding to the touch event a is a sliding gesture.
Specifically, if the scrolling view component determines that the touch event a is a move event and the distance between the position of the touch event a and the position of the previous touch event b is greater than a preset distance threshold, it determines that the gesture corresponding to the touch event a is a sliding gesture. The touch event b is the previous touch event in the gesture corresponding to the touch event a.
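As an illustrative aid only, the following Kotlin sketch shows one way the sliding-gesture test of step S302 could be implemented; the event type, field names, and threshold value are assumptions.

```kotlin
// Hypothetical sketch of the sliding-gesture test: touch event a is a move event
// and its distance from the previous touch event b in the same gesture exceeds
// a preset distance threshold.
data class TouchEvent(val action: Action, val x: Float, val y: Float, val timeMs: Long) {
    enum class Action { DOWN, MOVE, UP }
}

const val SLIDE_DISTANCE_THRESHOLD_PX = 24f  // assumed threshold value

fun isSlideGesture(a: TouchEvent, b: TouchEvent): Boolean {
    if (a.action != TouchEvent.Action.MOVE) return false
    val dx = a.x - b.x
    val dy = a.y - b.y
    // Compare squared distances to avoid a square root.
    return dx * dx + dy * dy > SLIDE_DISTANCE_THRESHOLD_PX * SLIDE_DISTANCE_THRESHOLD_PX
}
```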
S303, the scrolling view component scrolls the handwriting view and the webpage view according to the position of the touch event a.
If step S303 is performed, the consumption of the touch event a is completed.
S304, a scrolling view component determines whether a note APP is playing a recording and a handwriting view exists in an interface; if yes, go to step S305; if not, step S309 is performed.
It can be understood that, as described in the above embodiments, in the method provided in the embodiments of the present application the interface includes the handwriting view, so in this embodiment this step may omit the determination of whether the interface includes the handwriting view. However, by determining in this step whether the handwriting view exists in the interface, the method is also applicable to a note APP that does not include a handwriting view, which improves the compatibility of the method.
S305, the scrolling view component determines whether the touch event a is a lift-up event; if yes, go to step S306; if not, step S308 is executed, and step S315 is executed after step S308.
S306, the scrolling view component determines whether handwritten content exists at the drop point position of the gesture corresponding to the touch event a and whether the handwritten content (taking the handwritten content 3 as an example) has a corresponding recording duration; if yes, go to step S307; if not, step S309 is performed.
Specifically, the scrolling view component may determine, according to the coordinates of the drop point position of the gesture, whether handwritten content corresponding to the coordinates exists among all the handwritten contents; if yes, the scrolling view component may further determine whether the handwritten content has a corresponding recording duration; if yes, the scrolling view component determines that the current application scenario is locating the recording playing progress through handwritten content, and step S307 is executed.
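As an illustrative aid only, the following Kotlin sketch shows one possible form of the two-part check of step S306: hit-test the drop point against the bounds of all handwritten contents and then check the recording duration; all names are hypothetical.

```kotlin
// Hypothetical sketch of the check in S306: find handwritten content whose bounds
// contain the drop point and which has an associated recording duration.
data class Bounds(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(x: Float, y: Float) = x in left..right && y in top..bottom
}

data class HandwrittenItem(val id: Long, val bounds: Bounds, val recordingDurationMs: Long?)

fun findContentForReversePositioning(
    allContent: List<HandwrittenItem>,
    dropX: Float,
    dropY: Float
): HandwrittenItem? =
    allContent.firstOrNull { it.bounds.contains(dropX, dropY) && it.recordingDurationMs != null }
```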
S307, the scrolling view component sends a reverse positioning message to the handwriting editor, wherein the reverse positioning message carries handwriting content 3. The reverse positioning message is used to instruct the reverse positioning operation to be performed.
After this step, the note APP also performs steps S204 to S207, which are not described here again.
S308, the scrolling view component acquires the drop point position of the gesture corresponding to the touch event a.
Specifically, if the touch event a is not a lifting event but a pressing event or a moving event, the drop point position of the gesture corresponding to the touch event a is obtained, that is, the position of the pressing event corresponding to the touch event a is determined.
S309, the scroll view component determines whether the touch event a is a handwriting touch event and the note APP is currently in a non-handwriting state.
A handwriting touch event refers to a touch event input through a handwriting pen. Optionally, a handwriting pen identifier may be carried in the handwriting touch event, where the handwriting pen identifier is used to characterize that the touch event is a handwriting touch event, and the handwriting pen identifier is, for example, a tool type (tool type). When it is determined that the touch event a carries a tool type, the scroll view component determines that the touch event a is a handwriting touch event.
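As an illustrative aid only, on an Android-style platform the tool type check could look like the following Kotlin sketch; whether the note APP actually uses MotionEvent in this way is an assumption.

```kotlin
import android.view.MotionEvent

// Sketch of the stylus check in S309: the tool type of the first pointer
// identifies whether the event was produced by a handwriting pen (stylus).
fun isStylusEvent(event: MotionEvent): Boolean =
    event.getToolType(0) == MotionEvent.TOOL_TYPE_STYLUS

fun matchesStylusInNonHandwritingState(event: MotionEvent, isHandwritingState: Boolean): Boolean =
    isStylusEvent(event) && !isHandwritingState
```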
S310, the scrolling view component determines whether the drop point position of the gesture corresponding to the touch event a is not a preset control and not content in a preset format; if yes, go to step S311; if not, step S315 is performed.
S311, the scrolling view component determines whether the drop point position of the gesture corresponding to the touch event a is located in the scrolling view, and the touch event a is a lifting event; if yes, go to step S312; if not, step S315 is performed.
Optionally, the scrolling view component may determine whether the drop point position of the gesture is a position within the webpage view; if yes, it determines that the drop point position of the gesture is located within the scrolling view; if not, it determines that the drop point position of the gesture is located outside the scrolling view.
The judgment condition of this step includes that the drop point position of the gesture corresponding to the touch event a is located within the scrolling view, so that gestures whose input positions are outside the scrolling view can be excluded, which improves the accuracy of gesture recognition.
S312, the scrolling view component determines whether the gesture corresponding to the touch event a is a long press gesture.
Optionally, the scroll view component may determine a time difference between a time of the touch event a and a time of the press event corresponding to the touch event a, and if the time difference is greater than a preset duration, determine that the gesture corresponding to the touch event a is a long press gesture, otherwise determine that the gesture corresponding to the touch event a is not a long press gesture.
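As an illustrative aid only, the long-press test of step S312 could be implemented as in the following Kotlin sketch, assuming Android-style touch events; the use of the system long-press timeout as the preset duration is an assumption.

```kotlin
import android.view.MotionEvent
import android.view.ViewConfiguration

// Sketch of the long-press test: the time elapsed since the press (down) event
// of the same gesture exceeds a preset duration.
fun isLongPress(event: MotionEvent): Boolean {
    val pressDurationMs = event.eventTime - event.downTime
    return pressDurationMs > ViewConfiguration.getLongPressTimeout()
}
```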
S313, the scrolling view component sends a gesture cancel message to the webpage view component, wherein the gesture cancel message is used for indicating that the touch event a does not need to be processed by the webpage view component.
It can be understood that the current note APP is determined to be in a non-handwriting state in step S309, and the drop point position of the gesture is determined to be within the scrolling view in step S311, so the touch event can be considered to act in the webpage view. In general, a gesture acting in the webpage view should be handled by the webpage view component. However, after the judgments in steps S310 to S312, it is determined that the current gesture is a gesture input by the handwriting pen in the non-handwriting state; therefore, the handwriting state needs to be entered and the webpage view component does not need to process the touch event a, so a gesture cancel message is sent to the webpage view component. This step prevents the webpage view component from waiting indefinitely to process the touch event a, which would cause errors, and improves the accuracy of the algorithm.
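As an illustrative aid only, one way such a gesture cancel message could be delivered on an Android-style platform is sketched below; synthesizing an ACTION_CANCEL event and the webPageView name are assumptions, not the actual mechanism of this application.

```kotlin
import android.os.SystemClock
import android.view.MotionEvent
import android.view.View

// Sketch: tell the webpage view to stop waiting for the rest of the gesture by
// dispatching a synthesized cancel event to it.
fun sendGestureCancel(webPageView: View) {
    val now = SystemClock.uptimeMillis()
    val cancel = MotionEvent.obtain(now, now, MotionEvent.ACTION_CANCEL, 0f, 0f, 0)
    webPageView.dispatchTouchEvent(cancel)
    cancel.recycle()
}
```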
S314, the scrolling view component sends a handwriting state entering instruction to the interface module, wherein the handwriting state entering instruction is used for indicating that the state of the note APP is switched to a handwriting state.
After receiving the command for entering the handwriting state, the interface module executes step S210, which is not described herein.
In this embodiment, step S315 is performed after step S314. Of course, in some embodiments, the flow may also end after step S314 is performed, that is, the consumption of the touch event a is completed.
S315, the scrolling view component sends a touch event a to the handwriting view component.
That is, the scrolling view component distributes the touch event a to the handwriting view component. At this point, the processing of the touch event a by the scrolling view component ends.
S316, the handwriting view component determines whether the note APP is currently in a handwriting state; if yes, go to step S317; if not, step S320 is performed.
Both the no branch of step S305 and the no branch of step S309 eventually proceed to step S315. That is, the scrolling view component may send the touch event a to the handwriting view component in the handwriting state, and may also send the touch event a to the handwriting view component in the non-handwriting state, so step S316 determines again whether the note APP is in the handwriting state.
S317, the handwriting view component determines whether the touch event a is a lifting event; if yes, go to step S318; if not, step S319 is performed, and step S320 is performed after step S319.
S318, the handwriting view component writes the track of the gesture corresponding to the touch event a into the handwriting view as handwriting content.
If step S318 is performed, the handwriting view component consumes the touch event a to completion.
S319, the handwriting view component determines the drop point position of the gesture corresponding to the touch event a.
S320, the handwriting view component sends a touch event a to the webpage view component.
That is, the handwriting view component passes the touch event a through to the webpage view component. At this point, the processing of the touch event a by the handwriting view component ends.
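As an illustrative aid only, the layered pass-through of steps S315 and S320 can be pictured as in the following Kotlin sketch, where a component that does not consume the touch event forwards it to the next layer; the interfaces and names are hypothetical.

```kotlin
// Hypothetical sketch of layered dispatch: scrolling view component ->
// handwriting view component -> webpage view component.
interface TouchLayer {
    /** Returns true if touch event a was consumed at this layer. */
    fun onTouchEvent(eventA: Any): Boolean
}

class LayeredDispatcher(private val layers: List<TouchLayer>) {
    fun dispatch(eventA: Any) {
        for (layer in layers) {
            if (layer.onTouchEvent(eventA)) return  // consumed; stop forwarding
        }
    }
}
```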
S321, the webpage view component determines whether the gesture corresponding to the touch event a is a long-press gesture; if yes, go to step S322; if not, step S330 is performed.
S322, the webpage view component determines whether the drop point position of the gesture corresponding to the touch event a is a video clip; if yes, go to step S323; if not, step S324 is performed.
S323, the webpage view component displays a video clip deletion suspension control on the video clip.
If step S323 is executed, the web page view component consumes the touch event a to completion.
S324, the webpage view component determines whether the drop point position of the gesture corresponding to the touch event a is a picture; if yes, go to step S325; if not, step S326 is performed.
S325, the webpage view component displays a picture operation suspension control on the picture.
If step S325 is performed, the web page view component consumes touch event a to completion.
S326, the webpage view component determines whether the drop point position of the gesture corresponding to the touch event a is a word; if yes, go to step S327; if not, step S330 is performed.
S327, a webpage view component determines whether a note APP is playing a recording; if yes, go to step S328; if not, step S329 is performed.
S328, the webpage view component displays a pause record playing prompt.
If step S328 is performed, the webpage view component consumes the touch event a to completion.
S329, the webpage view component displays a text operation suspension control on the text.
If step S329 is performed, the webpage view component consumes the touch event a to completion.
S330, the webpage view component determines whether the touch event a is a lift-up event; if yes, go to step S332; if not, go to step S331.
S331, the webpage view component acquires the drop point position of the gesture corresponding to the touch event a.
If step S331 is performed, the webpage view component consumes the touch event a to completion.
S332, the webpage view component determines whether the drop point position of the gesture corresponding to the touch event a is a video clip; if yes, go to step S333; if not, step S334 is performed.
S333, the webpage view component sends gesture information corresponding to the touch event a to the video clip and play control module.
S334, the webpage view component determines whether the note APP is playing a recording; if yes, go to step S335; if not, step S337 is performed.
S335, the webpage view component determines whether the webpage content exists at the drop point position of the gesture corresponding to the touch event a, and the corresponding recording duration exists in the webpage content (taking the webpage content 1 as an example continuously); if yes, go to step S336; if not, step S337 is performed.
This step is similar to step S306 and will not be described again.
S336, the webpage view component sends a reverse positioning message to the HTML editor, wherein the reverse positioning message carries webpage content 1. The reverse positioning message is used to instruct the reverse positioning operation to be performed.
If step S336 is performed, the webpage view component consumes the touch event a to completion.
In addition, after the step S336, the note APP further executes steps S231 to S234, which will not be described again.
S337, the webpage view component determines whether the drop point position of the gesture corresponding to the touch event a is a picture; if yes, go to step S338; if not, step S339 is performed.
S338, displaying the details of the picture by the webpage view component.
S339, the webpage view component determines whether the drop point position of the gesture corresponding to the touch event a is one of a preset control or content in a preset format; if yes, go to step S340; if not, step S341 is performed.
S340, the webpage view component sends an instruction to a module corresponding to the preset control or the content in the preset format, instructs to execute the function corresponding to the preset control or the content in the preset format, and displays an interface corresponding to the preset control or the content in the preset format.
If step S340 is performed, the webpage view component consumes the touch event a to completion.
S341, displaying an input method interface by the webpage view component.
If step S341 is performed, the webpage view component consumes the touch event a to completion.
Through the above process, the gesture is recognized through touch events, and at the same time the application scenario is recognized according to the information of the gesture and the state of the note APP. By combining the gesture recognition process with the application scenario recognition process, different gestures in different application scenarios are responded to accurately, the problem of gesture conflict in a multi-layer view is solved, and the user experience is improved.
In addition, it can be understood that when the note APP is in the handwriting state, the user can write handwritten content into the handwriting view. During writing, as the handwriting position moves down, the handwriting view component expands the height of the handwriting view. After expansion, the height of the handwriting view may be greater than that of the webpage view. In this case, if the user exits the handwriting state and edits webpage content in the webpage view, the cursor in the webpage view is at the bottommost position of the webpage view; however, because the height of the webpage view is smaller than that of the handwriting view, the position of the cursor is not the last edited position, and the user cannot directly continue inputting webpage content following the previous handwritten content. For this case, the note APP in the related art requires the user to wrap lines manually to move the cursor down, and the webpage view component expands the height of the webpage view along with the user's line-wrapping operations. That is, in the related art, after the user exits the handwriting state and enters the editing state of the webpage view, the user needs to wrap lines manually to set the cursor at the last edited position, which is inconvenient for the user.
In view of this, in the method provided by the embodiment of the present application, when the user exits from the handwriting state, the height of the web page view is automatically extended, so that when the user edits the web page content in the web page view, the cursor can be directly displayed at the position of the last edit, thereby continuing to edit the previous edit content, facilitating the use of the user, and improving the user experience.
The process of expanding the height of the webpage view is described below. In this embodiment, the handwriting view and the webpage view are both described as being loaded in whole pages, that is, the heights of the handwriting view and the webpage view are integer multiples of the preset height of one page view.
Fig. 19 is a schematic flow chart of another interface display method according to an embodiment of the present application, as shown in fig. 19, where the method further includes:
S401, when the note APP is in the handwriting state, in response to the user's operation of writing the handwritten content 4 on the screen, the interface module distributes a touch event to the handwriting view component through the scrolling view component.
S402, the handwriting view component processes the touch event, writes handwriting content 4 into the handwriting view (taking handwriting view 3 as an example), and expands the height of the handwriting view 3 along with the writing of the handwriting content 4 by the handwriting view component to obtain handwriting view 4.
In these two steps, for the specific implementation of receiving the touch event and writing the handwritten content 4 into the handwriting view 3, reference may be made to fig. 18, which is not described again here.
The handwriting view obtained after the handwritten content 4 is written and the height is expanded is denoted as the handwriting view 4.
S403, responding to the operation of exiting the handwriting state of the user, and sending an exiting handwriting instruction to the handwriting view component by the interface module, wherein the exiting handwriting instruction is used for indicating to exit the handwriting state.
Optionally, the operation of exiting the handwriting state may include, for example: in the handwriting state, clicking a save control in the note editing interface, such as the control 1005 in fig. 10 (b); or, in the handwriting state, clicking a text entry control in the handwriting interface, such as the control 1004 in fig. 10 (b); or, in the handwriting state, clicking a cancel handwriting option in a floating ball of the note APP; or, in the handwriting state, clicking an insert photo control, and the like. The embodiment of the present application does not limit the operation of exiting the handwriting state.
Optionally, the interface module may send the exit handwriting instruction to the handwriting view component through the scrolling view component, and may also directly send the exit handwriting instruction to the handwriting view component.
S404, responding to the exiting handwriting instruction, determining whether the height of the handwriting view 4 is an integral multiple of the preset height of one page view by the handwriting view component; if yes, go to step S406; if not, step S405 is executed, and step S406 is executed after step S405.
It is determined whether the height of the handwriting view 4 is an integer multiple of a preset one-page view height, that is, whether an incomplete page view exists in the handwriting view 4.
If the height of the handwriting view 4 is not an integer multiple of the preset height of one page view, it indicates that a non-whole page exists in the handwriting view 4; step S405 is executed to expand the height of the handwriting view 4, and step S406 is executed after the expansion is completed. If the height of the handwriting view 4 is an integer multiple of the preset height of one page view, it indicates that no non-whole page exists in the handwriting view 4, and step S406 is executed directly without expanding the handwriting view 4.
S405, the handwriting view component expands the height of the handwriting view 4 to obtain the handwriting view 5; the height of the handwriting view 5 is the minimum value among the heights satisfying the whole-page condition, that is, heights that are greater than the height of the handwriting view 4 and are integer multiples of the preset height of one page view.
In other words, the portion of the handwriting view 4 which is not the whole page is extended to a whole page view, and the handwriting view 5 is obtained.
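As an illustrative aid only, the whole-page expansion rule of steps S404 and S405 amounts to rounding the height up to the nearest integer multiple of the preset one-page view height, as in the following Kotlin sketch; the function and parameter names are hypothetical.

```kotlin
import kotlin.math.ceil

// Sketch: return the smallest whole-page height that is not smaller than the
// current handwriting view height.
fun expandToWholePages(currentHeightPx: Int, pageHeightPx: Int): Int {
    require(pageHeightPx > 0)
    if (currentHeightPx % pageHeightPx == 0) return currentHeightPx  // already whole pages (S404 yes branch)
    val pages = ceil(currentHeightPx.toDouble() / pageHeightPx).toInt()
    return pages * pageHeightPx  // expanded height of handwriting view 5 (S405)
}
```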
S406, the handwriting view component calculates the height of the current handwriting view.
It can be understood that when the height of the handwriting view 4 is an integer multiple of the preset height of one page view, the height of the current handwriting view is the height of the handwriting view 4; when the height of the handwriting view 4 is not an integer multiple of the preset height of one page view, the height of the current handwriting view is the height of the handwriting view 5.
S407, the handwriting view component sends the height of the current handwriting view and the handwriting content 4 to the interface module.
S408, after the interface module receives the height of the current handwriting view, determining whether the height of the current handwriting view is larger than the height of the current webpage view (taking webpage view 1 as an example); if yes, executing step S409; if not, step S413 is performed.
S409, the interface module sends an expansion instruction to the JS interface rendering module, where the expansion instruction carries the height of the current handwriting view and is used to instruct writing a line feed character into the webpage view 1 so as to expand the height of the webpage view 1, the height of the expanded webpage view 1 being equal to the height of the current handwriting view.
S410, in response to the expansion instruction, the JS interface rendering module writes a line feed character into the webpage view 1.
S411, along with the writing of the line feed character, the webpage view component expands the height of the webpage view 1 until the height of the expanded webpage view 1 is equal to the height of the current handwriting view.
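As an illustrative aid only, if the webpage view is an Android WebView, the effect of steps S409 to S411 could be approximated as in the following Kotlin sketch; the editor element id, the use of br elements as line feed characters, and the density conversion are assumptions and do not describe the actual JS interface rendering module.

```kotlin
import android.webkit.WebView

// Sketch: append line feed elements to the document until its height reaches the
// target height, so the webpage view grows to match the current handwriting view.
fun expandWebViewToHeight(webView: WebView, targetHeightPx: Int, density: Float) {
    val targetCssPx = (targetHeightPx / density).toInt()
    val js = """
        (function() {
          var editor = document.getElementById('editor'); // assumed element id
          var guard = 0;                                   // safety bound for the sketch
          while (editor.scrollHeight < $targetCssPx && guard++ < 1000) {
            editor.appendChild(document.createElement('br')); // write a line feed
          }
          return editor.scrollHeight;
        })();
    """.trimIndent()
    webView.evaluateJavascript(js) { /* expansion done; the interface module could be notified here */ }
}
```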
S412, the JS interface rendering module sends an expansion completion message to the interface module, where the expansion completion message is used to notify the interface module that the expansion of the webpage view 1 is completed.
S413, the interface module responds to the expansion completion message and stores the handwriting content 4 to the storage module.
The above description takes the case in which the handwriting view and the webpage view are loaded in whole pages as an example. It can be understood that in some embodiments, the handwriting view and the webpage view may also be loaded not in whole pages but following the positions of the content contained in the views. In this case, the above steps S404 and S405 need not be performed, and the height of the handwriting view 4 may be directly determined as the height of the current handwriting view.
In this implementation, when the user exits the handwriting state, if the height of the webpage view is smaller than that of the handwriting view, line feed characters are automatically rendered into the webpage view, so that the height of the webpage view is automatically expanded. Therefore, when the user edits webpage content in the webpage view, the cursor can be directly displayed at the last edited position, so that the user can continue editing the previous content, which is convenient for the user and improves the user experience.
In addition, when the note APP enters the handwriting state, it may also be determined whether the height of the current handwriting view is smaller than that of the webpage view; if yes, the height of the handwriting view is expanded to the height of the webpage view; if not, no expansion is performed. In this way, when entering the handwriting state, the handwriting editing position can be automatically set at the last edited position without the user manually scrolling the screen, which improves the user experience.
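As an illustrative aid only, the symmetric check performed when entering the handwriting state can be sketched as follows; the function name is hypothetical.

```kotlin
// Sketch: when entering the handwriting state, grow the handwriting view to at
// least the height of the webpage view so editing continues at the last position.
fun handwritingHeightOnEnter(handwritingViewHeightPx: Int, webPageViewHeightPx: Int): Int =
    maxOf(handwritingViewHeightPx, webPageViewHeightPx)
```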
Examples of the gesture processing method provided in the embodiments of the present application are described above in detail. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
The embodiment of the present application may divide the functional modules of the electronic device according to the above method examples, for example, may divide each function into each functional module corresponding to each function, for example, a detection unit, a processing unit, a display unit, or the like, or may integrate two or more functions into one module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
It should be noted that, all relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein.
The electronic device provided in this embodiment is configured to execute the gesture processing method, so that the same effect as that of the implementation method can be achieved.
In case an integrated unit is employed, the electronic device may further comprise a processing module, a storage module and a communication module. The processing module can be used for controlling and managing the actions of the electronic equipment. The memory module may be used to support the electronic device to execute stored program code, data, etc. And the communication module can be used for supporting the communication between the electronic device and other devices.
The processing module may be a processor or a controller. It may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. A processor may also be a combination that performs computing functions, for example, a combination including one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor, and the like. The memory module may be a memory. The communication module may be a radio frequency circuit, a Bluetooth chip, a Wi-Fi chip, or other devices that interact with other electronic devices.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device according to this embodiment may be a device having the structure shown in fig. 3.
The embodiment of the application also provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor, causes the processor to execute the gesture processing method of any of the above embodiments.
The embodiment of the application also provides a computer program product, which when running on a computer, causes the computer to execute the related steps to implement the gesture processing method in the embodiment.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer-executable instructions, and when the device is operated, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the gesture processing method in each method embodiment.
The electronic device, the computer readable storage medium, the computer program product or the chip provided in this embodiment are used to execute the corresponding method provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding method provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A gesture processing method performed by an electronic device, the method comprising:
displaying a first interface of a first application; the first interface comprises a first handwriting view, a first webpage view and a scrolling view, the first handwriting view is covered on the upper layer of the first webpage view, the scrolling view wraps the first handwriting view and the first webpage view, the first handwriting view is used for displaying handwriting content, the first webpage view is used for displaying webpage content, and the scrolling view is used for realizing scrolling of the first handwriting view and the first webpage view;
receiving a first gesture input by a user on the first interface;
determining an operation state of the first application, wherein the operation state is used for representing the operation condition of the function of the first application;
Responding to the first gesture according to the running state;
when the input state of the first application is handwriting state, responding to the input of second handwriting content by a user, expanding a third handwriting view according to the input position of the second handwriting content, and writing the second handwriting content into the expanded third handwriting view to obtain a fourth handwriting view; the third handwriting view is obtained by expanding the first handwriting view and then writing the content;
responding to the operation of the user exiting the handwriting state, and determining the height of the target view according to the height of the fourth handwriting view;
if the height of the target view is larger than that of the second webpage view, rendering a line feed symbol in the second webpage view, and expanding the height of the second webpage view along with rendering to obtain a third webpage view; the height of the third webpage view is equal to the height of the target view; the second webpage view comprises the first webpage view.
2. The method of claim 1, wherein the operational state comprises at least one of an input state and a streaming state, the input state being used to characterize an operational aspect of a handwriting input function of the first application, the streaming state being used to characterize an operational aspect of a play streaming function of the first application;
The input state is one of a handwriting state and a non-handwriting state, and the streaming media state is one of a recording playing state and a non-recording playing state.
3. The method of claim 2, wherein the method further comprises, prior to responding to the first gesture in accordance with the operational state:
identifying a gesture type of the first gesture; the gesture type is one of a sliding gesture, a clicking gesture and a long-press gesture;
the responding to the first gesture according to the running state comprises the following steps:
and responding to the first gesture according to the running state and the gesture type.
4. A method according to claim 3, wherein said responding to said first gesture in accordance with said operational state and said gesture type comprises:
and if the gesture type is the sliding gesture and the input state is the non-handwriting state, scrolling the first interface.
5. A method according to claim 3, wherein said responding to said first gesture in accordance with said operational state and said gesture type comprises:
responding to the first gesture according to the running state, the drop point position of the first gesture and the gesture type; the drop point position is the position of the pressing event corresponding to the first gesture.
6. The method of claim 5, wherein responding to the first gesture according to the operational state, the drop point location of the first gesture, and the gesture type comprises:
if the streaming media state is the recording and playing state, a first handwriting content exists at the drop point position, a corresponding first recording duration exists in the first handwriting content, the gesture type is the click gesture or the long press gesture, the first handwriting content is displayed according to a preset mode, and the voice playing progress is jumped according to the first recording duration;
if the streaming media state is the record playing state, the input state is the non-handwriting state, first webpage content exists at the drop point position, corresponding second record duration exists in the first webpage content, the gesture type is the click gesture, the first handwriting content is displayed according to the preset mode, and the voice playing progress is jumped according to the second record duration.
7. The method of claim 5, wherein responding to the first gesture according to the operational state, the drop point location of the first gesture, and the gesture type comprises:
If the input state is the non-handwriting state, the drop point position is located in a video area, and the gesture type is the long-press gesture, a first control is displayed in the first interface; the video area is used for displaying videos and content related to the videos, and the first control is used for controlling deletion of the videos.
8. The method of claim 5, wherein responding to the first gesture according to the operational state, the drop point location of the first gesture, and the gesture type comprises:
if the input state is the non-handwriting state, the streaming media state is the non-recording playing state, a first picture exists at the drop point position, and the gesture type is the click gesture, displaying an original image corresponding to the first picture in the first interface;
and if the input state is the non-handwriting state, the first picture exists at the drop point position, and the gesture type is the long-press gesture, displaying a second control in the first interface, wherein the second control is used for selecting a processing mode of the first picture.
9. The method of claim 5, wherein responding to the first gesture according to the operational state, the drop point location of the first gesture, and the gesture type comprises:
If the input state is the non-handwriting state, the streaming media state is the recording and playing state, first text content exists at the drop point position, and the gesture type is the long-press gesture, first prompt information is displayed in the first interface and used for prompting to pause recording and playing;
and if the input state is the non-handwriting state, the streaming media state is the non-recording playing state, the first text content exists at the drop point position, and the gesture type is the long-press gesture, a third control is displayed in the first interface, and the third control is used for selecting a processing mode of the first text content.
10. The method of claim 5, wherein responding to the first gesture according to the operational state, the drop point location of the first gesture, and the gesture type comprises:
if the input state is the non-handwriting state, the streaming media state is the non-recording playing state, a preset object exists at the drop point position, the gesture type is the click gesture, the function of the preset object is executed, and an interface corresponding to the function of the preset object is displayed in the first interface; the preset object is one of a preset control or content in a preset format.
11. A method according to claim 3, wherein said responding to said first gesture in accordance with said operational state and said gesture type comprises:
and responding to the first gesture according to the running state, the drop point position of the first gesture, the gesture input mode of the first gesture and the gesture type.
12. The method of claim 11, wherein the gesture input mode includes handwriting input, and the responding to the first gesture according to the running state, the drop point position of the first gesture, the gesture input mode of the first gesture, and the gesture type includes:
if the input state is the non-handwriting state, the gesture input mode is the handwriting pen input mode, the preset object does not exist at the drop point position, and the gesture type is the click gesture, the input state is switched to be the handwriting state, and a track corresponding to the first gesture is used as handwriting content and is displayed on the first interface; the preset object is one of a preset control or content in a preset format.
13. The method of claim 2, wherein responding to the first gesture according to the operational state comprises:
And if the input state is the handwriting state, displaying the track of the first gesture as handwriting content on the first interface.
14. The method of any of claims 1 to 13, wherein the scroll view is located in a preset area in the electronic device screen, the displaying a first interface of a first application comprising:
loading an initialized webpage view and an initialized handwriting view in response to a first operation of a user, wherein the first operation is used for indicating to open the first interface;
acquiring all webpage contents and all handwritten contents; the all webpage contents are all webpage contents for displaying in the preset area, and the all handwritten contents are all handwritten contents for displaying in the preset area;
rendering all the webpage contents to the initialized webpage view, and expanding the initialized webpage view to obtain the second webpage view along with rendering;
writing first to-be-displayed contents into the initialization handwriting view to obtain the first handwriting view, wherein the first to-be-displayed contents are handwriting contents to be displayed in the preset area to form the first interface in all handwriting contents;
Determining the height of the second webpage view;
calculating the height of a second handwriting view according to all the handwriting contents, wherein the second handwriting view is the handwriting view comprising all the handwriting contents;
initializing the scrolling view according to the height of the second webpage view and the height of the second handwriting view.
15. The method of claim 14, wherein the method further comprises:
responding to the operation of a user scrolling interface, and acquiring second content to be displayed, wherein the second content to be displayed is handwriting content to be additionally displayed in the preset area to form a scrolled interface on the basis of the first handwriting view;
expanding the height of the first handwriting view according to the second content to be displayed;
writing the second content to be displayed into the expanded first handwriting view to obtain the third handwriting view;
scrolling the second web page view and the third handwriting view.
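
Continuing the same hypothetical model (this sketch reuses the HandwritingView type and the constants from the sketch after claim 14), claim 15's scroll handling might look roughly as follows; again, this is an assumed reading, not the patent's implementation:

    // Hedged sketch of claim 15: on scroll, extend the handwriting view just enough to cover the newly
    // exposed area, write the additional handwriting into it, then scroll both layers together.
    fun onScroll(
        handwritingView: HandwritingView,
        allHandwriting: List<String>,
        scrollOffsetPx: Int
    ) {
        val visibleBottomPx = scrollOffsetPx + VIEWPORT_HEIGHT_PX
        val strokesNeeded = minOf(visibleBottomPx / ITEM_HEIGHT_PX, allHandwriting.size)
        if (strokesNeeded > handwritingView.strokes.size) {
            // "Second content to be displayed": handwriting not yet written into the first handwriting view.
            val extra = allHandwriting.subList(handwritingView.strokes.size, strokesNeeded)
            // Expand the first handwriting view and write the extra content to obtain the "third handwriting view".
            handwritingView.heightPx = maxOf(handwritingView.heightPx, visibleBottomPx)
            handwritingView.strokes += extra
        }
        // The second webpage view and the (possibly expanded) handwriting view are then both scrolled
        // by scrollOffsetPx so the two layers stay aligned; the actual drawing is outside this sketch.
    }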
16. The method of claim 1, wherein the determining the target view height from the height of the fourth handwriting view comprises:
if the height of the fourth handwriting view is an integral multiple of a preset height, determining the height of the fourth handwriting view as the target view height;
if the height of the fourth handwriting view is not an integral multiple of the preset height, expanding the height of the fourth handwriting view to obtain a fifth handwriting view, and determining the height of the fifth handwriting view as the target view height; the height of the fifth handwriting view is the minimum value among candidate heights, and a candidate height is a height that is larger than the height of the fourth handwriting view and is an integer multiple of the preset height.
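
Claim 16 is essentially a round-up-to-multiple rule. A hedged one-function sketch, with an assumed preset height of 1000 px (the patent does not fix a value here):

    // Claim 16 as a round-up rule; the 1000 px preset height is an assumed example value.
    fun targetViewHeight(fourthViewHeightPx: Int, presetHeightPx: Int = 1000): Int =
        if (fourthViewHeightPx % presetHeightPx == 0) fourthViewHeightPx
        else (fourthViewHeightPx / presetHeightPx + 1) * presetHeightPx

With these assumed values, targetViewHeight(2300) returns 3000, while targetViewHeight(2000) returns 2000, matching the two branches of the claim.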
17. The method of claim 1, wherein the determining the target view height from the height of the fourth handwriting view comprises:
and determining the height of the fourth handwriting view as the target view height.
18. An electronic device, comprising: a processor, a memory, and an interface;
the processor, the memory and the interface cooperate to cause the electronic device to perform the method of any one of claims 1 to 17.
19. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, causes the processor to perform the method of any of claims 1 to 17.
CN202211467876.8A 2022-11-22 2022-11-22 Gesture processing method and electronic equipment Active CN116661635B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202410311135.3A CN118151803A (en) 2022-11-22 2022-11-22 Gesture processing method and electronic equipment
CN202211467876.8A CN116661635B (en) 2022-11-22 2022-11-22 Gesture processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211467876.8A CN116661635B (en) 2022-11-22 2022-11-22 Gesture processing method and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410311135.3A Division CN118151803A (en) 2022-11-22 2022-11-22 Gesture processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116661635A (en) 2023-08-29
CN116661635B (en) 2024-04-05

Family

ID=87710563

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211467876.8A Active CN116661635B (en) 2022-11-22 2022-11-22 Gesture processing method and electronic equipment
CN202410311135.3A Pending CN118151803A (en) 2022-11-22 2022-11-22 Gesture processing method and electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202410311135.3A Pending CN118151803A (en) 2022-11-22 2022-11-22 Gesture processing method and electronic equipment

Country Status (1)

Country Link
CN (2) CN116661635B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1991701A (en) * 2005-12-28 2007-07-04 中兴通讯股份有限公司 Keyboard and hand-write synergic input system and realization method thereof
CN108700996A (en) * 2016-02-23 2018-10-23 迈思慧公司 System and method for multi input management
CN110045819A (en) * 2019-03-01 2019-07-23 华为技术有限公司 A kind of gesture processing method and equipment
CN114783067A (en) * 2022-06-14 2022-07-22 荣耀终端有限公司 Gesture-based recognition method, device and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9619038B2 (en) * 2012-01-23 2017-04-11 Blackberry Limited Electronic device and method of displaying a cover image and an application image from a low power condition

Also Published As

Publication number Publication date
CN118151803A (en) 2024-06-07
CN116661635A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
JP6997734B2 (en) Handwritten keyboard for screen
CN113805743B (en) Method for switching display window and electronic equipment
WO2020211709A1 (en) Method and electronic apparatus for adding annotation
CN110597512B (en) Method for displaying user interface and electronic equipment
WO2021232930A1 (en) Application screen splitting method and apparatus, storage medium and electric device
RU2675153C2 (en) Method for providing feedback in response to user input and terminal implementing same
CN113766064B (en) Schedule processing method and electronic equipment
US10331297B2 (en) Device, method, and graphical user interface for navigating a content hierarchy
CN110119296A (en) Switch method, the relevant apparatus of parent page and subpage frame
CN113132526B (en) Page drawing method and related device
CN114816167B (en) Application icon display method, electronic device and readable storage medium
CN108287918A (en) Method for playing music, device, storage medium based on five application page and electronic equipment
CN113805744A (en) Window display method and electronic equipment
CN115801943B (en) Display method, electronic device and storage medium
CN115185440B (en) Control display method and related equipment
CN114095610B (en) Notification message processing method and computer readable storage medium
CN116661635B (en) Gesture processing method and electronic equipment
CN116048311B (en) Window display method, electronic device, and computer-readable storage medium
US20240211280A1 (en) Quick interface return method and electronic device
WO2022222728A1 (en) Text reading method and device
EP2806364B1 (en) Method and apparatus for managing audio data in electronic device
CN116700554B (en) Information display method, electronic device and readable storage medium
CN116682465A (en) Method for recording content and electronic equipment
CN117519521A (en) Extraction method and electronic equipment
CN118093067A (en) Method for displaying card, electronic device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant