CN112689054A - Assistance method, terminal, and storage medium - Google Patents


Info

Publication number
CN112689054A
Authority
CN
China
Prior art keywords
information
interactive
interactive information
communication
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011537639.5A
Other languages
Chinese (zh)
Inventor
刘滢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Chuanying Information Technology Co Ltd
Original Assignee
Shanghai Chuanying Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Chuanying Information Technology Co Ltd
Priority to CN202011537639.5A
Publication of CN112689054A
Legal status: Pending

Landscapes

  • Telephone Function (AREA)

Abstract

The application discloses an auxiliary method, a terminal and a storage medium, which acquire first interaction information generated in a communication process and process it to obtain second interaction information. On the one hand, the first interaction information and the second interaction information may be any two of text information, voice information and sign language information, and after being acquired, the first interaction information can be processed into second interaction information that is easier to recognize and understand, thereby facilitating communication between disabled persons and between disabled and able-bodied persons. On the other hand, the first interaction information may be environment information in the communication process, and the second interaction information is prompt information for that environment information; that is, the auxiliary method can also convert the acquired environment information into corresponding prompt information, so that disabled persons can obtain information about the external environment in time and take countermeasures promptly when an emergency occurs.

Description

Assistance method, terminal, and storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an assistance method, a terminal, and a storage medium.
Background
Disabled persons face many inconveniences in daily life because of physical limitations. For example, people with hearing impairments or who are deaf-mute have thoughts they cannot express freely; they can communicate using sign language, or by writing with pen and paper, but such methods are quite inconvenient and give both the disabled person and the person communicating with them a poor communication experience. Beyond communication between people, this group also has difficulty interacting with the external environment: for example, a person who cannot hear an alarm sound cannot take timely action to protect their own safety when danger occurs, and a person who cannot hear a ring tone may miss urgent matters on the terminal, such as phone calls, that need to be handled in time. As can be seen from the above, communication is very inconvenient for disabled persons.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an assistance method, a terminal and a storage medium, which can solve the technical problem of inconvenient communication for the disabled.
In order to solve the above technical problem, the present application provides an assistance method, including:
s11, acquiring first interaction information generated in the communication process;
s12, processing the first interaction information to obtain second interaction information, wherein the first interaction information and the second interaction information are any two of character information, voice information and sign language information, and/or the first interaction information is environmental information in the communication process, and the second interaction information is prompt information of the first interaction information.
Optionally, before step S11, the method further includes:
acquiring characteristic information of a current user of the mobile terminal;
and determining whether the current user is a preset target user or not based on the characteristic information, if so, executing the step S11, and if not, stopping executing the steps of the auxiliary method.
Optionally, the environment information includes at least one of: a vehicle horn or bell sound, an earthquake alarm sound, a fire alarm sound, and an emergency prompt message;
step S12 includes: and starting a prompting lamp with a preset prompting mode, and/or starting a preset vibration mode, and/or starting a preset ringing mode to prompt the first interactive information.
Optionally, after step S12, the method further includes:
and receiving editing operation on the second interactive information, and sending the edited second interactive information to an interactive party, wherein the editing operation comprises at least one of deleting, adding, modifying and changing the information type of the second interactive information.
Optionally, after receiving an editing operation on the second interactive information, the auxiliary method further includes:
and associating the first interactive information with the edited second interactive information, and determining the edited second interactive information as the second interactive information when the first interactive information is acquired again.
Optionally, the communication in step S11 includes at least one of: text communication, voice message, voice call, video call, face-to-face communication.
Optionally, step S11 further includes: recognizing emotion information in the communication process;
step S12 further includes: the emotion information is added to the second interaction information.
Optionally, step S11 is followed by:
determining an application program generating the first interactive information, and turning on a prompting lamp, and/or turning on a vibration, and/or turning on a ring according to the application program to prompt the first interactive information.
Optionally, step S12, further includes:
and converting the collected sign language information into corresponding text information and/or voice information.
Optionally, the converting the collected sign language information into corresponding text information includes:
S121, converting the collected sign language video into a feature vector by using a three-dimensional residual network;
and S122, encoding the feature vector by using a bidirectional long short-term memory network to generate the character information corresponding to the sign language video.
Alternatively, the feature vector obtained in step S121 is represented as
$$f = (f_1, f_2, \ldots, f_N) = (\Gamma_\theta(v_1), \Gamma_\theta(v_2), \ldots, \Gamma_\theta(v_N));$$
alternatively, $X = \{x_t\}_{t=1}^{T}$ represents the sign language video, which is segmented by a sliding window to obtain the video segments $v = \{v_t\}_{t=1}^{N}$, T represents that the sign language video has T frame images, $\Gamma_\theta$ represents a three-dimensional residual network feature extractor, N represents the number of video segments obtained by the sliding-window processing, $f_t = \Gamma_\theta(v_t) \in \mathbb{R}^{d}$ represents the feature expression obtained after each sliding-window video segment passes through the three-dimensional residual network, and d represents the dimensionality of the video feature;
the probability that the t-th video segment belongs to the sign language information z in step S122 is represented as $Y = (y_{t,z}) = [y_1, \ldots, y_N]^{\mathsf{T}}$; the sign language information with the highest probability is determined as the character information corresponding to the t-th video segment, and the video segments of the sign language video are traversed to obtain the character information corresponding to the sign language video;
optionally, the output of the bidirectional long short-term memory network is $e = (e_1, \ldots, e_N) = \mathcal{R}(f_1, \ldots, f_N)$, and this output is mapped into the log-probability space of the character information through a fully connected layer to obtain $y_t = W_{fc1} \cdot e_t + b_{fc1}$, where $\mathcal{R}$ represents the bidirectional long short-term memory network.
The present application further provides a mobile terminal, including a memory and a processor, wherein the memory stores an assistance program for the disabled which, when executed by the processor, implements the steps of the method described above.
The present application also provides a computer storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of the method as described above.
As described above, the assistance method of the present application acquires first interaction information such as text information, voice information and sign language information during communication and processes it into second interaction information (which may be text information, voice information or sign language information) that the other party in the communication can more easily recognize and understand, thereby helping disabled persons communicate with each other and with able-bodied persons. Optionally, the assistance method provided by the application may further convert environment information (first interaction information) in the communication process into corresponding prompt information (second interaction information), so that disabled persons can conveniently "communicate" with the external environment and, when an important event or emergency occurs, obtain the corresponding information and take countermeasures in time. In this way, the inconvenience of communication between disabled persons, between disabled and able-bodied persons, and between disabled persons and the external environment can be alleviated, giving disabled persons a better communication experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a first assistance method provided in an embodiment of the present application;
FIG. 4 is a first interaction mode provided by an embodiment of the present application;
FIG. 5 is a second interaction mode provided by an embodiment of the present application;
FIG. 6 is a schematic flow chart of a second assistance method provided in the embodiments of the present application;
fig. 7 is a first conversion manner of the interactive information provided in the embodiment of the present application;
fig. 8 is a second conversion manner of the interactive information provided in the embodiment of the present application;
fig. 9 is a schematic flowchart of a third auxiliary method provided in the embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Optionally, identically named components, features, and elements in different embodiments of the present application may have different meanings, as determined by their interpretation in the specific embodiment or by further context within that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following," as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B and C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C"; likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, there is no strict restriction on the order, and the steps may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times and in different orders, and they may be performed alternately or in turn with other steps or with at least a part of the sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S11 and S12 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S12 first and then S11 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present application may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given taking a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Alternatively, the radio frequency unit 101 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and the mobile terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor that may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. Alternatively, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 1, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Optionally, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present disclosure, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Optionally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB2021 may be connected with other enodebs 2022 through a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME2031 is a control node that handles signaling between the UE201 and the EPC203, providing bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided.
The embodiment of the application provides an auxiliary method, which can be applied to a mobile terminal, and the auxiliary method can help the disabled person to communicate with the disabled person/the healthy person, and can also help the mobile terminal user (including the disabled person and the healthy person) to communicate with the external environment, and the two auxiliary communication modes are respectively called as a first auxiliary mode and a second auxiliary mode in the application. Referring to fig. 3, the auxiliary method provided by the present application includes:
and S11, acquiring first interaction information generated in the communication process.
And S12, processing the first interactive information to obtain second interactive information.
As for the first auxiliary mode, the communication in step S11 includes one or more of text communication, voice message, voice call, video call, face-to-face communication, and the like. Text communication, including short-message communication and text chat through an application on the mobile terminal, generates text information. Voice messages and voice calls include voice messages sent through an application on the mobile terminal, as well as real-time voice communication such as phone calls and voice chats. A video call may include video communication based on an application; in the embodiment of the application, this communication mode mainly generates sign language information (convenient for communicating with the disabled), and the sign language information may be embodied as a video, a dynamic view, or frame-by-frame images.
The two parties in the first auxiliary mode may be referred to as a first user and a second user. On the one hand, the first user may communicate with the second user through a single first terminal, that is, only one terminal is used in the communication, see fig. 4. During the interaction, the terminal processes the information it receives from the first user and the second user; in this case, the information directly generated by the first user and the second user is referred to as first interaction information, and the information processed by the terminal is referred to as second interaction information. For example, a blind person and a deaf-mute person can communicate face to face through the terminal: the blind person speaks, generating voice information (first interaction information), and after processing by the terminal this voice information can be converted into text information or sign language information (second interaction information).
On the other hand, the first user may communicate through a first terminal and the second user through a second terminal, i.e. two terminals are used in the communication, see fig. 5. In this example, in a first case, after receiving the first interaction information generated by the first user, the first terminal may convert it into second interaction information and then send the second interaction information to the second user through the communication between the first terminal and the second terminal. In a second case, after receiving the first interaction information generated by the first user, the first terminal may send it to the second user through the communication between the two terminals, and the second user's terminal converts it into second interaction information after receiving it. These two cases are not real-time communication; for real-time communication (a third case), the first interaction information is converted into the second interaction information in real time during the communication. For example, in a video call between a hearing person and a deaf-mute person, when the hearing person speaks, the mobile terminal can convert the voice information into sign language information in real time during the call, so that the deaf-mute person can understand what the hearing person expresses; similarly, when the deaf-mute person uses sign language, the mobile terminal can convert the sign language information into text information or voice information in real time, so that the hearing person can understand what the deaf-mute person expresses.
Optionally, in the first auxiliary mode, the user may further edit the processed second interaction information, so that the edited second interaction information reflects what the user actually intended to express with the first interaction information. The editing operation includes deleting, adding to, and modifying the second interaction information itself or the information within it, where the information within the second interaction information may be characters (for text information), sound segments (for voice information), or images, video segments and other views (for sign language information). Optionally, the information type of the second interaction information may also be changed, for example into text information, voice information or sign language information. For instance, when the first interaction information is text information and the processed second interaction information is voice information, the second interaction information can be adjusted to sign language information, so that the user can select a more suitable communication mode and obtain a better communication experience.
Optionally, the edited second interaction information may be sent to the other party in the communication; optionally, the first interaction information and the edited second interaction information may be stored in the mobile terminal in association with each other, and when the same first interaction information is acquired again, the associated edited second interaction information is looked up directly and used as the processed second interaction information, saving the computing resources that would otherwise be spent processing the first interaction information again.
For the second auxiliary mode, the communication in step S11 refers to communication between the mobile terminal user and the outside world, and mainly means that the mobile terminal conveys external situations/information to the disabled person so that the disabled person can avoid risks, respond to emergencies, and so on in time. Optionally, the environment information in the second auxiliary mode may include at least one of the following: a vehicle horn or bell sound, an earthquake alarm sound, a fire alarm sound, and an emergency prompt message (which may include short messages sent by the government through base stations, such as on-site information about earthquakes or fires, or information for preventing fraud). Optionally, after acquiring the environment information the mobile terminal may determine its type/content and the corresponding prompt information, for example by querying a preset environment information prompt mode, and prompt the environment information by turning on a prompting lamp with a preset prompting mode, turning on a preset vibration mode, and/or turning on a preset ringing mode. For example, after the mobile terminal detects a fire alarm sound, it may turn on a red breathing light (warning light) and a vibration pattern of three seconds of strong vibration followed by one second of weak vibration to alert a hearing-impaired user. Of course, the preset environment information prompt mode can be set by the user.
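To illustrate the lookup just described, the following is a minimal sketch, assuming a hypothetical platform object that exposes turn_on_light, vibrate and ring hooks; the table entries, type names and prompt patterns are illustrative assumptions rather than the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class PromptSetting:
    light_color: str | None = None        # e.g. a red breathing light
    vibration: list[float] | None = None  # pattern in seconds, e.g. [3.0, 1.0]
    ring_tone: str | None = None

# User-configurable table: environment information type -> preset prompt mode
PROMPT_TABLE = {
    "fire_alarm":       PromptSetting(light_color="red",    vibration=[3.0, 1.0]),
    "earthquake_alarm": PromptSetting(light_color="red",    vibration=[3.0, 3.0]),
    "vehicle_horn":     PromptSetting(light_color="yellow", vibration=[1.0]),
    "emergency_sms":    PromptSetting(light_color="blue",   ring_tone="emergency"),
}

def prompt_environment_info(info_type: str, platform) -> None:
    """Look up the preset prompt mode for the detected environment information
    and trigger the corresponding light, vibration and/or ring on the terminal."""
    setting = PROMPT_TABLE.get(info_type)
    if setting is None:
        return  # unknown environment information: no prompt configured
    if setting.light_color:
        platform.turn_on_light(setting.light_color, mode="breathing")
    if setting.vibration:
        platform.vibrate(setting.vibration)
    if setting.ring_tone:
        platform.ring(setting.ring_tone)
```

Keeping the table editable mirrors the statement above that the preset environment information prompt mode can be set by the user.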
The auxiliary method provided by the application acquires first interaction information such as text information, voice information and sign language information during communication and processes it into second interaction information (which may be text information, voice information or sign language information) that the other party in the communication can more easily recognize and understand, thereby helping disabled persons communicate with each other and with able-bodied persons. Optionally, the auxiliary method may further convert environment information (first interaction information) in the communication process into corresponding prompt information (second interaction information), so that disabled persons can conveniently "communicate" with the external environment and, when an important event or emergency occurs, obtain the corresponding information and take countermeasures in time. In this way, the inconvenience of communication between disabled persons, between disabled and able-bodied persons, and between disabled persons and the external environment can be alleviated, giving disabled persons a better communication experience.
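As a recap of the two-step flow, the following schematic sketch registers converters per (source type, target type) pair and applies them in step S12; the registry pattern and all names are assumptions made for illustration only, not the patent's implementation.

```python
from typing import Callable

# (source type, target type) -> converter, e.g. ("voice", "text") for speech
# recognition or ("environment", "prompt") for the second auxiliary mode
CONVERTERS: dict[tuple[str, str], Callable] = {}

def register(src: str, dst: str):
    """Decorator that registers a converter for one information-type pair."""
    def wrap(fn: Callable) -> Callable:
        CONVERTERS[(src, dst)] = fn
        return fn
    return wrap

def assist(first_info, src_type: str, dst_type: str):
    """S11: first_info has already been acquired from the communication process.
    S12: process it into second interaction information of the target type."""
    converter = CONVERTERS.get((src_type, dst_type))
    if converter is None:
        raise ValueError(f"no converter registered for {src_type} -> {dst_type}")
    return converter(first_info)
```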
Further possible embodiments based on the method described above are described below.
The embodiment of the present application further provides an auxiliary method that can help disabled persons communicate with other disabled persons or able-bodied persons and, of course, can also help disabled persons interact with the external environment. The communication described in this embodiment mainly includes one or more of text communication, voice message, voice call, video call, face-to-face communication, and the like. Text communication, including short-message communication and text chat through an application on the mobile terminal, generates text information. Voice messages and voice calls include voice messages sent through an application on the mobile terminal, as well as real-time voice communication such as phone calls and voice chats. A video call may include video communication based on an application; in the embodiment of the application, this communication mode mainly generates sign language information (convenient for communicating with the disabled), and the sign language information may be embodied as a video, a dynamic view, or frame-by-frame images.
In the embodiment of the present application, the two communicating parties are referred to as a first user and a second user. On the one hand, the first user may communicate with the second user through a single first terminal, that is, only one terminal is used in the communication, see fig. 4. During the interaction, the terminal processes the information it receives from the first user and the second user; in this case, the information directly generated by the first user and the second user is referred to as first interaction information, and the information processed by the terminal is referred to as second interaction information. For example, a blind person and a deaf-mute person can communicate face to face through the terminal: the blind person speaks, generating voice information (first interaction information), and after processing by the terminal this voice information can be converted into text information or sign language information (second interaction information).
On the other hand, the first user may communicate through a first terminal and the second user through a second terminal, i.e. two terminals are used in the communication, see fig. 5. In this example, in a first case, after receiving the first interaction information generated by the first user, the first terminal may convert it into second interaction information and then send the second interaction information to the second user through the communication between the first terminal and the second terminal. In a second case, after receiving the first interaction information generated by the first user, the first terminal may send it to the second user through the communication between the two terminals, and the second user's terminal converts it into second interaction information after receiving it. These two cases are not real-time communication; for real-time communication (a third case), the first interaction information is converted into the second interaction information in real time as the communication occurs. For example, in a video call between a hearing person and a deaf-mute person, when the hearing person speaks, the mobile terminal can convert the voice information into sign language information in real time during the call, so that the deaf-mute person can understand what the hearing person expresses; at the same time, when the deaf-mute person uses sign language, the mobile terminal can convert the sign language information into text information or voice information in real time, so that the hearing person can understand what the deaf-mute person expresses.
The embodiment of the present application mainly introduces the auxiliary method using the second situation, in which two mobile terminals are used for communication, as an example: the first user communicates through the first terminal and the second user through the second terminal; after the first terminal receives the first interaction information generated by the first user, it converts it into second interaction information and then sends the second interaction information to the second user through the communication between the first terminal and the second terminal.
Referring to fig. 6, the method may be applied to a mobile terminal, and the method includes the steps of:
s101, obtaining characteristic information of a current user of the mobile terminal.
S102, determining whether the current user is a preset target user or not based on the characteristic information, if so, executing a step S103, and if not, stopping executing the steps of the auxiliary method.
The feature information includes behavior features of the mobile terminal user in using the mobile terminal, including gait data (the mobile terminal may acquire gait data of its user based on a position sensor, a speed sensor, an acceleration sensor, and the like), usage-habit data (which may include data such as the applications used, the duration of use, and the time periods in which the terminal is used), and information such as the unlocking password, fingerprint, and facial features used to unlock the mobile terminal. Different users of the mobile terminal have different feature information, so the current user of the mobile terminal can be identified based on the feature information. When the current user is a disabled person, or a user who communicates closely with disabled persons (a preset target user), step S103 may be performed. If the current user of the mobile terminal is not the preset target user, i.e. neither a disabled person nor someone who communicates with disabled persons, the subsequent steps of the auxiliary method of this embodiment may be stopped to save the computing resources of the mobile terminal. The following description takes the case where the current user of the mobile terminal is the preset target user as an example.
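A minimal sketch of the check in steps S101/S102 is given below, assuming the collected behavior features have already been aggregated into numeric vectors and that a cosine-similarity threshold decides the match; the threshold value and function names are illustrative assumptions.

```python
import numpy as np

def is_target_user(current_features: np.ndarray,
                   target_features: np.ndarray,
                   threshold: float = 0.8) -> bool:
    """S102: compare the current user's feature vector (gait, usage habits,
    unlock biometrics, ...) with the preset target user's profile and return
    True when the similarity exceeds the threshold."""
    cos = float(np.dot(current_features, target_features) /
                (np.linalg.norm(current_features) * np.linalg.norm(target_features) + 1e-9))
    return cos >= threshold

# If this returns True, execution continues with S103; otherwise the remaining
# steps of the auxiliary method are skipped to save computing resources.
```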
S103, acquiring first interaction information generated in the communication process, and identifying emotion information in the communication process.
The first interactive information and the second interactive information introduced later may be text information, voice information or/and sign language information corresponding to different communication modes such as text communication, voice message, voice communication, video communication and face-to-face communication.
Besides information, the communication process can also convey the emotions of the two parties. Emotion is a general term for a series of subjective cognitive experiences and refers to the psychological and physiological state that a user produces through various senses, thoughts and behaviors. Emotion reflects the psychological state of the user during voice interaction; accordingly, to provide a better communication and interaction experience, the emotion information of both interacting parties needs to be conveyed in addition to the basic information. Different communication modes produce different emotion information: for example, words that express emotion, such as "haha" or "in a bad mood", appearing in text communication can be determined as the emotion information of the text communication; in the two modes of voice message and voice call, the speaker's emotion can be determined from tone and intonation characteristics; and for sign language videos/images, emotion information can be obtained from the expressions in the portrait.
Emotion recognition can be implemented with an emotion recognition model, i.e. a model trained in advance with a deep learning algorithm such as a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN). Optionally, the speech can be converted into a spectrogram, turning speech recognition into image recognition, and the spectrogram is then recognized directly by the emotion recognition model, avoiding the complicated intermediate speech-feature-extraction steps of conventional speech recognition. The training algorithm of the model is not limited in the embodiment of the application; any deep learning algorithm capable of image recognition can be applied.
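A simplified sketch of this spectrogram-based route is shown below: speech is converted to a mel-spectrogram and classified with a small CNN, so emotion recognition becomes image recognition. The use of torchaudio, the network shape and the label set are assumptions for illustration, not the patent's specific model.

```python
import torch
import torch.nn as nn
import torchaudio

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # illustrative label set

# Converts a waveform (channels, samples) sampled at 16 kHz into a mel-spectrogram
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time) -> emotion logits
        return self.classifier(self.features(spec).flatten(1))

def recognize_emotion(waveform: torch.Tensor, model: EmotionCNN) -> str:
    """Treat the spectrogram as an image and classify a single utterance."""
    spec = mel(waveform).unsqueeze(1)          # (channels, 1, n_mels, time)
    with torch.no_grad():
        idx = model(spec).argmax(dim=-1)[0].item()
    return EMOTIONS[idx]
```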
S104, determining an application program generating the first interaction information, and turning on a prompting lamp with a preset prompting mode, and/or a preset vibration mode, and/or a preset ringing mode according to the application program to prompt the first interaction information.
In the embodiment of the application, the first interaction information is generated by application programs on the mobile terminal (including various installed applications as well as system applications such as short messages and phone calls), and the application program generating the first interaction information can be determined from the first interaction information. Different applications have different degrees of urgency, so prompting lamps with different prompting modes (different colors/display modes), and/or different vibration modes, and/or different ringing modes on the mobile terminal can be turned on according to the application to prompt the user. Optionally, the user may preset the prompting mode (prompting-lamp mode, vibration mode, ringing mode) for each application, and when interaction information of the corresponding application is acquired, the user is prompted according to the preset prompting mode.
And S105, processing the first interactive information to obtain second interactive information, and adding the emotion information into the second interactive information.
As described above, the first interaction information and the second interaction information may be any two of text information, voice information and sign language information. In the embodiment of the present application, the first interaction information and the second interaction information may be translated in the manner shown in fig. 7 or fig. 8; optionally, for how to convert a sign language video into text information, reference may be made to the description below. Optionally, in this embodiment, the emotion information recognized during communication also needs to be added to the second interaction information: for example, the corresponding tone is added to voice information, an interjection such as "haha" is added to text information, and an expression corresponding to the emotion information is added to sign language information (video/image).
In step S105 of this embodiment, the first interaction information needs to be processed (translated) into the second interaction information. This corresponds to the six cases shown in fig. 7: for example, text information can be translated into voice information (case 1 in fig. 7), and sign language information can be translated into text information (case 3 in fig. 7). Alternatively, the first interaction information may be processed (translated) into the second interaction information in the manner shown in fig. 8, i.e. conversion between voice information and sign language information is performed with text information as an intermediary: translating sign language information into voice information first translates the sign language information into text information (case 9 in fig. 8) and then translates that text information into voice information (case 7 in fig. 8).
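The routing of fig. 8, where voice and sign language are converted to each other via text as the intermediary, can be sketched as follows; the individual converters are placeholders that a real implementation would back with trained speech-recognition, speech-synthesis and sign-language models.

```python
def speech_to_text(audio):  # speech recognition
    raise NotImplementedError("placeholder for a trained ASR model")

def text_to_speech(text):   # speech synthesis
    raise NotImplementedError("placeholder for a TTS model")

def sign_to_text(video):    # continuous sign language recognition (see S1051/S1052 below)
    raise NotImplementedError("placeholder for a sign-language recognition model")

def text_to_sign(text):     # sign-language video/animation synthesis
    raise NotImplementedError("placeholder for a sign-language synthesis model")

TO_TEXT   = {"text": lambda x: x, "voice": speech_to_text, "sign": sign_to_text}
FROM_TEXT = {"text": lambda x: x, "voice": text_to_speech, "sign": text_to_sign}

def convert(info, src: str, dst: str):
    """Translate first interaction information of type src into second
    interaction information of type dst, passing through text when needed
    (e.g. sign -> text -> voice, as in fig. 8)."""
    if src == dst:
        return info
    return FROM_TEXT[dst](TO_TEXT[src](info))
```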
S106, receiving editing operation on the second interactive information, and sending the edited second interactive information to an interactive party.
In the embodiment of the application, two terminals are used for communication, after the first terminal receives the first interactive information generated by the first user, the first interactive information can be converted into the second interactive information, and then the second interactive information is sent to the second user through the communication between the first terminal and the second terminal. It can be understood that the second interactive information obtained by the processing may have a certain difference from the actual meaning of the first interactive information, and at this time, the first user may edit the second interactive information again, and send the edited second interactive information to the interactive party after editing. The editing operation in the embodiment of the present application includes deleting, adding, and modifying the second interactive information itself and information in the second interactive information, where the information in the second interactive information may be characters (corresponding to text information), sound segments (corresponding to voice information), and views of images, video segments, and the like (corresponding to sign language information). Optionally, the information type (including text information, voice information, and sign language information) of the second interactive information may also be modified, for example, when the first interactive information is text information and the processed second interactive information is voice information, the second interactive information may be adjusted to be sign language information, so that the user may select a better communication mode to obtain better communication experience.
And S107, associating the first interactive information with the edited second interactive information, and determining the edited second interactive information as the second interactive information when the first interactive information is acquired again.
The mobile terminal stores all associated pairs of first interactive information and edited second interactive information. When recorded first interactive information is obtained again, the associated edited information can be directly determined as the second interactive information, without processing the first interactive information again, because processing it again would likely produce second interactive information that needs the same correction. This is effectively correction feedback: the expression habits of the user are recorded, and the second interactive information corresponding to the first interactive information is determined directly from the history record, which helps to improve the user experience.
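A minimal sketch of this correction-feedback behaviour (step S107) is shown below, assuming the first interactive information can be reduced to a hashable key such as the recognised text; the class and method names are illustrative only.

```python
class CorrectionCache:
    """Remembers user-edited second interactive information keyed by the first."""

    def __init__(self):
        self._edits = {}                      # key -> user-edited second info

    def remember(self, first_info_key, edited_second_info):
        # Called after the user edits and sends the second interactive information.
        self._edits[first_info_key] = edited_second_info

    def lookup(self, first_info_key):
        # Returns the previously edited result, or None if re-processing is needed.
        return self._edits.get(first_info_key)


cache = CorrectionCache()
cache.remember("thank you", "thanks a lot!")      # the user's preferred phrasing
assert cache.lookup("thank you") == "thanks a lot!"
```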
A video containing T frames, x = {x_t ∈ R^(h×w×c) | t = 1, ..., T}, is mapped to a label sequence containing L words, s = {s_i ∈ V | i = 1, ..., L}, where, optionally, h × w is the size of the image x_t and c is the channel dimension of the input data (c = 3 for RGB video). The mathematical form of continuous sign language recognition is based on Bayesian decision theory: the recognition result ŝ is the decision with the maximum probability and, denoting the set of all possible decoding sequences by s*, the recognition result can be expressed as follows:

ŝ = argmax_{s ∈ s*} p(s | x)
Optionally, in some embodiments, when the first interactive information is a sign language video (containing sign language information) and the second interactive information is text information, the processing of the first interactive information (sign language video) to obtain the second interactive information (text information) in step S105 may be implemented by:
s1051, converting the collected sign language video into a feature vector by using a three-dimensional residual error network.
Given a sign language video containing T frame images, x = {x_i | i = 1, ..., T}, where x_i is the i-th frame image of the video, the video is divided using a sliding window with a window length of 8 and a step length of 4, yielding video segments with 50% overlap, recorded as v = {v_t | t = 1, ..., N}, where N denotes the number of video segments resulting from the sliding-window process.

Γ_θ denotes the three-dimensional residual network feature extractor. Each video segment v_t obtained by the sliding window is passed through the three-dimensional residual network to obtain a feature expression f_t = Γ_θ(v_t) ∈ R^d, where, optionally, d denotes the dimension of the video feature. The video features obtained by the three-dimensional residual network can be represented as F = {f_t = Γ_θ(v_t) | t = 1, ..., N}.
due to the consideration of GPU video memory and computational complexity, the 512-dimensional response of the pooling layer is extracted as the feature expression of the video segment by using an 18-layer three-dimensional residual convolutional neural network.
And S1052, encoding the feature vectors by using a bidirectional long short-term memory network to generate the text information corresponding to the sign language video.
The output of the bidirectional long short-term memory network is e_t = R(f_t), where R denotes the bidirectional long short-term memory network. The output result is mapped to the log-probability space of the text information through a fully connected layer, giving y_t = W_fc1 · e_t + b_fc1.

For a sign language video containing N video segments, the output category probability distribution given by the bidirectional long short-term memory network can be expressed as Y = (Y_{t,l}) = [y_1, ..., y_N]^T, where Y_{t,l} is the probability that the t-th video segment belongs to the sign language word l. The sign language word with the highest probability is determined as the text information of the t-th video segment, and the text information of the whole video is obtained by traversing all the video segments of the video.
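A minimal sketch of step S1052 is shown below, taking the (N, 512) segment features from the previous step as input; the hidden size and vocabulary size are illustrative values, not parameters fixed by the application.

```python
import torch
import torch.nn as nn


class SignLanguageDecoder(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, vocab_size=1000):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, vocab_size)   # y_t = W_fc1 · e_t + b_fc1

    def forward(self, features):                      # features: (B, N, feat_dim)
        e, _ = self.bilstm(features)                  # e: (B, N, 2*hidden)
        y = self.fc(e)                                 # per-segment word scores
        return y.log_softmax(dim=-1)                  # log-probability space


decoder = SignLanguageDecoder()
features = torch.rand(1, 7, 512)                      # 7 segments from one video
log_probs = decoder(features)                         # (1, 7, 1000)
words = log_probs.argmax(dim=-1)                      # most probable word per segment
```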
An embodiment of the present application further provides an assisting method that can help disabled persons and able-bodied persons obtain information about the external environment in time. Referring to fig. 9, the assisting method includes:
s201, acquiring environment information in the communication process.
S202, processing the first interactive information (the environment information) to obtain prompt information corresponding to the first interactive information.
The communication here refers to communication between the mobile terminal user and the outside, and mainly means that the mobile terminal conveys situations/information occurring outside to the disabled user, so that the disabled user can take actions such as risk avoidance and emergency response in time. The environment information herein includes at least one of: a vehicle horn or bell, an earthquake alarm sound, a fire alarm sound, and an emergency prompt message (including prompt messages broadcast by the government through base stations to a group, where the prompt message may contain on-site information such as an earthquake or fire, and may also be anti-fraud information or the like).
After acquiring the environment information, the mobile terminal may determine the type/content of the environment information, query a preset prompting mode for that environment information, and determine the corresponding prompt information, for example, turning on a prompting lamp in a preset prompting mode, and/or starting a preset vibration mode, and/or starting a preset ringing mode to prompt the environment information. For example, after acquiring a fire alarm sound, the mobile terminal may turn on a red breathing lamp (a warning lamp) and vibrate strongly for three seconds and weakly for one second to alert the user. Of course, the preset prompting mode for the environment information can be set by the user.
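The lookup described above can be sketched as a simple table from detected environment events to preset prompt settings; the event names and prompt values below are illustrative placeholders, and in practice they would be user-configurable presets.

```python
PROMPT_PRESETS = {
    "fire_alarm":   {"lamp": "red_breathing",  "vibration": [(3.0, "strong"), (1.0, "weak")], "ring": None},
    "quake_alarm":  {"lamp": "red_flash",      "vibration": [(5.0, "strong")],                "ring": None},
    "vehicle_horn": {"lamp": "yellow_flash",   "vibration": [(1.0, "strong")],                "ring": None},
}


def prompt_for(event: str):
    """Return the preset prompt for a detected environment event, if any."""
    preset = PROMPT_PRESETS.get(event)
    if preset is None:
        return None           # unknown event: fall back to default handling
    # A real implementation would drive the lamp, vibration motor and ringer here.
    return preset


print(prompt_for("fire_alarm"))
```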
The present application further provides a mobile terminal device, where the terminal device includes a memory and a processor, and the memory stores an auxiliary method program, and the auxiliary method program implements the steps of the auxiliary method in any of the above embodiments when executed by the processor.
The present application further provides a computer readable storage medium having stored thereon an auxiliary method program, which when executed by a processor, implements the steps of the auxiliary method in any of the above embodiments.
In the embodiments of the mobile terminal and the computer-readable storage medium provided in the present application, all technical features of the embodiments of the auxiliary method are included, and the expanding and explaining contents of the specification are basically the same as those of the embodiments of the method, and are not described herein again.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the present application, the same or similar term concepts, technical solutions, and/or application scenario descriptions are generally described in detail only at their first occurrence; for brevity, the detailed description is not repeated when they appear again. For term concepts, technical solutions, and/or application scenario descriptions that are not described in detail later, reference may be made to the related detailed descriptions given earlier.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the embodiments are described; however, as long as there is no contradiction between the combined technical features, such combinations should be considered as falling within the scope described in the present application.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. An assistance method, comprising:
s11, acquiring first interaction information generated in the communication process;
s12, processing the first interaction information to obtain second interaction information, wherein the first interaction information and the second interaction information are any two of character information, voice information and sign language information, and/or the first interaction information is environment information in the communication process, and the second interaction information is prompt information of the first interaction information.
2. The assistance method according to claim 1, wherein the assistance method is applied to a mobile terminal, and the step S11 is preceded by:
acquiring characteristic information of a current user of the mobile terminal;
and determining whether the current user is a preset target user or not based on the characteristic information, if so, executing the step S11, and if not, stopping executing the steps of the auxiliary method.
3. The assistance method according to claim 1, wherein the environmental information includes at least one of: a vehicle horn or bell, an earthquake alarm sound, a fire alarm sound and an emergency prompt message;
the step S12 includes: and starting a prompting lamp with a preset prompting mode, and/or starting a preset vibration mode, and/or starting a preset ringing mode to prompt the first interactive information.
4. The assistance method according to claim 1, wherein after the step S12, the assistance method further includes:
and receiving editing operation on the second interactive information, and sending the edited second interactive information to an interactive party, wherein the editing operation comprises at least one of deleting, adding, modifying and changing the information type of the second interactive information.
5. The auxiliary method of claim 4, wherein after receiving the editing operation on the second interactive information, the auxiliary method further comprises:
and associating the first interactive information with the edited second interactive information, and determining the edited second interactive information as the second interactive information when the first interactive information is acquired again.
6. Auxiliary method according to any one of claims 1 to 5, characterized in that said step S11 further comprises: recognizing emotion information in the communication process;
step S12 further includes: adding the emotion information to the second interaction information.
7. Auxiliary method according to any one of claims 1 to 5, characterized in that said step S11 is followed by further comprising:
determining an application program generating the first interactive information, and starting a prompting lamp, and/or starting a vibration, and/or starting a ring to prompt the first interactive information according to the application program.
8. Auxiliary method according to any one of claims 1 to 5, characterized in that said step S12, further comprises:
and converting the collected sign language information into corresponding text information and/or voice information.
9. A terminal, characterized in that the terminal comprises: memory, processor, wherein the memory has stored thereon an auxiliary program which, when executed by the processor, carries out the steps of the auxiliary method according to any one of claims 1 to 8.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the auxiliary method according to any one of claims 1 to 8.
CN202011537639.5A 2020-12-23 2020-12-23 Assistance method, terminal, and storage medium Pending CN112689054A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011537639.5A CN112689054A (en) 2020-12-23 2020-12-23 Assistance method, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011537639.5A CN112689054A (en) 2020-12-23 2020-12-23 Assistance method, terminal, and storage medium

Publications (1)

Publication Number Publication Date
CN112689054A true CN112689054A (en) 2021-04-20

Family

ID=75451077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011537639.5A Pending CN112689054A (en) 2020-12-23 2020-12-23 Assistance method, terminal, and storage medium

Country Status (1)

Country Link
CN (1) CN112689054A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780013A (en) * 2021-07-30 2021-12-10 阿里巴巴(中国)有限公司 Translation method, translation equipment and readable medium
CN115457981A (en) * 2022-09-05 2022-12-09 安徽康佳电子有限公司 Method for facilitating hearing-impaired person to watch video and television based on method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination