CN110634478A - Method and apparatus for processing speech signal

Method and apparatus for processing speech signal

Info

Publication number
CN110634478A
Authority
CN
China
Prior art keywords
information
voice
user
signal
instruction
Prior art date
Legal status
Pending
Application number
CN201810660145.2A
Other languages
Chinese (zh)
Inventor
刘昊骋
张西旺
李明路
Current Assignee
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810660145.2A
Publication of CN110634478A

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 2201/00: Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/40: Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H04M 2250/00: Details of telephonic subscriber devices
    • H04M 2250/74: Details of telephonic subscriber devices with voice recognition means

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a method and an apparatus for processing a voice signal. One embodiment of the method comprises: acquiring user voice feature information from a received voice signal; in response to the user voice feature information matching user voice information in a user voice library, acquiring text information of the voice signal; performing semantic analysis on the text information, acquiring an operation instruction and operation content corresponding to the text information, and constructing operation information according to the operation instruction and the operation content; displaying the operation information and prompt information; and in response to a received voice confirmation signal corresponding to the prompt information matching the user voice information, operating the operation information according to the voice confirmation signal. This embodiment improves both the efficiency and the security of information processing.

Description

Method and apparatus for processing speech signal
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a method and a device for processing a voice signal.
Background
With the development of science and technology, intelligent terminals have acquired increasingly powerful data processing capabilities. Tasks that a user would otherwise need to handle on site can be completed through applications installed on an intelligent terminal, greatly improving the user's working efficiency.
Speech is the most basic way for humans to communicate. Voice recognition technology is now being applied to intelligent terminals, and realizing control of the functions of such intelligent terminals through natural speech is a trend of future development. When people use intelligent terminals, they are no longer limited to the original basic operation functions, but pursue more intelligent, humanized and convenient functionality.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing a voice signal.
In a first aspect, an embodiment of the present application provides a method for processing a speech signal, the method including: acquiring user voice feature information from a received voice signal, the user voice feature information being used to represent the identity of the user corresponding to the voice signal; in response to the user voice feature information matching user voice information in a user voice library, acquiring text information of the voice signal; performing semantic analysis on the text information, acquiring an operation instruction and operation content corresponding to the text information, and constructing operation information according to the operation instruction and the operation content; displaying the operation information and prompt information, the prompt information being used to instruct a user to operate the operation information through a voice confirmation signal; and in response to a received voice confirmation signal corresponding to the prompt information matching the user voice information, operating the operation information according to the voice confirmation signal.
In some embodiments, the user speech library is constructed by: in response to detecting the user information input request, displaying identity prompt information, wherein the identity prompt information is used for indicating a user to input the identity information of the user; responding to the received user identity information corresponding to the identity prompt information, and displaying voice input prompt information, wherein the voice input prompt information is used for indicating a user to input a specified voice signal; in response to receiving a specified voice signal corresponding to the voice input prompt message, acquiring reference user voice feature information from the specified voice signal, and combining the specified voice signal and the reference user voice feature information to serve as user voice information; and establishing a corresponding relation between the user voice information and the identity information, and establishing a user voice library according to the corresponding relation.
In some embodiments, the performing semantic analysis on the text information to obtain the operation instruction and the operation content corresponding to the text information includes: performing semantic analysis on the text information to obtain semantic information corresponding to the text information, and dividing the semantic information into at least one entry; responding to the condition that the entry in the at least one entry is successfully matched with the reference operation instruction in the operation instruction set, and taking the reference operation instruction which is successfully matched as the operation instruction of the entry; and determining operation content from the at least one entry according to the operation instruction.
In some embodiments, the determining the operation content from the at least one entry according to the operation instruction includes: and screening the entries in the at least one entry according to the instruction content field to obtain operation content.
In some embodiments, the operation instruction includes an instruction name field, and the constructing operation information according to the operation instruction and the operation content includes: and filling the name and the operation content of the operation instruction into the instruction name field and the instruction content field respectively to obtain operation information.
In some embodiments, the operating the operation information according to the voice confirmation signal includes: in response to the text information corresponding to the voice confirmation signal being "confirm", executing the operation information; and in response to the text information corresponding to the voice confirmation signal being "cancel", deleting the operation information.
In some embodiments, the above method further comprises: and constructing a voice operation record according to the operation information, the voice signal and the text information.
In a second aspect, an embodiment of the present application provides an apparatus for processing a speech signal, the apparatus including: a user voice feature information acquisition unit configured to acquire user voice feature information from a received voice signal, the user voice feature information being used to represent the identity of the user corresponding to the voice signal; a text information acquisition unit configured to acquire text information of the voice signal in response to the user voice feature information matching user voice information in a user voice library; an operation information acquisition unit configured to perform semantic analysis on the text information, acquire an operation instruction and operation content corresponding to the text information, and construct operation information according to the operation instruction and the operation content; a display unit configured to display the operation information and prompt information, the prompt information being used to instruct a user to operate the operation information through a voice confirmation signal; and an operation unit configured to, in response to a received voice confirmation signal corresponding to the prompt information matching the user voice information, operate the operation information according to the voice confirmation signal.
In some embodiments, the apparatus includes a user speech library construction unit, and the user speech library construction unit includes: an identity prompt information display subunit, configured to display identity prompt information for instructing a user to input identity information of the user in response to detecting the user information input request; a voice input prompt information display subunit, configured to display voice input prompt information in response to receiving user identity information corresponding to the identity prompt information, the voice input prompt information being used to instruct a user to input a specified voice signal; a user voice information obtaining subunit, in response to receiving the specified voice signal corresponding to the voice input prompt information, configured to obtain reference user voice feature information from the specified voice signal, and combine the specified voice signal and the reference user voice feature information as user voice information; and the user voice library construction subunit is configured to establish a corresponding relation between the user voice information and the identity information, and construct a user voice library according to the corresponding relation.
In some embodiments, the operation information acquiring unit includes: the entry acquisition subunit is configured to perform semantic analysis on the text information to obtain semantic information corresponding to the text information, and divide the semantic information into at least one entry; the operation instruction acquisition subunit is used for responding to the successful matching of the entry in the at least one entry and the reference operation instruction in the operation instruction set, and is configured to take the reference operation instruction which is successfully matched as the operation instruction of the entry; and the operation content acquisition sub-unit is configured to determine the operation content from the at least one entry according to the operation instruction.
In some embodiments, the operation instruction includes an instruction content field, and the operation content obtaining subunit includes: and the operation content acquisition module is configured to screen entries in the at least one entry according to the instruction content field to obtain operation content.
In some embodiments, the operation instruction includes an instruction name field, and the operation information obtaining unit includes: and the operation information acquisition subunit is configured to fill the name and the operation content of the operation instruction into the instruction name field and the instruction content field respectively to obtain operation information.
In some embodiments, the operation unit includes: an operation execution subunit configured to execute the operation information in response to the text information corresponding to the voice confirmation signal being "confirm"; and an operation deletion subunit configured to delete the operation information in response to the text information corresponding to the voice confirmation signal being "cancel".
In some embodiments, the above apparatus further comprises: and the recording unit is configured to construct a voice operation record according to the operation information, the voice signal and the text information.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a memory on which one or more programs are stored; and a microphone that converts sound signals into electrical signals; where the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method for processing a speech signal of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program is used to implement the method for processing a speech signal according to the first aspect when the computer program is executed by a processor.
According to the method and the apparatus for processing a voice signal provided by the embodiments of the present application, the received voice signal is first processed and, after the voice signal is confirmed to match the user voice information, corresponding operation information is obtained from it; the operation information is executed only after the user confirms it by voice. By processing voice signals in this way, the method and the apparatus improve both the efficiency and the security of information processing.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for processing a speech signal according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for processing a speech signal according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for processing a speech signal according to the present application;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for processing a speech signal according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which the method for processing a speech signal or the apparatus for processing a speech signal of the embodiments of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as an audio acquisition application, an audio processing application, a voice recognition application, a voice analysis application, a voice control application, and the like.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having microphones and supporting voice control, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server that provides various services, such as a server that processes a voice signal transmitted from the terminal apparatuses 101, 102, 103. The server can analyze and process the received data such as the voice signal and the like, and feed back the processing result to the terminal equipment.
It should be noted that the method for processing the voice signal provided in the embodiment of the present application is generally executed by the terminal devices 101, 102, 103, and accordingly, the apparatus for processing the voice signal is generally disposed in the terminal devices 101, 102, 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing a speech signal according to the present application is shown. The method for processing a speech signal comprises the steps of:
step 201, obtaining user voice characteristic information from the received voice signal.
In the present embodiment, the execution subject of the method for processing a voice signal (e.g., the terminal apparatuses 101, 102, 103 shown in fig. 1) may receive a voice signal uttered by a user through a wired connection manner or a wireless connection manner. It should be noted that the wireless connection means may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra Wideband) connection, and other wireless connection means now known or developed in the future.
In practice, the application installed on the terminal device usually integrates a plurality of operation menus, and each operation menu is subdivided into a plurality of operation options. The screen of the terminal device is usually small, so when the user operates the operation options on the operation interface of the application, an operation error or the like is easy to occur. In addition, after the application is started, the existing method generally cannot supervise the operator of the application and the control behavior of the operator, so that certain safety risks exist in the operation process of the application.
For this reason, the terminal devices 101, 102, 103 of the present application may first acquire user voice feature information from the voice signal after receiving the voice signal of the user. The terminal devices 101, 102, and 103 may acquire the user voice feature information from the voice signal by using a method such as voice recognition. The user voice characteristic information can be used for representing the user identity corresponding to the voice signal. For example, the user voice feature information may include gender information (e.g., male voice, female voice), dialect information, frequency information, etc. of the user, as the case may be.
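By way of illustration only, the acquisition of user voice feature information might be sketched in Python as follows. The patent does not specify a feature-extraction algorithm, so the band-energy features, the function name, and the parameter values below are assumptions made for this sketch rather than the disclosed method.

```python
import numpy as np

def extract_voice_features(signal: np.ndarray, n_bands: int = 20) -> np.ndarray:
    """Derive a fixed-length feature vector loosely characterizing a speaker.

    Minimal sketch: average log band energies of the magnitude spectrum.
    A real system would use speaker embeddings (e.g. MFCC, i-vector or
    d-vector models), which the patent leaves unspecified.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    band_energy = np.array([np.log(np.sum(b ** 2) + 1e-10) for b in bands])
    # Normalize so that matching is insensitive to overall loudness.
    return band_energy / (np.linalg.norm(band_energy) + 1e-10)
```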
Step 202, in response to the matching of the user voice feature information and the user voice information in the user voice library, obtaining the text information of the voice signal.
The terminal devices 101, 102, 103 may have a library of user voices stored thereon. The user voice library may store user voice information of the current application. And when the user voice characteristic information is matched with the user voice information in the user voice library, the user corresponding to the user voice characteristic information is a legal user of the application. At this time, the terminal apparatuses 101, 102, 103 may acquire text information of the voice signal by means of voice recognition or the like.
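A minimal sketch of the matching step, assuming the user voice library maps identity information to reference feature vectors and that matching is decided by cosine similarity against a threshold; both assumptions go beyond what the patent specifies.

```python
import numpy as np

def match_user(features: np.ndarray, voice_library: dict, threshold: float = 0.85):
    """Return the identity whose stored reference features best match `features`,
    or None if no entry clears the similarity threshold.

    `voice_library` maps identity information -> reference feature vector,
    mirroring the correspondence described in the patent; the threshold value
    is an assumption for this sketch.
    """
    best_identity, best_score = None, threshold
    for identity, reference in voice_library.items():
        score = float(np.dot(features, reference) /
                      (np.linalg.norm(features) * np.linalg.norm(reference) + 1e-10))
        if score > best_score:
            best_identity, best_score = identity, score
    return best_identity
```

A non-None result corresponds to the "user voice feature information matches user voice information" branch above, i.e. the speaker is treated as a legal user of the application.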
In some optional implementations of the present embodiment, the user speech library may be constructed by the following steps:
in a first step, in response to detecting a user information input request, identity prompt information is displayed.
The terminal devices 101, 102, and 103 need to first acquire the user voice library, and then can perform matching operation on the user voice feature information. Generally, the terminal device 101, 102, 103 may display the identity hint information upon detecting a user information input request related to an application. Wherein, the identity prompt information can be used for instructing the user to input the identity information of the user. For example, the identity hint information may be: information such as "please input your name", "please input your account", "please input your password", etc.
It should be noted that the user information input request may be automatically triggered when the application is first started up by the terminal devices 101, 102, and 103, or may be triggered by the user when a new user joins, depending on the actual situation.
And secondly, responding to the received user identity information corresponding to the identity prompt information, and displaying voice input prompt information.
After the terminal devices 101, 102, and 103 receive the user identity information corresponding to the identity prompt information, in order to further improve the security of the user using the application on the terminal devices 101, 102, and 103, the terminal devices 101, 102, and 103 may display the voice input prompt information. Wherein the voice input prompt information may be used to instruct the user to input a specified voice signal. The specified speech signal may be used to obtain pronunciation characteristic information of the user.
And thirdly, in response to receiving the specified voice signal corresponding to the voice input prompt message, acquiring reference user voice characteristic information from the specified voice signal, and combining the specified voice signal and the reference user voice characteristic information to be used as the user voice information.
When the terminal devices 101, 102, 103 receive the specified voice signal corresponding to the voice input prompt information, the reference user voice feature information can be acquired from the specified voice signal by means of voice recognition or the like. After that, the terminal apparatuses 101, 102, 103 can combine the specified voice signal and the reference user voice feature information as the user voice information. The reference user voice feature information can be used to identify voice signals of the user acquired subsequently. The specified voice signal is retained for traceability: when the reference user voice feature information turns out to suffer from problems such as poor recognition accuracy, it can be obtained again from the specified voice signal.
And fourthly, establishing a corresponding relation between the user voice information and the identity information, and establishing a user voice library according to the corresponding relation.
In order to implement the control of the application through the voice signal, the terminal devices 101, 102, and 103 may establish a corresponding relationship between the user voice information and the identity information, and construct a user voice library according to the corresponding relationship.
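The enrollment flow above could be sketched as follows, reusing the hypothetical extract_voice_features helper from the earlier sketch. Keeping an archive of the specified voice signals reflects the tracing purpose described in the third step; the in-memory dictionaries and return shape are assumptions of this sketch, since the patent does not prescribe a storage format.

```python
def build_user_voice_library(enrollments):
    """Build a user voice library from (identity_info, specified_signal) pairs.

    Returns a dict of identity -> reference feature vector (the form assumed
    by `match_user` above) plus an archive of the specified signals, kept so
    the reference features can be re-derived if they prove inaccurate.
    """
    library, signal_archive = {}, {}
    for identity_info, specified_signal in enrollments:
        library[identity_info] = extract_voice_features(specified_signal)
        signal_archive[identity_info] = specified_signal
    return library, signal_archive
```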
Step 203, performing semantic analysis on the text information, acquiring an operation instruction and operation content corresponding to the text information, and constructing operation information according to the operation instruction and the operation content.
In order to implement control through a voice signal, the terminal devices 101, 102, and 103 further need to perform semantic analysis on the text information to obtain the operation instruction and operation content corresponding to the text information, thereby improving the information processing efficiency of the application through the voice signal. The operation instruction may be a specific "action" for controlling the application. For example, the operation instruction may be: "information transmission", "information reading", "information writing", "information modification", "information conversion", and the like. The operation content may be the object of the "action". For example, the operation content may be: "Party A", "Party B", "sender", "recipient", etc. In practice, the operation instruction and the operation content differ according to the specific application, and are not described in detail herein.
In some optional implementation manners of this embodiment, the performing semantic analysis on the text information to obtain an operation instruction and operation content corresponding to the text information may include the following steps:
the method comprises the steps of firstly, carrying out semantic analysis on the text information to obtain semantic information corresponding to the text information, and dividing the semantic information into at least one entry.
The speech signal of the user is usually different from the actual instructions of the application, and correspondingly, the text information corresponding to the speech signal is also different from the actual instructions of the application. For this purpose, the terminal devices 101, 102, 103 may perform semantic analysis on the text information to obtain semantic information corresponding to the text information. Semantic analysis may be considered as a modification of the text information such that the modified semantic information of the text information is more suitable for execution of the application. For example, the text information may be: "Send YY information to XX"; the semantic information corresponding to the text information may be: "information YY is transmitted from the current user to the receiving party XX". After obtaining the semantic information, the terminal devices 101, 102, and 103 may further divide the semantic information into at least one entry. For example, the semantic information is: "send information YY from current user to receiver XX", the divided entries may be: "current user", "receiving party XX", "transmission", and "information YY". The terminal devices 101, 102, 103 may also divide the same semantic information into different forms of entries according to specific applications, depending on actual needs. It should be noted that the "receiving party XX" is generally a contact, a friend, or the like stored in the application. Otherwise, the terminal device 101, 102, 103 may issue an alert message.
And secondly, responding to the successful matching between the entry in the at least one entry and the reference operation instruction in the operation instruction set, and taking the reference operation instruction which is successfully matched as the operation instruction of the entry.
Applications typically have a corresponding set of operation instructions, which typically includes a plurality of reference operation instructions, each having a relatively fixed expression. The terminal devices 101, 102, and 103 may match the obtained entry with a reference operation instruction in the operation instruction set. In practice, each voice signal usually corresponds to an operation instruction. Accordingly, there is typically one entry in the at least one entry for characterizing the operating instruction. When a certain entry in the at least one entry is successfully matched with the reference operation instruction in the operation instruction set, the terminal devices 101, 102, and 103 may use the reference operation instruction successfully matched as the operation instruction of the entry. Therefore, the corresponding operation instruction is searched through the text information.
And thirdly, determining operation content from the at least one entry according to the operation instruction.
The operation instructions typically have a fixed format. After the operation instruction is determined, the operation content corresponding to the operation instruction can be determined from the at least one entry.
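A rough sketch of the entry division and instruction matching described in these steps. The whitespace tokenization, the contents of the reference operation instruction set, and the trigger-word matching are all assumptions standing in for the unspecified semantic-analysis component; real entries may span multiple words, as in the "receiving party XX" example above.

```python
# Reference operation instruction set: instruction name -> trigger words.
# The instruction names and trigger words are illustrative assumptions.
REFERENCE_INSTRUCTIONS = {
    "information transmission": {"send", "transmit", "transfer"},
    "information reading": {"read", "open"},
    "information modification": {"modify", "edit", "change"},
}

def divide_and_match(semantic_information: str):
    """Divide the (already normalized) semantic information into entries and
    match one entry against the reference operation instruction set.

    Returns (operation_instruction, remaining_entries), or (None, entries)
    when no reference instruction matches.
    """
    entries = semantic_information.split()  # crude entry division for the sketch
    for entry in entries:
        for name, triggers in REFERENCE_INSTRUCTIONS.items():
            if entry.lower() in triggers:
                remaining = [e for e in entries if e != entry]
                return name, remaining
    return None, entries
```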
In some optional implementation manners of this embodiment, the operation instruction may include an instruction content field, and the determining, according to the operation instruction, the operation content from the at least one entry may include: and screening the entries in the at least one entry according to the instruction content field to obtain operation content.
The instruction content field of the operation instruction corresponds to the operation content. The terminal devices 101, 102, and 103 may filter the entries in the at least one entry according to the instruction content field to obtain the operation content. For example, if a certain operation instruction is used to transmit information, the operation instruction may be: "information ___: transfer ___, from ___ to ____", and its instruction content field may be "transfer ___, from ___ to ____". Therefore, after determining the operation instruction, the terminal devices 101, 102, 103 may determine the operation content from the at least one entry according to the instruction content field. For example, the entries may be: "current user", "receiving party XX", "send", and "information YY". With the operation instruction "information ___: transfer ___, from ___ to ____", the entry corresponding to the operation instruction is "send". The terminal devices 101, 102, 103 may then filter out the entries corresponding to the instruction content field from the at least one entry, namely "current user", "receiving party XX", and "information YY", and thereby obtain the operation content: "transfer information YY, from current user to receiving party XX".
In some optional implementation manners of this embodiment, the operation instruction includes an instruction name field, and the constructing operation information according to the operation instruction and the operation content includes: and filling the name and the operation content of the operation instruction into the instruction name field and the instruction content field respectively to obtain operation information.
Taking the above operation instruction "information ___: transfer ___, from ___ to ____" as an example again, the instruction content field of the operation instruction may be "transfer ___, from ___ to ____", and the instruction name field may be "information ___". The terminal devices 101, 102, and 103 may fill the name of the operation instruction and the operation content into the instruction name field and the instruction content field, respectively, to obtain the operation information: "information transmission: transfer information YY, from current user to receiving party XX". That is, the operation information can be regarded as the operation instruction after its fields have been filled in.
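To make the field-filling step concrete, here is a sketch that assumes each operation instruction carries a template for its instruction content field, mirroring the "information ___: transfer ___, from ___ to ____" example above; the template string and the item/sender/recipient parameter names are assumptions of this sketch.

```python
def build_operation_information(instruction_name: str, operation_content: dict) -> str:
    """Fill the instruction name field and the instruction content field to
    obtain operation information, following the worked example above.
    """
    content_template = "transfer {item}, from {sender} to {recipient}"
    instruction_content = content_template.format(**operation_content)
    return f"{instruction_name}: {instruction_content}"

# Example usage (values taken from the worked example above):
# build_operation_information(
#     "information transmission",
#     {"item": "information YY", "sender": "current user",
#      "recipient": "receiving party XX"})
# -> "information transmission: transfer information YY, from current user to receiving party XX"
```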
And step 204, displaying the operation information and the prompt information.
Having obtained the operation information, the terminal apparatuses 101, 102, 103 can display the operation information and the prompt information on the screen for the user to confirm. The prompt information may be used to instruct the user to operate the operation information through a voice confirmation signal. For example, the prompt information may be: "confirm", "cancel", "modify", etc. The prompt information may differ for different applications or different operation information.
Step 205, in response to the received voice confirmation signal corresponding to the prompt message matching with the user voice message, operating the operation message according to the voice confirmation signal.
The terminal devices 101, 102, and 103 may receive a voice confirmation signal corresponding to the above prompt information from the user, and perform processing such as voice recognition on the voice confirmation signal. The voice confirmation signal can be used for representing the judgment of the user on the operation information. For example, the voice confirmation signal may be "confirm", "cancel", or the like. When the voice confirmation signal matches the user voice information, the current user issuing the voice confirmation signal may be considered consistent with the legitimate user of the application. At this time, the terminal apparatuses 101, 102, 103 can operate the above-described operation information according to the voice confirmation signal. Therefore, the safety of information processing of the user through the voice information control application is improved.
In some optional implementation manners of this embodiment, the operating the operation information according to the voice confirmation signal may include: in response to the text information corresponding to the voice confirmation signal being "confirm", executing the operation information; and in response to the text information corresponding to the voice confirmation signal being "cancel", deleting the operation information.
Matching the voice confirmation signal with the user voice information improves the security of the application. In order to further operate the operation information according to the voice confirmation signal, voice recognition needs to be performed on the voice confirmation signal to obtain the corresponding text information, and the operation information is then operated according to that text information. When the text information corresponding to the voice confirmation signal is "confirm", the user considers the operation information to be correct, and the terminal apparatuses 101, 102, 103 may execute the operation information; when the text information corresponding to the voice confirmation signal is "cancel", the user considers the operation information to be wrong, and the terminal apparatuses 101, 102, 103 may delete the operation information. In addition, the text information may also carry other content depending on the specific content of the prompt information, which is not described in detail here.
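A minimal sketch of this branch, with execute and delete supplied as application callbacks; the handling of confirmation text other than "confirm" or "cancel" is left open, as in the description.

```python
def operate_on_confirmation(confirmation_text: str, operation_information: str,
                            execute, delete):
    """Branch on the recognized text of the voice confirmation signal.

    The patent fixes only the two branches shown here ("confirm" executes,
    "cancel" deletes); any other prompt-specific content is left to the caller.
    """
    if confirmation_text == "confirm":
        execute(operation_information)
    elif confirmation_text == "cancel":
        delete(operation_information)
    else:
        # Other content (e.g. "modify") would be handled per application.
        pass
```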
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for processing a speech signal according to the present embodiment. In the application scenario of fig. 3, the user opens the mailbox application on the terminal device 102 and utters the voice signal "send file A to YYY". The terminal device 102 obtains the user voice feature information from the voice signal, and when the user voice feature information matches the user voice information, this indicates that the current user sending the voice signal is a legitimate user. Then, the terminal device 102 obtains the text information corresponding to the voice signal, obtains the operation instruction and operation content corresponding to the mailbox application from the text information, and further obtains the operation information: "file transmission: transfer file A, from current user to YYY". The terminal device 102 may display the operation information and the prompt information on the screen of the terminal device 102 for the user to choose from. When the voice confirmation signal sent by the user matches the user voice information, the current operating user is a legitimate user, and the operation information is operated according to the voice confirmation signal.
The method provided by the embodiment of the application comprises the steps of firstly processing a received voice signal, and obtaining corresponding operation information according to voice information after confirming that the voice information is matched with the voice information of a user; and executing the operation information after the user confirms through voice. The method and the device improve the information processing efficiency and the information processing safety through processing the voice signals.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for processing a speech signal is shown. The flow 400 of the method for processing a speech signal comprises the steps of:
step 401, obtaining user voice feature information from the received voice signal.
The specific content of this step is the same as that of step 201, and is not described in detail here.
Step 402, in response to the matching of the user voice feature information and the user voice information in the user voice library, obtaining text information of the voice signal.
The specific content of this step is the same as that of step 202, and is not described in detail here.
Step 403, performing semantic analysis on the text information, acquiring an operation instruction and operation content corresponding to the text information, and constructing operation information according to the operation instruction and the operation content.
The specific content of this step is the same as that of step 203, and is not described in detail here.
And step 404, displaying the operation information and the prompt information.
The specific content of this step is the same as that of step 204, and is not described in detail here.
Step 405, in response to the received voice confirmation signal corresponding to the prompt message matching with the user voice message, operating the operation information according to the voice confirmation signal.
The specific content of this step is the same as that of step 205, and is not described in detail here.
And 406, constructing a voice operation record according to the operation information, the voice signal and the text information.
The terminal apparatuses 101, 102, 103 may also construct a voice operation record from the operation information, the voice signal, and the text information for the purpose of saving, tracing, and the like of the application operation information.
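For illustration, a voice operation record could be represented as a small data class such as the following; the timestamp field is an addition assumed for this sketch, not part of the described record.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VoiceOperationRecord:
    """One record tying together the operation information, the raw voice
    signal, and its recognized text, for saving and tracing as described above.
    """
    operation_information: str
    voice_signal: bytes
    text_information: str
    created_at: datetime = field(default_factory=datetime.now)  # assumed extra field
```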
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for processing a speech signal, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for processing a speech signal of the present embodiment may include: a user voice feature information acquisition unit 501, a text information acquisition unit 502, an operation information acquisition unit 503, a display unit 504, and an operation unit 505. The user voice feature information obtaining unit 501 is configured to obtain user voice feature information from a received voice signal, where the user voice feature information is used to represent a user identity corresponding to the voice signal; a text information obtaining unit 502 configured to obtain text information of the voice signal in response to matching of the user voice feature information with the user voice information in the user voice library; the operation information obtaining unit 503 is configured to perform semantic analysis on the text information, obtain an operation instruction and operation content corresponding to the text information, and construct operation information according to the operation instruction and the operation content; the display unit 504 is configured to display the operation information and prompt information, where the prompt information is used to instruct a user to operate the operation information through a voice confirmation signal; the operation unit 505, in response to the received voice confirmation signal corresponding to the prompt information matching the user voice information, is configured to operate the operation information according to the voice confirmation signal.
In some optional implementations of the present embodiment, the apparatus 500 for processing a voice signal may include a user voice library constructing unit (not shown in the figure), and the user voice library constructing unit may include: an identity prompt information display subunit (not shown in the figure), a voice input prompt information display subunit (not shown in the figure), a user voice information acquisition subunit (not shown in the figure), and a user voice library construction subunit (not shown in the figure). The identity prompting information display subunit is configured to display identity prompting information in response to the detection of the user information input request, wherein the identity prompting information is used for indicating the user to input the identity information of the user; a voice input prompt information display subunit, configured to display voice input prompt information in response to receiving user identity information corresponding to the identity prompt information, the voice input prompt information being used to instruct a user to input a specified voice signal; a user voice information obtaining subunit, in response to receiving the specified voice signal corresponding to the voice input prompt information, configured to obtain reference user voice feature information from the specified voice signal, and combine the specified voice signal and the reference user voice feature information as user voice information; the user voice library construction subunit is configured to establish a corresponding relationship between the user voice information and the identity information, and construct a user voice library according to the corresponding relationship.
In some optional implementation manners of this embodiment, the operation information obtaining unit 503 may include: an entry obtaining sub-unit (not shown in the figure), an operation instruction obtaining sub-unit (not shown in the figure), and an operation content obtaining sub-unit (not shown in the figure). The entry acquiring subunit is configured to perform semantic analysis on the text information to obtain semantic information corresponding to the text information, and divide the semantic information into at least one entry; the operation instruction acquisition subunit is used for responding to the successful matching of the entry in the at least one entry and the reference operation instruction in the operation instruction set, and is configured to take the reference operation instruction which is successfully matched as the operation instruction of the entry; the operation content acquiring subunit is configured to determine the operation content from the at least one entry according to the operation instruction.
In some optional implementation manners of this embodiment, the operation instruction may include an instruction content field, and the operation content obtaining subunit includes: and an operation content obtaining module (not shown in the figure) configured to filter entries in the at least one entry according to the instruction content field to obtain operation content.
In some optional implementations of this embodiment, the operation instruction may include an instruction name field, and the operation information obtaining unit 503 may include: and an operation information obtaining subunit (not shown in the figure) configured to fill the name and the operation content of the operation instruction into the instruction name field and the instruction content field, respectively, to obtain operation information.
In some optional implementations of the present embodiment, the operation unit 505 may include: an operation execution subunit (not shown in the figure) and an operation deletion subunit (not shown in the figure). The operation execution subunit is configured to execute the operation information in response to the text information corresponding to the voice confirmation signal being "confirm"; and the operation deletion subunit is configured to delete the operation information in response to the text information corresponding to the voice confirmation signal being "cancel".
In some optional implementations of this embodiment, the apparatus 500 for processing a speech signal may further include: and a recording unit (not shown in the figure) configured to construct a voice operation record based on the operation information, the voice signal, and the text information.
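The unit structure of apparatus 500 could be expressed as a class skeleton like the one below; the method names and constructor are assumptions that map one method to each unit described above, with the bodies left to the embodiments.

```python
class SpeechSignalProcessingApparatus:
    """Skeleton mirroring the units of apparatus 500; each method stands in
    for one unit, and the implementations follow the embodiments above."""

    def __init__(self, user_voice_library):
        self.user_voice_library = user_voice_library

    def acquire_user_voice_features(self, voice_signal):                 # unit 501
        raise NotImplementedError

    def acquire_text_information(self, voice_signal):                    # unit 502
        raise NotImplementedError

    def acquire_operation_information(self, text_information):           # unit 503
        raise NotImplementedError

    def display(self, operation_information, prompt_information):        # unit 504
        raise NotImplementedError

    def operate(self, operation_information, voice_confirmation_signal): # unit 505
        raise NotImplementedError
```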
The present embodiment also provides an electronic device, including: one or more processors; a memory on which one or more programs are stored, a microphone that converts a sound signal into an electrical signal; the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method for processing speech signals described above.
The present embodiment also provides a computer-readable medium, on which a computer program is stored, which program, when being executed by a processor, carries out the above-mentioned method for processing a speech signal.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing an electronic device (e.g., terminal devices 101, 102, 103 of FIG. 1) of an embodiment of the present application. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601.
It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a user voice feature information acquisition unit, a text information acquisition unit, an operation information acquisition unit, a display unit, and an operation unit. Here, the names of these units do not constitute a limitation of the unit itself in some cases, and for example, the operation unit may also be described as a "unit that operates the operation information according to the voice confirmation signal".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring user voice characteristic information from a received voice signal, wherein the user voice characteristic information is used for representing the user identity corresponding to the voice signal; responding to the matching of the user voice characteristic information and the user voice information in the user voice library, and acquiring text information of the voice signal; performing semantic analysis on the text information, acquiring an operation instruction and operation content corresponding to the text information, and constructing operation information according to the operation instruction and the operation content; displaying the operation information and prompt information, wherein the prompt information is used for indicating a user to operate the operation information through a voice confirmation signal; and responding to the matching of the received voice confirmation signal corresponding to the prompt message and the user voice message, and operating the operation information according to the voice confirmation signal.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (16)

1. A method for processing a speech signal, comprising:
acquiring user voice characteristic information from a received voice signal, wherein the user voice characteristic information is used for representing a user identity corresponding to the voice signal;
responding to the matching of the user voice characteristic information and the user voice information in the user voice library, and acquiring text information of the voice signal;
performing semantic analysis on the text information, acquiring an operation instruction and operation content corresponding to the text information, and constructing operation information according to the operation instruction and the operation content;
displaying the operation information and prompt information, wherein the prompt information is used for indicating a user to operate the operation information through a voice confirmation signal;
and responding to the matching of the received voice confirmation signal corresponding to the prompt information and the user voice information, and operating the operation information according to the voice confirmation signal.
2. The method according to claim 1, wherein the user speech library is constructed by:
in response to detecting the user information input request, displaying identity prompt information, wherein the identity prompt information is used for indicating a user to input identity information of the user;
responding to the received user identity information corresponding to the identity prompt information, and displaying voice input prompt information, wherein the voice input prompt information is used for indicating a user to input a specified voice signal;
in response to receiving a specified voice signal corresponding to the voice input prompt message, acquiring reference user voice feature information from the specified voice signal, and combining the specified voice signal and the reference user voice feature information to serve as user voice information;
and establishing a corresponding relation between the user voice information and the identity information, and establishing a user voice library according to the corresponding relation.
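Illustrative aside, not part of the claim language: a minimal enrollment sketch under the assumption that identity information and the specified voice signal have already been collected through the prompts above; extract_voice_features is a hypothetical placeholder.

    # Minimal enrollment sketch; names and data layout are assumptions for illustration.
    def extract_voice_features(signal: bytes) -> bytes:
        return signal[:8]                                    # placeholder voiceprint

    def build_user_voice_library(enrollments) -> dict:
        """enrollments: iterable of (identity_info, specified_signal) pairs."""
        library = {}
        for identity_info, specified_signal in enrollments:
            library[identity_info] = {
                "signal": specified_signal,                                 # the specified voice signal
                "features": extract_voice_features(specified_signal),       # reference user voice feature information
            }
        return library

    # Example: voice_library = build_user_voice_library([("alice", b"raw-audio-bytes")])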
3. The method of claim 1, wherein the semantic analyzing the text information to obtain the operation instruction and the operation content corresponding to the text information comprises:
performing semantic analysis on the text information to obtain semantic information corresponding to the text information, and dividing the semantic information into at least one entry;
in response to an entry in the at least one entry being successfully matched with a reference operation instruction in an operation instruction set, taking the successfully matched reference operation instruction as the operation instruction of the entry;
and determining operation content from the at least one entry according to the operation instruction.
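Illustrative aside, not part of the claim language: a toy sketch of dividing semantic information into entries and matching them against a reference operation instruction set; the instruction words and the whitespace-based "analysis" are assumptions for the example only.

    # Toy entry matching against a hypothetical reference operation instruction set.
    REFERENCE_INSTRUCTION_SET = {"call", "send", "open"}

    def match_operation_instruction(semantic_info: str):
        entries = semantic_info.lower().split()              # divide semantic information into at least one entry
        instruction = next((e for e in entries if e in REFERENCE_INSTRUCTION_SET), None)
        if instruction is None:
            return None, entries                             # no reference operation instruction matched
        remaining = [e for e in entries if e != instruction]
        return instruction, remaining                        # candidate operation content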
4. The method of claim 3, wherein the operation instruction includes an instruction content field, and
the determining operation content from the at least one entry according to the operation instruction comprises:
and screening the entries in the at least one entry according to the instruction content field to obtain operation content.
5. The method of claim 4, wherein the operation instruction includes an instruction name field, and
the constructing operation information according to the operation instruction and the operation content comprises:
and filling the name and the operation content of the operation instruction into the instruction name field and the instruction content field respectively to obtain operation information.
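Illustrative aside, not part of the claim language: a sketch of filling the instruction name field and the instruction content field to obtain operation information; the dictionary keys are hypothetical.

    # Sketch of building operation information from the two fields named above.
    def fill_operation_info(instruction_name: str, content_entries: list) -> dict:
        return {
            "instruction_name": instruction_name,               # instruction name field
            "instruction_content": " ".join(content_entries),   # instruction content field
        }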
6. The method of claim 1, wherein said operating said operation information according to the voice acknowledgement signal comprises:
in response to the text information corresponding to the voice confirmation signal being a confirmation, executing the operation information;
and in response to the text information corresponding to the voice confirmation signal being a cancellation, deleting the operation information.
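Illustrative aside, not part of the claim language: a sketch of acting on the recognized confirmation text; the literal strings "confirm" and "cancel" stand in for whatever confirmation vocabulary an implementation might use.

    # Sketch of the confirm/cancel branch for a pending operation.
    def handle_confirmation(confirmation_text: str, operation_info: dict, pending: list) -> str:
        if confirmation_text == "confirm":
            print("executing:", operation_info)    # stand-in for executing the operation information
            return "executed"
        if confirmation_text == "cancel":
            if operation_info in pending:
                pending.remove(operation_info)     # delete the operation information
            return "deleted"
        return "ignored"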
7. The method of any of claims 1-6, wherein the method further comprises:
and constructing a voice operation record according to the operation information, the voice signal and the text information.
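Illustrative aside, not part of the claim language: a sketch of a voice operation record linking the operation information, the originating voice signal, and its text information; all field names are hypothetical.

    # Sketch of a simple audit record for a completed voice operation.
    import datetime

    def make_voice_operation_record(operation_info: dict, voice_signal: bytes, text_info: str) -> dict:
        return {
            "timestamp": datetime.datetime.now().isoformat(),
            "operation_info": operation_info,
            "voice_signal_size": len(voice_signal),   # a reference to the audio rather than the raw bytes
            "text_info": text_info,
        }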
8. An apparatus for processing a speech signal, comprising:
a user voice characteristic information acquisition unit, configured to acquire user voice characteristic information from a received voice signal, wherein the user voice characteristic information is used for representing the identity of a user corresponding to the voice signal;
a text information acquisition unit configured to acquire text information of the voice signal in response to matching of the user voice feature information with the user voice information in the user voice library;
the operation information acquisition unit is configured to perform semantic analysis on the text information, acquire an operation instruction and operation content corresponding to the text information, and construct operation information according to the operation instruction and the operation content;
a display unit configured to display the operation information and prompt information for instructing a user to operate the operation information by a voice confirmation signal;
and an operation unit, configured to operate the operation information according to the voice confirmation signal in response to the received voice confirmation signal corresponding to the prompt information matching the user voice information.
9. The apparatus according to claim 8, wherein the apparatus comprises a user speech library construction unit comprising:
an identity prompt information display subunit, configured to display identity prompt information for instructing a user to input identity information of the user, in response to detecting the user information input request;
a voice input prompt information display subunit configured to display voice input prompt information for instructing a user to input a specified voice signal in response to receiving user identity information corresponding to the identity prompt information;
a user voice information obtaining subunit, configured to, in response to receiving a specified voice signal corresponding to the voice input prompt information, obtain reference user voice feature information from the specified voice signal, and combine the specified voice signal and the reference user voice feature information as the user voice information;
and the user voice library construction subunit is configured to establish a corresponding relation between the user voice information and the identity information, and construct a user voice library according to the corresponding relation.
10. The apparatus of claim 8, wherein the operation information acquisition unit comprises:
the entry acquisition subunit is configured to perform semantic analysis on the text information to obtain semantic information corresponding to the text information, and divide the semantic information into at least one entry;
an operation instruction acquisition subunit, configured to, in response to an entry in the at least one entry being successfully matched with a reference operation instruction in the operation instruction set, take the successfully matched reference operation instruction as the operation instruction of the entry;
and an operation content acquisition subunit, configured to determine operation content from the at least one entry according to the operation instruction.
11. The apparatus of claim 10, wherein the operation instruction comprises an instruction content field, and
the operation content acquisition subunit includes:
and the operation content acquisition module is configured to screen entries in the at least one entry according to the instruction content field to obtain operation content.
12. The apparatus of claim 11, wherein the operation instruction comprises an instruction name field, and
the operation information acquisition unit includes:
and the operation information acquisition subunit is configured to fill the name and the operation content of the operation instruction into the instruction name field and the instruction content field respectively to obtain operation information.
13. The apparatus of claim 8, wherein the operation unit comprises:
an operation execution subunit, configured to execute the operation information in response to the text information corresponding to the voice confirmation signal being a confirmation;
and an operation deleting subunit, configured to delete the operation information in response to the text information corresponding to the voice confirmation signal being a cancellation.
14. The apparatus of any one of claims 8-13, wherein the apparatus further comprises:
and the recording unit is configured to construct a voice operation record according to the operation information, the voice signal and the text information.
15. An electronic device, comprising:
one or more processors;
a memory having one or more programs stored thereon;
a microphone configured to convert a sound signal into an electrical signal;
the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
16. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201810660145.2A 2018-06-25 2018-06-25 Method and apparatus for processing speech signal Pending CN110634478A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810660145.2A CN110634478A (en) 2018-06-25 2018-06-25 Method and apparatus for processing speech signal

Publications (1)

Publication Number Publication Date
CN110634478A (en) 2019-12-31

Family

ID=68967720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810660145.2A Pending CN110634478A (en) 2018-06-25 2018-06-25 Method and apparatus for processing speech signal

Country Status (1)

Country Link
CN (1) CN110634478A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104078043A (en) * 2013-04-26 2014-10-01 腾讯科技(深圳)有限公司 Method and system for recognition of voice operational command of network transaction system
CN104135489A (en) * 2014-08-13 2014-11-05 百度在线网络技术(北京)有限公司 Login authentication method and device
CN104571033A (en) * 2014-12-26 2015-04-29 广东美的制冷设备有限公司 Method and device for controlling household appliance through short message
WO2017036243A1 (en) * 2015-09-06 2017-03-09 中兴通讯股份有限公司 Login authentication method, authentication server, authentication client and login client
CN105740686A (en) * 2016-01-28 2016-07-06 百度在线网络技术(北京)有限公司 Application control method and device
CN107104803A (en) * 2017-03-31 2017-08-29 清华大学 It is a kind of to combine the user ID authentication method confirmed with vocal print based on numerical password
CN107748500A (en) * 2017-10-10 2018-03-02 三星电子(中国)研发中心 Method and apparatus for controlling smart machine
CN108133707A (en) * 2017-11-30 2018-06-08 百度在线网络技术(北京)有限公司 A kind of content share method and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112309388A (en) * 2020-03-02 2021-02-02 北京字节跳动网络技术有限公司 Method and apparatus for processing information
CN112837690A (en) * 2020-12-30 2021-05-25 科大讯飞股份有限公司 Audio data generation method, audio data transcription method and device
CN112837690B (en) * 2020-12-30 2024-04-16 科大讯飞股份有限公司 Audio data generation method, audio data transfer method and device

Similar Documents

Publication Publication Date Title
CN107863108B (en) Information output method and device
CN107731229B (en) Method and apparatus for recognizing speech
US10311877B2 (en) Performing tasks and returning audio and visual answers based on voice command
CN111739553B (en) Conference sound collection, conference record and conference record presentation method and device
JP2019091418A (en) Method and device for controlling page
JP2019053286A (en) Method and device for information verification
US11270690B2 (en) Method and apparatus for waking up device
US20180103376A1 (en) Device and method for authenticating a user of a voice user interface and selectively managing incoming communications
CN107733722B (en) Method and apparatus for configuring voice service
CN107342083B (en) Method and apparatus for providing voice service
CN109949806B (en) Information interaction method and device
CN110138654B (en) Method and apparatus for processing speech
CN110634478A (en) Method and apparatus for processing speech signal
US10997963B1 (en) Voice based interaction based on context-based directives
CN109286554B (en) Social function unlocking method and device in social application
CN111447191A (en) Information interaction method and device and electronic equipment
CN110659387A (en) Method and apparatus for providing video
CN113065879A (en) Data stream quality inspection method and system
CN108766429B (en) Voice interaction method and device
CN110442416B (en) Method, electronic device and computer-readable medium for presenting information
CN109995543B (en) Method and apparatus for adding group members
CN107608718B (en) Information processing method and device
CN113299285A (en) Device control method, device, electronic device and computer-readable storage medium
CN112309387A (en) Method and apparatus for processing information
CN107622766B (en) Method and apparatus for searching information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination