CN106024013B - Voice data searching method and system - Google Patents


Info

Publication number
CN106024013B
Authority
CN
China
Prior art keywords
search
voice
voice data
information
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610279205.7A
Other languages
Chinese (zh)
Other versions
CN106024013A (en)
Inventor
徐桃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201610279205.7A
Publication of CN106024013A
Application granted
Publication of CN106024013B
Legal status: Active

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/54: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for retrieval

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

The invention discloses a voice data searching method and system, belonging to the field of data retrieval. The method comprises the following steps: receiving a search instruction, voice information, and a fuzzy search range input by a user operating a search key; determining a search condition from the received voice information and the fuzzy search range, the search condition being the keywords determined by the user's voice information together with the number of characters allowed between them as determined by the fuzzy search range, and/or the voiceprint features of the voice information; and searching the voice data according to the determined search condition to find the voice data matching it. The voice data the user wants can thus be located quickly and conveniently, enhancing the user experience.

Description

Voice data searching method and system
Technical Field
The invention relates to the field of data retrieval, in particular to a voice data searching method and system.
Background
Voiceprint Recognition (VPR) is a technology that automatically identifies a speaker from voice parameters in the speech waveform that reflect the speaker's physiological and behavioral characteristics; it mainly comprises two parts, feature extraction and pattern matching. From its research origins to the present, the technology has drawn particular attention for advantages such as convenience, economy, security, and accuracy. As one of the biometric identification technologies, it is widely used in the internet and communication fields, for example in voice dialing, telephone banking, telephone shopping, database access, information services, and security control.
Speech recognition technology lets a machine convert a speech signal into corresponding text or commands through a process of recognition and understanding. Its field of application is very wide: voice input systems are a common example, and compared with keyboard input, speech input better matches people's everyday habits and is more natural and efficient.
The existing way of searching voice data is cumbersome: a user must rename files in advance to label their content and then search by the corresponding storage location, or play back the voice data one by one to locate the desired recording or position within it.
With the development of social software, voice messaging has become one of the main communication modes on major social applications. However, the growth of voice data makes it difficult to review. With traditional text messages, a user can see the content of a conversation at a glance by browsing the history and easily find the desired information, whereas voice data is inconvenient to review and locate in the history.
Disclosure of Invention
The invention mainly aims to provide a voice data searching method and system, so as to solve the problem that voice data is difficult to search.
In order to achieve the above object, the present invention provides a voice data searching method comprising the steps of: receiving a search instruction, voice information, and a fuzzy search range input by a user operating a search key; determining a search condition by combining the received voice information and the fuzzy search range, wherein the voice information determines the keywords in the search condition and the fuzzy search range determines the maximum number of characters allowed between the characters of those keywords; and searching the voice data according to the determined search condition to find the voice data matching it.
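As an illustrative sketch only (not part of the patent text), the keyword-with-gap search condition above can be modeled as a regular-expression match over transcribed voice data. All names and the sample `transcripts` data below are hypothetical; the fuzzy search range sets `max_gap`, the maximum number of arbitrary characters allowed between consecutive keyword characters:

```python
import re

def build_fuzzy_pattern(keyword: str, max_gap: int) -> str:
    # Allow up to max_gap arbitrary characters between consecutive
    # keyword characters, e.g. "met" with max_gap=2 -> "m.{0,2}e.{0,2}t".
    gap = ".{0,%d}" % max_gap
    return gap.join(re.escape(ch) for ch in keyword)

def fuzzy_search(transcripts: dict, keyword: str, max_gap: int) -> list:
    # Return the ids of voice records whose transcript matches the pattern.
    pattern = re.compile(build_fuzzy_pattern(keyword, max_gap))
    return [vid for vid, text in transcripts.items() if pattern.search(text)]

# Hypothetical transcripts produced by speech recognition.
transcripts = {
    "msg-001": "let us meet tomorrow at noon",
    "msg-002": "the meeting was moved to friday",
    "msg-003": "nothing relevant here",
}

print(fuzzy_search(transcripts, "met", 2))   # gaps between characters tolerated
print(fuzzy_search(transcripts, "meet", 0))  # exact substring match
```

With `max_gap = 0` the condition degenerates to an exact keyword match, so widening the fuzzy search range effectively trades precision for recall.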
Optionally, the fuzzy search range is obtained by detecting a pressure value of the user pressing the search key through a pressure sensor.
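The patent does not specify how pressure values map to a fuzzy search range; the step function below is a purely hypothetical illustration (thresholds and gap sizes are invented) of how a pressure-sensor reading could select the maximum character gap:

```python
def pressure_to_fuzzy_range(pressure: float,
                            thresholds=(0.5, 1.5, 3.0),
                            gaps=(0, 1, 3, 6)) -> int:
    # Map a pressure reading to a maximum character gap: the harder the
    # user presses the search key, the wider the fuzzy search range.
    for threshold, gap in zip(thresholds, gaps):
        if pressure < threshold:
            return gap
    return gaps[-1]

print(pressure_to_fuzzy_range(0.3))  # light press: exact matching
print(pressure_to_fuzzy_range(2.0))  # firm press: wider range
```

A production implementation would calibrate the thresholds against the device's actual pressure-sensor range.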
Optionally, the method further comprises the step of: converting the voice data into text information through speech recognition technology and storing the text information in a keyword database.
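A minimal sketch of such a keyword database, assuming a SQLite table and a stand-in `recognize` function (a real implementation would call an actual speech-recognition engine; all names here are hypothetical):

```python
import sqlite3

def index_voice_data(records, recognize):
    # Transcribe each voice record and store the text in a keyword database
    # so later keyword searches run against text instead of audio.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE keywords (voice_id TEXT PRIMARY KEY, transcript TEXT)")
    for voice_id, audio in records:
        db.execute("INSERT INTO keywords VALUES (?, ?)", (voice_id, recognize(audio)))
    db.commit()
    return db

# Stand-in recognizer: maps raw audio bytes to canned transcripts.
fake_asr = {b"\x01": "see you at the station", b"\x02": "call me back later"}.get

db = index_voice_data([("v1", b"\x01"), ("v2", b"\x02")], fake_asr)
print(db.execute(
    "SELECT voice_id FROM keywords WHERE transcript LIKE '%station%'"
).fetchall())
```

Indexing at storage time means each recording is transcribed once, rather than on every search.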
Optionally, the method further comprises the steps of: performing voiceprint feature extraction on the voice data, generating voiceprint information, and storing it in a voiceprint database; performing voiceprint feature extraction on the received voice information to determine the voiceprint features of the voice information as the search condition; and matching the voiceprint features of the voice information against the voiceprint information in the voiceprint database corresponding to the voice data, so as to find the voice data matching the search condition.
Optionally, the voiceprint feature extraction includes calculating a pitch period parameter by the cepstrum method, converting the power spectrum of the voice signal into Mel-frequency cepstral coefficients through a Mel filter bank, and combining the two parameters into a voiceprint parameter by a feature extraction algorithm, so as to establish a Gaussian mixture model for the speaker.
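The cepstrum method mentioned above can be illustrated on synthetic data. The sketch below is illustrative only (the Mel-filter and Gaussian-mixture steps are omitted); it estimates the pitch period as the quefrency at which the real cepstrum of a voiced frame peaks:

```python
import numpy as np

def cepstral_pitch_period(frame, sr, fmin=60, fmax=400):
    # Real cepstrum: inverse FFT of the log magnitude spectrum. For a
    # periodic (voiced) frame it peaks at the pitch period (quefrency).
    spectrum = np.abs(np.fft.rfft(frame))
    cepstrum = np.fft.irfft(np.log1p(spectrum))
    lo, hi = int(sr / fmax), int(sr / fmin)  # plausible pitch-period range
    return lo + int(np.argmax(cepstrum[lo:hi]))  # period in samples

sr, n = 8000, 800
t = np.arange(n) / sr
# Synthetic voiced frame: 100 Hz fundamental plus seven harmonics.
frame = sum(np.sin(2 * np.pi * 100 * k * t) for k in range(1, 9))
period = cepstral_pitch_period(frame, sr)
print(period, sr // period)  # pitch period in samples, estimated f0 in Hz
```

In the full scheme described by the patent, this pitch parameter would be combined with Mel-frequency cepstral coefficients into a voiceprint vector used to train a per-speaker Gaussian mixture model.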
Optionally, the method further comprises the steps of: establishing search modes for voice data searching, the search modes comprising a keyword search mode, a voiceprint search mode, and a dual-condition search mode; and receiving a search mode selected by the user and executing the corresponding search operation according to the selected mode.
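The three search modes can be sketched as a simple dispatch; the record layout `(voice_id, transcript, speaker_id)` and all names below are hypothetical stand-ins for the keyword and voiceprint databases:

```python
KEYWORD, VOICEPRINT, DUAL = "keyword", "voiceprint", "dual"

def search(mode, records, keyword=None, speaker=None):
    # records: (voice_id, transcript, speaker_id) tuples; speaker stands
    # in for an identity matched against the voiceprint database.
    results = []
    for voice_id, transcript, spk in records:
        kw_ok = keyword is not None and keyword in transcript
        vp_ok = speaker is not None and spk == speaker
        if ((mode == KEYWORD and kw_ok)
                or (mode == VOICEPRINT and vp_ok)
                or (mode == DUAL and kw_ok and vp_ok)):
            results.append(voice_id)
    return results

records = [
    ("v1", "dinner at seven", "alice"),
    ("v2", "dinner moved to eight", "bob"),
    ("v3", "see you tomorrow", "alice"),
]
print(search(DUAL, records, keyword="dinner", speaker="alice"))
```

The dual-condition mode is simply the conjunction of the other two, which is why it narrows the results the most.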
In addition, to achieve the above object, the present invention further provides a voice data search system, comprising: a receiving unit for receiving a search instruction, voice information, and a fuzzy search range input by a user operating a search key; a determining unit for determining a search condition by combining the received voice information and the fuzzy search range, wherein the voice information determines the keywords in the search condition and the fuzzy search range determines the maximum number of characters allowed between the characters of those keywords; and a searching unit for searching the voice data according to the determined search condition and finding the voice data matching it.
Optionally, the system further comprises: an establishing unit for converting the voice data into text information through speech recognition technology and storing the text information in a keyword database.
Optionally, the establishing unit is further configured to perform voiceprint feature extraction on the voice data, generate voiceprint information, and store it in a voiceprint database; the determining unit is further configured to perform voiceprint feature extraction on the received voice information to determine the voiceprint features of the voice information as the search condition; and the searching unit is further configured to match the voiceprint features of the voice information against the voiceprint information in the voiceprint database corresponding to the voice data, so as to find the voice data matching the search condition.
Optionally, the establishing unit is further configured to establish a search mode for voice data search, where the search mode includes a keyword search mode, a voiceprint search mode, and a dual-condition search mode; the receiving unit is further configured to receive a search mode selected by a user.
The voice data searching method and system provided by the invention can search for voice data meeting the conditions according to the keywords determined by the user's voice information and the number of characters allowed between them as determined by the fuzzy search range, and/or the voiceprint features of the voice information. The voice data the user is looking for is thus located quickly and conveniently, enhancing the user experience.
Drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing various embodiments of the present invention;
Fig. 2 is a diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flowchart of a voice data searching method according to a first embodiment of the present invention;
Fig. 4 is a flowchart of a voice data searching method according to a second embodiment of the present invention;
Fig. 5 is a flowchart of a voice data searching method according to a third embodiment of the present invention;
Fig. 6 is a flowchart of a voice data searching method according to a fourth embodiment of the present invention;
Fig. 7 is a schematic interface diagram of the voice data searching method according to the present invention;
Fig. 8 is a block diagram of a voice data search system according to a fifth embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and navigation devices, as well as stationary terminals such as digital TVs and desktop computers. In the following, it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals, apart from elements specifically configured for mobile use.
Fig. 1 is a schematic hardware structure of an optional mobile terminal for implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented. More or fewer components may alternatively be implemented. Elements of the mobile terminal will be described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like. The broadcast receiving module 111 may receive signals broadcast by various types of broadcasting systems. In particular, it may receive digital broadcasts from digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), Media Forward Link Only (MediaFLO™), and Integrated Services Digital Broadcasting-Terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to be suitable for various broadcasting systems that provide broadcast signals, in addition to the above-mentioned digital broadcasting systems.
The broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
The mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless internet access technologies to which the module relates may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and the like.
The location information module 115 is a module for checking or acquiring location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system). According to the current technology, the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, a method for calculating position and time information uses three satellites and corrects an error of the calculated position and time information by using another satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating current position information in real time.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of user contact (i.e., touch input) with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, as will be described below in connection with a touch screen.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration: when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to inform the user. By providing such a tactile output, the user can recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or separately from it. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, implementations such as procedures or functions may be realized with separate software modules, each allowing at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, stored in the memory 160, and executed by the controller 180.
Up to this point, mobile terminals have been described in terms of their functionality. Hereinafter, a slide-type mobile terminal, among the various types of mobile terminals such as folder-type, bar-type, swing-type, and slide-type, will be described as an example for the sake of brevity. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide type.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interface used by the communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of Base Stations (BSs) 270, Base Station Controllers (BSCs) 275, and a Mobile Switching Center (MSC) 280. The MSC 280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in fig. 2 may include multiple BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each covered by an omnidirectional antenna or an antenna pointing in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support multiple frequency allocations, with each frequency allocation having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency allocation may be referred to as a CDMA channel. The BS270 may also be referred to as a Base Transceiver Subsystem (BTS) or by other equivalent terminology. In such a case, the term "base station" may be used to refer collectively to a single BSC275 and at least one BS270. The base stations may also be referred to as "cells". Alternatively, each sector of a particular BS270 may be referred to as multiple cell sites.
As shown in fig. 2, a Broadcast Transmitter (BT)295 transmits a broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving module 111 as shown in fig. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In fig. 2, several Global Positioning System (GPS) satellites 300 are shown. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in fig. 1 is generally configured to cooperate with satellites 300 to obtain desired positioning information. Other techniques that can track the location of the mobile terminal may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may selectively or additionally process satellite DMB transmission.
As a typical operation of the wireless communication system, the BS270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 are typically engaged in calls, messaging, and other types of communications. Each reverse link signal received by a particular BS270 is processed within that BS270, and the resulting data is forwarded to the associated BSC275. The BSC275 provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270. The BSC275 also routes the received data to the MSC280, which provides additional routing services for interfacing with the PSTN290. Similarly, the PSTN290 interfaces with the MSC280, the MSC280 interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the present invention provides various embodiments of the method.
The voice data searching method provided by the invention is applied to a mobile terminal and searches the voice data stored in the mobile terminal according to search conditions input by the user. The voice data includes, but is not limited to, audio files and voice messages received in various applications of the mobile terminal.
Example one
As shown in fig. 3, a first embodiment of the present invention provides a method for searching voice data, including the following steps:
and S100, receiving a search instruction input by a user.
Specifically, a search key is provided in the mobile terminal. When a user needs to search for voice data, the user operates (e.g., touches or presses) the search key to issue a search instruction and activate the voice data search function. In this embodiment, the search key may be a physical key or a virtual key.
And S200, receiving voice information input by a user.
Specifically, after a search instruction of the user is received, the user is prompted on the display interface of the mobile terminal to input voice information. In this embodiment, the user continuously touches or presses the search key while inputting the voice information, and may input it through a microphone of an earphone or of the mobile terminal. For example, after being prompted on the display interface, the user keeps pressing the search key and says "we go to the mountain bar" into the microphone, and the mobile terminal receives the voice information "we go to the mountain bar" input by the user.
And S300, receiving the fuzzy search range input by the user.
Specifically, a pressure sensor is provided in the mobile terminal, and may be arranged under the search key to receive pressure information when the user operates the search key. After the user inputs the voice information, the display interface prompts the user to input the fuzzy search range. The user applies pressure to the search key area by pressing the search key; the pressure sensor detects the pressure value and sends it to the processor, and the processor determines the corresponding fuzzy search range according to the pressure value. The fuzzy search range refers to the number N of characters allowed between adjacent characters of the voice information input by the user, and N increases dynamically as the pressure value detected by the pressure sensor increases. In this embodiment, the number N refers to the number of characters between adjacent characters of the voice information. For example, when the user presses the search key and the pressure sensor detects a pressure value of 2, the number of characters allowed between adjacent characters of the voice information is 2.
In other embodiments, the number N may instead refer to the number of characters between two adjacent words. Setting the number N of interval characters enables fuzzy search over the input voice information, and setting different values of N can tune the search speed.
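The embodiment states only that N grows with the detected pressure value; a minimal sketch of one possible mapping (the function name and the upper bound are assumptions, not part of the disclosure) is:

```python
def fuzzy_range_from_pressure(pressure, max_n=9):
    """Map the pressure value reported by the sensor to the number N of
    interval characters. The embodiment only requires N to increase with
    pressure; this direct, bounded mapping is one assumption."""
    return min(int(pressure), max_n)
```

With a detected pressure value of 2, as in the example above, N would be 2.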
And S400, determining a search condition by combining the received voice information and the fuzzy search range.
Specifically, the voice information determines the keywords in the search condition, and the fuzzy search range determines the number of characters allowed between the keyword characters in the search condition. For example, when the received voice information is "we go to the mountain bar" and the fuzzy search range allows 2 interval characters, the search condition is determined as "we**go**to**the**mountain**bar", where each "*" stands for one arbitrary character.
Further, the speech information and the fuzzy search range also determine the order of keywords in the search condition.
In another embodiment, if the user skips the step of inputting the fuzzy search range and does not input the fuzzy search range, only the keywords provided by the voice information are considered in determining the search condition, and the number of characters spaced between the characters of the keywords and the sequence of the keywords are not considered.
And S500, searching the voice data to be matched according to the confirmed search condition, and finding the voice data matched with the search condition.
Specifically, matching with the search condition means that there are keywords identical to the search condition in the content of the voice data, and the number of characters spaced between adjacent keyword characters is not greater than the number N of characters in the search condition. And after the search is finished, displaying a search result on a display interface, namely displaying the voice data matched with the search condition.
Further, the user can preview the voice data in the search results. If, after previewing, no voice data meets the requirement, the user can expand the fuzzy search range by applying further pressure to the search key, and the mobile terminal matches and displays results again according to the new search condition.
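The interval-character rule of steps S400 and S500 can be sketched with a regular expression that joins the keywords with a gap of at most N arbitrary characters. The helper names and the English word-level tokens are illustrative; the embodiment's example operates on the characters of the voice information.

```python
import re

def build_fuzzy_pattern(keywords, n):
    # Allow up to n arbitrary characters between adjacent keywords,
    # mirroring the fuzzy search range N of step S400.
    gap = ".{0,%d}" % n
    return re.compile(gap.join(re.escape(k) for k in keywords))

def search_transcripts(transcripts, keywords, n):
    # Keep the transcripts in which adjacent keywords are separated by
    # no more than n characters (the matching rule of step S500).
    pattern = build_fuzzy_pattern(keywords, n)
    return [t for t in transcripts if pattern.search(t)]
```

A larger N, produced by pressing the search key harder, widens the gaps and therefore matches more candidate voice data.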
According to the voice data searching method provided by the embodiment, the searching condition is determined through the voice information input by the user and the fuzzy searching range, so that the matched voice data is searched in the mobile terminal, the method is convenient and fast, and the user experience is enhanced.
Example two
As shown in fig. 4, a second embodiment of the present invention provides a voice data searching method. In the second embodiment, the voice data searching method is similar to the steps of the first embodiment except that the step S100 is preceded by the step of:
and S102, establishing a keyword database corresponding to the voice data.
Specifically, voice data in the mobile terminal is converted into text information through a voice recognition technology, the text information is uniformly stored in a keyword database, and each piece of text information is associated with corresponding voice data.
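The keyword database of step S102 can be sketched as a mapping from each voice file to its recognized text. The transcribe callable stands in for the speech-recognition engine and is a hypothetical parameter, as is the predicate used for matching.

```python
def build_keyword_database(voice_files, transcribe):
    # Convert each voice file to text once, up front, so later searches
    # run on text without re-decoding the audio.
    return {path: transcribe(path) for path in voice_files}

def find_matching_voice_data(database, predicate):
    # Return the voice files whose stored text satisfies the search
    # predicate (e.g., a fuzzy keyword pattern from step S400).
    return [path for path, text in database.items() if predicate(text)]
```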
In this embodiment, step S500 specifically includes searching the keyword database corresponding to the voice data to be matched according to the confirmed search condition to find the voice data matched with the search condition.
According to the voice data searching method provided by this embodiment, the keyword database is established in advance and the voice data in the mobile terminal is converted into text information beforehand, so that the voice data to be matched need not be converted again during matching. This increases the matching speed and quickly locates the voice data the user is searching for.
Example three
As shown in fig. 5, a third embodiment of the present invention proposes a voice data searching method. In the third embodiment, the voice data searching method is similar to the steps of the second embodiment except that step S102 is replaced with step S104, and step S300 and step S400 are replaced with step S304. The method specifically comprises the following steps:
and S104, establishing a voiceprint database corresponding to the voice data.
Specifically, voiceprint feature extraction is performed on the voice data in the mobile terminal to generate voiceprint information, which is uniformly stored in a voiceprint database, and each piece of voiceprint information is associated with the corresponding voice data. The voiceprint feature extraction includes calculating a pitch period parameter by the cepstrum method, converting the power spectrum of the voice signal into Mel-frequency cepstral coefficients (MFCCs) through a Mel filter bank, combining these into a voiceprint parameter by a feature extraction algorithm, and establishing a Gaussian mixture model for each speaker of the voice data.
And S100, receiving a search instruction input by a user.
And S200, receiving voice information input by a user.
S304, voice print feature extraction is carried out on the voice information to determine a search condition.
Specifically, the search condition is a voiceprint feature of the voice information.
And S500, searching the voice data to be matched according to the confirmed search condition, and finding the voice data matched with the search condition.
In this embodiment, the step specifically includes matching the voiceprint feature of the voice information with the voiceprint information in the voiceprint database corresponding to the voice data to be matched, so as to find the voice data matched with the search condition. Matching with the search condition means that the voiceprint feature in the voiceprint information associated with the voice data is the same as the voiceprint feature of the voice information. And after the search is finished, displaying a search result on a display interface, namely displaying the voice data matched with the search condition.
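The embodiment matches voiceprints by scoring against per-speaker Gaussian mixture models; as a simplified stand-in, the sketch below compares fixed-length voiceprint vectors (e.g., averaged MFCCs; the vectors, threshold, and helper names here are illustrative assumptions) by cosine similarity.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_voiceprint(query_vec, voiceprint_db, threshold=0.95):
    # Return the voice data whose stored voiceprint vector is close
    # enough to the query's voiceprint. Cosine similarity replaces the
    # embodiment's GMM scoring as a simplified stand-in.
    return [path for path, vec in voiceprint_db.items()
            if cosine_similarity(query_vec, vec) >= threshold]
```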
In the voice data searching method provided by this embodiment, voiceprint feature extraction is performed on the voice information input by the user to determine the search condition, so that the matching voice data is searched for in the mobile terminal. By establishing the voiceprint database in advance, voiceprint features are extracted from the voice data in the mobile terminal beforehand, so that the voice data to be matched need not be processed again during matching; this increases the matching speed and quickly locates the voice data the user is searching for.
Example four
As shown in fig. 6, a fourth embodiment of the present invention provides a method for searching voice data, including the steps of:
and S106, establishing a search mode of voice data search.
Specifically, the search patterns include, but are not limited to: a keyword search mode, a voiceprint search mode, and a dual condition search mode. The search basis in the keyword search mode is only the voice content in the voice information input by the user and is irrelevant to the voiceprint characteristics. The basis of searching in the voiceprint searching mode is only the voiceprint characteristics in the voice information input by the user and is irrelevant to the voice content. The basis of searching in the dual-condition searching mode is to simultaneously meet the requirements of the voiceprint feature and the voice content.
And S108, receiving the search mode selected by the user, and executing corresponding search operation according to the search mode selected by the user.
Specifically, when the user selects the keyword search mode, the steps in the first embodiment or the second embodiment are performed. When the user selects the voiceprint search mode, the steps in the third embodiment are performed. When the user selects the two-condition search mode, the steps in the first embodiment or the second embodiment are combined with the steps in the third embodiment to perform a search operation. In the dual-condition search mode, the search condition includes a keyword determined according to the voice information input by the user, the number of characters spaced between the characters of the keyword determined according to the fuzzy search range input by the user, and the voiceprint feature of the voice information.
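The mode dispatch of step S108 can be sketched as follows, where keyword_hits and voiceprint_hits stand for the result lists produced by the earlier embodiments, and the dual-condition mode keeps only the voice data present in both (the function and mode names are illustrative):

```python
def dispatch_search(mode, keyword_hits, voiceprint_hits):
    # Select the result set according to the user-chosen search mode.
    if mode == "keyword":
        return keyword_hits
    if mode == "voiceprint":
        return voiceprint_hits
    if mode == "dual":
        # Dual-condition mode: a hit must satisfy both the keyword
        # condition and the voiceprint condition at the same time.
        return [h for h in keyword_hits if h in voiceprint_hits]
    raise ValueError("unknown search mode: %s" % mode)
```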
The voice data searching method provided by the embodiment can enable a user to select a proper searching mode according to the own needs, accurately search the voice data to be searched under different searching conditions, and enhance the user experience.
It should be understood that in other embodiments, in order to implement accurate search, steps may be added to the above-mentioned embodiments one to four to adjust the search condition.
For example, the mobile terminal records the attribution (e.g., WeChat, QQ) or path information of the voice data when acquiring it, and classifies the voice data accordingly. After receiving the voice information input by the user, the mobile terminal can prompt the user to input the attribution or path of the voice data to be searched, and search only the folders meeting the condition.
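The attribution-based narrowing described above can be sketched as a filter over an index that records each voice file's source application at acquisition time (the index shape and tags are illustrative):

```python
def filter_by_attribution(voice_index, attribution):
    # voice_index maps file path -> source application tag recorded
    # when the voice data was acquired; keep only matching files.
    return [p for p, src in voice_index.items() if src == attribution]
```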
For another example, when generating voice data, the mobile terminal adds a flag bit, where the flag bit is used to indicate whether the voice data is recorded through an earphone or a microphone of the terminal. A search can be made from the same voice data as the way in which the user currently inputs voice information.
As shown in fig. 7, it is an interface schematic diagram of a voice data search method proposed by the present invention, including:
(1) the user presses the search key to send a search instruction, and the mobile terminal receives the search instruction input by the user and starts a search function.
(2) The display interface prompts the user to input voice information. The user keeps pressing the search key to input the voice information "we go to the mountain bar".
(3) And the mobile terminal receives the voice information and displays the voice information in a display interface.
(4) The display interface prompts the user to enter the fuzzy search range. When a user presses a search key, the pressure sensor detects that the pressure value is 2, and the mobile terminal receives a fuzzy search range input by the user, namely the number of characters at intervals between every two characters of the voice information is 2.
(5) The mobile terminal determines the search condition "we**go**to**the**mountain**bar" (each "*" standing for one arbitrary character) by combining the received voice information and the fuzzy search range, and displays the search condition in the display interface. The mobile terminal then searches the voice data according to the search condition.
The invention further provides a voice data searching system which is operated in the mobile terminal and used for searching the voice data stored in the mobile terminal according to the searching condition input by the user. The voice data includes, but is not limited to, audio files or voice messages once received in various application software of the mobile terminal.
Example five
As shown in fig. 8, a fifth embodiment of the present invention provides a voice data search system, including:
the receiving unit 802 is configured to receive a search instruction, voice information, and a fuzzy search range input by a user.
Specifically, a search key is provided in the mobile terminal, and when a user needs to search for voice data, the search key is operated (e.g., touched or pressed) to issue a search instruction, and a function of searching for voice data is activated. In this embodiment, the search key may be an entity home key or a virtual home key.
When the receiving unit 802 receives a search instruction from the user, the user continuously touches or presses the search key to input voice information, and may input it through a microphone of an earphone or of the mobile terminal. For example, when the user keeps pressing the search key and says "we go to the mountain bar" into the microphone, the receiving unit 802 receives the voice information "we go to the mountain bar" input by the user.
A pressure sensor is arranged in the mobile terminal, and the pressure sensor can be arranged below the search key and used for receiving pressure information when the user operates the search key. The user applies pressure to the search key region by pressing the search key, the pressure sensor sends the pressure value to the receiving unit 802 after detecting the pressure value, and the receiving unit 802 determines the corresponding fuzzy search range according to the pressure value.
The fuzzy search range refers to the number N of characters allowed between adjacent characters of the voice information input by the user, and N increases dynamically as the pressure value detected by the pressure sensor increases. In this embodiment, the number N refers to the number of characters between adjacent characters of the voice information. For example, when the user presses the search key and the pressure sensor detects a pressure value of 2, the number of characters allowed between adjacent characters of the voice information is 2. In other embodiments, the number N may instead refer to the number of characters between two adjacent words.
A determining unit 804, configured to determine a search condition by combining the received voice information and the fuzzy search range.
Specifically, the voice information determines the keywords in the search condition, and the fuzzy search range determines the number of characters allowed between the keyword characters in the search condition. For example, when the received voice information is "we go to the mountain bar" and the fuzzy search range allows 2 interval characters, the determination unit 804 determines the search condition as "we**go**to**the**mountain**bar", where each "*" stands for one arbitrary character.
Further, the speech information and the fuzzy search range also determine the order of keywords in the search condition.
In another embodiment, if the user does not input the fuzzy search range, the determining unit 804 only considers the keywords provided by the voice information when determining the search condition, and does not consider the number of characters spaced between the characters of the keywords and the order of the keywords.
The searching unit 806 is configured to search the voice data to be matched according to the confirmed search condition, and find the voice data matched with the search condition.
Specifically, matching with the search condition means that there are keywords identical to the search condition in the content of the voice data, and the number of characters spaced between adjacent keyword characters is not greater than the number N of characters in the search condition. And after the search is finished, displaying a search result on a display interface, namely displaying the voice data matched with the search condition.
Further, the system may further include:
the establishing unit 800 is configured to establish a keyword database corresponding to the voice data.
Specifically, the establishing unit 800 converts voice data in the mobile terminal into text information through a voice recognition technology, stores the text information in a keyword database in a unified manner, and associates each piece of text information with corresponding voice data.
The searching unit 806 is further configured to search the keyword database corresponding to the voice data to be matched according to the confirmed search condition to find the voice data matched with the search condition.
The voice data search system provided by this embodiment determines the search condition from the voice information input by the user and the fuzzy search range, so as to search for the matching voice data in the mobile terminal. By establishing the keyword database in advance, the voice data in the mobile terminal is converted into text information beforehand, so that the voice data to be matched need not be converted again during matching; this increases the matching speed and quickly locates the voice data the user is searching for.
Example six
A sixth embodiment of the present invention provides a voice data search system. In the sixth embodiment, the voice data search system is similar to the fifth embodiment except that:
the establishing unit 800 may also be configured to establish a voiceprint database corresponding to the voice data.
Specifically, the establishing unit 800 performs voiceprint feature extraction on the voice data in the mobile terminal to generate voiceprint information, stores the voiceprint information uniformly in a voiceprint database, and associates each piece of voiceprint information with the corresponding voice data. The voiceprint feature extraction includes calculating a pitch period parameter by the cepstrum method, converting the power spectrum of the voice signal into Mel-frequency cepstral coefficients (MFCCs) through a Mel filter bank, combining these into a voiceprint parameter by a feature extraction algorithm, and establishing a Gaussian mixture model for each speaker of the voice data.
The determining unit 804 is further configured to perform voiceprint feature extraction on the voice information to determine a search condition. Specifically, the search condition is a voiceprint feature of the voice information.
The searching unit 806 is further configured to match the voiceprint feature of the voice information with the voiceprint information in the voiceprint database corresponding to the voice data to be matched, so as to find the voice data matched with the searching condition. Matching with the search condition means that the voiceprint feature in the voiceprint information associated with the voice data is the same as the voiceprint feature of the voice information.
The voice data search system provided by this embodiment determines the search condition by extracting voiceprint features from the voice information input by the user, so as to search for the matching voice data in the mobile terminal. By establishing the voiceprint database in advance, voiceprint features are extracted from the voice data in the mobile terminal beforehand, so that the voice data to be matched need not be processed again during matching; this increases the matching speed and quickly locates the voice data the user is searching for.
Example seven
A seventh embodiment of the present invention provides a voice data search system. In the seventh embodiment, the voice data search system is similar to the sixth embodiment except that:
the establishing unit 800 is further configured to establish a search mode for voice data search.
Specifically, the search patterns include, but are not limited to: a keyword search mode, a voiceprint search mode, and a dual condition search mode. The search basis in the keyword search mode is only the voice content in the voice information input by the user and is irrelevant to the voiceprint characteristics. The basis of searching in the voiceprint searching mode is only the voiceprint characteristics in the voice information input by the user and is irrelevant to the voice content. The basis of searching in the dual-condition searching mode is to simultaneously meet the requirements of the voiceprint feature and the voice content.
The receiving unit 802 is further configured to receive a search mode selected by a user, and then the receiving unit 802, the determining unit 804 and the searching unit 806 perform a corresponding search operation according to the search mode selected by the user.
The voice data searching system provided by the embodiment can enable a user to select a proper searching mode according to the own needs, accurately search the voice data to be searched under different searching conditions, and enhance the user experience.
It should be understood that in other embodiments, the above-described methods and systems may be applied not only to the mobile terminal, but also to a fixed terminal such as a digital TV, a desktop computer, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A method for searching voice data, the method comprising the steps of:
receiving a search instruction, voice information and a fuzzy search range which are input by a user through operating a search key; the fuzzy search range is obtained by detecting the pressure value of the search key pressed by the user through a pressure sensor;
determining a search condition by combining received voice information and a fuzzy search range, wherein the voice information determines keywords in the search condition, and the fuzzy search range determines the number of characters at intervals between the characters of the keywords in the search condition;
and searching the voice data according to the confirmed search condition, and finding the voice data matched with the search condition.
2. The voice data searching method of claim 1, further comprising, before the step of receiving a search instruction input by a user by operating a search key, the steps of:
and converting the voice data into character information through a voice recognition technology, and storing the character information in a keyword database.
3. The voice data searching method according to claim 2, further comprising the steps of: performing voiceprint feature extraction on the voice data to generate voiceprint information and storing the voiceprint information in a voiceprint database; wherein,
determining a search condition by combining the received voice information and a fuzzy search range, wherein the voice information determines keywords in the search condition, and the fuzzy search range determines the number of characters at intervals between the characters of the keywords in the search condition, and comprises the following steps: performing voiceprint feature extraction on the received voice information to determine a search condition as the voiceprint feature of the voice information;
searching the voice data according to the confirmed search condition, wherein the step of finding the voice data matched with the search condition comprises the following steps: and matching the voiceprint characteristics of the voice information with the voiceprint information in the voiceprint database corresponding to the voice data so as to find the voice data matched with the search condition.
4. The method of claim 3, wherein the voiceprint feature extraction comprises calculating a pitch period parameter by the cepstrum method, converting the power of the speech signal into Mel-frequency cepstral coefficients through a Mel filter, combining the two parameters into a voiceprint parameter by a feature extraction algorithm, and establishing a Gaussian mixture model for the speaker.
5. The voice data searching method according to claim 3, further comprising, before the step of receiving a search instruction input by a user by operating a search key, the steps of:
establishing a search mode of voice data search, wherein the search mode comprises a keyword search mode, a voiceprint search mode and a dual-condition search mode;
receiving a search mode selected by a user, and executing corresponding search operation according to the search mode selected by the user, wherein the search basis in the keyword search mode is the voice content in the voice information input by the user, the search basis in the voiceprint search mode is the voiceprint feature in the voice information input by the user, and the search basis in the dual-condition search mode is the basis meeting the requirements of the voiceprint feature and the voice content at the same time.
6. A voice data search system, comprising:
the receiving unit is used for receiving a search instruction, voice information and a fuzzy search range which are input by a user through operating a search key; the fuzzy search range is obtained by detecting the pressure value of the search key pressed by the user through a pressure sensor;
a determining unit, configured to determine a search condition by combining received voice information and a fuzzy search range, where the voice information determines keywords in the search condition, and the fuzzy search range determines the number of characters spaced between the keywords in the search condition;
and the searching unit is used for searching the voice data according to the confirmed searching condition and finding the voice data matched with the searching condition.
7. The voice data search system of claim 6, further comprising:
an establishing unit, configured to convert the voice data into text information through speech recognition and store the text information in a keyword database.
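The establishing unit of claim 7 can be pictured as a toy keyword database keyed by recording id; here the speech-recognition step is stubbed out with ready-made transcripts, and all names are hypothetical:

```python
def build_keyword_db(transcripts):
    """Map each recording id to the set of words recognized in it.
    A real system would obtain `transcripts` from a speech recognizer."""
    return {rec_id: set(text.lower().split())
            for rec_id, text in transcripts.items()}

def search_keyword(db, word):
    """Return the ids of recordings whose transcript contains the word."""
    return sorted(rec_id for rec_id, words in db.items()
                  if word.lower() in words)

db = build_keyword_db({
    "rec1.wav": "meeting about the budget review",
    "rec2.wav": "call with the supplier",
})
```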
8. The voice data search system according to claim 7, wherein:
the establishing unit is further configured to perform voiceprint feature extraction on the voice data to generate voiceprint information, and to store the voiceprint information in a voiceprint database;
the step of determining a search condition by combining the received voice information and the fuzzy search range, wherein the voice information determines the keywords in the search condition and the fuzzy search range determines the number of characters permitted between the characters of the keywords, comprises: performing voiceprint feature extraction on the received voice information, the search condition being determined as the voiceprint feature of the voice information;
and the step of searching the voice data according to the determined search condition and finding the voice data matching the search condition comprises: matching the voiceprint feature of the voice information against the voiceprint information in the voiceprint database corresponding to the voice data, so as to find the voice data matching the search condition.
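Claim 8 matches the query's voiceprint feature against stored voiceprint information. As an illustrative stand-in for Gaussian-mixture scoring, the sketch below compares fixed-length feature vectors by cosine similarity; the threshold, vectors and names are my assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two voiceprint feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_voiceprint(query, database, threshold=0.95):
    """Return ids of stored recordings whose voiceprint is close enough."""
    return [rec_id for rec_id, vec in database.items()
            if cosine_similarity(query, vec) >= threshold]

db = {"rec1.wav": [1.0, 0.1, 0.0], "rec2.wav": [0.0, 1.0, 0.9]}
hits = match_voiceprint([0.9, 0.15, 0.05], db)
```

A real speaker-verification system would score frames against each speaker's trained model rather than threshold a single vector distance, but the lookup structure is the same.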
9. The voice data search system according to claim 8, wherein:
the establishing unit is further configured to establish search modes for voice data search, the search modes comprising a keyword search mode, a voiceprint search mode and a dual-condition search mode;
the receiving unit is further configured to receive the search mode selected by the user;
wherein the search basis in the keyword search mode is the voice content of the voice information input by the user, the search basis in the voiceprint search mode is the voiceprint feature of that voice information, and the search basis in the dual-condition search mode is satisfying the requirements of both the voiceprint feature and the voice content.
CN201610279205.7A 2016-04-29 2016-04-29 Voice data searching method and system Active CN106024013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610279205.7A CN106024013B (en) 2016-04-29 2016-04-29 Voice data searching method and system


Publications (2)

Publication Number Publication Date
CN106024013A CN106024013A (en) 2016-10-12
CN106024013B true CN106024013B (en) 2022-01-14

Family

ID=57081729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610279205.7A Active CN106024013B (en) 2016-04-29 2016-04-29 Voice data searching method and system

Country Status (1)

Country Link
CN (1) CN106024013B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610699A * 2017-09-06 2018-01-19 Shenzhen Jinkangte Intelligent Technology Co., Ltd. An intelligent wearable device with a meeting-minutes function
CN107622137A * 2017-10-23 2018-01-23 Tencent Music Entertainment Technology (Shenzhen) Co., Ltd. Method and apparatus for searching voice messages
CN107818786A * 2017-10-25 2018-03-20 Vivo Mobile Communication Co., Ltd. A call voice processing method and mobile terminal
CN110232071A * 2019-04-26 2019-09-13 Ping An Technology (Shenzhen) Co., Ltd. Search method, apparatus, storage medium and electronic device for drug data
CN111597435B * 2020-04-15 2023-08-08 Vivo Mobile Communication Co., Ltd. Voice search method and device, and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996195A * 2009-08-28 2011-03-30 *** Communications Group Co. Method, device and equipment for searching voice information in audio files
CN102142030A * 2011-03-22 2011-08-03 Beijing UC Network Co., Ltd. Data searching method and data searching device
CN103870491A * 2012-12-13 2014-06-18 Lenovo (Beijing) Co., Ltd. Information matching method and electronic device
CN104462262A * 2014-11-21 2015-03-25 Beijing Qihoo Technology Co., Ltd. Method and device for implementing voice search, and browser client
CN104598527A * 2014-12-26 2015-05-06 Yingshi Information Technology (Beijing) Co., Ltd. Voice search method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100498789C * 2006-09-08 2009-06-10 Ding Guangyao Characteristic character-string matching method based on dispersion, cross and incompleteness
KR20120113717A * 2009-12-04 2012-10-15 Sony Corporation Search device, search method, and program
CN102968493A * 2012-11-27 2013-03-13 Shanghai Liangming Technology Development Co., Ltd. Method, client and system for executing a voice search via an input-method tool
CN105335415A * 2014-08-04 2016-02-17 Beijing Sogou Technology Development Co., Ltd. Search method based on input prediction, and input method system
CN104679855B * 2015-02-13 2019-04-05 Guangdong OPPO Mobile Telecommunications Co., Ltd. Playlist creation method and terminal device
CN104751847A * 2015-03-31 2015-07-01 Liu Chang Data acquisition method and system based on voiceprint recognition
CN105005630B * 2015-08-18 2018-07-13 Ruidasheng Technology (Dalian) Co., Ltd. Method for multi-dimensional detection of a specific target across all media

Also Published As

Publication number Publication date
CN106024013A (en) 2016-10-12

Similar Documents

Publication Publication Date Title
KR101466027B1 (en) Mobile terminal and its call contents management method
CN106157970B (en) Audio identification method and terminal
CN106024013B (en) Voice data searching method and system
CN106911850B (en) Mobile terminal and screen capturing method thereof
CN105577532B (en) Application message processing method and device based on keywords and mobile terminal
CN104750420A (en) Screen capturing method and device
CN105139851A (en) Desktop application icon organization mobile terminal and method
CN106130734A (en) The control method of mobile terminal and control device
CN106778176B (en) Information processing method and mobile terminal
CN107148012B (en) Remote assistance method and system between terminals
CN105718071A (en) Terminal and method for recommending associational words in input method
CN106547439B (en) Method and device for processing message
CN104932697B (en) Gesture unlocking method and device
CN106598538B (en) Instruction set updating method and system
CN104809221A (en) Recommending method for music information and device
CN106534560B (en) Mobile terminal control device and method
CN105100428A (en) Linkman display method and system
CN105262819A (en) Mobile terminal and method thereof for achieving push
CN104980549A (en) Information processing method and mobile terminal
CN104914998A (en) Mobile terminal and multi-gesture desktop operation method and device thereof
CN105205159B (en) Device and method for automatically feeding back information
CN104811565A (en) Voice change communication realization method and terminal
CN106341554B (en) Method and device for quickly searching data content and mobile terminal
CN104780278A (en) Communication data-based route generation method and device
CN104980576A (en) Method and device for automatically extracting number for mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant