CN109407916A - Data search method, terminal, graphical user interface, and storage medium - Google Patents

Data search method, terminal, graphical user interface, and storage medium

Info

Publication number
CN109407916A
CN109407916A (application CN201810981347.7A)
Authority
CN
China
Prior art keywords
text
search
data
response text
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810981347.7A
Other languages
Chinese (zh)
Inventor
Simon Ekstrand (西蒙·埃克斯特兰德)
Du Le (杜乐)
Qian Sha (钱莎)
Ge Peng (葛鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201810981347.7A priority Critical patent/CN109407916A/en
Publication of CN109407916A publication Critical patent/CN109407916A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a data search method, terminal, graphical user interface, and storage medium for searching on input data of multiple data types, so as to obtain more accurate search results closer to the user's search intention. The method comprises: displaying a search interface, where the search interface includes a first icon and a speech recognition icon; when the user clicks the first icon, calling a first recognition program to receive first input data, recognizing the first input data to obtain a first response text, and displaying it, where the first input data is augmented reality (AR) data, image data, or audio data, and the first recognition program is the recognition program corresponding to the first input data; when the user clicks the speech recognition icon, calling a speech recognition program to receive input voice data, recognizing the input voice data to obtain a second response text, and displaying it, where the input voice data is data associated with the first response text; and displaying a target search result obtained by searching based on the second response text.

Description

Data search method, terminal, graphical user interface, and storage medium
Technical field
This application relates to the field of information retrieval, and in particular to a data search method, a terminal, a graphical user interface, and a storage medium.
Background technique
As the amount of information grows, extracting useful information requires searching through large volumes of data. Some search engines accept text input or uploaded images, but this no longer covers increasingly rich usage scenarios. For example, after a photo is uploaded, the engine searches only for other pictures with features similar to that photo.
In existing schemes, when searching on data such as pictures, video, voice, or text, the terminal typically recognizes the input data directly and searches directly on the recognized result. For content that is hard to describe, such as video or pictures, the terminal searches only according to the recognized content and cannot accurately determine the search condition, so the search result often does not match the result the user actually wants.
Summary of the invention
The present application provides a data search method, terminal, graphical user interface, and storage medium for searching on input data of multiple data types, so as to obtain more accurate search results closer to the user's search intention and improve the user experience.
In view of this, a first aspect of the present application provides a data search method, comprising:
displaying a search interface, where the search interface includes a first icon for invoking a first recognition program and a speech recognition icon for invoking a speech recognition program; when an operation of the user clicking the first icon is detected, invoking the first recognition program to receive first input data, where the first input data is augmented reality (AR) data, image data, or audio data, and the first recognition program is the AR recognition program, image recognition program, or audio recognition program corresponding to the first input data — that is, when the first input data is AR data the first recognition program is an AR recognition program, when the first input data is image data it is an image recognition program, and when the first input data is audio data it is an audio recognition program; displaying, in the search interface, a first response text obtained after processing by the first recognition program; when an operation of the user clicking the speech recognition icon in the search interface is detected, invoking the speech recognition program to receive input voice data, where the input voice data is data associated with the first response text; presenting, in the search interface, a second response text obtained after processing by the speech recognition program; and searching based on the second response text to obtain a target search result, and displaying the target search result.
In this embodiment of the application, an object to be searched can be searched through a combination of at least two input modes. AR data, image data, or audio data is input first, and the corresponding recognition program is invoked to recognize it and obtain a first response text; input voice data is then entered and recognized by the speech recognition program to obtain a second response text; and a search based on the second response text yields the target search result. Therefore, for a search object that is hard to describe, this embodiment can show the object concretely through AR data, image data, or audio data, and then describe it further with a voice input on top of the first input data, making the search result more accurate and closer to the user's search intention and improving the user experience. Moreover, because the search object indicated by the first input data is supplemented by voice input rather than text input, the user can enter the description more conveniently, which improves input efficiency and the user experience.
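The two-stage flow above can be sketched as follows. This is a minimal illustration only: the recognizer stubs, the query-combination rule, and all names are invented stand-ins, not the implementation described in this application.

```python
# Hypothetical sketch of the two-stage multimodal search flow:
# stage 1 recognizes the first input (AR / image / audio) into a first
# response text; stage 2 refines it with a voice-derived second text.

def recognize(input_type, data):
    """Stage 1: route the first input to its matching recognizer (stubbed)."""
    recognizers = {
        "ar": lambda d: f"AR scene: {d}",
        "image": lambda d: f"image of {d}",
        "audio": lambda d: f"audio clip: {d}",
    }
    return recognizers[input_type](data)  # -> first response text

def search(first_text, second_text):
    """Stage 2: the voice input refines the first response text."""
    # The combined query narrows the search intent, as described above.
    return f"results for '{first_text}' refined by '{second_text}'"

first_response = recognize("image", "a red flower")
second_response = "only roses, in a vase"   # from the speech recognition program
target_result = search(first_response, second_response)
print(target_result)
```

A real terminal would replace the stubs with its AR, image, audio, and speech recognition programs; only the control flow is the point here.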
Optionally, in some possible implementations, searching based on the second response text to obtain the target search result and displaying the search result may include:
invoking a first search program relevant to the second response text, searching on the second response text within the first search program to obtain the target search result, and displaying the target search result in the first search program.
In this embodiment, a first search program relevant to the second response text can be invoked, the first response text and/or the second response text can be searched within that program to obtain the target search result, and the target search result can be displayed in the interface of the first search program. Thus, a relevant search program can be opened through voice input, the first response text and/or the second response text can be searched further, and a more vivid search result can be obtained.
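One way to picture the "relevant search program" selection is keyword routing. The application does not specify a routing scheme; the category table and app names below are invented purely for illustration.

```python
# Hedged sketch: choosing a "first search program" by matching the
# second response text against per-app keyword sets (all hypothetical).

APP_KEYWORDS = {
    "MusicApp": {"song", "album", "artist"},
    "MapsApp": {"restaurant", "route", "nearby"},
    "ShopApp": {"buy", "price", "discount"},
}

def pick_search_program(second_response_text):
    words = set(second_response_text.lower().split())
    for app, keys in APP_KEYWORDS.items():
        if words & keys:           # any keyword overlap routes to that app
            return app
    return "GeneralSearch"          # fallback when no app matches

print(pick_search_program("play this song"))    # MusicApp
print(pick_search_program("weather tomorrow"))  # GeneralSearch
```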
Optionally, in some possible implementations, searching on the second response text to obtain the target search result and displaying the search result may include:
searching based on the first response text together with the second response text, or on the second response text alone, to obtain the target search result, and displaying the target search result in the search interface.
In this application, besides searching based on the second response text alone, the search can also be based on the first response text combined with the second response text; after the target search result is obtained, it is presented in the search interface, yielding a more accurate search result.
Optionally, in some possible implementations, before obtaining the operation of the user clicking the speech recognition icon in the search interface, the method may further include:
displaying, in the search interface, a first search result obtained by searching on the first response text; and, after the second response text is obtained, searching on the second response text within the scope of the first search result to obtain the target search result, and displaying the target search result in the search interface.
In this embodiment, the first response text is searched to obtain the first search result, which is displayed in the search interface. The search result for the input data can thus be shown in real time, letting the user see results promptly. After the second response text is obtained, it is searched within the range of the first search result, which narrows the search scope and makes the search result more accurate.
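The scope-narrowing step amounts to searching the second text only inside the first result set rather than globally. A toy sketch, with an invented corpus and a simple substring-match rule standing in for the real search:

```python
# Sketch of narrowed search: the second response text is applied only
# within the first result set. Corpus and matching rule are illustrative.

def search_in(corpus, query):
    q = query.lower()
    return [doc for doc in corpus if q in doc.lower()]

corpus = [
    "Red rose bouquet", "Red tulip field",
    "White rose garden", "Blue sky photo",
]
first_results = search_in(corpus, "rose")          # from the first response text
target_results = search_in(first_results, "red")   # second text, narrowed scope

print(target_results)  # ['Red rose bouquet']
```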
Optionally, in some possible implementations, displaying, in the search interface, the first response text obtained after processing by the first recognition program may include:
invoking the first recognition program to recognize the first input data and obtain a first response text set, where the first response text set includes at least one text corresponding to the first input data; and determining a text in the first response text set that meets a preset condition as the first response text, and displaying the first response text in the search interface.
It should be noted that, in this application, "at least one" means one or more, and "multiple" means two or more.
In this embodiment, when the first recognition program is invoked to recognize the first input data, a first response text set is obtained, containing at least one text corresponding to the first input data. The texts in the first response text set can be screened by a preset condition, and a text meeting the preset condition is taken as the first response text and used for the subsequent search. The preset condition may be, for example, that the text's search frequency on the terminal or the internet exceeds a threshold, that the text results from big-data analysis, or that the text is the most relevant to the input data. The first response text obtained in this embodiment is therefore closer to the intended search condition, making the search result more accurate.
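Taking the search-frequency variant of the preset condition as an example, candidate screening can be sketched like this. The threshold and frequency figures are made up; the application does not fix concrete values.

```python
# Sketch of selecting the first response text from the candidate set by
# a preset condition: frequency above a threshold, then the highest.

def pick_response_text(candidates, threshold=100):
    """candidates: {text: search_frequency} (hypothetical frequencies)."""
    eligible = {t: f for t, f in candidates.items() if f > threshold}
    if not eligible:
        return None                      # nothing meets the preset condition
    return max(eligible, key=eligible.get)

candidates = {"golden retriever": 520, "yellow dog": 80, "retriever": 310}
print(pick_response_text(candidates))    # golden retriever
```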
Optionally, in some possible implementations, when the first input data is AR data or image data, a picture corresponding to the AR data or image data may be displayed with the multiple recognized texts shown on the picture, and the user selects one of the texts shown on the picture as the first response text.
Therefore, in this embodiment, the user can select the response text corresponding to the AR data or image data, which brings the search result closer to the user's search intention, yields the result the user wants, and improves the user experience.
Optionally, in some possible implementations, the method may further include:
if the first response text set includes at least two texts corresponding to the first input data, determining at least one alternative text from the first response text set and displaying the at least one alternative text in the search interface.
After a text is selected from the first response text set as the first response text according to the preset rule, alternative texts can be displayed in the search interface. An alternative text is determined from the first response text set; it may be a text in the set whose search frequency exceeds a threshold, or a text in the set that is more relevant to the input data. The user can choose any alternative text to replace the first response text for the subsequent search, bringing the search result still closer to the user's search intention.
Optionally, in some possible implementations, before obtaining the operation of the user clicking the speech recognition icon in the search interface, the method may further include:
obtaining an operation of the user clicking a target alternative text, taking the target alternative text as the first response text, and displaying the first response text in the search interface, where the target alternative text is any one of the at least one alternative text.
In this embodiment, alternative texts relevant to the first input data can also be displayed in the search interface; the user can select a target alternative text to search with as the first response text, bringing the search result closer to the user's search intention and improving the user experience.
Optionally, in some possible implementations, the method may further include:
determining at least one associated keyword associated with a first keyword and/or a second keyword, and displaying the at least one associated keyword in the search interface, where the first keyword is extracted from the first response text and the second keyword is extracted from the second response text.
In this embodiment, keywords can be extracted from the first response text and/or the second response text, at least one associated keyword is obtained from the first keyword and/or the second keyword, and the associated keywords are displayed in the search interface. The user can then select any associated keyword to replace a response text or add a search condition, making the search result more accurate.
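The keyword-association step can be pictured as extraction followed by a lookup in an association table. The stopword list and association table below are invented examples; the application does not specify how associations are built (it could equally be a learned model or a query log).

```python
# Illustrative sketch: extract keywords from the response texts and
# expand them through a hypothetical association table.

ASSOCIATIONS = {
    "rose": ["flower delivery", "rose garden", "valentine"],
    "guitar": ["acoustic guitar", "guitar chords"],
}

def associated_keywords(first_text, second_text, stopwords=("a", "the", "red")):
    words = (first_text + " " + second_text).lower().split()
    keywords = [w for w in words if w not in stopwords]   # crude extraction
    related = []
    for kw in keywords:
        related.extend(ASSOCIATIONS.get(kw, []))          # table lookup
    return related

print(associated_keywords("a red rose", "the guitar"))
```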
Optionally, in some possible implementations, the method may further include:
obtaining an operation of the user clicking a target keyword, taking the target keyword as a third response text, and displaying the third response text in the search interface, where the target keyword is any one of the at least one associated keyword; and searching based on the target search result and the third response text to obtain a second search result, and displaying the second search result in the search interface.
It should be noted that the search may be based on the target search result and the third response text, or on the first search result and the third response text; this can be adjusted according to the actual application scenario.
In this embodiment, after the operation of the user clicking the target keyword is detected, the target keyword is taken as the third response text and displayed in the search interface. A search based on the target search result and the third response text then yields a second search result, which is displayed in the search interface. The user can thus search on data associated with the input data and obtain a more accurate search result.
Optionally, in some possible implementations, the method may further include:
if the first input data is AR data or image data, reducing the size of the picture corresponding to the AR data or image data to obtain a thumbnail corresponding to the AR data or image data, and displaying the thumbnail in the search interface.
When the first input data is AR data or image data, the picture corresponding to the AR data or image data can be reduced to obtain a smaller thumbnail, which is displayed in the search interface. During the search, the user can then confirm whether the search object is correct; and because the thumbnail and the search result are displayed together, the user can analyze them jointly, making the search result more vivid.
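As a stdlib-only illustration of the size-reduction step, a picture can be shrunk by sampling every n-th pixel (nearest-neighbour). A real terminal would use an imaging library with proper filtering; the 4x4 "image" here is a toy stand-in.

```python
# Toy sketch of thumbnail generation by integer-factor downsampling.

def make_thumbnail(pixels, factor):
    """pixels: 2D list of pixel values; factor: integer shrink factor."""
    return [row[::factor] for row in pixels[::factor]]

image = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
    [13, 14, 15, 16],
]
thumb = make_thumbnail(image, 2)
print(thumb)  # [[1, 3], [9, 11]]
```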
Optionally, in some possible implementations, presenting, in the search interface, the second response text obtained after processing by the speech recognition program may include:
Typically, the input voice data is recorded after the user clicks the speech recognition icon. The speech recognition program recognizes the text corresponding to the input voice data, and a preset language model then converts that text into the second response text. The language model can be obtained by training on the terminal's historical search data; it performs semantic recognition on the text corresponding to the input voice data and converts it into text the terminal can search on directly.
Therefore, in this embodiment, the input voice data entered by the user can be converted through the language model to obtain the second response text. Through this conversion, the semantics expressed by the text corresponding to the input voice data can be recognized more accurately and better match the user's input habits, making the search result more accurate.
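The conversion step above can be caricatured as transcript normalization. A trained language model is replaced here by a toy rewrite table; the phrases are invented and stand in only for the idea of turning a spoken utterance into a directly searchable query.

```python
# Hedged sketch: normalizing a raw voice transcript into a search query.
# A real system would use the trained language model described above.

REWRITES = {
    "show me": "",
    "i want to find": "",
    "please": "",
}

def to_search_text(transcript):
    text = transcript.lower()
    for phrase, repl in REWRITES.items():
        text = text.replace(phrase, repl)   # strip conversational filler
    return " ".join(text.split())           # collapse leftover spaces

print(to_search_text("Please show me red roses"))  # red roses
```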
A second aspect of the embodiments of this application provides a terminal that has the function of implementing the data search method of the first aspect. The function can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function.
A third aspect of this application provides a graphical user interface (GUI) stored in a terminal. The terminal includes a display screen, one or more memories, and one or more processors, the one or more processors being configured to execute one or more computer programs stored in the one or more memories. The graphical user interface includes:
displaying a search interface, where the search interface includes a first icon for invoking a first recognition program and a speech recognition icon for invoking a speech recognition program; in response to an operation of the user clicking the first icon, displaying the interface of the first recognition program; displaying, in the search interface, a first response text obtained after processing by the first recognition program; in response to an operation of the user clicking the speech recognition icon in the search interface, displaying the interface of the speech recognition program; displaying, in the search interface, a second response text obtained after processing by the speech recognition program; and, in response to an operation of searching on the second response text, displaying a target search result.
Optionally, in some possible implementations, the graphical user interface may specifically include:
displaying the interface of a first search program relevant to the second response text, and, in response to an operation of searching on the second response text within the first search program, displaying the target search result in the interface of the first search program.
Optionally, in some possible implementations, the graphical user interface may specifically include:
in response to an operation of searching based on the first response text and the second response text, displaying the target search result in the search interface.
Optionally, in some possible implementations, the graphical user interface may further include:
in response to an operation of searching on the first response text, displaying a first search result in the search interface;
and, in response to an operation of searching on the second response text within the scope of the first search result, displaying the target search result in the search interface.
Optionally, in some possible implementations, the graphical user interface may specifically include:
in response to an operation of determining a response text in the first response text set that meets a preset condition as the first response text, displaying the first response text in the search interface, where the first response text set is obtained by invoking the first recognition program to recognize the first input data.
Optionally, in some possible implementations, the graphical user interface may further include:
displaying at least one alternative text in the search interface, where the alternative text is determined from the first response text set, and the first response text set includes at least two texts corresponding to the first input data.
Optionally, in some possible implementations, the graphical user interface may further include:
in response to an operation of the user clicking a target alternative text and taking it as the first response text, displaying the first response text in the search interface, where the target alternative text is any one of the at least one alternative text.
Optionally, in some possible implementations, the graphical user interface may further include:
in response to an operation of determining at least one associated keyword associated with the first keyword and/or the second keyword, displaying the at least one associated keyword in the search interface, where the first keyword is extracted from the first response text and the second keyword is extracted from the second response text.
Optionally, in some possible implementations, the graphical user interface may further include:
in response to an operation of the user clicking a target keyword and taking it as a third response text, displaying the third response text in the search interface, where the target keyword is any one of the at least one associated keyword;
and, in response to an operation of searching based on the target search result and the third response text, displaying the result in the search interface.
Optionally, in some possible implementations, the graphical user interface may further include:
if the first input data is AR data or image data, in response to an operation of reducing the size of the picture corresponding to the AR data or image data, displaying a thumbnail corresponding to the AR data or image data in the search interface.
A fourth aspect of the embodiments of this application provides a terminal, which may include:
a processor, a memory, and an input/output interface, the processor and the memory being connected to the input/output interface; the memory is configured to store program code, and the processor, when invoking the program code in the memory, performs the steps of the method provided by any implementation of the first or second aspect of this application.
A fifth aspect of this application provides a chip system. The chip system includes a processor configured to support a terminal in implementing the functions involved in the above aspects, for example, processing the data and/or information involved in the above method. In one possible design, the chip system further includes a memory for storing the program instructions and data necessary for the network device. The chip system may consist of chips, or may include chips and other discrete devices.
The processor mentioned anywhere above may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the data search method of the first aspect.
A sixth aspect of the embodiments of this application provides a storage medium. It should be noted that the technical solution in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and stores the computer software instructions used by the above device, including the program designed for the terminal to execute any optional implementation of the first aspect.
The storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
A seventh aspect of the embodiments of this application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method described in any optional implementation of the first aspect of this application.
In the method for data search provided by the present application, at least two input datas, including voice data can be inputted, And augmented reality (augmented reality, AR) data, image data or audio data, therefore, when input AR data, After image data or audio data, voice data can be further inputted, to be more particularly described to search target, is obtained More accurate search result.For example, for some inenarrable data, picture, video, music, scene etc. are obtained in identification After response text, voice description can be further increased, keeps search condition more specific, obtained search result is more accurate, improves The validity of data search is more met the desired search result of user, improves user experience.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of a terminal in this application;
Fig. 2 is a flow diagram of the method of data search in this application;
Fig. 3 is another flow diagram of the method of data search in this application;
Fig. 4A is a schematic diagram of interface display in the method of data search in this application;
Fig. 4B is another schematic diagram of interface display in the method of data search in this application;
Fig. 4C is another schematic diagram of interface display in the method of data search in this application;
Fig. 4D is another schematic diagram of interface display in the method of data search in this application;
Fig. 4E is another schematic diagram of interface display in the method of data search in this application;
Fig. 4F is another schematic diagram of interface display in the method of data search in this application;
Fig. 4G is another schematic diagram of interface display in the method of data search in this application;
Fig. 5A is another schematic diagram of interface display in the method of data search in this application;
Fig. 5B is another schematic diagram of interface display in the method of data search in this application;
Fig. 5C is another schematic diagram of interface display in the method of data search in this application;
Fig. 5D is another schematic diagram of interface display in the method of data search in this application;
Fig. 5E is another schematic diagram of interface display in the method of data search in this application;
Fig. 5F is another schematic diagram of interface display in the method of data search in this application;
Fig. 6A is another schematic diagram of interface display in the method of data search in this application;
Fig. 6B is another schematic diagram of interface display in the method of data search in this application;
Fig. 6C is another schematic diagram of interface display in the method of data search in this application;
Fig. 6D is another schematic diagram of interface display in the method of data search in this application;
Fig. 6E is another schematic diagram of interface display in the method of data search in this application;
Fig. 6F is another schematic diagram of interface display in the method of data search in this application;
Fig. 6G is another schematic diagram of interface display in the method of data search in this application;
Fig. 7A is another schematic diagram of interface display in the method of data search in this application;
Fig. 7B is another schematic diagram of interface display in the method of data search in this application;
Fig. 7C is another schematic diagram of interface display in the method of data search in this application;
Fig. 7D is another schematic diagram of interface display in the method of data search in this application;
Fig. 7E is another schematic diagram of interface display in the method of data search in this application;
Fig. 7F is another schematic diagram of interface display in the method of data search in this application;
Fig. 7G is another schematic diagram of interface display in the method of data search in this application;
Fig. 8A is another schematic diagram of interface display in the method of data search in this application;
Fig. 8B is another schematic diagram of interface display in the method of data search in this application;
Fig. 8C is another schematic diagram of interface display in the method of data search in this application;
Fig. 8D is another schematic diagram of interface display in the method of data search in this application;
Fig. 8E is another schematic diagram of interface display in the method of data search in this application;
Fig. 8F is another schematic diagram of interface display in the method of data search in this application;
Fig. 8G is another schematic diagram of interface display in the method of data search in this application;
Fig. 9A is another schematic diagram of interface display in the method of data search in this application;
Fig. 9B is another schematic diagram of interface display in the method of data search in this application;
Fig. 9C is another schematic diagram of interface display in the method of data search in this application;
Fig. 9D is another schematic diagram of interface display in the method of data search in this application;
Fig. 9E is another schematic diagram of interface display in the method of data search in this application;
Fig. 9F is another schematic diagram of interface display in the method of data search in this application;
Fig. 9G is another schematic diagram of interface display in the method of data search in this application;
Fig. 10A is another schematic diagram of interface display in the method of data search in this application;
Fig. 10B is another schematic diagram of interface display in the method of data search in this application;
Fig. 10C is another schematic diagram of interface display in the method of data search in this application;
Fig. 10D is another schematic diagram of interface display in the method of data search in this application;
Fig. 10E is another schematic diagram of interface display in the method of data search in this application;
Fig. 11 is another structural schematic diagram of a terminal in this application;
Fig. 12 is another structural schematic diagram of a terminal in this application.
Specific embodiment
This application provides a data search method, a terminal, a user graphical display interface, and a storage medium, for searching input data of multiple data types to obtain more accurate search results that are closer to the user's search intention, thereby improving the user experience.
Firstly, the method for data search provided by the present application can be applied to various terminals, for example, mobile phone, tablet computer, Laptop, television set, intelligent wearable device and other electronic equipments with display screen etc..Certainly, in following implementation In example, the concrete form of the terminal is not intended to be limited in any.Wherein, the system that terminal can carry may include Or other operating systems etc., the embodiment of the present application do not make this any Limitation.
Illustratively, to carryFor the terminal 100 of operating system, as shown in Figure 1, terminal 100 is from logic On can be divided into hardware layer 21, operating system 161 and application layer 31.Hardware layer 21 includes application processor 101, microcontroller The hardware resources such as device unit 103, modem 107, Wi-Fi module 111, sensor 114, locating module 150.Application layer 31 Including one or more application program, such as application program 163, application program 163 can be social category application, e-commerce Using any type of application programs such as, browsers.Operating system 161 is as in the software between hardware layer 21 and application layer 31 Between part, be the computer program for managing and controlling hardware and software resource.
In one embodiment, the operating system 161 includes a kernel 23, a hardware abstraction layer (HAL) 25, libraries and runtime 27, and a framework 29. The kernel 23 provides underlying system components and services, such as power management, memory management, thread management, and hardware drivers; the hardware drivers include a Wi-Fi driver, sensor drivers, a locating module driver, and the like. The hardware abstraction layer 25 encapsulates the kernel drivers, provides interfaces to the framework 29, and shields the implementation details of the lower layers. The hardware abstraction layer 25 runs in user space, while the kernel drivers run in kernel space.
The libraries and runtime 27, also referred to as the runtime library, provide the library files and execution environment required by executable programs at runtime. The libraries and runtime 27 include the Android Runtime (ART) 271, libraries 273, and the like. The ART 271 is a virtual machine or virtual machine instance that can convert the bytecode of an application program into machine code. The libraries 273 are program libraries that provide support for executable programs at runtime, including a browser engine (such as WebKit), a script execution engine (such as a JavaScript engine), a graphics processing engine, and so on.
The framework 29 provides common components and basic services, such as window management and location management, for the application programs in the application layer 31. The framework 29 may include a telephony manager 291, a resource manager 293, a location manager 295, and the like.
The functions of the components of the operating system 161 described above can be realized by the application processor 101 executing programs stored in the memory 105.
Those skilled in the art will understand that the terminal 100 may include fewer or more components than those shown in Fig. 1; the terminal shown in Fig. 1 only includes the components most relevant to the implementations disclosed in the embodiments of this application.
A terminal usually supports the installation of a variety of application programs (applications, APPs), such as a word-processing application, a telephone application, an email application, an instant messaging application, a photo management application, a web browsing application, a digital music player application, and/or a video player application.
In the data search method provided by this application, the user can input the content to be retrieved through a mixed input mode, so as to obtain more accurate search results. Specifically, the flow of the data search method provided by this application, shown in Fig. 2, may include:
201. Display a search interface.
First, before a data search is performed, the search interface of the terminal can be entered. Specifically, the desktop of the terminal may include a search icon through which the search interface can be entered, or the search interface can be entered through a gesture operation.
The search interface may include a first icon and a speech recognition icon. The first icon may correspond to an AR recognition program, an image recognition program, or an audio recognition program, which receives first input data.
202. Obtain the operation of the user clicking the first icon, and call a first recognition program to receive first input data.
After the operation of the user clicking the first icon is detected, the first recognition program is called to receive the first input data. The first input data can be AR data, image data, audio data, or the like, and the first recognition program can be the corresponding AR recognition program, image recognition program, or audio recognition program. The first recognition program can identify the first input data, including identifying the content, title, or location information of the first input data.
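The selection of a recognition program by data type can be sketched as a simple dispatch table. The recognizer functions below are illustrative stand-ins, not the patent's actual programs:

```python
from typing import Callable, Dict

def recognize_ar(data: bytes) -> str:
    # Stand-in for the AR recognition program (scene text, geography, location).
    return "text recognized from AR scan"

def recognize_image(data: bytes) -> str:
    # Stand-in for the image recognition program (image text, content, titles).
    return "text recognized from image"

def recognize_audio(data: bytes) -> str:
    # Stand-in for the audio recognition program (lyrics, author, music title).
    return "text recognized from audio"

RECOGNIZERS: Dict[str, Callable[[bytes], str]] = {
    "ar": recognize_ar,
    "image": recognize_image,
    "audio": recognize_audio,
}

def first_response_text(data_type: str, data: bytes) -> str:
    """Call the recognition program matching the first input data's type
    (steps 202-203) and return the first response text."""
    if data_type not in RECOGNIZERS:
        raise ValueError(f"unsupported input type: {data_type}")
    return RECOGNIZERS[data_type](data)
```

Other types mentioned later in the text, such as video or text data, would be added as further entries in the same table.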
203. Display, in the search interface, a first response text obtained after processing by the first recognition program.
The first input data is identified by the first recognition program to obtain the response text corresponding to the first input data.
When the first input data is AR data, the first recognition program is an AR recognition program; the AR recognition program is called to identify the currently scanned scene, which may include text information, geographic information, location information, and so on. When the first input data is image data, the first recognition program is an image recognition program; the image recognition program is called to identify the first input data, which may include information such as text in the image, image content, and titles. When the first input data is audio data, the first recognition program is an audio recognition program; the audio recognition program is called to identify the first input data, which may include information such as text, the music's author, and the music's title in the audio data. After the first input data is identified by the first recognition program, the first response text is obtained. The first response text, i.e., the text corresponding to the first input data, can be used by the terminal to perform a search.
The first response text can be understood as the text information corresponding to the first input data; by searching the first response text, the terminal can obtain data related to the first input data.
204. Obtain the operation of the user clicking the speech recognition icon in the search interface, and call a speech recognition program to receive voice input data.
When the operation of the user clicking the speech recognition icon in the search interface is detected, the speech recognition program corresponding to the speech recognition icon is called to receive the voice input data, and the voice input data is identified.
The voice input data can be recorded after the user clicks the speech recognition icon and the speech recognition program starts. The voice input data may include speech corresponding to text, which can be identified by the speech recognition program to obtain the speech text corresponding to the voice input data.
It should be noted that the voice input data and the audio data in this application are not identical. The voice input data can be obtained by the terminal recording the voice the user is currently speaking; for example, the user can click the voice input icon on the terminal's display screen, and the voice the user then utters is recorded as the voice input data. The voice input data may include a description of the search object, or information related to the search object. The audio data, by contrast, can be audio saved in the terminal, or data the terminal records from its environment; it can be a piece of a music file, recorded music data, and so on. For example, if a certain song is currently playing, the user can click the terminal's recording icon, and the currently playing song is recorded as audio data; the terminal can recognize the recorded data as audio data and identify the song to which the audio data corresponds.
205. Display, in the search interface, a second response text obtained after processing by the speech recognition program.
The voice input data is identified by the speech recognition program, and the second response text corresponding to the voice input data is obtained. Specifically, the voice input data can be directly converted into the text corresponding to the speech, or semantic analysis can be performed on the speech text corresponding to the voice input data to obtain a text that can be searched.
The second response text is related in content to the first response text. The second response text may include part or all of the first response text, or may be a further description of the search object corresponding to the first response text. For example, if the first response text is "XX restaurant", the voice input can be "navigate to XX restaurant"; that is, the voice input data can include part or all of the content of the first response text. As another example, if the first response text is "XX mouse", the voice input may be "white", and the second response text is identified as "white"; that is, the second response text can be a further description of the search object corresponding to the first response text.
As a concrete example, if the first input data is a picture, the second input data can be voice or text, describing in more detail the search object indicated by the first input data and further embodying the search intention. In this application, the search data can be described in more detail through mixed input, so that the terminal identifies the search condition more accurately and obtains more accurate search results. Moreover, by inputting through voice and identifying the voice input data, the user's workload can be reduced relative to text input: the search condition can be obtained more conveniently by voice, and in scenes where text input is inconvenient, the user's search intention can be obtained more accurately, more accurate search results are obtained, and the user experience is improved.
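The two cases above (the voice input containing the first response text, versus refining it) can be sketched as one merging rule. The rule below is an illustrative assumption about how the two texts might be combined, not a procedure mandated by the text:

```python
def combine_response_texts(first: str, second: str) -> str:
    """Form the final search condition from the two response texts.

    If the second response text already contains the first (e.g. first =
    "XX restaurant", second = "navigate to XX restaurant"), it can be used
    directly; otherwise the second text further describes the first (e.g.
    first = "XX mouse", second = "white") and the two are joined.
    """
    if first in second:
        return second
    return f"{second} {first}"
```

For example, `combine_response_texts("XX mouse", "white")` yields the search condition `"white XX mouse"`.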
206. Search based on the second response text to obtain a target search result.
After the second response text corresponding to the voice input data is obtained, the terminal can search based on the second response text to obtain a target search result, and display the target search result on the display interface of the terminal.
Alternatively, the search can be performed based on both the first response text and the second response text to obtain the target search result.
When the terminal performs a data search, the search can be performed in the terminal's local database, or in some internet databases via the internet. The search object can be building information, location information, commodity information, and so on, and the method can be applied to many scenes.
Illustratively, if the first input data is a picture of a building, picture recognition can be started to identify the building's name in the picture, i.e., the first response text. If the content to be searched is the location information of the building indicated in the picture, the second input data can be text or voice, for example "where is this place". The second input data is then identified to obtain "where is this place", which can be analyzed to yield the second response text "location". A search is then performed according to the first response text and the second response text, i.e., the building name and "location", to obtain the location of the building in the picture, i.e., the first search result.
Therefore, in the embodiments of this application, input can be made through multiple input modes: voice input data together with AR data, image data, or audio data can be input. On the basis of the first response text obtained by identifying the AR data, image data, or audio data, voice input data is further input, making the search condition more accurate; the search result obtained is more accurate and closer to the user's search intention, improving the user experience.
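The overall flow of steps 201-206 can be sketched as one function. The three callables are injected stand-ins (any concrete recognizer, speech identifier, or search backend is an assumption here), so the sketch shows only the sequencing of the two-stage mixed input:

```python
def mixed_input_search(first_type, first_data, voice_data,
                       recognize, recognize_speech, search):
    """Sketch of steps 201-206: identify the first input data, refine the
    result with voice input, then search on the refined condition."""
    first_text = recognize(first_type, first_data)          # steps 202-203
    second_text = recognize_speech(voice_data, first_text)  # steps 204-205
    return search(second_text)                              # step 206

# Minimal stand-in callables for illustration only.
result = mixed_input_search(
    "image", b"<photo of a mouse>", b"<voice: white>",
    recognize=lambda t, d: "XX mouse",
    recognize_speech=lambda v, first: f"white {first}",
    search=lambda q: [f"results for: {q}"],
)
```

The dependency injection keeps the flow independent of whether the search runs against the local database or the internet, matching the alternatives described above.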
Further, the method for data search provided by the present application is described more specifically below.
Referring to Fig. 3, another embodiment schematic diagram of the application data search method, may include:
301, search interface is shown.
Firstly, terminal shows search interface before carrying out data search.May include in search interface the first icon and Voice inputs icon.First icon is corresponding with the first input data, and the first input data can be AR data, image data or sound Frequency according to etc..
It specifically, may include search icon on terminal desktop, after detecting that user clicks search icon, terminal is shown Search interface.Or user shows search interface by preset gesture trigger terminal by gesture operation.
302. Call the corresponding application program to obtain the first input data.
When the terminal detects that the user clicks the first icon, the first recognition program is called to receive the first input data. When the first input data is AR data, the AR recognition program is called to obtain and identify the AR data. When the first input data is image data, the image recognition program is called to receive and identify the image data. When the first input data is audio data, the audio recognition program is called to receive and identify the audio data, and the display switches to the interface of the first recognition program, so that the first input data is received through local selection, scanning, or the like.
It should be noted that besides AR data, image data, or audio data, the first input data can also be other input data, for example video data, text data, and so on; the specifics can be adjusted according to the practical application scene, and no limitation is made here.
In the search interface, the first icon may comprise input icons for data such as AR data, image data, and audio data, or the input icons for AR data, image data, audio data, and the like may be further expanded after the first icon is clicked. When the terminal detects that the user clicks any one of these icons, the corresponding recognition program is called and switched to, so as to receive the first input data.
The specific input mode may include various ways. Besides clicking the first icon to input, content can also be dragged directly into the input box to obtain the first input data; the terminal recognizes the data type of the first input data and starts the corresponding recognition program to obtain the first response text. In the search interface, after the first response text is obtained, the first response text can be displayed in the input box.
In addition, if the first input data is AR data or image data, the size of the picture included in the AR data or image data can be reduced to generate a thumbnail, and the thumbnail is displayed in the search interface.
303. Call the corresponding application program to identify the first response text corresponding to the first input data.
When the first input data is AR data, the AR recognition program is called to obtain and identify the AR data. When the first input data is image data, the image recognition program is called to receive and identify the image data. When the first input data is audio data, the audio recognition program is called to receive and identify the audio data. At this time the display can be switched from the search interface to the interface of the corresponding recognition program, and switched back to the search interface after the first response text is obtained.
The results of calling the corresponding application program to identify the first input data may be multiple, i.e., a first response text set. The text meeting a preset condition can be selected from the set as the first response text. The preset condition can be the text with the highest search frequency in the terminal, the text whose search time is closest to the present, the text with the highest search frequency on the internet, or the like.
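Selecting from the first response text set can be sketched as a maximization over the preset condition. Here the condition is assumed to be "highest search frequency in the terminal" (one of the alternatives named above); the candidate texts echo the Fig. 4C example and carry invented frequencies for illustration:

```python
def pick_first_response_text(candidates):
    """From the first response text set, choose the text meeting the
    preset condition - assumed here to be the highest search frequency.
    Each candidate is a (text, search_frequency) pair."""
    text, _ = max(candidates, key=lambda c: c[1])
    return text

# Hypothetical recognition results with made-up terminal search frequencies.
candidates = [("Hai Shawan Seafood Hall", 120), ("salmon", 45)]
```

Swapping the key function (e.g. to most-recent search time) covers the other preset conditions without changing the selection logic.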
In one embodiment, when the first input data is AR data or image data, the texts recognized from the picture in the AR data or image data can be multiple. When the corresponding recognition program is called, the recognized texts can be displayed at the corresponding positions in the interface displaying the picture; the user can then select the text to search as the first response text, after which the display switches to the search interface.
In addition, when the terminal calls the first recognition program to receive the first input data, the display can be switched from the search interface to the interface of the first recognition program for AR scanning, image selection, audio selection, or the like; after the first response text is obtained by identification, the display can be switched back to the search interface.
In one embodiment, when the first response text set includes at least two texts, after the first response text is determined from the set, alternative texts can also be determined from the first response text set. Specifically, texts whose search frequency is higher than a threshold can be determined from the first response text set as alternative texts. There can be one or more alternative texts. After the terminal detects that the user clicks a target alternative text, the target alternative text is taken as the first response text and displayed in the search interface. The target alternative text is any one of the alternative texts clicked by the user.
For example: the user inputs "I am hungry", and speech analysis determines that the things the user may want to do include "find a restaurant", "find a supermarket", and "order takeout". According to the result of big data analysis, the search corresponding to the text "find a restaurant" has the highest click probability. Therefore "I am hungry" is translated into the response text "find a restaurant", and the other optional search conditions are displayed as alternative texts; the user can choose an alternative text to replace "find a restaurant".
304. Display association tags in the search interface.
After the first response text is obtained, the terminal can extract the keyword in the first response text. For example, if the first response text is "find XX restaurant", the keyword of the first response text can be "XX restaurant". After obtaining the keyword in the first response text, the terminal can look up one or more associated words related to the keyword in the terminal's database or on the internet, generate association tags, and display them in the search interface; each association tag corresponds to one associated word.
In practical applications, the user can choose an association tag to carry out the next step of the search, which can add search conditions, narrow the search range, and obtain more accurate search results.
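Step 304 can be sketched as keyword extraction followed by an association lookup. Both the crude "find "-stripping extraction and the lookup table below are illustrative assumptions standing in for the terminal's database or an internet query:

```python
# Hypothetical association lookup standing in for the terminal's database
# or an internet query for words related to the keyword.
ASSOCIATIONS = {
    "XX restaurant": ["XX restaurant reviews", "XX restaurant menu",
                      "XX restaurant directions"],
}

def association_tags(first_response_text: str) -> list:
    """Extract the keyword from the first response text and map it to
    association tags, one tag per associated word."""
    keyword = first_response_text.replace("find ", "", 1)  # crude extraction
    return ASSOCIATIONS.get(keyword, [])
```

With the example above, `association_tags("find XX restaurant")` yields three tags the user can click to narrow the search.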
305. Display a first search result.
After the first response text corresponding to the first input data is obtained, the first response text can be searched to obtain a first search result, so that the search result of the input data can be displayed in real time in the search interface.
In this embodiment, the first response text can be searched to obtain the first search result, or no active search may be performed on the first response text; step 305 in this embodiment is an optional step.
In one embodiment, if the search interface includes association tags, after the terminal detects that the user clicks any association tag, the associated word corresponding to the clicked tag is determined; the first response text can be replaced with the associated word, or the associated word can be displayed in the search interface as a further response text. Afterwards, a search can be performed based on the associated word corresponding to the clicked tag, and a second search result is obtained and displayed in the search interface. Alternatively, based on the associated word corresponding to the clicked tag, the search program corresponding to the associated word, i.e., a second search program, can be called; the display switches to the interface of the second search program, the search is performed there, and a second search result is obtained and displayed in the interface of the second search program.
306. Call the speech recognition program to receive voice input data.
After the first response text is obtained, the user can continue by clicking the voice input icon. When the terminal detects that the user clicks the voice input icon, the speech recognition program is called to receive the voice input data and identify it.
307. Identify the second response text corresponding to the voice input data.
After the terminal detects that the user clicks the voice input icon, the speech recognition program is called to receive the voice input, and the voice input data is identified by the speech recognition program. The voice input data can be the voice data recorded after the user clicks the voice input icon. The voice input data includes speech information corresponding to text. The speech recognition program can identify the text corresponding to the speech in the voice input data, obtain that text, and process it to obtain the second response text.
The second response text and the first response text are associated in content. For example, if the first response text is "XX mouse" and the second response text is "yellow", the second response text is a further description of the search object corresponding to the first response text; that is, the second response text is associated with the first response text. Therefore, when the terminal processes the text corresponding to the speech, the content of the first response text can be combined to generate a second response text associated with the first response text.
In one embodiment, the terminal may include a search language model, which is obtained by training on the history of voice data input by the user; the instruction, i.e., the second response text, can be identified through the search language model. For example, if the user inputs the voice "I am hungry", after the text "I am hungry" is recognized, the recognized text is input into the language model, which outputs "find a restaurant"; the output "find a restaurant" serves as the second response text.
Specifically, the historical voice data can be the data searched within the terminal during a period, for example the voice input data entered in the terminal's browser, in APPs, or in the global search box. The search language model is mainly used to record common ways of describing searches, including the matching relationship between common input language and the response text after machine translation. For example, common questions when searching for food include "is the XX dining room tasty", "is the XX dining room good", or "how is the XX dining room"; the terminal can preset the translated response text as "reviews of the XX dining room". The terminal thus records the matching relationship between the user's input data and the response text used, and in the next similar scene it can preferentially match the response text to the input data.
In one embodiment, the second response text can also include part or all of the content of the first response text. For example, if the first response text is "XX restaurant" and the text corresponding to the voice input data is "navigate to there", then by processing the text corresponding to the voice input data, the second response text "navigate to XX restaurant" is obtained.
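The recorded matchings between common phrasings and search-ready response texts can be sketched as a lookup; a plain dictionary stands in here for the trained search language model, and the template syntax for referring back to the first response text is an invented illustration:

```python
# Toy stand-in for the search language model: recorded matchings from
# common spoken phrasings to response texts. "{first}" marks a phrasing
# that refers back to the first response text.
PHRASE_TO_RESPONSE = {
    "I am hungry": "find a restaurant",
    "is the XX dining room tasty": "reviews of the XX dining room",
    "navigate to there": "navigate to {first}",
}

def second_response_text(speech_text: str, first_response: str) -> str:
    """Map the recognized speech text to a second response text, filling
    in the first response text when the phrasing refers back to it."""
    template = PHRASE_TO_RESPONSE.get(speech_text, speech_text)
    if "{first}" in template:
        return template.format(first=first_response)
    return template
```

A real implementation would rank candidate response texts by the recorded matching frequency rather than use a fixed table; the lookup only shows the matching relationship the text describes.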
308. Update the association tags.
After the second response text is obtained, the keywords of the first response text and the second response text can be combined to further update the association tags, so that the association tags are associated with the search object to be searched.
In this embodiment, one or more associated words can be determined according to the keyword of the first response text and/or the keyword of the second response text, and one or more association tags are generated, each tag corresponding to one associated word. After the terminal detects the operation of the user clicking a target association tag, the associated word corresponding to the target tag can be added to the response text as an added search condition for further searching.
It should be noted that in the embodiments of this application, the association tags may or may not be displayed; therefore, steps 304 and 308 are optional implementation steps.
309. Display the target search result.
After the voice input data is identified to obtain the second response text, a search can be performed based on the second response text to obtain the target search result. Alternatively, the first response text and the second response text can both be searched to obtain the target search result, which is displayed in the interface of the terminal.
The terminal can search in a local database or via the internet. The search object can be information such as buildings, commodities, music, or navigation, and the method can be applied to many scenes.
In one embodiment, according to the second response text, a first search program corresponding to the second response text can be called; the display switches to the interface of the first search program, the content included in the first response text and/or the second response text is searched in the first search program, the target search result is obtained, and it is displayed in the interface of the first search program. For example, if the first response text is "XX restaurant" and the second response text is "XX map", the "XX map" application program can be opened, the display switches to the interface of the "XX map" application program, and "XX restaurant" is searched in "XX map" to obtain "the position of XX restaurant in XX map", i.e., the target search result, which is displayed in "XX map".
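The choice of search program based on the second response text can be sketched as name matching against a registry of installed programs. Both the registry and the matching rule below are illustrative assumptions:

```python
# Hypothetical registry of installed search programs.
SEARCH_PROGRAMS = {
    "XX map": lambda query: f"position of {query} in XX map",
    "default": lambda query: f"web results for {query}",
}

def dispatch_search(first_text: str, second_text: str) -> str:
    """If the second response text names a search program (e.g. "XX map"),
    switch to it and search the first response text there; otherwise fall
    back to a default search."""
    for name, program in SEARCH_PROGRAMS.items():
        if name != "default" and name in second_text:
            return program(first_text)
    return SEARCH_PROGRAMS["default"](first_text)
```

With the example above, `dispatch_search("XX restaurant", "XX map")` yields the map-program result rather than a generic web search.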
In one embodiment, the terminal can directly search based on the first response text and/or the second response text, obtain the search result, and display it in the search interface.
In one embodiment, if the first response text has been searched to obtain the first search result before the voice input data is received, then after the voice input data is received and the second response text is obtained, the second response text can be searched within the range of the first search result to obtain the target search result. This avoids searching the first response text and the second response text simultaneously: the search result of the input data is obtained in real time, the search result of the previous input is further screened, a more accurate result is obtained, and the efficiency of the data search can be improved.
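The refinement described in this embodiment amounts to filtering the first results by the second response text instead of re-running a combined search. The substring match and sample results below are illustrative assumptions:

```python
def refine_results(first_results, second_text):
    """Search the second response text only within the range of the first
    search results (a simple containment filter for illustration)."""
    return [r for r in first_results if second_text in r]

# Hypothetical first search results for the first response text "XX mouse".
first_results = ["white XX mouse", "black XX mouse", "grey XX mouse"]
```

For example, after the voice input "white", `refine_results(first_results, "white")` keeps only the matching entry, narrowing the earlier results rather than searching from scratch.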
Therefore, in the embodiments of this application, the terminal can identify the data type of the input data and analyze it accordingly. For example: if the input data type is text, semantic recognition is started to obtain the semantics of the text. If the input data type is a picture, image recognition is enabled and the picture content is analyzed, including the elements and titles contained in the image, the tone and composition of the image, keywords on the image, similar images, big data tags related to the image, and so on. If the input content type is audio, audio analysis is enabled: the audio is first converted into text, and semantic analysis is then performed. If the input data type is video, video analysis is enabled: video key frames are intercepted and the data in the key frames is extracted, including the image features, audio, and subtitles of the key frames; the picture content of the key frames is further identified using image recognition technology, and the semantic content is identified using audio analysis technology to determine the semantics of the input data. The response text corresponding to the input data is thus obtained, and a search is performed according to the response text to obtain an accurate search result.
In the embodiments of this application, the first input data can be input first; the first input data includes AR data, image data, audio data, or the like, and different recognition programs are called according to the different data types to obtain the first response text corresponding to the first input data. The first response text can be searched to obtain the first search result, so the input data can be searched in real time. After the first response text is obtained, voice input data can continue to be input, and the speech recognition program is called to identify the voice input data and obtain the second response text. The second response text is related in content to the first response text; a search based on the second response text yields the target search result. Therefore, in the embodiments of this application, after the first input data is input, voice input data can continue to be input; the search object can be further described, a more accurate search condition is obtained, and the search result is more accurate. Moreover, the corresponding search program can be called according to the voice input data, and the search result is displayed in the corresponding search program, so that the obtained search result is more accurate and closer to the user's search intention.
The foregoing has described the detailed flow of the data search method provided by this application; a more vivid explanation follows, using the graphical user interface (GUI) of the terminal as an example. The terminal may include a processor, a display screen, and the like; the display screen is used to display the terminal's display interface, and the processor can be used to process the data in the terminal, for example picture recognition and data search. Specifically, the terminal includes a variety of recognition programs; when processing data, different recognition programs can be called to perform identification processing on the data. The explanation continues with two kinds of mixed input as examples. The input modes may include multiple combinations, for example picture with text/voice, real-scene scanning with text/voice, audio with voice/text, or mixed voice and text. The specific program-calling flows can be combined with the flows described in Fig. 2 and Fig. 3 above; it can be understood that in the display interfaces of Fig. 4A to Fig. 10E below, the steps executed by the terminal can be based on the steps described in Fig. 2 and Fig. 3 and the corresponding embodiments.
Some combinations are illustrated individually below.
One: real-scene scanning with text/voice
First, the initial search interface is shown in Fig. 4A. The search interface may include trending-search recommendations and their search results. A voice-input icon may be shown at the bottom of the initial search interface, and voice input can be started by tapping it; the search interface may also include a more-input-modes icon, i.e. the icon in the upper-right corner of Fig. 4A.
When the more-input-modes icon in the initial search interface is selected, for example by tapping the icon in the upper-right corner of Fig. 4A, it expands as shown in Fig. 4B, displaying camera, video, picture and audio options. The user may select the camera, which can be used for shooting or scanning. While the terminal displays the camera, video, picture and audio icons, the voice-input icon may be hidden.
It should be understood that the initial search interface may also display the voice-input icon and the camera, video, picture and audio icons at the same time.
After the terminal detects that the user taps the camera icon, it switches to the camera interface and scans or shoots the current scene, obtaining a current scene picture, i.e. AR data, as shown in Fig. 4C. The AR recognition program is then called to receive and recognize the scanned scene picture. The recognized content may include "Haishawan Seafood Hall" and "salmon", displayed at the corresponding positions in the picture interface for the user to select. Illustratively, the user may select "Haishawan Seafood Hall".
As shown in Fig. 4D, after "Haishawan Seafood Hall" is selected, the terminal switches to the search interface, which includes the voice-input icon and the expand icon, or the voice-input icon together with the camera, video, picture and audio icons. The scanned scene picture can be reduced in size to generate a thumbnail, which is displayed on the search interface, for example in the upper-left corner, to indicate the input content.
As shown in Fig. 4E, after the user has selected the search text, the selected text is processed to obtain a response text that can be searched. For example, after "Haishawan Seafood Hall" is selected, the response text "find Haishawan Seafood Hall" can be obtained from the user's input history or a preset correspondence, and a search is performed; the search result is then obtained and displayed. In addition, alternative response texts are provided, for example "navigate to Haishawan Seafood Hall", "book a table at Haishawan Seafood Hall" and "search Haishawan Seafood Hall": the response text "find Haishawan Seafood Hall" can be replaced, and the user may select an alternative response text in its place. Meanwhile, the terminal obtains data associated with the first response text "Haishawan Seafood Hall", such as its "Aoti branch", "Xinjiekou branch" and "Baijiahu branch", generates related tags from them, and displays the tags. The user can select a related tag directly to search and obtain a more specific result.
Meanwhile display interface shows voice input icon and Text Entry.Voice input icon can continue searching The lower section at rope interface shows that input frame can be shown in the top edge of search interface.And the search interface upper right corner can also continue to Including more multimode input icon, selection can be continued, more input modes, interface as shown in Figure 4 B is unfolded.
After the search result for obtaining " Hai Shawan seafood guild hall ", it can continue to input.For example, it is " good to input text Eat ", as illustrated in figure 4f.Specifically, the mode for inputting text can be keyboard input, duplication is pasted etc..
Then, according to a preset language model or the user's input history, "nice" can be mapped to "evaluation"; keywords are extracted from "find Haishawan Seafood Hall" and "evaluation" and searched to obtain the search result. At the same time, information associated with "find Haishawan Seafood Hall" and "evaluation" is obtained, also from the input history or the corresponding mapping, and related tags are generated. The related tags may include "environment score", "taste score", "service score", "recommended dishes", "per-capita spending" or "user reviews".
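The mapping just described, from a colloquial fragment such as "nice" to a searchable response text such as "evaluation", can be sketched with a lookup table standing in for the preset language model and input history. The table contents and function names are invented for illustration.

```python
# Stand-in for the preset language model's phrase-to-response-text mapping.
PRESET_MAP = {
    "nice": "evaluation",
    "types of songs": "genre",
}

def to_response_text(user_text, history=None):
    """Map a raw input to a response text; history takes precedence over presets."""
    history = history or {}
    return history.get(user_text) or PRESET_MAP.get(user_text, user_text)

def combined_keywords(first_response, second_response):
    """Naive keyword extraction: distinct words of both response texts, in order."""
    seen, words = set(), []
    for w in (first_response + " " + second_response).split():
        if w not in seen:
            seen.add(w)
            words.append(w)
    return words
```

A search would then be issued over `combined_keywords("find Haishawan Seafood Hall", to_response_text("nice"))`.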
It should be understood that, besides scanning the live scene first and entering text afterwards, the text may also be entered first and the live scene scanned afterwards; this can be adjusted according to the actual application scenario and is not limited here.
In addition to text input, voice input can also be used.
The specific live-scene scanning input and search flow is similar to that of Fig. 4A to Fig. 4E above.
First, as shown in Fig. 5A, the search interface may include a camera icon and a voice-input icon. The camera icon in the upper-right corner of the search interface is selected to enter the shooting interface, where the current scene is scanned or shot to obtain the current scene picture.
As shown in Fig. 5B, the AR recognition program is called to receive and recognize the current scene picture and obtain content related to the picture, such as "Haishawan Seafood Hall" and "salmon". Illustratively, "Haishawan Seafood Hall" may be selected.
Then "Haishawan Seafood Hall", as selected by the user, is displayed in the input box at the top of the terminal's search interface, together with a thumbnail of the current scene picture, as shown in Fig. 5C.
Based on the user's selection "Haishawan Seafood Hall", and on the input history, a preset correspondence, or search frequency determined from big data, the corresponding response text "find Haishawan Seafood Hall" is determined and displayed, and alternative response texts are shown at the same time, for example "navigate to Haishawan Seafood Hall", "book Haishawan Seafood Hall" or "search Haishawan Seafood Hall". The user may select an alternative response text to replace the response text generated automatically by the terminal, whereupon a new search is performed on the selected alternative. Meanwhile, keywords in "find Haishawan Seafood Hall", such as "Haishawan" and "seafood hall", are extracted; words associated with "Haishawan Seafood Hall", such as "Aoti branch", "Xinjiekou branch" and "Baijiahu branch", are determined; and related tags are generated and displayed. The user can select a related tag directly to search and obtain a more specific result.
Afterwards, voice input can be chosen. The voice-input icon is selected, voice input is performed, and the user's voice is recorded, as shown in Fig. 5D. Speech recognition is then performed, obtaining the text of the voice, "nice", which is displayed in the input box at the top of the search interface, as shown in Fig. 5E.
The terminal can record historical search data, including recently searched keywords and data such as a search language model. The recently searched keywords are the data searched within a cycle inside the terminal. The search language model mainly records common ways of phrasing searches, including the matching relationship between common input language and the machine-translated response text. From these, the corresponding response text and alternative response texts can be determined. Here the response text "evaluation" is determined first and displayed in the input box at the top of the search interface, together with the search result for the response texts "find Haishawan Seafood Hall" and "evaluation", as shown in Fig. 5F. Alternative texts such as "recommended dishes", "per-capita spending" or "user reviews" are displayed below the input box. The user may select one or more alternative texts to replace the response text "evaluation", search again, and display the new search result.
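Choosing which candidate response text to display, with recent-search frequency standing in for the "preset condition" mentioned above, might look like the following sketch; the function name and the frequency data are hypothetical.

```python
def pick_response_text(candidates, history_counter):
    """Rank candidate response texts by recorded search frequency.

    Returns (best, alternatives): `best` is shown in the input box,
    `alternatives` are listed below it for the user to swap in.
    """
    ranked = sorted(candidates,
                    key=lambda c: history_counter.get(c, 0),
                    reverse=True)
    return ranked[0], ranked[1:]
```

For example, if "evaluation" has been searched more often than "recommended dishes", it is displayed as the response text and the rest become alternatives.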
After the search result is obtained, the expand icon and the voice-input icon can continue to be displayed in the search interface, so that input and search can continue.
Therefore, by mixing live-scene scanning with voice or text, the application can describe the scene to be searched more precisely, obtain a more accurate search result, and display the result in real time, improving search efficiency.
2. Picture with text/voice
The initial search interface is shown in Fig. 6A and may include an expand icon and a voice icon. The expand icon is selected first; as shown in Fig. 6B, icons such as camera, video, picture and audio are displayed in the search interface, and picture is selected.
As shown in Fig. 6C, the picture may be a picture file stored in the terminal; its format may include JPEG, portable network graphics (PNG), BMP (bitmap), and so on. After a picture is selected and confirmed, picture recognition can begin.
The selected picture is recognized and the terminal switches to the search interface. The terminal first determines that the selected file is a picture, then starts picture recognition and displays a thumbnail of the selected picture in the upper-left corner, as in Fig. 6D.
The content of the picture is recognized; specifically, text, building names, item names, people and other content in the picture can be recognized. Then, based on the search history or big-data analysis, the response text related to the picture, "search picture - Mona Lisa", is determined and displayed in the input box at the top of the search interface. Keywords of the response text "search picture - Mona Lisa", such as "Mona Lisa" and "picture", are extracted and searched, and the result is obtained and displayed in the search-result box. At the same time, alternative texts such as "open the 'Mona Lisa' video", "search the song 'Tears of Mona Lisa'" or "download the e-book 'Mona Lisa Smile'" are displayed below the input box; the user can select any alternative text as the response text and search, as shown in Fig. 6E. In addition, the extracted keywords of "search picture - Mona Lisa" may include "search", "picture" and "Mona Lisa". Associated words are then looked up by keyword; for example, words related to the picture "Mona Lisa" such as "oil painting", "Leonardo da Vinci" or "Italy" may be found, and related tags are generated and displayed below the input box, above the alternative texts. The user can select any related tag to search further.
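The keyword-to-tag association just described, with a static table standing in for the database or big-data analysis, could be sketched as below; the table contents are illustrative, not the patent's data.

```python
# Hypothetical association table: keyword -> related tags.
ASSOCIATIONS = {
    "Mona Lisa": ["oil painting", "Leonardo da Vinci", "Italy"],
    "Haishawan": ["Aoti branch", "Xinjiekou branch"],
}

def related_tags(response_text):
    """Collect related tags for every known keyword appearing in the response text."""
    tags = []
    for keyword, related in ASSOCIATIONS.items():
        if keyword in response_text:
            tags.extend(related)
    return tags
```

The returned tags would then be rendered below the input box for one-tap refinement.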
The user can continue by tapping the input box and entering text, for example "who is in the picture", as shown in Fig. 6F.
After the second input is determined to be "who is in the picture", semantic analysis is performed on the text in the input box; by looking up the historical search data, a preset language model, or search frequency obtained through big-data analysis, "who is in the picture" can be translated into the corresponding response text "model of the figure". A search is performed on the response texts "search picture - Mona Lisa" and "model of the figure", and the result is obtained and displayed in the search-result box, as shown in Fig. 6G. In addition, when the response text is determined, alternative texts such as "the figure in Mona Lisa Smile" or "the male Mona Lisa" are displayed at the same time; the user can select any of them to replace the content of the input box. Keywords are also extracted from "search picture - Mona Lisa" and "model of the figure", for example "Mona Lisa" and "model of the figure"; associated words, such as "Leonardo da Vinci" and "Lisa Gherardini", are searched in the database or on the Internet, related tags are generated, and the tags are displayed below the input box on the search interface. The user can select any related tag to replace "model of the figure" in the input box, or to add the tag's text to it.
Besides picture and text, picture and voice can also be input. The detailed process is as follows:
The initial search interface is shown in Fig. 7A and is similar to Fig. 5A above. Shooting is selected first to enter the shooting interface.
As shown in Fig. 7B, after the camera icon is opened, the album icon is selected to enter the album interface, which displays the pictures saved on the terminal; a picture is selected from the album.
As shown in Fig. 7C, the photo to be searched is selected from the photos and confirmed.
The selected picture is recognized: it is first determined that the selected file is a picture, then picture recognition is started and a thumbnail of the selected picture is displayed in the upper-left corner, as in Fig. 7D.
The recognized picture content may be "search picture - Mona Lisa"; in addition, alternative texts, related tags and so on are also obtained. The detailed process is similar to that of Fig. 6E above and is not repeated here.
After the search result for "search picture - Mona Lisa" is obtained, input can continue by voice, as shown in Fig. 7F: the voice is input and recognized, and the recognized text is "who is in the picture".
After the second input is determined to be "who is in the picture", as shown in Fig. 7G, the process of obtaining the search result, alternative texts and related tags is similar to that of Fig. 6G above and is not repeated here.
Therefore, with the picture-plus-speech/text input mode provided in this embodiment, picture content that is hard to describe can be searched directly by picture, with a further description added, making the search scope clearer and the search result more accurate.
3. Audio file with text/voice
For audio files that need to be searched, such as music or recordings, the audio file can be input directly and supplemented by text or voice, realizing a search on the audio file.
Specifically, the initial search interface is shown in Fig. 8A, and more input modes are expanded by selection.
As shown in Fig. 8B, audio input is selected; the audio file is one saved on the terminal or one just recorded.
The icons included in Fig. 8A and Fig. 8B, and the way they expand, can be as in Fig. 6A and Fig. 6B above; details are not repeated here.
As shown in Fig. 8C, the audio data on the terminal is expanded, and an audio file is selected from the files downloaded to or recorded on the terminal and confirmed. The format of the audio file may include: MPEG audio layer III (MP3), Windows Media Audio (WMA), waveform audio (WAV), APE, Free Lossless Audio Codec (FLAC), Ogg Vorbis (OGG), Advanced Audio Coding (AAC), and other formats.
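A minimal sketch of accepting the audio formats listed above by file extension follows; a real terminal would more likely inspect container headers, and the extension set simply mirrors the formats named in the text.

```python
# Extensions corresponding to the formats listed in the description.
AUDIO_EXTENSIONS = {".mp3", ".wma", ".wav", ".ape", ".flac", ".ogg", ".aac"}

def is_supported_audio(filename):
    """Return True if the filename carries one of the supported audio extensions."""
    dot = filename.rfind(".")
    return dot != -1 and filename[dot:].lower() in AUDIO_EXTENSIONS
```

Files failing this check would not be offered in the audio picker.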
The interface then returns to the search interface and the input audio file is recognized, as in Fig. 8D: the selected audio file can be played, or decoded and parsed.
As shown in Fig. 8E, the data corresponding to the audio file is obtained after recognition and displayed in the input box at the top. For example, if the audio file is a music file, the data may be the title, artist, album and so on of that file. A response text is generated from the data of the audio file, for example "search song - Cowboy Is Extremely Busy". The response text is searched, and the result is displayed in the search-result boxes ("result category 1" and "result category 2" in the figure). Alternative texts for the response text, such as "open the MV", can also be displayed below the input box. The user can select any alternative text to replace the response text "search song - Cowboy Is Extremely Busy" in the input box and search on the replacement, or use the selected alternative text as an additional search condition to search further. At the same time, keywords in the response text "search song - Cowboy Is Extremely Busy", such as "song" and "Cowboy Is Extremely Busy", can be extracted; associated words are determined from the search history or obtained by big-data analysis, such as "Jay Chou" or "the album 'I'm Very Busy'", and related tags are generated and displayed below the input box and above the alternative texts. The user can also select any related tag to add it to the input box and continue searching on it as the response text, obtaining a search result.
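Turning recognized audio metadata into the response text shown in the input box could be sketched as below; the field names (`kind`, `title`) are assumptions, not the patent's schema.

```python
def audio_response_text(metadata):
    """Build the response text displayed in the input box from recognized metadata."""
    if metadata.get("kind") == "music":
        return "search song - " + metadata.get("title", "unknown")
    # Recordings and other audio fall back to a generic response text.
    return "search audio"
```

The generated text is then searched exactly like a typed response text.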
Afterwards, voice input can be selected to further describe the search content. As shown in Fig. 8F, the input voice is recognized to obtain the corresponding text "types of songs", which is displayed in the input box.
Semantic analysis is performed on the text in the input box, and by looking up historical search data, a preset language model, or search frequency from big-data analysis, "types of songs" can be translated into the corresponding response text "genre". A search is performed on the response texts "search song - Cowboy Is Extremely Busy" and "genre", and the result is obtained and displayed in the search-result box, as shown in Fig. 8G. In addition, when the response text is determined, alternative texts such as "search Jay Chou's other genres" or "search Jay Chou's trending genres" are displayed at the same time. The user can select any alternative text to replace the content of the input box and perform the next search, or use it as an additional search condition. Keywords can also be extracted from "search song - Cowboy Is Extremely Busy" and "genre", for example "Cowboy Is Extremely Busy" and "genre"; associated words such as "country folk" or "rap" are searched, related tags are generated, and the tags are displayed below the input box of the search interface and above the alternative texts. The user can select a tag, and the text of the chosen related tag can replace the response text "genre" or be appended directly after it, for the next search.
The foregoing illustrated audio file with voice input; audio file with text input is illustrated below.
First, Fig. 9A to Fig. 9E are similar to the search flow of Fig. 8A to Fig. 8E above and are not repeated here.
After the audio file is input and its search result, alternative texts and related tags are obtained, the input keyboard can be expanded to enter the text "types of songs", as shown in Fig. 9F.
Semantic analysis is performed on the text in the input box, and by looking up historical search data, a preset language model, or search frequency from big-data analysis, "types of songs" can be translated into the corresponding response text "genre". A search is performed on "search song - Cowboy Is Extremely Busy" and "genre", and the result is obtained and displayed in the search-result box, as shown in Fig. 9G. When the response text is determined, alternative texts such as "search Jay Chou's other genres" or "search Jay Chou's trending genres" are also displayed, for example below the input box; the user can select any of them to replace the content of the input box and perform the next search. Keywords can also be extracted from "search song - Cowboy Is Extremely Busy" and "genre", for example "Cowboy Is Extremely Busy" and "genre"; associated words such as "country folk" or "rap" are searched, and related tags are generated and displayed below the input box of the search interface, above the alternative texts. The user can select a tag, and the text of the chosen related tag can replace the response text "genre" or be appended directly after it, for the next search.
Therefore, in this embodiment, an audio file can be input mixed with text or voice. For an audio file that is hard to describe, the file itself can be input directly, and a description or search condition for it supplemented by text or voice, to obtain the corresponding search result. This realizes searching for hard-to-describe audio files, yields more accurate results, improves search efficiency, and improves the user experience.
4. Text with voice
Besides the combinations above of live-scene scanning with text/voice, picture with text/voice, and audio file with text/voice, voice and text input can also be mixed.
The initial search interface is shown in Fig. 10A. The voice-input icon is selected, voice input is performed, and the input voice is acquired.
As shown in Fig. 10B, speech recognition is performed on the voice input to obtain the text "I am hungry", which is displayed in the input box.
As shown in Fig. 10C, the input history or a preset language model may be searched, the semantics of "I am hungry" recognized, and the corresponding response text "find restaurants" obtained and displayed in the input box. A search is performed on the response text "find restaurants", and the result is obtained and displayed in the search-result box. The response text may be obtained by comparing historical search frequencies, or by translation through the language model. Alternative texts can also be obtained, such as "find supermarkets", "order takeout" or "search 'I am hungry'", and are displayed below the input box; the user can select an alternative text to replace the response text "find restaurants", search, and display the new result in the search-result box. At the same time, keywords in the response text "find restaurants", such as "restaurant", can be extracted, associated words such as "Chinese food", "Japanese food", "Thai food" or "Mexican food" searched, and related tags generated and displayed below the input box, above the alternative texts. The user can select one or more related tags to replace the response text in the input box, or search with them as additional input conditions, so as to search further and obtain a more specific search result.
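The utterance-to-intent step above ("I am hungry" mapped to "find restaurants" plus alternatives) can be sketched with a lookup table standing in for the history record and language model; the entries are illustrative.

```python
# Hypothetical intent table: utterance -> (response text, alternative texts).
INTENTS = {
    "I am hungry": ("find restaurants",
                    ["find supermarkets", "order takeout", "search 'I am hungry'"]),
}

def interpret(utterance):
    """Return (response_text, alternatives); unknown utterances pass through unchanged."""
    return INTENTS.get(utterance, (utterance, []))
```

The response text is searched immediately, while the alternatives are listed for one-tap replacement.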
Text input can then continue, as shown in Fig. 10D: text can be entered by keyboard or by cut, copy and paste, for example "fast food", and the entered text is added to and displayed in the input box.
Semantic analysis of the input text "fast food" yields the corresponding response text "fast food", and a search is performed on the response texts "find restaurants" and "fast food"; the result is obtained and displayed in the search-result box, as shown in Fig. 10E. In addition, an alternative text such as "find fast-food restaurants" can be determined and displayed below the input box; the user can select it to replace the text in the input box, or use it as an additional search condition to search further. At the same time, keywords in the response texts "find restaurants" and "fast food", such as "restaurant" and "fast food", can be extracted, and associated words such as "noodles", "malatang" or "boxed meals" searched. The associated words are made into related tags and displayed below the input box, above the alternative texts. The user can select a tag for the next search and obtain a more specific result.
In this embodiment, the content to be searched can be input by mixing text and voice, providing flexible use of multiple input modes, improving input efficiency, and thereby improving the user's search experience.
It should be noted that the search interfaces shown in Fig. 4A to Fig. 10E are only exemplary illustrations of the flow of the data search method and the GUI provided by this application. The specific icon layout, and the display positions of the response text, the alternative texts and the related tags, can be adjusted according to the actual application scenario; this application gives only exemplary illustrations and imposes no limitation.
In addition, as to combinations of multiple input modes, the two-way combinations shown in Fig. 4A to Fig. 10E are merely exemplary; combinations of three or more input modes are also possible and can be adjusted according to the actual scenario, without limitation here. Besides the foregoing live-scene scanning with text/voice, picture with text/voice, audio file with text/voice, and voice with text, other mixed input modes are possible, for example picture with audio file, or video file with speech/text.
Fig. 2 to Fig. 10E and the corresponding embodiments have described the method provided by this application and the corresponding search interfaces in detail; the apparatus provided by this application is described below.
Referring to Fig. 11, an embodiment of a terminal in this application may include:
a display module 1101 and a processing module 1102.
The display module 1101 is configured to display a search interface, the search interface including a first icon for calling a first recognition program and a speech recognition icon for calling a speech recognition program;
the processing module 1102 is configured to obtain the user's operation of tapping the first icon and call the first recognition program to receive first input data, where the first input data is augmented reality (AR) data, image data or audio data, and the first recognition program is the AR recognition program, image recognition program or audio recognition program corresponding to the first input data;
the display module 1101 is further configured to display, in the search interface, the first response text obtained after processing by the first recognition program;
the processing module 1102 is further configured to obtain the user's operation of tapping the speech recognition icon in the search interface and call the speech recognition program to receive input voice data, the input voice data being data associated with the first response text;
the display module 1101 is further configured to present, in the search interface, the second response text obtained after processing by the speech recognition program;
the processing module 1102 is further configured to search based on the second response text and obtain a target search result; and
the display module 1101 is further configured to display the target search result.
Optionally, in some possible embodiments,
The processing module 1102 is further configured to call a first search program related to the second response text; the first search program searches the second response text and obtains the target search result.
The display module 1101 is specifically configured to display the target search result in the first search program.
Optionally, in some possible embodiments,
The processing module 1102 is specifically configured to search based on the first response text and the second response text and obtain the target search result.
The display module 1101 is specifically configured to display the target search result in the search interface.
Optionally, in some possible embodiments,
The display module 1101 is further configured to display, in the search interface, the first search result obtained after searching the first response text;
the processing module 1102 is further configured to search the second response text within the scope of the first search result and obtain the target search result;
the display module 1101 is specifically configured to display the target search result on the search interface.
Optionally, in some possible embodiments,
The processing module 1102 is specifically configured to call the first recognition program, recognize the first input data, and obtain a first response text set, the first response text set including at least one text corresponding to the first input data;
the processing module 1102 is specifically configured to determine, in the first response text set, a response text meeting a preset condition as the first response text, the first response text being displayed in the search interface.
Optionally, in some possible embodiments,
The processing module 1102 is further configured to, if the first response text set includes at least two texts corresponding to the first input data, determine at least one alternative text from the first response text set;
the display module 1101 is further configured to display the at least one alternative text in the search interface.
Optionally, in some possible embodiments, before the user's operation of tapping the speech recognition icon in the search interface is obtained,
the processing module 1102 is further configured to obtain the user's operation of tapping a target alternative text and use the target alternative text as the first response text, the target alternative text being any one of the at least one alternative text;
the display module 1101 is specifically configured to display the first response text in the search interface.
Optionally, in some possible embodiments,
The processing module 1102 is further configured to determine at least one associated keyword associated with a first keyword and/or a second keyword, the first keyword being extracted from the first response text and the second keyword being extracted from the second response text;
the display module 1101 is further configured to display the at least one associated keyword in the search interface.
Optionally, in some possible embodiments,
The processing module 1102 is further configured to obtain the user's operation of tapping a target keyword, use the target keyword as a third response text, and display the third response text in the search interface, the target keyword being any one of the at least one associated keyword;
the processing module 1102 is further configured to search based on the target search result and the third response text and obtain a second search result;
the display module 1101 is further configured to display the second search result in the search interface.
Optionally, in some possible embodiments,
The processing module 1102 is further configured to, if the first input data is AR data or image data, reduce the size of the picture corresponding to the AR data or image data and obtain a thumbnail corresponding to the AR data or image data;
the display module 1101 is further configured to display the thumbnail in the search interface.
The interfaces processed or displayed by the display module 1101 in Fig. 11 may be the display interfaces shown in the embodiments corresponding to Fig. 4A to Fig. 10E above. The steps performed by the processing module 1102 in Fig. 11 may be the steps described in Fig. 2, Fig. 3 and the corresponding embodiments.
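The Fig. 11 split between display module 1101 and processing module 1102 could be sketched as two cooperating classes; the method names and the trivial "find ..." mapping below are invented for illustration, not the patent's implementation.

```python
class DisplayModule:
    """Stand-in for display module 1101: records what is shown on the interface."""
    def __init__(self):
        self.shown = []

    def show(self, item):
        self.shown.append(item)


class ProcessingModule:
    """Stand-in for processing module 1102: processes input and drives the display."""
    def __init__(self, display):
        self.display = display

    def handle_first_input(self, recognized_text):
        # Stand-in for recognizer output mapped to a response text, then displayed.
        response = "find " + recognized_text
        self.display.show(response)
        return response
```

This mirrors the clause structure above: the processing module computes the response text, the display module renders it.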
The present invention further provides a graphical user interface (GUI). The graphical user interface is stored in a terminal, and the terminal includes a display screen, one or more memories, and one or more processors, where the one or more processors are configured to execute one or more computer programs stored in the one or more memories. The graphical user interface includes:
displaying a search interface, where the search interface includes a first icon for invoking a first recognition program and a speech recognition icon for invoking a speech recognition program; in response to an operation in which the user clicks the first icon, displaying an interface of the first recognition program; displaying, in the search interface, a first response text obtained after processing by the first recognition program; in response to an operation in which the user clicks the speech recognition icon in the search interface, displaying an interface of the speech recognition program; displaying, in the search interface, a second response text obtained after processing by the speech recognition program; and in response to an operation of performing a search on the second response text, displaying a target search result, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
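The multimodal flow the GUI describes — recognize the first input, refine by voice, then search — can be sketched as a small pipeline. All names below are illustrative stand-ins, not APIs from the disclosure; the recognizers and search backend are passed in as callables so the sketch stays self-contained.

```python
def run_search_flow(first_input, voice_input, recognizer, speech_recognizer, search):
    """Drive the disclosed flow: first recognition yields the first response
    text, speech recognition yields the second response text, and a search
    on the second response text yields the target search result."""
    first_response = recognizer(first_input)          # e.g. image -> "red sneaker"
    second_response = speech_recognizer(voice_input)  # e.g. audio -> "show similar"
    target_result = search(second_response)
    return first_response, second_response, target_result
```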
Optionally, in some embodiments of the application, the graphical user interface specifically includes:
displaying an interface of a first search program related to the second response text; and in response to an operation of performing, in the first search program, a search on the second response text, displaying the target search result in the interface of the first search program.
Optionally, in some embodiments of the application, the graphical user interface specifically includes:
in response to an operation of performing a search based on the first response text and the second response text, displaying the target search result in the search interface, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
Optionally, in some embodiments of the application, the graphical user interface further includes:
in response to an operation of performing a search on the first response text, displaying a first search result in the search interface; and
in response to an operation of performing, within the scope of the first search result, a search on the second response text, displaying the target search result in the search interface, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
Optionally, in some embodiments of the application, the graphical user interface specifically includes:
in response to an operation of determining a response text that meets a preset condition in a first response text set as the first response text, displaying the first response text in the search interface, where the first response text set is obtained by recognizing the first input data after the first recognition program is invoked, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
Optionally, in some embodiments of the application, the graphical user interface further includes:
displaying at least one alternative text in the search interface, where the at least one alternative text is determined from the first response text set, and the first response text set includes at least two texts corresponding to the first input data, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
Optionally, in some embodiments of the application, the graphical user interface further includes:
in response to an operation in which the user clicks a target alternative text and the target alternative text is used as the first response text, displaying the first response text in the search interface, where the target alternative text is any one of the at least one alternative text, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
Optionally, in some embodiments of the application, the graphical user interface further includes:
in response to an operation of determining at least one associated keyword associated with a first keyword and/or a second keyword, displaying the at least one associated keyword in the search interface, where the first keyword is extracted from the first response text, and the second keyword is extracted from the second response text, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
Optionally, in some embodiments of the application, the graphical user interface further includes:
in response to an operation in which the user clicks a target keyword and the target keyword is used as a third response text, displaying the third response text in the search interface, where the target keyword is any one of the at least one associated keyword; and
in response to an operation of performing a search based on the target search result and the third response text, displaying a second search result in the search interface, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
Optionally, in some embodiments of the application, the graphical user interface further includes:
if the first input data is AR data or image data, in response to an operation of reducing the size of a picture corresponding to the AR data or the image data, displaying a thumbnail corresponding to the AR data or the image data in the search interface, specifically as shown in the display interfaces in FIG. 4A to FIG. 10E and the corresponding embodiments.
An embodiment of the present invention further provides another terminal. As shown in FIG. 12, for ease of description, only parts related to this embodiment of the present invention are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention. The terminal may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, an in-vehicle computer, or the like. The following uses a mobile phone as an example.
FIG. 12 is a block diagram of a partial structure of the terminal according to an embodiment of the present invention. Referring to FIG. 12, the mobile phone includes components such as a radio frequency (RF) circuit 1210, a memory 1220, an input unit 1230, a display unit 1240, a sensor 1250, an audio circuit 1260, a wireless fidelity (WiFi) module 1270, a processor 1280, and a power supply 1290. A person skilled in the art may understand that the terminal structure shown in FIG. 12 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown, combine some components, or have a different component arrangement.
The following describes each component of the terminal in detail with reference to FIG. 12:
The RF circuit 1210 may be configured to receive and send signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, the RF circuit 1210 delivers the downlink information to the processor 1280 for processing, and sends designed uplink data to the base station. Generally, the RF circuit 1210 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1210 may further communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, the Short Messaging Service (SMS), and the like.
The memory 1220 may be configured to store software programs and modules. The processor 1280 runs the software programs and modules stored in the memory 1220 to perform various functional applications and data processing of the terminal. The memory 1220 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data created according to use of the terminal (such as audio data or a phone book), and the like. In addition, the memory 1220 may include a high-speed random access memory, and may further include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 1230 may be configured to receive input digit or character information and generate key signal input related to user settings and function control of the terminal. Specifically, the input unit 1230 may include a touch panel 1231 and other input devices 1232. The touch panel 1231, also referred to as a touchscreen, may collect a touch operation performed by the user on or near the touch panel 1231 (for example, an operation performed by the user on or near the touch panel 1231 with any suitable object or accessory such as a finger or a stylus), and drive a corresponding connection apparatus according to a preset program. Optionally, the touch panel 1231 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts the touch information into touch point coordinates, and sends the coordinates to the processor 1280, and can receive and execute a command sent by the processor 1280. In addition, the touch panel 1231 may be implemented in multiple types such as a resistive type, a capacitive type, an infrared type, and a surface acoustic wave type. In addition to the touch panel 1231, the input unit 1230 may further include the other input devices 1232. Specifically, the other input devices 1232 may include but are not limited to one or more of a physical keyboard, a function key (such as a volume control key or a power key), a trackball, a mouse, a joystick, and the like.
The display unit 1240 may be configured to display information input by the user, information provided for the user, and various menus of the terminal. The display unit 1240 may include a display panel 1241. Optionally, the display panel 1241 may be configured in a form such as a liquid crystal display (LCD) or an organic light-emitting diode (OLED). Further, the touch panel 1231 may cover the display panel 1241. After detecting a touch operation on or near the touch panel 1231, the touch panel 1231 transmits the touch operation to the processor 1280 to determine the type of the touch event, and the processor 1280 then provides corresponding visual output on the display panel 1241 according to the type of the touch event. Although in FIG. 12 the touch panel 1231 and the display panel 1241 are two independent components that implement the input and output functions of the terminal, in some embodiments the touch panel 1231 and the display panel 1241 may be integrated to implement the input and output functions of the terminal.
The terminal may further include at least one sensor 1250, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust the luminance of the display panel 1241 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1241 and/or backlight when the terminal is moved to an ear. As a type of motion sensor, an accelerometer sensor may detect the magnitudes of accelerations in all directions (generally on three axes), may detect the magnitude and direction of gravity when static, and may be used in applications that recognize terminal posture (such as landscape/portrait switching, related games, and magnetometer posture calibration), vibration-recognition-related functions (such as a pedometer or a knock), and the like. Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the terminal; details are not described herein.
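As a toy illustration of how an accelerometer enables the landscape/portrait switching mentioned above, one can compare the gravity components along the device's axes. This is a deliberate simplification with an invented threshold-free rule, not the terminal's actual algorithm, which would also debounce and handle the near-flat case.

```python
def orientation(ax, ay):
    """Decide landscape vs portrait from the gravity components along the
    device's x and y axes (units are irrelevant; only magnitudes are
    compared). Gravity dominating the y axis implies the device is upright."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```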
The audio circuit 1260, a loudspeaker 1261, and a microphone 1262 may provide audio interfaces between the user and the terminal. The audio circuit 1260 may convert received audio data into an electrical signal and transmit the electrical signal to the loudspeaker 1261, and the loudspeaker 1261 converts the electrical signal into a sound signal for output. Conversely, the microphone 1262 converts a collected sound signal into an electrical signal, the audio circuit 1260 receives the electrical signal and converts it into audio data, and after the audio data is output to the processor 1280 for processing, the audio data is sent through the RF circuit 1210 to, for example, another terminal, or is output to the memory 1220 for further processing.
WiFi is a short-distance wireless transmission technology. With the WiFi module 1270, the terminal may help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing wireless broadband Internet access for the user. Although FIG. 12 shows the WiFi module 1270, it may be understood that the WiFi module 1270 is not an essential component of the terminal and may be omitted as required without changing the essence of the invention.
The processor 1280 is the control center of the terminal, and connects all parts of the entire terminal through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 1220 and invoking data stored in the memory 1220, the processor 1280 performs various functions of the terminal and processes data, thereby performing overall monitoring of the terminal. Optionally, the processor 1280 may include one or more processing units. Preferably, an application processor and a modem processor may be integrated into the processor 1280, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It may be understood that the modem processor may alternatively not be integrated into the processor 1280.
The terminal further includes the power supply 1290 (such as a battery) that supplies power to all the components. Preferably, the power supply may be logically connected to the processor 1280 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
Although not shown, the terminal may further include a camera, a Bluetooth module, and the like; details are not described herein.
In this embodiment of the present invention, the processor 1280 included in the terminal may be configured to perform the steps performed by the terminal in the embodiments corresponding to FIG. 2 to FIG. 10E.
A person skilled in the art may clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely examples. For example, the unit division is merely logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one position, or may be distributed across multiple network units. Some or all of the units may be selected according to actual requirements to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods in the embodiments of FIG. 2 to FIG. 10E of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing embodiments are merely intended to describe the technical solutions of this application, but not to limit this application. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (32)

1. A data search method, comprising:
displaying a search interface, wherein the search interface comprises a first icon for invoking a first recognition program and a speech recognition icon for invoking a speech recognition program;
obtaining an operation in which a user clicks the first icon, and invoking the first recognition program to receive first input data, wherein the first input data is augmented reality (AR) data, image data, or audio data, and the first recognition program is an AR recognition program, an image recognition program, or an audio recognition program corresponding to the first input data;
displaying, in the search interface, a first response text obtained after processing by the first recognition program;
obtaining an operation in which the user clicks the speech recognition icon in the search interface, and invoking the speech recognition program to receive input voice data, wherein the input voice data is data associated with the first response text;
presenting, in the search interface, a second response text obtained after processing by the speech recognition program; and
performing a search based on the second response text to obtain a target search result, and displaying the target search result.
2. The method according to claim 1, wherein the performing a search based on the second response text to obtain a target search result, and displaying the target search result comprises:
invoking a first search program related to the second response text, performing a search on the second response text in the first search program to obtain the target search result, and displaying the target search result in the first search program.
3. The method according to claim 1, wherein the performing a search based on the second response text to obtain a target search result, and displaying the target search result comprises:
performing a search based on the first response text and the second response text to obtain the target search result, and displaying the target search result in the search interface.
4. The method according to claim 3, wherein before the obtaining an operation in which the user clicks the speech recognition icon in the search interface, the method further comprises:
displaying, in the search interface, a first search result obtained after a search is performed on the first response text; and
the performing a search based on the first response text and the second response text to obtain the target search result, and displaying the target search result in the search interface comprises:
performing a search on the second response text within the scope of the first search result to obtain the target search result, and displaying the target search result in the search interface.
5. The method according to any one of claims 1 to 4, wherein the displaying, in the search interface, a first response text obtained after processing by the first recognition program comprises:
invoking the first recognition program to recognize the first input data, to obtain a first response text set, wherein the first response text set comprises at least one text corresponding to the first input data; and
determining a text that meets a preset condition in the first response text set as the first response text, and displaying the first response text in the search interface.
6. The method according to claim 5, wherein the method further comprises:
if the first response text set comprises at least two texts corresponding to the first input data, determining at least one alternative text from the first response text set, and displaying the at least one alternative text in the search interface.
7. The method according to claim 6, wherein before the obtaining an operation in which the user clicks the speech recognition icon in the search interface, the method further comprises:
obtaining an operation in which the user clicks a target alternative text, using the target alternative text as the first response text, and displaying the first response text in the search interface, wherein the target alternative text is any one of the at least one alternative text.
8. The method according to claim 3 or 4, wherein the method further comprises:
determining at least one associated keyword associated with a first keyword and/or a second keyword, and displaying the at least one associated keyword in the search interface, wherein the first keyword is extracted from the first response text, and the second keyword is extracted from the second response text.
9. The method according to claim 8, wherein the method further comprises:
obtaining an operation in which the user clicks a target keyword, using the target keyword as a third response text, and displaying the third response text in the search interface, wherein the target keyword is any one of the at least one associated keyword; and
performing a search based on the target search result and the third response text to obtain a second search result, and displaying the second search result in the search interface.
10. The method according to any one of claims 1 to 9, wherein the method further comprises:
if the first input data is AR data or image data, reducing the size of a picture corresponding to the AR data or the image data, to obtain a thumbnail corresponding to the AR data or the image data, and displaying the thumbnail in the search interface.
11. A terminal, comprising:
a display module, configured to display a search interface, wherein the search interface comprises a first icon for invoking a first recognition program and a speech recognition icon for invoking a speech recognition program; and
a processing module, configured to obtain an operation in which a user clicks the first icon, and invoke the first recognition program to receive first input data, wherein the first input data is augmented reality (AR) data, image data, or audio data, and the first recognition program is an AR recognition program, an image recognition program, or an audio recognition program corresponding to the first input data;
wherein the display module is further configured to display, in the search interface, a first response text obtained after processing by the first recognition program;
the processing module is further configured to obtain an operation in which the user clicks the speech recognition icon in the search interface, and invoke the speech recognition program to receive input voice data, wherein the input voice data is data associated with the first response text;
the display module is further configured to present, in the search interface, a second response text obtained after processing by the speech recognition program;
the processing module is further configured to perform a search based on the second response text to obtain a target search result; and
the display module is further configured to display the target search result.
12. The terminal according to claim 11, wherein
the processing module is further configured to invoke a first search program related to the second response text, and perform a search on the second response text in the first search program to obtain the target search result; and
the display module is specifically configured to display the target search result in the first search program.
13. The terminal according to claim 11, wherein
the processing module is specifically configured to perform a search based on the first response text and the second response text to obtain the target search result; and
the display module is specifically configured to display the target search result in the search interface.
14. The terminal according to claim 13, wherein
the display module is further configured to display, in the search interface, a first search result obtained after a search is performed on the first response text;
the processing module is further configured to perform a search on the second response text within the scope of the first search result, to obtain the target search result; and
the display module is specifically configured to display the target search result in the search interface.
15. The terminal according to any one of claims 11 to 14, wherein
the processing module is specifically configured to invoke the first recognition program to recognize the first input data, to obtain a first response text set, wherein the first response text set comprises at least one text corresponding to the first input data; and
the processing module is specifically configured to determine a response text that meets a preset condition in the first response text set as the first response text, and display the first response text in the search interface.
16. The terminal according to claim 15, wherein
the processing module is further configured to: if the first response text set comprises at least two texts corresponding to the first input data, determine at least one alternative text from the first response text set; and
the display module is specifically configured to display the at least one alternative text in the search interface.
17. The terminal according to claim 16, wherein before the operation in which the user clicks the speech recognition icon in the search interface is obtained,
the processing module is further configured to obtain an operation in which the user clicks a target alternative text, and use the target alternative text as the first response text, wherein the target alternative text is any one of the at least one alternative text; and
the display module is specifically configured to display the first response text in the search interface.
18. The terminal according to claim 13 or 14, wherein
the processing module is further configured to determine at least one associated keyword associated with a first keyword and/or a second keyword, wherein the first keyword is extracted from the first response text, and the second keyword is extracted from the second response text; and
the display module is further configured to display the at least one associated keyword in the search interface.
19. The terminal according to claim 18, wherein
the processing module is further configured to obtain an operation in which the user clicks a target keyword, use the target keyword as a third response text, and display the third response text in the search interface, wherein the target keyword is any one of the at least one associated keyword;
the processing module is further configured to perform a search based on the target search result and the third response text, to obtain a second search result; and
the display module is further configured to display the second search result in the search interface.
20. The terminal according to any one of claims 11 to 19, wherein
the processing module is further configured to: if the first input data is AR data or image data, reduce the size of a picture corresponding to the AR data or the image data, to obtain a thumbnail corresponding to the AR data or the image data; and
the display module is further configured to display the thumbnail in the search interface.
21. A graphical user interface (GUI), wherein the graphical user interface is stored in a terminal, the terminal comprises a display screen, one or more memories, and one or more processors, the one or more processors are configured to execute one or more computer programs stored in the one or more memories, and the graphical user interface comprises:
displaying a search interface, wherein the search interface comprises a first icon for invoking a first recognition program and a speech recognition icon for invoking a speech recognition program; in response to an operation in which a user clicks the first icon, displaying an interface of the first recognition program; displaying, in the search interface, a first response text obtained after processing by the first recognition program; in response to an operation in which the user clicks the speech recognition icon in the search interface, displaying an interface of the speech recognition program; displaying, in the search interface, a second response text obtained after processing by the speech recognition program; and in response to an operation of performing a search on the second response text, displaying a target search result.
22. The graphical user interface according to claim 21, wherein the graphical user interface specifically comprises:
displaying an interface of a first search program related to the second response text; and in response to an operation of performing, in the first search program, a search on the second response text, displaying the target search result in the interface of the first search program.
23. The graphical user interface according to claim 21, wherein the graphical user interface specifically comprises:
in response to an operation of performing a search based on the first response text and the second response text, displaying the target search result in the search interface.
24. The graphical user interface according to claim 23, wherein the graphical user interface further comprises:
in response to an operation of searching based on the first response text, displaying a first search result in the search interface; and
in response to an operation of searching for the second response text within the scope of the first search result, displaying the target search result in the search interface.
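The two-stage search of claim 24 (a first search, then a second search restricted to the first results) can be sketched like this; the substring-matching back end and all names are hypothetical.

```python
def staged_search(catalog: list[str], first_text: str, second_text: str) -> list[str]:
    # Step 1 (claim 24): search based on the first response text.
    first_results = [item for item in catalog if first_text in item]
    # Step 2: search for the second response text only within the scope
    # of the first search result, yielding the target search result.
    return [item for item in first_results if second_text in item]

catalog = ["red sneakers size 42", "red sneakers size 38", "blue sneakers size 42"]
target = staged_search(catalog, "red sneakers", "size 42")
```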
25. The graphical user interface according to any one of claims 21 to 24, wherein the graphical user interface specifically comprises:
in response to determining that a response text in a first response text set meets a preset condition, taking that response text as the first response text and displaying the first response text in the search interface, wherein the first response text set is obtained by recognizing the first input data after the first recognition program is invoked.
26. The graphical user interface according to claim 25, wherein the graphical user interface further comprises:
displaying at least one candidate text in the search interface, wherein the at least one candidate text is determined from the first response text set, and the first response text set comprises at least two texts corresponding to the first input data.
27. The graphical user interface according to claim 26, wherein the graphical user interface further comprises:
in response to the user clicking a target candidate text, taking the target candidate text as the first response text and displaying the first response text in the search interface, wherein the target candidate text is any one of the at least one candidate text.
28. The graphical user interface according to any one of claims 21 to 27, wherein the graphical user interface further comprises:
in response to determining at least one associated keyword associated with a first keyword and/or a second keyword, displaying the at least one associated keyword in the search interface, wherein the first keyword is extracted from the first response text and the second keyword is extracted from the second response text.
29. The graphical user interface according to claim 28, wherein the graphical user interface further comprises:
in response to the user clicking a target keyword, taking the target keyword as a third response text and displaying the third response text in the search interface, wherein the target keyword is any one of the at least one associated keyword; and
in response to an operation of searching based on the target search result and the third response text, displaying, in the search interface, a result of the search.
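The keyword-association step of claims 28 and 29 can be sketched as follows. The association table, the last-word keyword extraction, and all names are hypothetical stand-ins; the claims only require that associated keywords be derived from keywords extracted from the two response texts.

```python
# Hypothetical association table mapping an extracted keyword to
# associated keywords (claim 28).
ASSOCIATIONS = {
    "sneakers": ["running shoes", "trainers"],
    "dollars": ["discount", "sale"],
}

def extract_keyword(text: str) -> str:
    """Crude stand-in for keyword extraction: take the last word."""
    return text.split()[-1]

def associated_keywords(first_text: str, second_text: str) -> list[str]:
    """Collect keywords associated with the first and second keywords."""
    kws: list[str] = []
    for kw in (extract_keyword(first_text), extract_keyword(second_text)):
        kws.extend(ASSOCIATIONS.get(kw, []))
    return kws

# Any of these could then be clicked as the 'third response text' of
# claim 29 to refine the target search result.
suggestions = associated_keywords("red sneakers", "fifty dollars")
```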
30. The graphical user interface according to any one of claims 21 to 29, wherein the graphical user interface further comprises:
if the first input data is AR data or image data, in response to an operation of reducing the size of a picture corresponding to the AR data or the image data, displaying a thumbnail corresponding to the AR data or the image data in the search interface.
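The size reduction of claim 30 amounts to downscaling the picture into a thumbnail. A minimal sketch, assuming the image is a row-major grid of pixels and using nearest-neighbor sampling (the claim does not specify a scaling method):

```python
def make_thumbnail(pixels: list[list], factor: int) -> list[list]:
    """Nearest-neighbor downscale: keep every `factor`-th row and column."""
    return [row[::factor] for row in pixels[::factor]]

# An 8x8 'image' whose pixel values encode their own coordinates.
image = [[(r, c) for c in range(8)] for r in range(8)]
thumb = make_thumbnail(image, 4)   # 8x8 -> 2x2 thumbnail
```

In practice an image library's resampling (e.g. an area-averaging filter) would give better-looking thumbnails than this sketch.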
31. A terminal, comprising:
a memory, configured to store a program; and
a processor, configured to execute the program stored in the memory, wherein when the program is executed, the processor is configured to perform the steps according to any one of claims 1 to 10.
32. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 10.
CN201810981347.7A 2018-08-27 2018-08-27 Data search method, terminal, user image display interface and storage medium Pending CN109407916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810981347.7A CN109407916A (en) Data search method, terminal, user image display interface and storage medium


Publications (1)

Publication Number Publication Date
CN109407916A (en) 2019-03-01

Family

ID=65463625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810981347.7A Pending CN109407916A (en) Data search method, terminal, user image display interface and storage medium

Country Status (1)

Country Link
CN (1) CN109407916A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110442704A (en) * 2019-08-13 2019-11-12 Chongqing Yucun Big Data Technology Co., Ltd. Company news screening method and system
CN111638846A (en) * 2020-05-26 2020-09-08 Vivo Mobile Communication Co., Ltd. Image recognition method and device, and electronic device
CN112148963A (en) * 2019-06-28 2020-12-29 Baidu Online Network Technology (Beijing) Co., Ltd. Information query method, device, equipment and storage medium
CN112786022A (en) * 2019-11-11 2021-05-11 Hisense Mobile Communications Technology Co., Ltd. Terminal, first voice server, second voice server and voice recognition method
CN112786022B (en) * 2019-11-11 2023-04-07 Hisense Mobile Communications Technology Co., Ltd. Terminal, first voice server, second voice server and voice recognition method
CN113270093A (en) * 2020-01-29 2021-08-17 Toyota Motor Corporation Agent device, agent system, and non-transitory recording medium
CN113707145A (en) * 2021-08-26 2021-11-26 Hisense Visual Technology Co., Ltd. Display device and voice search method
CN114385886A (en) * 2020-10-19 2022-04-22 Juhaokan Technology Co., Ltd. Content searching method and three-dimensional display device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407337A (en) * 2016-09-05 2017-02-15 Shenzhen Genew Technologies Co., Ltd. Quick search method and system
US20170147680A1 * 2015-11-19 2017-05-25 Microsoft Technology Licensing, Llc Displaying graphical representations of query suggestions
CN106993085A (en) * 2017-03-01 2017-07-28 Beijing Xiaomi Mobile Software Co., Ltd. Positioning result display method and device, and electronic device
CN107015979A (en) * 2016-01-27 2017-08-04 Alibaba Group Holding Ltd. Data processing method and device, and intelligent terminal
CN107832396A (en) * 2017-10-30 2018-03-23 Jiangxi Borui Tongyun Technology Co., Ltd. Information retrieval method
CN108399174A (en) * 2017-02-07 2018-08-14 Alibaba Group Holding Ltd. Object search method and device



Similar Documents

Publication Publication Date Title
CN109407916A (en) Data search method, terminal, user image display interface and storage medium
CN108319489B (en) Application page starting method and device, storage medium and electronic equipment
CN108334608B (en) Link generation method and device of application page, storage medium and electronic equipment
US11221819B2 (en) Extendable architecture for augmented reality system
Emmanouilidis et al. Mobile guides: Taxonomy of architectures, context awareness, technologies and applications
CN103455590B (en) Method and apparatus for retrieval on a touch-screen device
KR101780034B1 (en) Generating augmented reality exemplars
US20170344224A1 (en) Suggesting emojis to users for insertion into text-based messages
CN108496150A (en) Screenshot capture and reading method, and terminal
CN110162770A (en) Word expansion method, apparatus, device and medium
CN109379641A (en) Subtitle generation method and device
CN108287917B (en) File opening method and device, storage medium and electronic equipment
US10276162B2 (en) Method and electronic device for performing voice based actions
CN108470041A (en) Information search method and mobile terminal
US9900427B2 (en) Electronic device and method for displaying call information thereof
CN107070779A (en) Information processing method and device
CN109063583A (en) Learning method based on point-reading operation, and electronic device
US10365806B2 (en) Keyword-based user interface in electronic device
CN108541310A (en) Candidate word display method, apparatus and graphical user interface
JP2013020411A (en) Information processing apparatus, information processing method and program
US20190179848A1 (en) Method and system for identifying pictures
CN108492836A (en) Voice-based search method, mobile terminal and storage medium
US9424364B2 (en) Integrated context-driven information search and interaction
JP2014049140A (en) Method and apparatus for providing intelligent service using input characters in user device
CN110390569A (en) Content promotion method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-03-01