CN112820284A - Voice interaction method and device, electronic equipment and computer readable storage medium - Google Patents
- Publication number: CN112820284A (application CN202011579339.3A)
- Authority
- CN
- China
- Legal status: Pending (status assumed by Google; not a legal conclusion)
Classifications
- G10L15/22 — Speech recognition; procedures used during a speech recognition process, e.g. man-machine dialogue
- G06F16/3343 — Information retrieval of unstructured textual data; query execution using phonetics
- G06F16/3344 — Information retrieval of unstructured textual data; query execution using natural language analysis
- G06F40/151 — Handling natural language data; text processing; use of codes for handling textual entities; transformation
- G06F40/30 — Handling natural language data; semantic analysis
- G10L15/26 — Speech recognition; speech to text systems
Abstract
The application discloses a voice interaction method and apparatus, an electronic device, and a computer-readable storage medium. It belongs to the technical field of artificial intelligence, is applied to vehicles, and is intended to improve driving safety. The method comprises the following steps: acquiring a voice instruction of a user; determining a search result according to the voice instruction; and displaying the search result through a head-up display of the vehicle.
Description
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a voice interaction method and device, electronic equipment and a computer readable storage medium.
Background
As voice recognition technology matures, users can control not only multimedia but also a growing number of ecosystem applications in a vehicle by voice, freeing their hands while driving.
At present, voice recognition technology can accurately recognize a user's speech. However, after a voice instruction issued by the user is recognized, the recognized result is displayed on a multimedia display, and the user still needs to look away from the road to view and select among the recognized results. Thus, controlling in-vehicle applications by voice still requires the user's eyes to leave the road, which compromises driving safety.
Disclosure of Invention
An embodiment of the application aims to provide a voice interaction method, a voice interaction device, electronic equipment and a computer-readable storage medium, which can improve driving safety.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a voice interaction method, which is applied to a vehicle, and includes: acquiring a voice instruction of a user; determining a search result according to the voice instruction; and displaying the search result through a head-up display of the vehicle.
In a second aspect, an embodiment of the present application provides a voice interaction apparatus, including: an acquisition module, configured to acquire a voice instruction of a user; a search module, configured to determine a search result according to the voice instruction; and a display module, configured to display the search result through a head-up display of the vehicle.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory arranged to store computer-executable instructions, wherein the processor, when executing the executable instructions, implements the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing one or more programs which, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a voice instruction of a user can be acquired, a search result is determined according to the voice instruction, and the search result is displayed through a head-up display of the vehicle. In this way, the user can view or select the search result without looking away from the road surface by displaying the search result on the head-up display of the vehicle, so as to improve the driving safety.
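The acquire → search → display sequence above can be read as a simple pipeline. The sketch below is illustrative only: every function, type, and catalog entry is a hypothetical stand-in (the patent does not specify an implementation), and the recognizer and search backend are replaced by stubs.

```python
from dataclasses import dataclass


@dataclass
class SearchResult:
    title: str
    detail: str


def acquire_voice_instruction(audio: bytes) -> str:
    """Step 1: transcribe the user's voice instruction (ASR stubbed out)."""
    # A real system would hand the audio to a speech recognizer here.
    return audio.decode("utf-8")


def determine_search_results(instruction: str) -> list:
    """Step 2: map the recognized text to search results (search stubbed out)."""
    catalog = {"play song a": [SearchResult("Song A - Singer A", "3:45")]}
    return catalog.get(instruction.lower().strip(), [])


def show_on_heads_up_display(results: list) -> list:
    """Step 3: render one numbered line per result for the head-up display."""
    return [f"{i + 1}. {r.title}" for i, r in enumerate(results)]


hud_lines = show_on_heads_up_display(
    determine_search_results(acquire_voice_instruction(b"play Song A"))
)
```

An unknown instruction simply yields no results and an empty display, leaving error handling to the caller.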
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a hardware configuration of a vehicle to which embodiments of the present application are applicable;
FIG. 2 is a schematic flow chart diagram of a method of voice interaction according to one embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a method of voice interaction according to another embodiment of the present application;
FIG. 4 is a schematic flow chart diagram of a method of voice interaction according to another embodiment of the present application;
FIG. 5 is a schematic flow chart diagram of a method of voice interaction according to another embodiment of the present application;
FIG. 6 is a schematic block diagram of a voice interaction device, according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a voice interaction device, according to another embodiment of the present application;
FIG. 8 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the specification and claims of the present application, "and/or" means at least one of connected objects, a character "/" generally means that the former and latter related objects are in an "or" relationship.
The voice interaction method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic diagram of a hardware configuration of a vehicle to which embodiments of the present application are applicable, including: a microphone, for receiving voice input; a multimedia controller, for voice recognition processing and for understanding and acting on voice instructions; and a head-up display, for displaying search results.
Optionally, the vehicle may further include: a cloud platform, for voice recognition and semantic understanding and for connecting to multiple ecosystem services; a loudspeaker, for broadcasting the search result and/or the result of executing the target operation; and a multimedia display, for displaying the voice recognition content.
As shown in fig. 2, which is a schematic flow chart of a voice interaction method 200 according to an embodiment of the present application, the method may be performed by an electronic device such as a vehicle, a smart vehicle, or an unmanned vehicle, which may have, for example but not limited to, the hardware structure shown in fig. 1. Alternatively, the electronic device may be a software or hardware apparatus installed on such a vehicle; in other words, the method may be performed by software or hardware installed on the electronic device. The method comprises the following steps:
S202: acquiring a voice instruction of a user.
For example, when a user wants to listen to music or find a charging pile while driving, the user can issue a voice instruction to the terminal device.
For another example, when the user wants to listen to the song "Song A", a voice command such as "play Song A" may be issued; the microphone receives the voice command and sends it to the multimedia controller.
S204: determining a search result according to the voice instruction.
For example, if the user's voice instruction is to search for a charging pile location, then after the microphone receives the instruction, charging pile information is searched for and determined according to it. The charging pile information may include the pile's address, its distance from the vehicle, the time required to drive to it, its charging price, and so on.
It should be understood that determining the search result may be performed by the multimedia controller in the vehicle, by another in-vehicle component capable of semantic understanding, or by the cloud platform, which performs voice recognition, applies semantic understanding to the recognized content, and issues the search result to the multimedia controller; this embodiment does not limit which.
In one implementation, after the multimedia controller receives a voice command from the microphone, it performs voice recognition on the speech and then understands and acts on the command.
In another implementation, if the multimedia controller cannot directly determine a unique search result, the cloud platform can apply semantic understanding to the text produced by voice recognition, search according to the understood content, and determine a plurality of search results, thereby improving search accuracy.
S206: displaying the search result through a head-up display of the vehicle.
For example, when at least one charging pile location is found, the corresponding charging pile information may be presented on the vehicle's head-up display; this information may include the pile's address, its distance from the vehicle, the driving time to reach it, its price, and so on. A head-up display is a display device that does not require the user to look away from the road, enabling eyes-free operation. It is generally placed where it matches the driving direction without blocking the user's line of sight, for example above the front windshield, so the user's eyes never leave the road when reading it. Alternatively, the head-up display may be a projection device that projects the information onto a suitable area of the front windshield, so the user can see it without lowering or turning the head. Thus, when the user needs to check a search result, it can be viewed on the head-up display without the eyes leaving the road, which improves driving safety.
In one implementation, there may be one or more search results; regardless of their number, they may all be presented via the head-up display of the vehicle.
In another implementation, the search results are presented on the head-up display only when there are several of them. In other words, if there is exactly one search result, it can be executed directly without being displayed: for example, if the user searches for the nearest charging pile, the result is unique, navigation to it starts automatically, and its information need not be shown. When there are multiple search results, however, they are presented on the head-up display for the user to choose from.
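This single-result/multiple-result rule amounts to one small branch. The sketch below is illustrative only; the function name and return convention are assumptions, not an interface the patent prescribes:

```python
def dispatch_results(results):
    """A unique result is executed directly without display; several
    results are shown on the head-up display for the user to choose."""
    if len(results) == 1:
        return ("execute", results[0])
    return ("display", list(results))
```

The caller would then either run the single result's action or render the list of candidates on the head-up display.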
Thus, with the voice interaction method provided by this embodiment, the user's voice instruction is acquired, a search result is determined according to it, and the search result is displayed on the vehicle's head-up display. This lets the user control applications in the vehicle by voice while driving and view the results of voice control without taking their eyes off the road, improving both the user's voice experience and driving safety.
As shown in fig. 3, which is a schematic flow chart of a voice interaction method 300 according to another embodiment of the present application, the method may be performed by an electronic device such as a vehicle, a smart vehicle, or an unmanned vehicle, or by a software or hardware apparatus installed on such a vehicle; in other words, the method may be performed by software or hardware installed on the electronic device. The method includes the following steps:
S302: acquiring a voice instruction of a user.
S304: determining a search result according to the voice instruction.
S306: displaying the search result through a head-up display of the vehicle.
Steps S302-S306 may adopt the same or similar descriptions as steps S202-S206 in the embodiment of fig. 2, and are not described again here.
S308: broadcasting the search result.
In one implementation, after the search result is displayed, it is also broadcast by voice, so that the user can combine the spoken content with what the head-up display shows to understand the result.
For example, when entries such as "Charging pile A", "Charging pile B", and "Charging pile C" are shown on the head-up display, the information of each pile can be broadcast through the loudspeaker, and the user can learn about the piles from the spoken content. The charging pile information may include the pile's address, its distance from the vehicle, the driving time to reach it, its price, and so on.
S310: in the case that there are multiple search results, receiving the user's voice selection of at least one target result among them.
For example, when the user searches for "Song A", versions sung by three singers are found and displayed on the head-up display as "Song A - Singer A", "Song A - Singer B", and "Song A - Singer C". The user may then issue the voice instruction "Song A - Singer A", enabling the electronic device to determine that the user wants the version sung by Singer A. That is, when viewing the search results on the head-up display, the user can speak the displayed content aloud, so the electronic device can determine the selected target result.
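One plausible way to resolve the user's spoken selection against the entries shown on the head-up display is a simple word-overlap match. This is an assumed technique for illustration only; the patent does not specify how the selection is matched:

```python
def select_by_voice(utterance, displayed):
    """Return the displayed entry whose words best overlap the spoken
    selection, or None if nothing overlaps at all."""
    spoken = set(utterance.lower().split())
    best, best_score = None, 0
    for entry in displayed:
        # Score each entry by how many of its words the user spoke.
        score = len(spoken & set(entry.lower().split()))
        if score > best_score:
            best, best_score = entry, score
    return best
```

A production system would more likely match on a recognized intent or an index number ("the second one"), but the shape of the step is the same: spoken text in, one displayed entry out.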
S312: executing the target operation corresponding to the target result.
For example, the user expresses the selected charging pile information in a voice form, and the electronic device may set the charging pile position selected by the user as a navigation destination and present a navigation route for the vehicle to travel to the charging pile to the user.
S314: broadcasting the result of executing the target operation.
In one implementation, after the target operation corresponding to the target result is executed, the execution result is broadcasted in voice.
For example, the position of the charging pile selected by the user is set as a navigation destination, and after a navigation route of the vehicle running to the charging pile is displayed to the user, navigation information is broadcasted through voice.
For another example, after the user turns on the in-vehicle air conditioner by voice command, the multimedia controller broadcasts "air conditioner turned on" through the loudspeaker, confirming that the instruction was executed. The user then need not verify the result and can stay focused on driving, which improves driving safety.
The voice interaction system in this embodiment can provide voice broadcasting: it can broadcast the search results and/or the result of executing the target operation. For example, the search results may be broadcast after they are shown on the head-up display, the execution result may be broadcast after the user selects a target result by voice, or both may be done. Offering these broadcast options improves the user's voice experience.
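The "and/or" broadcast options can be sketched as a small message-composition helper. All names here are hypothetical; an actual system would feed the returned strings to a text-to-speech engine and the loudspeaker:

```python
def compose_announcements(search_results=None, execution_result=None):
    """Build the spoken messages: search results, the execution result,
    or both, mirroring the 'and/or' broadcast options described above."""
    messages = []
    if search_results:
        messages.append("Found: " + "; ".join(search_results))
    if execution_result:
        messages.append("Done: " + execution_result)
    return messages
```

Passing only one of the two arguments yields a single announcement; passing both yields the results announcement followed by the execution confirmation.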
Thus, in the voice interaction method provided by this embodiment, when there are multiple search results, the user's voice selection of at least one target result is received and the corresponding target operation is executed. By viewing the head-up display and selecting by voice, the user can control the vehicle through fully voice-based interaction while driving: the eyes never leave the road, no attention is diverted to repeatedly confirming information, driving safety improves, and the voice interaction experience is better.
As shown in fig. 4, which is a schematic flow chart of a voice interaction method 400 according to another embodiment of the present application, the method may be performed by an electronic device such as a vehicle, a smart vehicle, or an unmanned vehicle, or by a software or hardware apparatus installed on such a vehicle; in other words, the method may be performed by software or hardware installed on the electronic device. The method includes the following steps:
S402: acquiring a voice instruction of a user.
S4041: in the case that there are multiple search results, performing voice recognition on the voice instruction through the cloud platform to obtain the recognized text.
When the local search on the electronic device or vehicle returns multiple results, the vehicle may be unable to decide which result to subsequently display or execute. In that case, because the cloud platform connects to multiple ecosystems and the user's voice instruction may correspond to several search results, the cloud platform performs voice recognition on the instruction to obtain the recognized text.
S4042: determining the search results according to the text information.
S406: displaying the search result through a head-up display of the vehicle.
The search results can be rendered as text information via the cloud platform and displayed on the head-up display, so that the user can view the textual search results without taking their eyes off the road.
Thus, when a voice instruction yields multiple search results, the instruction is recognized through the cloud platform to obtain the recognized text, and the several results are displayed on the head-up display. The user can review them and clearly grasp their content without looking away from the road, which improves driving safety.
In addition, the steps S4041 to S4042 may also adopt the description related to step S204 in the embodiment of fig. 2, and are not described herein again.
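Steps S4041 to S4042 amount to a two-stage lookup: cloud speech-to-text, then a search over the recognized text. In the sketch below both stages are stubbed with dictionaries, since the real ASR service and search backend are outside the patent's scope; all names and sample data are illustrative:

```python
def cloud_recognize(audio):
    """S4041 (stub): the cloud platform turns the voice instruction into
    text. A real implementation would call an ASR service."""
    fake_transcripts = {b"\x01\x02": "find a charging pile"}
    return fake_transcripts.get(audio, "")


def search_from_text(text):
    """S4042 (stub): determine the search results from the recognized text."""
    index = {"find a charging pile": ["Pile A", "Pile B", "Pile C"]}
    return index.get(text, [])


results = search_from_text(cloud_recognize(b"\x01\x02"))
```

The two stages compose cleanly: unrecognized audio produces empty text, which in turn produces an empty result list for the display step to handle.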
As shown in fig. 5, which is a schematic flow chart of a voice interaction method 500 according to another embodiment of the present application, the method may be performed by an electronic device such as a vehicle, a smart vehicle, or an unmanned vehicle, or by a software or hardware apparatus installed on such a vehicle; in other words, the method may be performed by software or hardware installed on the electronic device. The method includes the following steps:
S502: acquiring a voice instruction of a user.
S504: and determining a search result according to the voice instruction.
Steps S502 to S504 may adopt the same or similar descriptions as steps S202 to S204 in the embodiment of fig. 2 or steps S402 to S4042 in the embodiment of fig. 4, which are not repeated here.
S5061: displaying key information of the search result through a head-up display of the vehicle, wherein the key information is determined according to the information corresponding to the search result.
An existing head-up display has a small area and a short display distance. If a search result is shown on it in full, the characters become small, the user cannot read the displayed content clearly while driving, concentration suffers, and driving safety is affected. Therefore, to avoid obstructing the user's view during normal driving, key information can be determined from the information corresponding to the search result and shown on the head-up display, preventing the situation where an overly long result forces a font too small to read. The key information may be a subset of the search result's information or a summary of it; its word count is usually smaller than that of the full result.
In one implementation, suppose information on charging pile A, charging pile B, and charging pile C is found, for example each pile's name, construction time, address, distance, driving route, travel time, price, and so on. If all of this were displayed on the head-up display, the content would be excessive: the user could neither read it all clearly nor make a choice within a short time. This embodiment therefore determines the key information of each search result and displays only that on the head-up display.
For example, "Charging pile A: distance XX, time XX, price XX", "Charging pile B: distance XX, time XX, price XX", and "Charging pile C: distance XX, time XX, price XX" may be displayed, so the user can clearly see the information for piles A, B, and C and quickly pick the desired pile based on distance, time, and price.
In addition, in step S5061, the description related to step S206 in the embodiment of fig. 2 may also be adopted, and is not repeated herein.
Thus, by displaying only the key information of the search results on the head-up display, the display's content no longer hinders the user from reviewing the results: the user can grasp each result from its key information without straining to read an overloaded display, and driving safety improves.
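The key-information reduction in this embodiment can be sketched as a formatter that keeps only the fields named in the example (name, distance, travel time, price) and drops the rest. The field names, dictionary layout, and units below are assumptions for illustration, not part of the patent:

```python
def key_info(result):
    """Condense a full search result (a dict with many fields) to the
    short HUD-friendly line used in the charging-pile example above."""
    return (f"{result['name']}  {result['distance_km']} km  "
            f"{result['minutes']} min  {result['price']} yuan/kWh")
```

Extra fields such as address or construction time pass through untouched in the source dict; only the condensed line reaches the head-up display.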
FIG. 6 is a schematic structural diagram of a voice interaction device according to an embodiment of the present application. As shown in fig. 6, the voice interaction apparatus 600 includes: an acquisition module 601, a search module 602, and a presentation module 603.
The obtaining module 601 is configured to obtain a voice instruction of a user. The search module 602 is configured to determine a search result according to the voice instruction. The display module 603 is configured to display the search result through a head-up display of the vehicle.
In one implementation, the search module 602 is configured to perform voice recognition on the voice instruction through the cloud platform to obtain text information after voice recognition when the search result is a plurality of results.
In one implementation, the displaying module 603 is configured to display key information of the search result, where the key information of the search result is determined according to information corresponding to the search result.
Fig. 7 is a schematic structural diagram of a voice interaction device according to another embodiment of the present application. As shown in fig. 7, the voice interaction apparatus 700 includes: an acquisition module 701, a search module 702, a presentation module 703, a receiving module 704, and an execution module 705.
The obtaining module 701 is configured to obtain a voice instruction of a user. The search module 702 is configured to determine a search result according to the voice instruction. The display module 703 is configured to display the search result through a head-up display of the vehicle. The receiving module 704 is configured to receive, when there are multiple search results, the user's voice selection of at least one target result among them. The execution module 705 is configured to execute the target operation corresponding to the target result.
In one implementation, the presentation module 703 is configured to broadcast a search result and/or a result of executing the target operation.
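The module split of apparatus 600 can be mirrored as a thin wrapper that wires injected acquisition, search, and presentation callables together. This is an illustrative structure only, not the patent's implementation:

```python
class VoiceInteractionApparatus:
    """Illustrative wiring of the acquisition/search/presentation modules;
    each module is injected as a callable."""

    def __init__(self, acquire, search, present):
        self.acquire = acquire
        self.search = search
        self.present = present

    def run(self, audio):
        # Acquire the instruction, determine results, then present them.
        return self.present(self.search(self.acquire(audio)))
```

Injecting the modules keeps each step independently replaceable, e.g. swapping a local search module for a cloud-backed one without touching the rest.

```python
app = VoiceInteractionApparatus(
    acquire=lambda audio: audio.upper(),      # stand-in for ASR
    search=lambda text: [text],               # stand-in for search
    present=lambda results: "|".join(results) # stand-in for the HUD
)
```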
The voice interaction device in the embodiment of the present application may be a device, and may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), and the like, and the embodiments of the present application are not limited in particular.
The voice interaction device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The apparatus 600 or 700 according to the embodiments of the present application may refer to the processes corresponding to methods 200 to 500 of the embodiments of the present application; each unit/module and the other operations and/or functions in the apparatus 600 or 700 implement the corresponding processes in methods 200 to 500 and achieve the same or equivalent technical effects, which are not repeated here for brevity.
FIG. 8 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Fig. 8 shows that the embodiment of the present application further provides an electronic device, which may be a terminal device or a server, and which includes: an antenna 801, a radio frequency device 802, a baseband device 803, a network interface 804, a memory 805, and a processor 806. The memory 805 stores programs or instructions executable on the processor 806, and when executed by the processor 806, the programs or instructions implement the following:
the processor 806 is configured to obtain a voice instruction of a user; determining a search result according to the voice instruction; and displaying the search result through a head-up display of the vehicle.
In one implementation, the processor 806 is configured to receive, in a case that the search result includes a plurality of results, a voice selection by the user of at least one target result among the plurality of results; and to execute a target operation corresponding to the target result.
In one implementation, the processor 806 is configured to present key information of the search result, where the key information of the search result is determined according to information corresponding to the search result.
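The key-information step above could be sketched as a simple field filter. The concrete field names (`"name"`, `"distance"`) are assumptions for illustration; the patent only states that the key information is determined from the information corresponding to the search result:

```python
def extract_key_info(result: dict, keys: tuple = ("name", "distance")) -> dict:
    """Keep only the fields compact enough to show on a head-up display.

    The field names in `keys` are hypothetical; a real implementation would
    choose whichever fields of the search result matter to the driver.
    """
    return {k: result[k] for k in keys if k in result}
```

The design intent is that a full search record (address, phone number, reviews, and so on) is reduced to the few fields a driver can absorb at a glance.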
In one implementation, the processor 806 is configured to, in a case that the search result includes a plurality of results, perform voice recognition on the voice instruction through a cloud platform to obtain text information from the voice recognition, and determine the search result according to the text information.
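The recognition-then-search step can be sketched as follows. The `cloud_asr` and `search` callables stand in for the cloud platform's speech-to-text service and the vehicle's search backend; the patent specifies neither interface, so both are injected here as assumptions:

```python
from typing import Callable, List


def recognize_and_search(
    audio: bytes,
    cloud_asr: Callable[[bytes], str],
    search: Callable[[str], List[str]],
) -> List[str]:
    """Sketch of the step described above: recognize speech via a cloud
    platform to obtain text information, then determine the search result
    from that text. Both callables are hypothetical stand-ins."""
    text = cloud_asr(audio)   # voice recognition performed on the cloud platform
    return search(text)       # determine the search result from the text
```

In practice `cloud_asr` would be an HTTP call to the cloud platform and `search` a query against a map or POI service; decoupling them as callables keeps the on-vehicle logic testable offline.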
In one implementation, the processor 806 is configured to present the search result; and/or broadcast a result of executing the target operation.
For the electronic device 800 according to the embodiment of the present application, reference may be made to the processes corresponding to the methods 200 to 500 of the embodiments of the present application; each unit/module and the other operations and/or functions in the electronic device 800 implement the corresponding processes in the methods 200 to 500 and achieve the same or equivalent technical effects, and no further description is provided here for brevity.
The embodiment of the present application further provides a computer-readable storage medium storing a program or instructions; when executed by a processor, the program or instructions implement each process of the voice interaction method embodiments above and achieve the same technical effect, and details are not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiments. The computer-readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement each process of the voice interaction method embodiments above with the same technical effect; details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip.
Embodiments of the present application further provide a computer program product comprising a program or instructions stored on a memory and executable on a processor; when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed substantially simultaneously or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A voice interaction method is applied to a vehicle, and comprises the following steps:
acquiring a voice instruction of a user;
determining a search result according to the voice instruction;
and displaying the search result through a head-up display of the vehicle.
2. The method of claim 1, wherein after displaying the search result, the method further comprises:
receiving, in a case that the search result includes a plurality of results, a voice selection by a user of at least one target result among the plurality of results;
and executing the target operation corresponding to the target result.
3. The method of claim 1, wherein the displaying the search result comprises:
and displaying key information of the search result, wherein the key information of the search result is determined according to the information corresponding to the search result.
4. The method of claim 1, wherein determining search results based on the voice instruction comprises:
in a case that the search result includes a plurality of results, performing voice recognition on the voice instruction through a cloud platform to obtain text information from the voice recognition;
and determining the search result according to the text information.
5. The method of claim 2, wherein after displaying the search result, the method further comprises:
broadcasting the search result; and/or
And broadcasting an execution result of executing the target operation.
6. A voice interaction apparatus, comprising:
the acquisition module is used for acquiring a voice instruction of a user;
the searching module is used for determining a searching result according to the voice instruction;
and the display module is used for displaying the search result through a head-up display of the vehicle.
7. The apparatus of claim 6, wherein the apparatus further comprises:
the receiving module is used for receiving, in a case that the search result includes a plurality of results, a voice selection by a user of at least one target result among the plurality of results;
and the execution module is used for executing the target operation corresponding to the target result.
8. The apparatus of claim 7, wherein the display module is to:
and displaying key information of the search result, wherein the key information of the search result is determined according to the information corresponding to the search result.
9. An electronic device comprising a processor and a memory arranged to store computer-executable instructions, wherein the processor, when executing the executable instructions, implements the voice interaction method of any of claims 1-5.
10. A computer-readable storage medium storing one or more programs which, when executed by a processor, implement the voice interaction method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011579339.3A CN112820284A (en) | 2020-12-28 | 2020-12-28 | Voice interaction method and device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112820284A true CN112820284A (en) | 2021-05-18 |
Family
ID=75854123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011579339.3A Pending CN112820284A (en) | 2020-12-28 | 2020-12-28 | Voice interaction method and device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112820284A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113220265A (en) * | 2021-05-28 | 2021-08-06 | 海信集团控股股份有限公司 | Automobile and voice response text display method |
CN113436628A (en) * | 2021-08-27 | 2021-09-24 | 广州小鹏汽车科技有限公司 | Voice interaction method, device, system, vehicle and medium |
CN114205371A (en) * | 2021-11-29 | 2022-03-18 | 中汽研(天津)汽车工程研究院有限公司 | System and method for quickly interacting data between vehicle end and server end |
CN116558536A (en) * | 2023-04-27 | 2023-08-08 | 中国第一汽车股份有限公司 | Vehicle navigation voice interaction method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090326936A1 (en) * | 2007-04-17 | 2009-12-31 | Honda Motor Co., Ltd. | Voice recognition device, voice recognition method, and voice recognition program |
CN103699023A (en) * | 2013-11-29 | 2014-04-02 | 安徽科大讯飞信息科技股份有限公司 | Multi-candidate POI (Point of Interest) control method and system of vehicle-mounted equipment |
CN107885810A (en) * | 2017-01-24 | 2018-04-06 | 问众智能信息科技(北京)有限公司 | The method and apparatus that result for vehicle intelligent equipment interactive voice is shown |
WO2018099000A1 (en) * | 2016-12-01 | 2018-06-07 | 中兴通讯股份有限公司 | Voice input processing method, terminal and network server |
CN109101613A (en) * | 2018-08-06 | 2018-12-28 | 斑马网络技术有限公司 | Interest point indication method and device, electronic equipment, storage medium for vehicle |
CN109572702A (en) * | 2017-09-25 | 2019-04-05 | Lg电子株式会社 | Controller of vehicle and vehicle including the controller of vehicle |
CN110795608A (en) * | 2018-08-02 | 2020-02-14 | 声音猎手公司 | Visually presenting information related to natural language dialog |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112820284A (en) | Voice interaction method and device, electronic equipment and computer readable storage medium | |
JP5616142B2 (en) | System for automatically posting content using in-vehicle devices linked to mobile devices | |
CN106445296B (en) | Method and device for displaying vehicle-mounted application program icons | |
KR101525842B1 (en) | Image processing for image dislay apparatus mounted to vehicle | |
KR101513643B1 (en) | Information providing apparatus and method thereof | |
KR101602268B1 (en) | Mobile terminal and control method for the mobile terminal | |
CN113031905A (en) | Voice interaction method, vehicle, server, system and storage medium | |
CN104166645A (en) | Interest point and path information obtaining method and vehicle-mounted electronic equipment | |
CN108627176B (en) | Screen brightness adjusting method and related product | |
CN105988581A (en) | Voice input method and apparatus | |
US20150187351A1 (en) | Method and system for providing user with information in vehicle | |
CN107702725B (en) | Driving route recommendation method and device | |
CN103699023A (en) | Multi-candidate POI (Point of Interest) control method and system of vehicle-mounted equipment | |
US20180151065A1 (en) | Traffic Information Update Method and Apparatus | |
CN111722825A (en) | Interaction method, information processing method, vehicle and server | |
CN111913769A (en) | Application display method, device and equipment | |
CN107885810A (en) | The method and apparatus that result for vehicle intelligent equipment interactive voice is shown | |
CN104700751A (en) | Scenic spot information acquisition method and device | |
KR20180069477A (en) | Method and vehicle device controlling refined program behavior using voice recognizing | |
CN108595141A (en) | Pronunciation inputting method and device, computer installation and computer readable storage medium | |
KR101553952B1 (en) | Control method of mobile terminal and apparatus thereof | |
CN114115790A (en) | Voice conversation prompting method, device, equipment and computer readable storage medium | |
US20180075164A1 (en) | Multi-character string search engine for in-vehicle information system | |
CN116368353A (en) | Content aware navigation instructions | |
KR20100079091A (en) | Navigation system and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210518 |