CN114171025A - Automatic driving method, device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN114171025A
CN114171025A
Authority
CN
China
Prior art keywords
information
template
voice
driving
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111501607.4A
Other languages
Chinese (zh)
Inventor
曾曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avatr Technology Chongqing Co Ltd
Original Assignee
Avatr Technology Chongqing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avatr Technology Chongqing Co Ltd filed Critical Avatr Technology Chongqing Co Ltd
Priority to CN202111501607.4A priority Critical patent/CN114171025A/en
Publication of CN114171025A publication Critical patent/CN114171025A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/001 - Planning or execution of driving tasks
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 - Route searching; Route guidance
    • G01C 21/3407 - Route searching; Route guidance specially adapted for specific applications
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 - Route searching; Route guidance
    • G01C 21/36 - Input/output arrangements for on-board computers
    • G01C 21/3605 - Destination input or retrieval
    • G01C 21/3608 - Destination input or retrieval using speech input, e.g. using speech recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Navigation (AREA)

Abstract

The embodiment of the invention relates to the field of automobile technology and discloses an automatic driving method comprising the following steps: acquiring voice information; obtaining, from a preset information template library, a target information template matched with the voice information; generating corresponding path planning information according to the matched target information template, the voice information, and map navigation information; generating a driving execution instruction corresponding to the voice information according to the path planning information; and controlling the vehicle to execute the matched driving operation based on the generated driving execution instruction. By applying this technical scheme, a collected voice instruction can be converted into map information without changing the pre-stored information of the original map, so that assisted driving is controlled by the intelligent driving system, and the flexibility and intelligence of intelligent driving system control are improved. The invention also provides an automatic driving device, an electronic device, and a computer readable storage medium.

Description

Automatic driving method, device, electronic equipment and computer readable storage medium
Technical Field
The embodiment of the invention relates to the technical field of automobiles, in particular to an automatic driving method, an automatic driving device, electronic equipment and a computer readable storage medium.
Background
With social and economic development, people in the automobile field increasingly pursue functionality and advanced technology, and technological innovations in the field are gradually attracting attention. In particular, intelligent driving assistance technology is currently developing rapidly: L2 and even higher-level assisted driving has become common in passenger cars. Because it saves the driver time and effort and is convenient and comfortable to operate, assisted driving is increasingly popular and plays an ever more important role in everyday driving.
In the prior art, intelligent driving assistance is mainly based on the map navigation system of the automobile: after a specific path plan is made, the automatic driving system can start piloted assisted driving according to the path data planned by the map. That is, assisted driving can be realized only when the map navigation system provides a corresponding path plan; once the navigation system cannot work normally, the intelligent driving assistance system cannot work normally either, and truly intelligent driving cannot be realized. It is therefore important to improve the control diversity of the intelligent driving assistance system and thereby raise the degree of driving intelligence.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide an automatic driving method that addresses the insufficient intelligence of driving assistance systems in the prior art and provides a more intelligent control method, thereby improving the flexibility and intelligence of the driving assistance system.
According to an aspect of an embodiment of the present invention, there is provided an automatic driving method, including:
acquiring voice information, and acquiring a target information template matched with the voice information according to a preset information template library; the information template library comprises at least one pre-created information template;
generating path planning information according to the target information template, the voice information and the map navigation information;
generating a driving execution instruction according to the path planning information and the map navigation information;
and controlling the vehicle to perform driving operation matched with the driving execution instruction based on the driving execution instruction.
In an optional manner, acquiring the voice information and obtaining a target information template matched with the voice information according to the preset information template library comprises:
processing and recognizing the voice information to obtain a voice recognition result;
and retrieving the target information template matched with the voice information in the information template library according to the voice recognition result.
In an alternative, the voice recognition result includes a voice text, and the voice text includes at least one of direction information, location information, and data information; retrieving the target information template matched with the voice information from the information template library according to the voice recognition result specifically includes:
retrieving the target information template matched with the voice information from the information template library according to at least one of the direction information, location information, and data information in the voice text.
In an alternative mode, the method creates any one of the information templates in advance by:
establishing an information element component library; the information element component library comprises a plurality of information element components;
and selecting at least one information element component corresponding to the information template from the information element component library, and creating the information template according to the at least one information element component.
In an optional manner, the information element components include direction type information, location type information, and data type information, and selecting at least one information element component corresponding to the information template from the information element component library and creating the information template according to the at least one information element component specifically includes:
generating the information template from the direction type information, the location type information, and/or the data type information.
In an optional manner, the generating the path planning information according to the target information template, the voice information, and the map navigation information includes:
generating starting point information and end point information of the vehicle according to the target information template, the parameter information and the map navigation information;
generating navigation information of the vehicle based on the start point information and the end point information;
wherein the navigation information comprises at least one of: distance to the next intersection, driving direction of the next intersection, road name of the next intersection, and position of the end point.
In an alternative, the map navigation information includes electronic horizon data information; generating a driving execution instruction according to the path planning information and the map navigation information, specifically comprising:
and generating the driving execution instruction according to the path planning information and the electronic horizon data information.
According to another aspect of an embodiment of the present invention, there is provided an automatic driving apparatus including:
the acquisition module is used for acquiring the voice information;
the matching module is used for acquiring a target information template matched with the voice information according to a preset information template base; the information template library comprises at least one pre-created information template;
the generating module is used for generating path planning information according to the target information template, the voice information and the map navigation information; the generating module is further used for generating a driving execution instruction according to the path planning information and the map navigation information;
and the control module is used for controlling the vehicle to execute the driving operation matched with the driving execution instruction based on the driving execution instruction.
According to another aspect of the embodiments of the present invention, there is provided an electronic device including:
the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation of the automatic driving method.
According to another aspect of the embodiments of the present invention, there is provided a computer-readable storage medium having at least one executable instruction stored therein, wherein the executable instruction, when run on an automatic driving apparatus, causes an electronic device as described above to perform the operations of the automatic driving method as described above.
According to the embodiment of the invention, the voice information of the driver is acquired, the target information template matched with the voice information is acquired according to the preset information template base, the path planning information is generated according to the matched target information template, the voice information and the map navigation information, the driving execution instruction is generated according to the path planning information and the map navigation information, and the vehicle is controlled to execute the matched driving operation.
Thus, the embodiment of the invention collects the driver's voice information, converts it into corresponding map information through a preset template, and generates an automatic driving instruction, thereby controlling the vehicle to carry out the corresponding automatic driving operation. The collected voice instruction is converted into path planning information suitable for the map navigation system without changing the pre-stored information of the original map: the required route is turned into specific map information through the voice information, and intelligent assisted driving is realized through the intelligent driving system. The automatic driving system can therefore be controlled directly by the driver's voice instructions, which improves the control flexibility, intelligence, and operability of the intelligent driving system.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be understood more clearly and implemented according to the contents of the description, and in order to make the above and other objects, features, and advantages of the embodiments more readily apparent, the detailed description of the invention is provided below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic flow chart illustrating a first embodiment of the automatic driving method provided by the present invention;
FIG. 2 is a schematic flow chart illustrating another embodiment of the automatic driving method provided by the present invention;
FIG. 3 is a schematic flow chart illustrating another embodiment of the automatic driving method provided by the present invention;
FIG. 4 is a schematic flow chart illustrating another embodiment of the automatic driving method provided by the present invention;
FIG. 5 is a schematic flow chart illustrating another embodiment of the automatic driving method provided by the present invention;
FIG. 6 is a schematic flow chart illustrating another embodiment of the automatic driving method provided by the present invention;
FIG. 7 is a schematic flow chart illustrating another embodiment of the automatic driving method provided by the present invention;
FIG. 8 shows a schematic view of an embodiment of the automatic driving apparatus provided by the present invention;
FIG. 9 shows a schematic view of an embodiment of the electronic device provided by the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
Fig. 1 shows a flow chart of a first embodiment of the automatic driving method of the present invention, which is performed by an electronic device. As shown in Fig. 1, the method comprises the following steps:
step S10: acquiring voice information, and acquiring a target information template matched with the voice information according to a preset information template library; the information template library comprises at least one pre-created information template.
The voice information is acquired through a voice interaction process: the user first wakes up the voice acquisition module with a voice keyword, and then speaks the operation to be executed. After the voice signal is collected, it is converted into at least one intermediate parameter, such as text, a picture, or a video, so that the actual control content contained in the voice signal is obtained. It should be noted that the voice information may be provided by any person in the vehicle, not only the driver; for example, the driver may ask other passengers to select a destination, and a rear-seat passenger may then input voice through the voice acquisition module.
It can be understood that the preset information template library includes at least one pre-created information template. After the voice acquisition module obtains the control content of the voice, that content needs to be matched one by one against the information templates in the pre-created template library to determine the target information template that finally fits the voice content.
It can also be understood that the voice acquisition module can obtain multiple rounds of voice information and combine the information templates corresponding to each round to determine the final target information template. For example, the module determines a first information template from the first round of voice information and the preset template library; when the user inputs a second round of voice, it determines a second information template in the same way, and then generates the final target information template by combining the first and second information templates. It should be noted that the voice information may be input several times within a preset time: for example, if the preset time is 30 s and the driver inputs two pieces of voice information within that time, two information templates are matched after the two pieces of voice information are acquired, and the final target information template is determined from them. The multiple pieces of voice information may also come from different people in the vehicle, which is not limited here.
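Step S10 can be sketched in code. The following is a minimal, hypothetical illustration assuming a regex-based template library; the template names, patterns, and English phrasing are invented for illustration, since the patent does not specify a concrete matching mechanism.

```python
import re
from dataclasses import dataclass

@dataclass
class InfoTemplate:
    name: str
    pattern: str  # regex with named slots for the template's parameters

# Assumed pre-created information template library (illustrative only).
TEMPLATE_LIBRARY = [
    InfoTemplate("turn_at_nth",
                 r"turn (?P<direction>left|right) at the (?P<data>\d+)\w* intersection"),
    InfoTemplate("drive_along", r"drive along (?P<place>.+)"),
]

def match_target_template(speech_text: str):
    """Match the recognized text one by one against the template library."""
    for template in TEMPLATE_LIBRARY:
        m = re.search(template.pattern, speech_text.lower())
        if m:
            return template.name, m.groupdict()
    return None, {}
```

A second round of voice input would simply be matched the same way and its slots merged with the first round's result.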
Step S20: and generating path planning information according to the target information template, the voice information and the map navigation information.
In a specific embodiment, the target information template is matched with the voice information, and the matched result is used together with the map navigation information to generate the final path planning information. According to the determined information template, the vehicle's map navigation system can be called to identify the corresponding parameters in the template and the voice information and convert them into specific path information. The path information may include a specific driving path, for example: taking the current vehicle position as the starting point, finding the second intersection along the current road, and taking the point 100 metres after a left turn as the end point.
It can be understood that, in another embodiment, the finally generated path planning information may further include a specific driving route and specific map object information. The specific driving route may include the start and end points of the drive as well as the intersections passed, specific road names, lane information, steering information, and the like; the specific map object information is output according to the content matched by the target information template and the voice information, so as to simulate a more specific driving environment and improve driving safety.
It can be understood that when the occupant's voice includes only the start point and the road sections passed during driving, without a specific end point, the generated map information corresponding to the voice information may likewise contain no specific end-point position; the point at which the vehicle, controlled according to the voice content, actually stops is then taken as the end point. Similarly, when the voice includes only the road sections passed, without a specific start or end point, the point from which the vehicle actually starts is taken as the start point and the point at which driving actually ends is taken as the end point. The voice may also include only the start and end points, without any information about the road sections in between; in that case the route can be planned according to the existing map information.
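An illustrative sketch of step S20 under stated assumptions: the matched template's parameters are combined with the current vehicle position from the map navigation system to form path planning information. The dictionary shapes and field names below are invented for illustration and are not taken from the patent.

```python
def build_path_plan(template_name, slots, current_position):
    """Build start/end path planning information from a matched template."""
    plan = {"start": current_position}
    if template_name == "turn_at_nth":
        # e.g. "find the second intersection along the current road and take
        # the point after a left turn as the end point"
        plan["end"] = {"intersection_index": int(slots["data"]),
                       "turn": slots["direction"]}
    elif template_name == "drive_along":
        # No explicit end point: as described above, the point where driving
        # actually ends is later taken as the end point.
        plan["end"] = {"follow_road": slots["place"]}
    return plan
```

When the voice contains no explicit end point, the plan simply records the road to follow, mirroring the open-ended case described above.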
Step S30: and generating a driving execution instruction corresponding to the voice information according to the path planning information and the map navigation information.
A specific driving mode corresponding to the target route is determined according to the path planning information and the final driving route, and control parameters for each vehicle motion module are generated according to the determined driving mode and the route parameters, where the route parameters include one or more of the intersections passed during driving, specific road names, lane information, and steering information.
In some other embodiments, the map information generated from the voice information is combined with environment parameters generated by the on-board system, where the environment parameters include one or more of obstacle information, driving information of other vehicles, and weather information in the actual driving scene; taking the environment parameters into account during driving yields a more reasonable final driving operation instruction.
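A hedged sketch of step S30: a driving execution instruction is derived from the path plan and adjusted by environment parameters from the on-board system. The instruction fields, speeds, and thresholds below are assumptions for illustration, not values from the patent.

```python
def build_driving_instruction(plan, environment=None):
    """Turn path planning information into per-module control parameters."""
    instruction = {
        "steering": plan["end"].get("turn", "straight"),
        "target_speed_kph": 40,  # assumed nominal speed
    }
    env = environment or {}
    # Environment parameters (obstacles, other vehicles, weather) make the
    # final driving operation instruction more reasonable.
    if env.get("obstacle_ahead") or env.get("weather") == "rain":
        instruction["target_speed_kph"] = 20
    return instruction
```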
Step S40: and controlling the vehicle to perform the driving operation matched with the driving execution instruction based on the driving execution instruction.
After the final path planning information and the corresponding driving execution instruction are generated and sent to the intelligent driving system, the specific driving operation is carried out by the corresponding control module. For example, the driving operation finally output according to the voice information may be: drive along the current road to the second intersection, turn left, and stop after 100 metres.
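A minimal sketch of step S40, assuming invented control-module names: the driving execution instruction is expanded into an ordered list of actions for the intelligent driving system's control modules.

```python
def dispatch(instruction):
    """Expand a driving execution instruction into control-module actions."""
    actions = [("planning", "follow_route")]
    if instruction.get("steering") in ("left", "right"):
        actions.append(("steering", instruction["steering"]))
    if "stop_after_m" in instruction:
        actions.append(("braking", f"stop after {instruction['stop_after_m']} m"))
    return actions
```

For the example above (drive to the second intersection, turn left, stop after 100 metres), the instruction would expand into planning, steering, and braking actions in order.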
In this way, the driver's voice information is collected, converted into corresponding map information through the preset template, and used to generate an automatic driving instruction that controls the vehicle to perform the corresponding automatic driving operation. The required route can be converted into specific map information through voice alone, without changing the original pre-stored map information, and intelligent assisted driving is realized through the intelligent driving system. The automatic driving system is thus controlled directly by the driver's voice instructions, improving the control flexibility, intelligence, and operability of the intelligent driving system.
Fig. 2 shows a flow chart of another embodiment of the automatic driving method of the present invention, which is performed by an electronic device. As shown in Fig. 2, step S10 of the method includes the following steps:
step S110: and processing and recognizing the voice information to obtain a voice recognition result.
When the voice acquisition module receives the user's wake-up voice, the direction of the wake-up voice is determined. The voice, after noise reduction, can then be recognized by a speech recognition algorithm; the resulting voice recognition result may be text or an image, through which the specific content of the user's voice information is collected.
Step S120: and searching a target information template matched with the voice information in the information template library according to the voice recognition result.
In the technical scheme provided by the embodiment of the invention, an information template library is preset; the library includes at least one pre-created information template, and each information template corresponds to a defined type. When matching, the voice information to be matched is first obtained from the vehicle; the preset template library is then queried to find the information template whose content has the highest matching degree with the voice information to be matched; finally, this template is determined as the target information template matched with the voice information.
Specifically, in this embodiment, the user wakes up the voice acquisition module, the user's voice information is processed and recognized to obtain a voice recognition result, the result is searched against the information template library, and the target information template finally matched with the voice information is determined. Matching the acquired voice information against the preset information template library improves the accuracy of target template matching and hence the accuracy with which the voice information is interpreted.
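The "highest matching degree" retrieval in step S120 can be sketched under the assumption that each template declares the slot types it requires and is scored by overlap with the slot types found in the speech text. This scoring rule is an invented illustration, not the patent's method.

```python
def best_template(found_slots: set, templates: dict) -> str:
    """Return the template name whose required slots best match found_slots.

    `templates` maps a template name to the set of slot types it requires.
    """
    def score(name: str) -> int:
        required = templates[name]
        # Reward covered slots; penalize slots the template needs but the
        # speech text does not provide.
        return len(found_slots & required) - len(required - found_slots)
    return max(templates, key=score)
```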
Fig. 3 shows a flow chart of another embodiment of the automatic driving method of the present invention, which is performed by an electronic device. As shown in Fig. 3, in step S120 the speech recognition result includes a voice text, the voice text includes at least one of direction information, location information, and data information, and step S120 includes the following step:
step S121: and retrieving a target information template matched with the voice information in the information template library according to at least one of the direction information, the place information and the data information in the text information.
The voice recognition result includes a voice text, and the voice text includes at least one of direction information, location information, and data information. For example, suppose the text recognized from the voice input is: "turn right at the second intersection". The voice text then contains data information ("second intersection") and direction information ("turn right"). Matching in the preset information template library according to this data and direction information, the finally matched template is: "turn [1] at the [0]th intersection", where [0] is data and [1] is direction.
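A sketch of how a simplified variant of the bracketed template notation ("turn [1] at the [0] intersection", with [0] a data slot and [1] a direction slot) could be compiled into a matcher. The slot-type regexes and the compilation scheme are assumptions for illustration.

```python
import re

# Assumed slot-type vocabulary (illustrative).
SLOT_PATTERNS = {
    "data": r"(\d+)(?:st|nd|rd|th)?",   # matches "2", "2nd", ...
    "direction": r"(left|right)",
    "place": r"([\w ]+)",
}

def compile_template(template: str, slot_types: list) -> "re.Pattern":
    """Replace each [i] marker with the regex for its declared slot type."""
    pattern = re.escape(template)
    for i, slot_type in enumerate(slot_types):
        pattern = pattern.replace(re.escape(f"[{i}]"), SLOT_PATTERNS[slot_type])
    return re.compile(pattern, re.IGNORECASE)

matcher = compile_template("turn [1] at the [0] intersection",
                           ["data", "direction"])
```

Because [1] appears before [0] in this template, the direction is capture group 1 and the data value is capture group 2.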
Specifically, in this embodiment, the voice recognition result is output as a voice text, the voice text contains the commonly used direction, location, and data information, and matching in the preset information template library according to this information quickly determines the final target information template and improves the accuracy of matching the voice information.
Fig. 4 shows a flow chart of another embodiment of the automatic driving method of the present invention, which is performed by an electronic device. As shown in Fig. 4, in step S10 of the method, any information template is created in advance by the following steps:
step S130: establishing an information element component library; the information element component library comprises a plurality of information element components;
and selecting at least one information element component corresponding to the information template from the information element component library, and creating the information template according to the at least one information element component.
The number of information element components in a created information template may be one or more. When there are fewer information elements, the driving information corresponding to the template is simpler and suits simple driving actions; conversely, more information elements correspond to more complicated driving actions. For example, the created information template may be: "drive along [0]", where [0] is a location; the template established in this case describes a simple driving operation, namely driving along a specific road.
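An illustrative sketch of pre-creating information templates from an element component library (step S130). The component names and the template representation are assumptions for illustration.

```python
# Assumed element component library (illustrative only).
ELEMENT_LIBRARY = {"direction", "place", "data"}

def create_template(text: str, components: list) -> dict:
    """Create an information template from at least one element component."""
    if not components:
        raise ValueError("a template needs at least one element component")
    for component in components:
        if component not in ELEMENT_LIBRARY:
            raise ValueError(f"unknown element component: {component}")
    return {"text": text, "components": components}

# One component yields a simple driving action; more components yield
# more complex actions.
simple = create_template("drive along [0]", ["place"])
complex_action = create_template("go straight along [0] then turn [1]",
                                 ["place", "direction"])
```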
Specifically, in this embodiment, the information template is created from at least one information element component, so that more types of information templates can be created and more driving actions can be adapted, improving the accuracy of matching the content corresponding to the acquired voice information.
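The element-component approach above can be sketched as follows: an information template is assembled from typed components drawn from a component library, and each slot in the template text must use a known component type. The component names and template strings are assumptions for illustration.

```python
# Hypothetical information element component library (direction / location /
# data are the classes named in the patent; the API shape is an assumption).
ELEMENT_COMPONENTS = {"direction", "location", "data"}

def create_template(text: str, slot_types: dict) -> dict:
    """Create an information template; every slot must use a known component
    and must actually appear in the template text."""
    for slot, kind in slot_types.items():
        if kind not in ELEMENT_COMPONENTS:
            raise ValueError(f"unknown information element component: {kind}")
        if f"[{slot}]" not in text:
            raise ValueError(f"slot [{slot}] not present in template text")
    return {"text": text, "slots": slot_types}

# A one-component template describes a simple driving action:
simple = create_template("drive along [0]", {0: "location"})
# A multi-component template describes a more complex action:
complex_t = create_template("go straight along [0] then turn [1]",
                            {0: "location", 1: "direction"})
```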
FIG. 5 shows a flow diagram of another embodiment of the automatic driving method of the present invention, which is performed by an electronic device. As shown in fig. 5, in step S130 of the method, the information element components include direction class information, location class information, and data class information, and step S130 includes the following steps:
step S131: generating an information template from the direction class information, and/or the location class information, and/or the data class information.
In this embodiment, the information element components include direction class information, location class information, and data class information, and the number of information elements contained in a given information template depends on which components it uses. An information template may contain only one information element component, for example: drive along [0], where [0] is a location; this template contains only location class information, although a template may equally contain only direction class information or only data class information. A template may also contain two kinds of information element components: direction and location, direction and data, or location and data. For example: go straight along [0] and then turn [1], where [0] is the location and [1] is the direction. Finally, an information template may contain all three kinds of information element components.
In some other embodiments, other information element components may also be included, such as time class information or attribute class information, etc.
Specifically, in this embodiment, the information element components are defined as combinations of direction class information, location class information, and data class information, so that more common driving actions can be adapted and the accuracy of the content matched to the acquired voice information is improved in a more targeted way.
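Since a template may draw on one, two, or all three component classes, the admissible component sets can be enumerated directly. This small sketch (an assumption about the intended combinatorics, not code from the patent) lists the seven possibilities:

```python
from itertools import combinations

# The three component classes named in the embodiment.
COMPONENTS = ("direction", "location", "data")

def admissible_component_sets():
    """All non-empty subsets of the component classes a template may use."""
    return [set(c) for r in (1, 2, 3) for c in combinations(COMPONENTS, r)]

sets_ = admissible_component_sets()
# 3 single-component sets + 3 pairs + 1 triple = 7 admissible combinations
```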
FIG. 6 shows a flow chart of another embodiment of the automatic driving method of the present invention, which is performed by an electronic device. As shown in fig. 6, in step S30 of the method, the voice information includes parameter information, and step S30 includes the following steps:
step S310: generating starting point information and end point information of the vehicle according to the target information template, the parameter information and the map navigation information;
path planning information is generated based on the start point information and the end point information.
Wherein the path planning information comprises at least one of: distance to the next intersection, driving direction of the next intersection, road name of the next intersection, and position of the end point.
It should be noted that, after the map navigation system is called, the finally determined target information template may be matched with the parameter information in the voice information to generate, within the map navigation system, map information usable by the map navigation module; this map information may include at least one of a real-time position, end point information, road information along the route, and the like.
In some other embodiments, only a starting point position or only an ending point position of the vehicle may be generated according to the target information template, the parameter information, and the map navigation information, and specific path planning information may then be generated from the target information template and the parameter information. That is, when the starting point or the ending point information is absent, a specific driving route may still be produced from the available information.
In some other embodiments, after the map navigation system is invoked, the starting position and the ending position of the vehicle may be determined according to the position information and the auxiliary information in the matched target information template, so as to determine a final driving route; alternatively, the final driving route may be determined directly from the position information and the auxiliary information in the information template, in which case the driving route generated at this point is the final path planning information.
It can be understood that the starting point of the vehicle may be the vehicle's current real-time position: when the vehicle is stationary, the starting point is the current position determined by the positioning system; when the vehicle is in motion, the starting point is the real-time position at which the map navigation system receives the template information and needs to produce the specific path information.
It is to be understood that the generated specific path information may include navigation information, which may include at least one of: the distance to the next intersection, the driving direction at the next intersection, the road name of the next intersection, and the position of the end point, so that the specific route is planned. In other words, a specific driving route may be established even in the absence of a starting point or an ending point, and that route may include at least one of the items listed above.
Specifically, in this embodiment, after the map navigation system is called, map information usable by the map navigation module is generated by matching the information in the target information template with the parameter information. That is, the map navigation system determines the start position and the end position of the vehicle from the target information template and the parameter information, and generates the planned path required by the vehicle based on these positions. Voice information can thus be converted into specific map information without calling the original pre-stored map information, which realizes path planning, diversifies the ways map information is acquired during intelligent driving, and offers users a novel control experience.
FIG. 7 shows a flow chart of another embodiment of the automatic driving method of the present invention, which is performed by an electronic device. As shown in fig. 7, in step S30 of the method, the map navigation information includes electronic horizon data information, and step S30 further includes the following step:
step S320: and generating a driving execution instruction according to the path planning information and the electronic horizon data information.
In some other embodiments, electronic horizon data may be retrieved on the basis of the generated final driving route and transmitted to the control system of the vehicle. For example, by acquiring the paths reachable by all vehicles within a preset range ahead of and behind the vehicle, and predicting from these paths the various road scenes the vehicle may encounter ahead, the acceleration, deceleration, steering, and the like of the vehicle can be controlled. Finally, the real-time conditions of the vehicle obtained by the sensors of its various modules are integrated, so that assisted driving is realized and a better driving state is achieved.
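One way this step could be sketched: pick a longitudinal action from the constraints in the electronic horizon ahead, then attach the planned turn. The horizon format (a list of upcoming segments with speed limits) and the decision thresholds are assumptions, not the patent's specification.

```python
def driving_instruction(plan_direction: str, horizon: list,
                        current_speed_kmh: float) -> dict:
    """Combine path planning info (the planned turn) with electronic horizon
    data (upcoming speed constraints) into one driving execution instruction."""
    # Slow down to the tightest constraint within the electronic horizon;
    # with no horizon data ahead, hold the current speed.
    target = min((seg["speed_limit_kmh"] for seg in horizon),
                 default=current_speed_kmh)
    action = ("decelerate" if target < current_speed_kmh
              else "accelerate" if target > current_speed_kmh else "hold")
    return {"longitudinal": action, "target_speed_kmh": target,
            "steer": plan_direction}

cmd = driving_instruction("right",
                          [{"speed_limit_kmh": 60}, {"speed_limit_kmh": 30}],
                          50.0)
# cmd == {"longitudinal": "decelerate", "target_speed_kmh": 30, "steer": "right"}
```

A production system would of course fuse this with the real-time sensor data mentioned above rather than rely on speed limits alone.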
Fig. 8 shows a schematic configuration of an embodiment of the automatic driving apparatus of the present invention. As shown in fig. 8, the apparatus 100 includes:
an obtaining module 110, configured to obtain voice information;
the matching module 120 is configured to obtain a target information template matched with the voice information according to a preset information template library; the information template library comprises at least one pre-created information template;
a generating module 130, configured to generate path planning information according to the target information template, the voice information, and the map navigation information;
the generating module 130 is further configured to generate a driving execution instruction according to the path planning information and the map navigation information;
and the control module 140 is used for controlling the vehicle to execute the driving operation matched with the driving execution instruction based on the driving execution instruction.
Specifically, the device described in fig. 8 can convert the route to be traveled into specific map information through voice information without depending on the original map information, and assisted driving is realized through the intelligent driving system. The automatic driving system is thus controlled directly by the driver's voice instructions, improving the intelligence, convenience, and operability of the intelligent driving system.
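The module structure of fig. 8 can be read as a simple pipeline: acquisition, matching, generation (plan, then instruction), and control. The following sketch wires such a pipeline together; the interfaces of the individual modules are assumptions for illustration, not the patent's concrete implementation.

```python
class AutoDrivePipeline:
    """Hypothetical wiring of the modules in fig. 8 into one control loop."""

    def __init__(self, acquire, match, generate_plan, generate_cmd, control):
        self.acquire = acquire              # obtaining module 110
        self.match = match                  # matching module 120
        self.generate_plan = generate_plan  # generating module 130 (path plan)
        self.generate_cmd = generate_cmd    # generating module 130 (instruction)
        self.control = control              # control module 140

    def run_once(self, map_nav):
        voice = self.acquire()
        template = self.match(voice)
        plan = self.generate_plan(template, voice, map_nav)
        cmd = self.generate_cmd(plan, map_nav)
        return self.control(cmd)
```

Each constructor argument is a callable, so real module implementations (or test stubs) can be injected without changing the pipeline itself.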
In an alternative manner, as shown in fig. 8, the apparatus may further include:
and the recognition module 150 is configured to process and recognize the voice information to obtain a voice recognition result.
And the matching module 120 is configured to retrieve a target information template matching the voice information from the information template library according to the voice recognition result.
Specifically, the device described in fig. 8 can match the acquired voice information against the preset information template library, which improves the accuracy of matching the final target information template and thus the accuracy with which the acquired voice information is interpreted.
In an alternative manner, the matching module 120 is configured to retrieve the target information template matching the voice information from the information template library according to the direction information, the location information, and the data information in the voice text.
Specifically, implementing the device described in fig. 8 makes it possible to adapt the voice information to the information template more quickly and accurately.
In an optional manner, the generating module 130 is further configured to establish an information element component library; the information element component library comprises a plurality of information element components;
and selecting at least one information element component corresponding to the information template from the information element component library, and creating the information template according to the at least one information element component.
Specifically, implementing the apparatus described in fig. 8 can create more types of information templates, adapt to more driving actions, and improve the accuracy of matching the content corresponding to the acquired voice information.
In an optional manner, the generating module 130 is further configured to generate an information template from the direction class information, and/or the location class information, and/or the data class information.
Specifically, by implementing the apparatus described in fig. 8, more types of information templates can be created, which can adapt to more common driving actions and improve the accuracy of matching the content corresponding to the acquired voice information.
In an optional manner, the generating module 130 is further configured to generate start point information and end point information of the vehicle according to the target information template, the parameter information, and the map navigation information; and generating path planning information for the vehicle based on the start point information and the end point information.
Wherein the path planning information includes at least one of: the distance to the next intersection, the driving direction at the next intersection, the road name of the next intersection, and the position of the end point.
In an optional manner, the generating module 130 is further configured to generate a driving execution instruction according to the path planning information and the electronic horizon data information.
Specifically, the implementation of the device described in fig. 8 can realize route planning without depending on the original map information, so that the map information setting in the intelligent driving process can be diversified.
Fig. 9 is a schematic structural diagram of an embodiment of the electronic device according to the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 9, the electronic device may include: a processor (processor) 202, a communication interface (Communications Interface) 204, a memory (memory) 206, and a communication bus 208, wherein the processor 202, the communication interface 204, and the memory 206 communicate with each other via the communication bus 208. The communication interface 204 is used for communicating with network elements of other devices, such as clients or other servers. The processor 202 is configured to execute the program 210, and may specifically perform the relevant steps described above for an embodiment of the automatic driving method.
In particular, the program 210 may include program code comprising computer-executable instructions. The processor 202 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 206 is used for storing the program 210. The memory 206 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
The program 210 may specifically be invoked by the processor 202 to cause the electronic device to perform the automatic driving method in any of the method embodiments described above. Route planning can thus be realized without changing the original preset map information, and intelligent assisted driving is carried out, so that the way map information is acquired during intelligent driving can be diversified.
An embodiment of the present invention provides a computer-readable storage medium, where the storage medium stores at least one executable instruction, and when the executable instruction is executed on an electronic device, the electronic device is caused to execute an automatic driving method in any method embodiment described above.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. In addition, embodiments of the present invention are not directed to any particular programming language.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. Similarly, in the above description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. The claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and they may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Any combination is possible, except where at least some of such features and/or processes or elements are mutually exclusive.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless otherwise specified.

Claims (10)

1. An autonomous driving method for controlling a vehicle, the method comprising:
acquiring voice information, and acquiring a target information template matched with the voice information according to a preset information template library; the information template library comprises at least one pre-created information template;
generating path planning information according to the target information template, the voice information and the map navigation information;
generating a driving execution instruction according to the path planning information and the map navigation information;
and controlling the vehicle to perform driving operation matched with the driving execution instruction based on the driving execution instruction.
2. The method of claim 1, wherein the acquiring of the voice information and the obtaining of the target information template matched with the voice information according to the preset information template library comprises:
processing and recognizing the voice information to obtain a voice recognition result;
and retrieving the target information template matched with the voice information in the information template library according to the voice recognition result.
3. The method of claim 2, wherein the voice recognition result comprises a voice text, the voice text comprising at least one of direction information, location information, and data information; and the retrieving, according to the voice recognition result, of the target information template matched with the voice information from the information template library specifically comprises:
and retrieving the target information template matched with the voice information in the information template library according to at least one of the direction information, the place information and the data information in the text information.
4. The method of claim 3, wherein the method creates any information template in advance by:
establishing an information element component library; the information element component library comprises a plurality of information element components;
and selecting at least one information element component corresponding to the information template from the information element component library, and creating the information template according to the at least one information element component.
5. The method according to claim 4, wherein the information element components include direction class information, location class information, and data class information, and the selecting of at least one information element component corresponding to the information template from the information element component library and the creating of the information template according to the at least one information element component specifically comprise:
and generating the direction information, the place information and the data information into the information template.
6. The method according to claim 1, wherein the voice information includes parameter information, and the generating of the path planning information according to the target information template, the voice information, and the map navigation information specifically comprises:
generating starting point information and end point information of the vehicle according to the target information template, the parameter information and the map navigation information;
generating the path planning information based on the start point information and the end point information;
wherein the path planning information comprises at least one of: distance to the next intersection, driving direction of the next intersection, road name of the next intersection, and position of the end point.
7. The method of claim 6, wherein the map navigation information includes electronic horizon data information; generating a driving execution instruction according to the path planning information and the map navigation information, specifically comprising:
and generating the driving execution instruction according to the path planning information and the electronic horizon data information.
8. An automatic driving apparatus, the apparatus comprising:
the acquisition module is used for acquiring the voice information;
the matching module is used for acquiring a target information template matched with the voice information according to a preset information template base; the information template library comprises at least one pre-created information template;
the generating module is used for generating path planning information according to the target information template, the voice information and the map navigation information; the generating module is further used for generating a driving execution instruction according to the path planning information and the map navigation information;
and the control module is used for controlling the vehicle to execute the driving operation matched with the driving execution instruction based on the driving execution instruction.
9. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the automated driving method of any one of claims 1-7.
10. A computer-readable storage medium having stored therein at least one executable instruction that, when executed by an electronic device, causes the electronic device to perform the operations of the automatic driving method of any one of claims 1-7.
CN202111501607.4A 2021-12-09 2021-12-09 Automatic driving method, device, electronic equipment and computer readable storage medium Pending CN114171025A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111501607.4A CN114171025A (en) 2021-12-09 2021-12-09 Automatic driving method, device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN114171025A true CN114171025A (en) 2022-03-11

Family

ID=80485114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111501607.4A Pending CN114171025A (en) 2021-12-09 2021-12-09 Automatic driving method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114171025A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105460019A (en) * 2014-09-11 2016-04-06 苗码信息科技(上海)股份有限公司 Method for car driving through full-automatic remote control of Chinese speech
CN111439271A (en) * 2020-04-21 2020-07-24 上汽大众汽车有限公司 Auxiliary driving method and auxiliary driving equipment based on voice control
CN112242141A (en) * 2020-10-15 2021-01-19 广州小鹏汽车科技有限公司 Voice control method, intelligent cabin, server, vehicle and medium
CN113060150A (en) * 2021-04-29 2021-07-02 陈潇潇 Signal prompting method of driving control information of automatic driving
CN113226886A (en) * 2021-03-31 2021-08-06 华为技术有限公司 Method and device for controlling vehicle to run and vehicle



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination