CN107677289B - Information processing method and device and motor vehicle


Info

Publication number
CN107677289B
CN107677289B (application CN201710918519.1A)
Authority
CN
China
Prior art keywords
information
image
conversion
recognized
target identification
Prior art date
Legal status
Active
Application number
CN201710918519.1A
Other languages
Chinese (zh)
Other versions
CN107677289A (en)
Inventor
王吉芳
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201710918519.1A priority Critical patent/CN107677289B/en
Publication of CN107677289A publication Critical patent/CN107677289A/en
Priority to PCT/CN2018/098603 priority patent/WO2019062332A1/en
Application granted granted Critical
Publication of CN107677289B publication Critical patent/CN107677289B/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/36 - Input/output arrangements for on-board computers
    • G01C21/3626 - Details of the output of route guidance instructions
    • G01C21/3652 - Guidance using non-audiovisual output, e.g. tactile, haptic or electric stimuli

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses an information processing method, an information processing device, and a motor vehicle. One embodiment of the method comprises: generating a planned path from a departure place to a destination in response to receiving a path planning instruction from a user; acquiring an image to be recognized; parsing the image to be recognized to determine target recognition information in it; screening, from the target recognition information, valid information that satisfies at least one preset condition, where the preset condition includes the target recognition information matching the planned path; and performing language conversion on the valid information to generate first conversion information. This embodiment helps eliminate the language barrier a driver faces when reading road signs and reduces the driving safety hazards that language problems can cause.

Description

Information processing method and device and motor vehicle
Technical Field
The present application relates to the field of computer technologies, specifically to the field of image recognition, and in particular to an information processing method and apparatus, and a motor vehicle.
Background
In the prior art, when the driver of a motor vehicle travels along an unfamiliar road segment or needs to reach an unfamiliar destination, the driver is often assisted by a navigation application.
However, if the map information in the navigation application is not updated in time after actual road conditions change, the driver risks making a wrong road choice while following the planned route generated by the navigation application. To reduce this risk, the driver can adjust course in time by also referring to the traffic signs posted near the road. But if the driver does not know the language used on a traffic sign, reading the sign takes correspondingly longer, which creates a more prominent safety hazard.
Disclosure of Invention
The present application aims to provide an improved information processing method, device and motor vehicle to solve the technical problems mentioned in the background section above.
In a first aspect, an embodiment of the present application provides an information processing method, where the method includes: generating a planned path from a departure place to a destination in response to receiving a path planning instruction of a user; acquiring an image to be recognized; analyzing the image to be recognized to determine target recognition information in the image to be recognized; screening effective information meeting at least one preset condition from the target identification information, wherein the preset condition comprises that the target identification information is matched with a planned path; and performing language conversion on the effective information to generate first conversion information.
In some embodiments, parsing the image to be recognized to determine the target recognition information in the image to be recognized includes: analyzing the image to be recognized to determine the area where the traffic sign in the image to be recognized is located; and recognizing the image of the area where the traffic sign is located to take the character information in the traffic sign as target recognition information.
In some embodiments, parsing the image to be recognized to determine the target recognition information in the image to be recognized includes: and analyzing the image to be recognized, and taking the character information in the image to be recognized as target recognition information.
In some embodiments, the planned path includes a departure point, a destination point, and position information of a transit point between the departure point and the destination point; matching the target identification information with the planned path further comprises: the target identification information is matched with any one of the departure point, the destination and the passing point.
In some embodiments, the method further comprises: receiving and parsing a voice instruction from the user; in response to the information obtained by parsing the voice instruction containing geographic position information, determining whether that geographic position information is contained in the target identification information; and if so, performing language conversion on the geographic position information to generate second conversion information.
In some embodiments, the method further comprises: matching the first conversion information and the second conversion information, respectively, against a preset geographic position database; taking the geographic position information in the database that best matches the first conversion information as the corrected first conversion information; and taking the geographic position information that best matches the second conversion information as the corrected second conversion information.
In some embodiments, the method further comprises: and presenting the first conversion information in the area corresponding to the effective information in the image to be recognized.
In a second aspect, an embodiment of the present application further provides an information processing apparatus, including: the route generation unit is used for generating a planned route from a starting place to a destination in response to receiving a route planning instruction of a user; an acquisition unit that acquires an image to be recognized; the image analysis unit is used for analyzing the image to be identified so as to determine target identification information in the image to be identified; the screening unit is used for screening effective information meeting at least one preset condition from the target identification information, wherein the preset condition comprises that the target identification information is matched with the planned path; and a conversion unit configured to perform language conversion on the effective information to generate first conversion information.
In some embodiments, the image parsing unit is further to: analyzing the image to be recognized to determine the area where the traffic sign in the image to be recognized is located; and recognizing the image of the area where the traffic sign is located to take the character information in the traffic sign as target recognition information.
In some embodiments, the image parsing unit is further to: and analyzing the image to be recognized, and taking the character information in the image to be recognized as target recognition information.
In some embodiments, the planned path includes a departure point, a destination point, and position information of a transit point between the departure point and the destination point; matching the target identification information with the planned path further comprises: the target identification information is matched with any one of the departure point, the destination and the passing point.
In some embodiments, the apparatus further comprises: the voice analysis unit is used for receiving and analyzing a voice instruction of a user; the judging unit is used for responding to the analysis information obtained by analyzing the voice command and containing the geographic position information and judging whether the geographic position information is contained in the target identification information or not; the conversion unit is further configured to: and if the geographic position information is contained in the target identification information, performing language conversion on the geographic position information to generate second conversion information.
In some embodiments, the apparatus further comprises: and the correction unit is used for respectively matching the first conversion information and the second conversion information with a preset geographic position database, taking the geographic position information with the highest matching degree with the first conversion information in the geographic position database as the corrected first conversion information, and taking the geographic position information with the highest matching degree with the second conversion information in the geographic position database as the corrected second conversion information.
In some embodiments, the apparatus further comprises: and a synthesizing unit for presenting the first conversion information in an area corresponding to the effective information in the image to be recognized.
In a third aspect, embodiments of the present application further provide a motor vehicle, including: an image acquisition device and a processor; the image acquisition device is used for acquiring an image to be recognized; the processor is used for acquiring the image to be recognized and executing the information processing method.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the information processing method.
In a fifth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the information processing method.
According to the information processing method and apparatus, after the planned path is generated, the image to be recognized is parsed to determine the target recognition information; the target recognition information is then screened to obtain valid information that satisfies at least one preset condition; and finally, language conversion is performed on the valid information to generate the first conversion information. This helps eliminate the language barrier the driver faces when reading road signs while driving and reduces the safety hazards caused by language problems.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an information processing method according to the present application;
FIG. 3 is a flow diagram of yet another embodiment of an information processing method according to the present application;
FIG. 4 is a schematic diagram of an application scenario of an information processing method according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of an information processing apparatus according to the present application;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the information processing method or information processing apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as navigation applications, search applications, instant messaging tools, translation applications, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and having navigation functions, including but not limited to automobiles, smart phones, tablet computers, laptop computers, and the like.
The server 105 may be a server that provides various services, such as a path planning server that generates a planned path based on a path planning request transmitted by the terminal apparatuses 101, 102, 103. The path planning server may analyze and perform other processing on the received data such as the path planning request, and feed back a processing result (for example, a planned path) to the terminal device.
It should be noted that the information processing method provided in the embodiments of the present application may be executed entirely by the terminal devices 101, 102, 103, or executed partly by the terminal devices 101, 102, 103 and partly by the server 105. Accordingly, the information processing apparatus may be provided entirely in the terminal devices 101, 102, 103, or one part may be provided in the terminal devices 101, 102, 103 and another part in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an information processing method according to the present application is shown. The information processing method comprises the following steps:
Step 210: in response to receiving a path planning instruction from the user, generate a planned path from the departure place to the destination.
In the present embodiment, the user may send a path planning instruction to an electronic device (for example, a terminal device in fig. 1) on which the information processing method of the present embodiment operates by manually inputting a departure place and/or a destination. Alternatively, the user may send the path planning instruction by means of voice input, for example, in which case the electronic device may obtain the departure place and/or the destination indicated by the voice input by analyzing the voice input of the user. The electronic device may generate a planned path according to the received path planning instruction.
In addition, the path planning instruction sent by the user may further include user-defined constraints. For example, the user may set waypoints that the planned path must pass through, prefer routes that avoid congested road segments, or prefer routes that pass through as few traffic lights as possible. Generating the planned path from the path planning instruction in this step can be implemented with existing path planning algorithms, which are not described here again.
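As a minimal illustration of the kind of path planning this step relies on (the patent defers to existing algorithms and does not specify one), the sketch below runs Dijkstra's algorithm over a toy road graph and chains shortest-path segments through user-set waypoints. The graph, place names, and function names are hypothetical.

```python
import heapq

def shortest_path(graph, start, end):
    """Dijkstra's algorithm over a weighted adjacency dict."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []

def plan_with_waypoints(graph, start, end, waypoints=()):
    """Chain shortest-path segments through each mandatory waypoint in order."""
    total, full = 0, []
    stops = [start, *waypoints, end]
    for a, b in zip(stops, stops[1:]):
        cost, seg = shortest_path(graph, a, b)
        total += cost
        full += seg if not full else seg[1:]  # avoid duplicating junction nodes
    return total, full
```

A real navigation server would of course plan over live map data with traffic-aware edge weights; the waypoint chaining above only sketches how a user-set passing point constrains the result.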
Step 220: acquire the image to be recognized.
The image to be recognized may be, for example, an image obtained by capturing the surroundings of the motor vehicle during the travel of the motor vehicle, which image may represent information about the current travel route, the current travel state, etc. of the motor vehicle.
In some alternative implementations, the image to be recognized may be acquired by an image acquisition module provided on the electronic device on which the information processing method of the present embodiment operates.
Alternatively, in other implementations, the image to be recognized may be acquired by an independent image acquiring apparatus communicatively connected to the electronic device on which the information processing method of the present embodiment operates, and transmitted to the electronic device. In these implementations, the image acquiring apparatus may transmit the acquired image to be recognized to the electronic device through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (Ultra Wideband) connection, and other wireless connection means now known or developed in the future.
Step 230, the image to be recognized is analyzed to determine the target recognition information in the image to be recognized.
Here, the target recognition information may be, for example, information that is likely to be focused on by the user in the image to be recognized.
In some application scenarios, it is assumed that a user needs to collect body advertising information on an approach vehicle while driving a motor vehicle. In these application scenarios, information analyzed from a region of the image to be recognized, in which there is a high possibility that the vehicle body of the passing vehicle is included, may be used as the target recognition information.
Step 240: screen, from the target identification information, valid information that satisfies at least one preset condition, where the preset condition includes the target identification information matching the planned path.
Here, the preset condition may be set according to a specific requirement of a user.
In some application scenarios, a user desires to acquire body advertisement information on an approach vehicle while driving a motor vehicle and determine whether the body advertisement information contains a body advertisement of brand a. Then, in these application scenarios, the preset condition may include, for example, that the target identification information contains the name of brand a and/or a registered trademark pattern. Thus, the vehicle body advertisement information only aiming at the brand A can be screened out.
In other application scenarios, it is assumed that the user drives the motor vehicle and needs to predict whether the user travels along the planned path or needs to remind the user of key points in the planned path in advance. In these application scenarios, the preset conditions may include, for example, matching the target identification information with the planned path. In this way, information matching the planned path in the target identification information can be screened out as valid information.
Step 250: perform language conversion on the valid information to generate the first conversion information.
Since the target identification information is filtered to obtain the valid information in step 240, the data size of the valid information that needs to be subjected to the language conversion is correspondingly smaller in this step, which is beneficial to improving the processing efficiency of the language conversion processing and reducing the hardware loss of the electronic device to which the information processing method of this embodiment is applied.
In some alternative implementations, the first conversion information generated by this step may be of any type that is conveniently known to the user. For example, in some application scenarios, the first conversion information may be voice information, and in these application scenarios, the electronic device may play the first conversion information through a voice module integrated thereon or a voice device communicatively connected thereto so that the first conversion information is acquired by the user. Alternatively, in other application scenarios, the first conversion information may also be text information, and in these application scenarios, the electronic device may present the generated first conversion information at a corresponding position of the image to be recognized for viewing by the user. Alternatively, in other application scenarios, the first conversion information may include multiple types of information. For example, the first conversion information may include, for example, voice information and text information. In these application scenarios, the electronic device may play the voice information in the first conversion information through the voice module integrated thereon or the voice device communicatively connected thereto, and present the text information in the first conversion information at a corresponding position of the image to be recognized.
By performing language conversion on the valid information, the information (e.g., the valid information obtained by screening in step 240) concerned by the user (e.g., the driver of the motor vehicle) can be converted into the language known by the user, which is beneficial to eliminating the language barrier for identifying the road during the driving process of the driver and reducing the possibility of causing driving safety hazard due to language problem.
In some optional implementations, the parsing the image to be recognized in step 230 of this embodiment to determine the target identification information in the image to be recognized may include:
Step 231: parse the image to be recognized to determine the area in which the traffic sign is located.
Step 232: recognize the image of that area so as to take the text information on the traffic sign as the target recognition information.
In these alternative implementations, the area in which the traffic sign is located is first determined from the image to be recognized, so that only the text information contained in that area needs to be recognized to obtain the target recognition information. Because the traffic sign usually occupies only a small part of the image to be recognized, recognizing text only within that area can substantially improve the efficiency of obtaining the target recognition information.
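The two-stage flow of steps 231 and 232 can be sketched as follows. The detector and OCR calls here are placeholders that read from a prepared frame dictionary; a real system would substitute a trained traffic-sign detector and an OCR engine operating on the cropped region.

```python
def detect_sign_regions(frame):
    # Placeholder detector: a real system would run a trained traffic-sign
    # detector over the frame and return bounding boxes (x, y, w, h).
    return frame.get("sign_boxes", [])

def read_region_text(frame, box):
    # Placeholder OCR: a real system would crop the box from the frame and
    # run an OCR engine on just that crop.
    return frame["text_by_box"][box]

def recognize_sign_text(frame):
    """Steps 231-232: find sign regions first, then recognize text only
    inside those regions instead of scanning the whole frame."""
    return [read_region_text(frame, box) for box in detect_sign_regions(frame)]
```

The efficiency gain claimed above comes from the second stage touching only the cropped boxes, which are small relative to the full frame.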
Alternatively, in another optional implementation manner of this embodiment, the analyzing the image to be recognized in step 230 of this embodiment to determine the target identification information in the image to be recognized may further include:
and 233, analyzing the image to be recognized, and taking the character information in the image to be recognized as the target recognition information.
In these alternative implementations, for example, an OCR (Optical Character Recognition) technique may be adopted to resolve text information in the image to be recognized.
In these alternative implementations, only the text information in the image to be recognized is recognized by the OCR technology, without further recognizing other parts of the image; this, too, effectively improves the efficiency of obtaining the target recognition information.
In addition, in some optional implementations of the present embodiment, the planned path may include position information of the departure point, the destination, and the passing points between them. Here, the position information may include, for example, the names of the departure point, the destination, and each passing point. A passing point may be, for example, a key point between the departure point and the destination of the planned path. Passing points may be set manually by the user and added to the planned path, or generated automatically from the planned path; for example, the position in the planned route where travel switches from one road A to another road B may be set as a passing point.
In these alternative implementations, the matching of the target identification information in the preset condition with the planned route may mean that the target identification information matches any one of a departure point, a destination, and a passing point, for example.
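The "matches any one of a departure point, a destination, and a passing point" condition can be sketched as a simple containment check. The dictionary layout and place names are assumptions for illustration only.

```python
def matches_planned_path(target_text, planned_path):
    """True if the recognized text mentions the departure point, the
    destination, or any passing point of the planned path."""
    points = [planned_path["departure"], planned_path["destination"],
              *planned_path.get("waypoints", [])]
    return any(p.lower() in target_text.lower() for p in points)
```

A production system would likely normalize scripts and diacritics before comparing; the case-insensitive substring test above only conveys the matching rule.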
Referring to fig. 3, a schematic flow chart diagram 300 of another embodiment of the information processing method of the present application is shown.
The information processing method of the embodiment includes:
Step 310: in response to receiving a path planning instruction from the user, generate a planned path from the departure place to the destination.
Step 320: acquire the image to be recognized.
Step 330: parse the image to be recognized to determine the target recognition information in it.
Step 340: screen, from the target recognition information, valid information that satisfies at least one preset condition, where the preset condition includes the target recognition information matching the planned path.
Step 350: perform language conversion on the valid information to generate the first conversion information.
The execution manner of the steps 310 to 350 may be similar to that of the steps 210 to 250 in the embodiment shown in fig. 2, and will not be described herein again.
Different from the embodiment shown in fig. 2, the information processing method of the present embodiment may further include:
and step 360, receiving and analyzing the voice command of the user.
Here, any existing or to-be-developed voice recognition technology may be adopted to recognize the voice instruction input by the user as the corresponding text, and the semantics therein are resolved by, for example, a Natural Language Processing (NLP) technology.
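After speech recognition yields a transcript, the semantic step that interests this method is pulling out geographic names. The sketch below is a deliberately naive stand-in for a full NLP pipeline: it just scans the transcript for known place names. The function name and place list are assumptions.

```python
def extract_locations(transcript, known_places):
    """Very naive stand-in for an NLP pipeline: pull out any known place
    names mentioned in the recognized transcript."""
    lowered = transcript.lower()
    return [p for p in known_places if p.lower() in lowered]
```

A real implementation would use named-entity recognition rather than a fixed gazetteer, but the downstream steps (370-380 below) only need the extracted location strings.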
Step 370: in response to the parsed information obtained from the voice instruction containing geographic location information, determine whether that geographic location information is contained in the target recognition information.
Step 380: if so, perform language conversion on the geographic location information to generate second conversion information.
If the geographic position information is contained in the target recognition information, the current image to be recognized has a strong correlation with that geographic position information. And since the geographic position information was obtained by parsing the user's voice instruction, the current image to be recognized can then also be considered strongly associated with that voice instruction. Performing language conversion on the geographic position information therefore lets the user grasp it intuitively: for example, the converted geographic position information may be presented in the image to be recognized, or a voice prompt may be sent telling the user that the geographic position information they asked about appears in the current image.
In some optional implementations of this embodiment, a process of correcting the first conversion information and the second conversion information after the language conversion may be further included. Specifically, the first conversion information and the second conversion information may be respectively matched with a preset geographic position database, the geographic position information with the highest matching degree with the first conversion information in the geographic position database is used as the corrected first conversion information, and the geographic position information with the highest matching degree with the second conversion information in the geographic position database is used as the corrected second conversion information.
In these optional implementations, the geographic location database may store a standard translation of each Point of Interest (POI) as that POI's geographic location information. Correcting the first conversion information and the second conversion information against this geographic location database unifies them with the standard POI translations, which avoids the user confusion that non-standard conversions might otherwise cause.
In addition, in some optional implementation manners, the information processing method of this embodiment may further include:
and presenting the first conversion information in the area corresponding to the effective information in the image to be recognized.
In these alternative implementations, for example, a real-time video synthesis technique may be adopted to add the first conversion information to the corresponding position of the image to be recognized and to cover the effective information in the image to be recognized.
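Actual video compositing would draw over the frame pixels (for example with an image library's text and rectangle primitives); as a data-level sketch only, with all names hypothetical, the covering step reduces to replacing the text of the region that holds the effective information:

```python
def composite(regions, valid_text, converted_text):
    """Data-level stand-in for real-time video compositing: each region is
    a recognized area of the frame (bounding box + text); the region whose
    text equals the effective information is covered with the first
    conversion information, all other regions pass through unchanged."""
    return [
        {**r, "text": converted_text} if r["text"] == valid_text else r
        for r in regions
    ]
```

The bounding box is kept unchanged, so the converted text is presented exactly in the area corresponding to the effective information.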
Compared with the embodiment shown in fig. 2, the information processing method of this embodiment further analyzes the user's voice instruction and, when the instruction contains geographic position information that also appears in the target identification information, performs language conversion on that geographic position information, thereby translating and presenting the geographic position information from the user's voice instruction.
With continued reference to fig. 4, which is a schematic diagram of an application scenario of the information processing method of this embodiment.
In the application scenario shown in fig. 4, the user's native language is Chinese and the driving environment is non-Chinese; for example, the user is on a self-driving tour in Germany.
In step 410, the user enters the navigation destination "Magdeburg" and clicks "navigate" to generate a planned path from the current location (Leipzig) to Magdeburg. The user may further manually set a waypoint (Hamburg) in the planned path.
Next, in step 420, while the user is driving, a camera mounted on the vehicle captures images outside the vehicle in real time and presents them on a display screen.
Next, in step 430, the vehicle exterior image collected in real time is analyzed to determine whether it contains a traffic sign.
Next, in step 440, if analysis of the vehicle exterior image captured at a certain moment finds that it contains a traffic sign, the characters on the traffic sign are parsed.
Next, in step 450, it is determined whether the text parsed from the traffic sign includes the planned route's starting point "Leipzig", waypoint "Hamburg", or destination "Magdeburg".
Next, in step 460, if the text parsed from the traffic sign includes the waypoint "Hamburg", language conversion is performed on "Hamburg" to obtain its Chinese name "汉堡".
Next, in step 470, the Chinese "汉堡" is added to the vehicle exterior image on the display screen using a real-time video compositing technique, covering the "Hamburg" on the original traffic sign. In this way, the user can accurately read the key points of the planned route during navigation without being hindered by the language barrier, which effectively reduces the number of route re-plannings caused by wrong turns and mitigates the driving safety risks caused by language problems.
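Steps 450 through 460 can be sketched in miniature as a filter over the sign text, keeping only words that match a point of the planned route and converting them; the route set and translation table below are assumptions drawn from this scenario, not part of the patent:

```python
ROUTE_POINTS = {"Leipzig", "Hamburg", "Magdeburg"}   # start, waypoint, destination
TRANSLATIONS = {"Leipzig": "莱比锡", "Hamburg": "汉堡", "Magdeburg": "马格德堡"}

def process_sign(sign_text):
    """Keep only the words on the sign that match a point of the planned
    route (the effective information) and language-convert them."""
    return {w: TRANSLATIONS[w] for w in sign_text.split() if w in ROUTE_POINTS}
```

A sign mentioning no route point yields an empty result, so nothing is overlaid for that frame.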
Furthermore, while driving, the user may need to refuel or to rest. When such a need arises, the user can issue a voice instruction, such as "search for the nearest gas station" or "search for the nearest service area". In this scenario, as shown in step 480, the user's voice instruction is analyzed, and it is determined whether the currently captured vehicle exterior image contains a "gas station" or a "service area".
Next, in step 490, if the current vehicle exterior image contains "Tankstelle" or "Service-area", it is language-converted into the Chinese term for "gas station" or "service area" and presented on the screen.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an information processing apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the information processing apparatus 500 according to the present embodiment includes: a path generating unit 510, an acquiring unit 520, an image analyzing unit 530, a filtering unit 540, and a converting unit 550.
The path generation unit 510 may be configured to generate a planned path from a departure point to a destination in response to receiving a path planning instruction of a user.
The obtaining unit 520 may be used to obtain an image to be recognized.
The image parsing unit 530 may be configured to parse the image to be recognized to determine the target identification information in the image to be recognized.
The screening unit 540 may be configured to screen the target identification information for effective information that meets at least one preset condition, where the preset condition includes that the target identification information matches the planned path.
The conversion unit 550 may be configured to perform language conversion on the valid information to generate first conversion information.
In some optional implementations, the image parsing unit 530 may be further configured to: analyzing the image to be recognized to determine the area where the traffic sign in the image to be recognized is located; and recognizing the image of the area where the traffic sign is located to take the character information in the traffic sign as target recognition information.
In some optional implementations, the image parsing unit 530 may be further configured to: and analyzing the image to be recognized, and taking the character information in the image to be recognized as target recognition information.
In some alternative implementations, the planned path may include location information of a start point, a destination, and a waypoint between the start point and the destination; matching the target identification information with the planned path may further include: the target identification information is matched with any one of the departure point, the destination and the passing point.
In some alternative implementations, the information processing apparatus may further include a voice parsing unit (not shown in the figure) and a judging unit (not shown in the figure).
In these alternative implementations, the voice parsing unit may be configured to receive and parse a voice instruction of the user. The judging unit may be configured to judge, in response to the analysis information obtained by parsing the voice instruction containing geographic position information, whether the geographic position information is contained in the target identification information.
In these alternative implementations, the conversion unit 550 may also be configured to: and if the geographic position information is contained in the target identification information, performing language conversion on the geographic position information to generate second conversion information.
In some optional implementations, the information processing apparatus may further include a correction unit. The correction unit may be configured to match the first conversion information and the second conversion information with a preset geographic location database, respectively, use, in the geographic location database, the geographic location information with the highest matching degree with the first conversion information as the corrected first conversion information, and use, in the geographic location database, the geographic location information with the highest matching degree with the second conversion information as the corrected second conversion information.
In some optional implementations, the information processing apparatus may further include a combining unit. The synthesizing unit may be configured to present the first conversion information in an area corresponding to the valid information in the image to be recognized.
Those skilled in the art will appreciate that the information processing apparatus 500 described above also includes some other well-known structures, such as processors, memories, etc., which are not shown in fig. 5 in order to not unnecessarily obscure embodiments of the present disclosure.
In addition, the application also discloses a motor vehicle. The vehicle may include an image capture device and a processor.
The image acquisition device can be used for acquiring an image to be identified. The processor may be configured to acquire an image to be recognized and perform the information processing method as described above.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing a terminal device or server of an embodiment of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a path generation unit, an acquisition unit, an image analysis unit, a filtering unit, and a conversion unit. Where the names of the elements do not in some cases constitute a limitation on the elements themselves, for example, the path generation element may also be described as "an element for generating a planned path from a departure point to a destination in response to receiving a path planning instruction of a user".
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus in the above-described embodiments; or it may be a non-volatile computer storage medium that exists separately and is not incorporated into the terminal. The non-transitory computer storage medium stores one or more programs that, when executed by a device, cause the device to: generating a planned path from a departure place to a destination in response to receiving a path planning instruction of a user; acquiring an image to be identified; analyzing the image to be recognized to determine target recognition information in the image to be recognized; screening effective information meeting at least one preset condition from the target identification information, wherein the preset condition comprises that the target identification information is matched with a planned path; and performing language conversion on the effective information to generate first conversion information.
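The stored program's steps can be summarized as one pass over a frame; in this sketch, `parse_image` and `translate` stand in for the OCR and translation backends, which the patent leaves unspecified, and all names are hypothetical:

```python
def process(image, planned_path, parse_image, translate):
    """One pass of the stored program: parse the image into target
    identification information, screen out the entries matching the
    planned path (the effective information), and language-convert them
    into the first conversion information."""
    target_info = parse_image(image)                        # parse the image to be recognized
    effective = [t for t in target_info if t in planned_path]  # screen for effective information
    return [translate(t) for t in effective]                # perform language conversion
```

Each frame thus yields zero or more converted strings ready for presentation.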
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (11)

1. An information processing method characterized by comprising:
generating a planned path from a departure place to a destination in response to receiving a path planning instruction of a user;
acquiring an image to be identified;
analyzing the image to be recognized to determine target recognition information in the image to be recognized;
screening effective information meeting at least one preset condition from the target identification information, wherein the preset condition comprises that the target identification information is matched with the planned path; and
performing language conversion on the effective information to generate first conversion information;
the method further comprises the following steps:
receiving and analyzing a voice instruction sent by a user in the driving process;
in response to analysis information obtained by parsing the voice instruction containing geographic position information, judging whether the geographic position information is contained in the target identification information;
if yes, performing language conversion on the geographic position information to generate second conversion information;
respectively matching the first conversion information and the second conversion information with a preset geographic position database, taking the geographic position information with the highest matching degree with the first conversion information in the geographic position database as corrected first conversion information, and taking the geographic position information with the highest matching degree with the second conversion information in the geographic position database as corrected second conversion information;
and presenting the first conversion information in an area corresponding to the effective information in the image to be identified.
2. The method of claim 1, wherein the parsing the image to be recognized to determine target recognition information in the image to be recognized comprises:
analyzing the image to be recognized to determine the area where the traffic sign in the image to be recognized is located; and
and identifying the image of the area where the traffic sign is located so as to take the character information in the traffic sign as target identification information.
3. The method according to claim 1, wherein the parsing the image to be recognized to determine the target recognition information in the image to be recognized comprises:
and analyzing the image to be recognized, and taking the character information in the image to be recognized as target recognition information.
4. A method according to claim 2 or 3, characterized in that:
the planning path comprises a starting point, a destination and position information of a passing point between the starting point and the destination;
the matching of the target identification information with the planned path further comprises:
the target identification information is matched with any one of a departure place, a destination and a passing point.
5. An information processing apparatus characterized by comprising:
the route generation unit is used for generating a planned route from a starting place to a destination in response to receiving a route planning instruction of a user;
an acquisition unit that acquires an image to be recognized;
the image analysis unit is used for analyzing the image to be identified so as to determine target identification information in the image to be identified;
the screening unit is used for screening effective information meeting at least one preset condition from the target identification information, wherein the preset condition comprises that the target identification information is matched with the planned path; and
the conversion unit is used for carrying out language conversion on the effective information to generate first conversion information;
further comprising:
the voice analysis unit is used for receiving and analyzing a voice instruction sent by a user in the driving process;
the judging unit is used for responding to the analysis information obtained by analyzing the voice command and containing the geographic position information, and judging whether the geographic position information is contained in the target identification information or not;
the conversion unit is further configured to: if the geographic position information is contained in the target identification information, performing language conversion on the geographic position information to generate second conversion information;
the correction unit is used for respectively matching the first conversion information and the second conversion information with a preset geographic position database, taking the geographic position information with the highest matching degree with the first conversion information in the geographic position database as corrected first conversion information, and taking the geographic position information with the highest matching degree with the second conversion information in the geographic position database as corrected second conversion information;
the device further comprises: and the synthesis unit is used for presenting the first conversion information in an area corresponding to the effective information in the image to be identified.
6. The apparatus of claim 5, wherein the image parsing unit is further to:
analyzing the image to be recognized to determine the area where the traffic sign in the image to be recognized is located; and
and identifying the image of the area where the traffic sign is located so as to take the character information in the traffic sign as target identification information.
7. The apparatus of claim 5, wherein the image parsing unit is further configured to:
and analyzing the image to be recognized, and taking the character information in the image to be recognized as target recognition information.
8. The apparatus of claim 6 or 7, wherein:
the planning path comprises a starting point, a destination and position information of a passing point between the starting point and the destination;
the matching of the target identification information with the planned path further comprises:
the target identification information is matched with any one of a departure point, a destination and a passing point.
9. A motor vehicle comprising an image acquisition device and a processor, characterized in that:
the image acquisition device is used for acquiring an image to be identified;
the processor is used for acquiring the image to be identified and executing the information processing method according to any one of claims 1 to 4.
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the information processing method according to any one of claims 1 to 4.
11. A computer-readable storage medium on which a computer program is stored, which program, when executed by a processor, implements the information processing method according to any one of claims 1 to 4.
CN201710918519.1A 2017-09-30 2017-09-30 Information processing method and device and motor vehicle Active CN107677289B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710918519.1A CN107677289B (en) 2017-09-30 2017-09-30 Information processing method and device and motor vehicle
PCT/CN2018/098603 WO2019062332A1 (en) 2017-09-30 2018-08-03 Information processing method and apparatus, and motor vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710918519.1A CN107677289B (en) 2017-09-30 2017-09-30 Information processing method and device and motor vehicle

Publications (2)

Publication Number Publication Date
CN107677289A CN107677289A (en) 2018-02-09
CN107677289B true CN107677289B (en) 2020-04-28

Family

ID=61138989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710918519.1A Active CN107677289B (en) 2017-09-30 2017-09-30 Information processing method and device and motor vehicle

Country Status (2)

Country Link
CN (1) CN107677289B (en)
WO (1) WO2019062332A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107677289B (en) * 2017-09-30 2020-04-28 百度在线网络技术(北京)有限公司 Information processing method and device and motor vehicle
CN108534795A (en) * 2018-06-26 2018-09-14 百度在线网络技术(北京)有限公司 Selection method, device, navigation equipment and the computer storage media of navigation routine
CN108871370A (en) * 2018-07-03 2018-11-23 北京百度网讯科技有限公司 Air navigation aid, device, equipment and medium
CN114061608A (en) * 2019-06-06 2022-02-18 阿波罗智联(北京)科技有限公司 Method, system and device for generating driving route

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1890533A (en) * 2003-12-05 2007-01-03 松下电器产业株式会社 Navigation device
CN100433050C (en) * 2004-01-08 2008-11-12 日本电气株式会社 Mobile communication system, mobile terminal device, fixed station device, character recognition device and method, and program
JP4935145B2 (en) * 2006-03-29 2012-05-23 株式会社デンソー Car navigation system
CN101726311A (en) * 2008-10-10 2010-06-09 北京灵图软件技术有限公司 Path navigation method and device
CN101561871B (en) * 2009-02-17 2011-10-12 昆明理工大学 Method for recognizing manually-set road sign in agricultural machine visual navigation
CN102426015A (en) * 2011-09-06 2012-04-25 深圳市凯立德科技股份有限公司 Search method of navigation system interest points, and position service terminal
DE102012012269B3 (en) * 2012-06-20 2013-05-29 Audi Ag information means
CN104121910A (en) * 2013-04-28 2014-10-29 腾讯科技(深圳)有限公司 Navigation method, device, terminal, server and system
CN104422462A (en) * 2013-09-06 2015-03-18 上海博泰悦臻电子设备制造有限公司 Vehicle navigation method and vehicle navigation device
CN107677289B (en) * 2017-09-30 2020-04-28 百度在线网络技术(北京)有限公司 Information processing method and device and motor vehicle

Also Published As

Publication number Publication date
CN107677289A (en) 2018-02-09
WO2019062332A1 (en) 2019-04-04

Similar Documents

Publication Publication Date Title
CN107677289B (en) Information processing method and device and motor vehicle
CN109141464B (en) Navigation lane change prompting method and device
JP4812415B2 (en) Map information update system, central device, map information update method, and computer program
US20090285445A1 (en) System and Method of Translating Road Signs
CN109065053B (en) Method and apparatus for processing information
CN107767685B (en) Vehicle searching system and method
US10315516B2 (en) Driving-support-image generation device, driving-support-image display device, driving-support-image display system, and driving-support-image generation program
KR101790655B1 (en) Feedback method for bus information inquiry, mobile terminal and server
CN110567475A (en) Navigation method, navigation device, computer readable storage medium and electronic equipment
US20120130704A1 (en) Real-time translation method for mobile device
CN102447886A (en) Visualizing video within existing still images
CN109302492B (en) Method, apparatus, and computer-readable storage medium for recommending service location
JP2020094956A (en) Information processing system, program, and method for information processing
CN110119725B (en) Method and device for detecting signal lamp
JP2020086659A (en) Information processing system, program, and information processing method
KR101280313B1 (en) Smart bus information system
EP4043831A1 (en) Method and device for managing data, and computer program product
JP2020073913A (en) Information provision system, information provision method, and program
CN112532929A (en) Road condition information determining method and device, server and storage medium
JP2016045678A (en) Price display system, price determination apparatus, display device, price determination method, and price determination program
CN113701772A (en) Navigation route determining method, system, electronic equipment and storage medium
CN110706497A (en) Image processing apparatus and computer-readable storage medium
JP2020013411A (en) Information providing device, information providing method, and program
US20230029628A1 (en) Data processing method for vehicle, electronic device, and medium
KR102098130B1 (en) Traffic information recognition system and method using QR code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant