CN114323046A - Travel navigation assisting method and device, blind glasses and computer medium - Google Patents


Info

Publication number
CN114323046A
CN114323046A (application CN202111667828.9A; granted publication CN114323046B)
Authority
CN
China
Prior art keywords
information, lane, user, area, determining
Prior art date
Legal status
Granted
Application number
CN202111667828.9A
Other languages
Chinese (zh)
Other versions
CN114323046B (en)
Inventor
王文志
葛琳楠
张光明
史建伟
周昊
王润迪
刘楚勋
Current Assignee
Hubei Jiulian Huibo Technology Co ltd
Original Assignee
Hubei Jiulian Huibo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hubei Jiulian Huibo Technology Co ltd
Priority to CN202111667828.9A
Publication of CN114323046A
Application granted
Publication of CN114323046B
Legal status: Active

Landscapes

  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to the field of intelligent wearable devices, and in particular to an assisted travel navigation method and device, glasses for the blind, and a computer medium. The method comprises the following steps: acquiring start information, where the start information represents a riding command input by a user; acquiring the user's request position information, which represents the position where the user input the riding command; determining at least one candidate lane according to the request position information; determining road condition information for each candidate lane, where the road condition information comprises road marking information, traffic-flow information, and distance information; and acquiring user demand information, determining a target lane from the candidate lanes according to the user demand information and the road condition information, and feeding back the target lane. The application improves the convenience of travel for blind users.

Description

Travel navigation assisting method and device, blind glasses and computer medium
Technical Field
The application relates to the field of intelligent wearable equipment, in particular to a method and a device for assisting travel navigation, blind glasses and a computer medium.
Background
Blind people lose their vision due to congenital or acquired physiological defects, and with it the most important channel for acquiring knowledge and experience. This creates great obstacles in their daily life and particularly restricts their travel.
Existing smart glasses for the blind and smart guide canes can help blind users identify obstacles while walking, usually by means of infrared detection, ultrasonic detection, image processing, and the like.
However, these approaches only cover walking scenarios, so the travel options of the blind remain limited.
Disclosure of Invention
To improve the convenience of blind people's travel, the present application provides an assisted travel navigation method and device, glasses for the blind, and a computer medium.
In a first aspect, the present application provides an assisted travel navigation method, which adopts the following technical scheme:
an assisted travel navigation method comprises the following steps:
acquiring starting information, wherein the starting information represents a riding command input by a user;
acquiring request position information of a user, wherein the request position information of the user represents position information of a position where the user inputs the riding command;
determining at least one candidate lane according to the requested position information;
determining road condition information of each candidate lane, wherein the road condition information comprises road marking information, flow information and distance information;
wherein the road marking information characterizes a type of a separation between a non-motor lane and a motor lane in the candidate lane, the traffic information characterizes a degree of congestion of a road, and the distance information characterizes a distance between the candidate lane and the request position information;
acquiring user demand information, determining a target lane from a plurality of candidate lanes according to the user demand information and the road condition information, and feeding back the target lane;
the user demand information comprises any one of waiting safety, riding efficiency and destination arriving efficiency.
By adopting this technical scheme, at least one candidate lane is screened from a known map according to the input riding command and the position from which the user issued it, and the road marking information, traffic-flow information, and distance information of each road are determined. The road markings indicate where a blind user can board: a lane corresponding to a safe waiting area can be screened by waiting safety, lanes can be screened by road congestion for destination-arrival efficiency, and the nearest lane can be selected for riding efficiency. After the blind user's demand is determined, the target lane is screened by the corresponding condition and the user is guided to the target lane to board, improving the diversity and convenience of blind users' travel.
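The demand-based screening described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the lane fields, the demand labels, and the tie-breaking (nearest lane as default) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class CandidateLane:
    name: str
    marking: str       # "lane_line" (painted line) or "barrier" (physical separator)
    congestion: float  # traffic-flow information: higher means more congested
    distance_m: float  # distance from the request position to this lane

def select_target_lane(lanes, demand):
    """Pick a target lane for demand 'safety', 'arrival', or 'boarding'."""
    if demand == "safety":
        # Waiting safety: prefer lanes separated only by a painted line,
        # and among those push the one closest to the user.
        safe = [l for l in lanes if l.marking == "lane_line"] or lanes
        return min(safe, key=lambda l: l.distance_m)
    if demand == "arrival":
        # Destination-arrival efficiency: lowest congestion wins.
        return min(lanes, key=lambda l: l.congestion)
    # Riding efficiency (also the default when no demand is detected):
    # the nearest candidate lane.
    return min(lanes, key=lambda l: l.distance_m)
```

In practice the congestion and distance values would come from the map software's real-time road conditions, as the description explains.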
In one possible implementation, after the feeding back the target lane, the method further includes:
determining a taxi taking area of the target lane;
acquiring a field image corresponding to a taxi taking area;
determining a taxi taking position in the taxi taking area according to the field image;
determining a parking area according to the taxi taking position;
determining a riding mode according to the starting information, wherein the riding mode comprises real-time calling and preset calling;
if the riding mode is real-time calling and the real-time position of the user is located in the taxi taking position, executing a prompting step; the prompting step comprises the following steps: identifying whether a no-load taxi meeting carrying conditions exists in the target lane, and if so, generating second prompt information;
after the second prompt information is fed back, judging whether an unloaded taxi has stopped in the parking area, and if not, continuing to execute the prompting step until an unloaded taxi stops in the parking area.
By adopting this technical scheme, after the blind user reaches the taxi taking area of the target lane, the actual road conditions are obtained from the live image collected on site by the glasses, the taxi taking position most suitable for the blind user is determined, and a parking area for vehicles to stop is formed from that position. If an unloaded taxi exists in the target lane, the blind user is prompted to perform the hailing action; if a taxi then stops in the parking area, the hailing succeeded; otherwise, the target lane is searched again for an unloaded taxi meeting the carrying condition, and the hailing action is performed again.
In one possible implementation, the determining the taxi taking area of the target lane includes:
if the road marking information of the target lane indicates that the separation between the non-motor lane and the motor lane is a physical separator (for example, a guardrail or green belt), the taxi taking area is an opening (connected region) in the physical separator;
and if the road marking information of the target lane indicates that the separation between the non-motor lane and the motor lane is a lane line, the taxi taking area comprises the area along the edge of the non-motor lane on the side away from the motor lane.
By adopting this technical scheme, a lane line does not block a taxi driver's view, and a taxi can stop temporarily at the roadside so that the blind user can board directly; by analysing the road marking information, the taxi taking ranges of different lanes can be determined, providing a more convenient taxi taking area for blind users.
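The two cases above reduce to a simple mapping from the lane's separation type to a waiting region. The sketch below is illustrative only; the region names are placeholder labels, not terms from the patent.

```python
def taxi_taking_area(marking_type: str) -> str:
    """Map the target lane's separation type to a taxi taking area."""
    if marking_type == "barrier":
        # Physical separator: the user must wait at an opening in it.
        return "opening_in_barrier"
    if marking_type == "lane_line":
        # Painted line only: wait along the non-motor lane's outer edge,
        # the side farthest from motor traffic.
        return "non_motor_lane_outer_edge"
    raise ValueError(f"unknown marking type: {marking_type}")
```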
In one possible implementation, the determining a taxi taking position in the taxi taking area according to the live image includes:
a segmentation step, namely generating at least one segmentation area in the taxi taking area according to the field image;
a screening step, namely determining the obstacle density information of each segmented area, determining the segmented area with the lowest obstacle density as the taxi taking position, and feeding back the taxi taking position.
By adopting the technical scheme, the partitioned area with the least number of the obstacles is used as the taxi taking position, so that the influence of other obstacles on the safety of blind users can be reduced.
In a possible implementation manner, after the feeding back the taxi taking position, the method further includes:
judging whether an updating condition is met, if so, updating the field image according to the currently acquired image, executing the segmentation step and the screening step to update the taxi taking position, and feeding back the updated taxi taking position; wherein the update condition characterizes that the number of obstacles in the taxi taking position is greater than a reference number.
By adopting this technical scheme, while the blind user is moving to the initial taxi taking position, if the number of obstacles at that position changes and exceeds the reference number, the position is no longer suitable for hailing. The previously held live image is then replaced by the currently acquired image and the taxi taking position is updated, so that a safer position can be provided as the dynamic environment of the scene changes.
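The update condition amounts to a small check run while the user walks: re-segment only when the chosen position has accumulated more obstacles than the reference number. In this hypothetical sketch, `count_obstacles_at` and `resegment` stand in for the glasses' image-based detection and the segmentation/screening steps.

```python
def maybe_update_position(current_pos, count_obstacles_at, resegment,
                          reference_count=3):
    """Return (position, updated): re-run segmentation/screening when the
    chosen position now has more obstacles than the reference number."""
    if count_obstacles_at(current_pos) > reference_count:
        # The scene changed: refresh the live image and pick a new spot.
        return resegment(), True
    return current_pos, False
```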
In one possible implementation, the method further includes:
and if the parked taxis exist in the parking area, generating a boarding route according to the positions of the taxis and the real-time position of the user, and generating and feeding back navigation voice according to the boarding route.
By adopting the technical scheme, after the taxi meeting the carrying conditions stops in the stopping area, the blind person user is guided to safely take the bus by generating the getting-on route.
In one possible implementation manner, determining a target lane from the candidate lanes according to the user demand information and the road condition information includes:
if the user demand information is the waiting safety degree, determining the target lane from the candidate lanes whose road marking information indicates that the separation between the non-motor lane and the motor lane is a lane line;
if the user demand information is the riding efficiency, taking the candidate lane with the minimum distance information as the target lane;
and if the user demand information is the destination arrival efficiency, taking the candidate lane with the lowest congestion degree of the road as the target lane.
In a second aspect, the present application provides an assisted travel navigation apparatus, which adopts the following technical solution:
an assisted travel navigation device comprising:
the first acquisition module is used for acquiring starting information, and the starting information represents a riding command input by a user;
the second acquisition module is used for acquiring the request position information of the user, wherein the request position information of the user represents the position information of the position where the user inputs the riding command;
a first analysis module for determining at least one candidate lane according to the requested location information;
the second analysis module is used for determining the road condition information of each candidate lane, and the road condition information comprises road marking information, flow information and distance information;
wherein the road marking information characterizes a type of a separation between a non-motor lane and a motor lane in the candidate lane, the traffic information characterizes a degree of congestion of a road, and the distance information characterizes a distance between the candidate lane and the request position information;
the navigation module is used for acquiring user demand information, determining a target lane from a plurality of candidate lanes according to the user demand information and the road condition information, and feeding back the target lane;
the user demand information comprises any one of waiting safety, riding efficiency and destination arriving efficiency.
In one possible implementation, the navigation device further includes a field analysis module, and the field analysis module is configured to:
determining a taxi taking area of the target lane;
acquiring a field image corresponding to a taxi taking area;
determining a taxi taking position in the taxi taking area according to the field image;
determining a parking area according to the taxi taking position;
determining a riding mode according to the starting information, wherein the riding mode comprises real-time calling and preset calling;
if the riding mode is real-time calling and the real-time position of the user is located in the taxi taking position, executing a prompting step; the prompting step comprises the following steps: identifying whether a no-load taxi meeting carrying conditions exists in the target lane, and if so, generating second prompt information;
after the second prompt information is fed back, judging whether an unloaded taxi has stopped in the parking area, and if not, continuing to execute the prompting step until an unloaded taxi stops in the parking area.
In a possible implementation manner, when determining the taxi taking area of the target lane, the field analysis module is specifically configured to:
if the road marking information of the target lane indicates that the separation between the non-motor lane and the motor lane is a physical separator (for example, a guardrail or green belt), the taxi taking area is an opening (connected region) in the physical separator;
and if the road marking information of the target lane indicates that the separation between the non-motor lane and the motor lane is a lane line, the taxi taking area comprises the area along the edge of the non-motor lane on the side away from the motor lane.
In a possible implementation manner, when determining a taxi taking position in the taxi taking area according to the live image, the live analysis module is specifically configured to:
generating at least one segmentation area in the taxi taking area according to the live image;
and determining the density information of the obstacles in each divided area, determining the divided area with the minimum density information of the obstacles as a taxi taking position, and feeding back the taxi taking position.
In one possible implementation manner, after the feedback of the taxi taking position, the field analysis module is further configured to: judging whether an updating condition is met, if so, updating the field image according to the currently acquired image, executing the segmentation step and the screening step to update the taxi taking position, and feeding back the updated taxi taking position; wherein the update condition characterizes that the number of obstacles in the taxi taking position is greater than a reference number.
In one possible implementation, the field analysis module is further configured to: and when a parked taxi exists in the parking area, generating a boarding route according to the position of the taxi and the real-time position of the user, and generating and feeding back navigation voice according to the boarding route.
In a possible implementation manner, when determining a target lane from the candidate lanes according to the user demand information and the road condition information, the navigation module is specifically configured to:
when the user demand information is the waiting safety degree, determining the target lane from the candidate lanes whose road marking information indicates that the separation between the non-motor lane and the motor lane is a lane line;
when the user demand information is the riding efficiency, taking the candidate lane with the minimum distance information as the target lane;
and when the user demand information is the destination arrival efficiency, taking the candidate lane with the lowest congestion degree of the road as the target lane.
In a third aspect, the present application provides an electronic device, which adopts the following technical solutions:
an electronic device, comprising:
at least one processor;
a memory;
at least one application, wherein the at least one application is stored in the memory and configured to be executed by the at least one processor, the at least one application being configured to execute the assisted travel navigation method described above.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer-readable storage medium storing a computer program that can be loaded by a processor and that, when executed, implements the assisted travel navigation method described above.
To sum up, the application provides the following beneficial technical effects: at least one candidate lane is screened from a known map according to the input riding command and the position from which the user issued it; the road marking information, traffic-flow information, and distance information of each road are determined; the road markings indicate where the blind user can board; lanes can be screened by waiting safety, by road congestion for destination-arrival efficiency, or by distance for riding efficiency; and after the blind user's demand is determined, the target lane is screened accordingly and the user is guided to the target lane to board, improving the diversity and convenience of blind users' travel.
Drawings
Fig. 1 is a schematic flow chart of an assisted travel navigation method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating the assisted travel navigation method according to the embodiment of the present application after feeding back a target lane;
FIG. 3 is a schematic illustration of a multilane in an embodiment of the present application;
FIG. 4 is a schematic diagram of a plurality of candidate lanes according to an embodiment of the present application;
fig. 5 is a block diagram of an assisted travel navigation device according to an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to fig. 1.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship, unless otherwise specified.
The embodiment of the application provides an auxiliary travel navigation method, which is executed by blind glasses, and with reference to fig. 1, the method comprises the following steps:
and step S10, acquiring starting information.
The smart glasses for the blind are worn by the user and are provided with at least one camera, at least one processor, and a memory. The cameras collect images of the external environment; for example, a main camera may be arranged on the front of the lens frame, and a camera may be arranged on each temple to achieve all-round framing of the blind user's surroundings.
The start information represents a riding instruction input by the user. It can be input via a start key arranged on the glasses and electrically connected to the processor: the blind user presses the start key to input the start information. Alternatively, a voice interaction module electrically connected to the processor is arranged on the glasses, and the user inputs a voice instruction such as "hello, take a taxi"; the voice interaction module parses the voice instruction, and the parsed result is the start information.
And step S20, acquiring the request position information of the user.
The user's request position information represents the position where the user input the riding command, i.e., the real-time position of the glasses when the start information was acquired; it may be obtained, for example, via a GPS positioning device arranged on the glasses.
And step S30, determining at least one candidate lane according to the request position information.
When the user needs to hail a vehicle, regardless of whether the request position is in a residential area, a shopping mall, or outdoors, the user must walk to a roadway to wait for a vehicle. Determining at least one candidate lane allows the user to select a lane to wait at according to his or her needs.
And step S40, determining road condition information of each candidate lane.
The road condition information includes road marking information, traffic-flow information, and distance information. Specifically, the road marking information represents the type of separation between the non-motor lane and the motor lane of a candidate lane, the traffic-flow information represents the degree of congestion of the candidate lane, and the distance information represents the distance between the candidate lane and the request position. In particular, the separation between the non-motor lane and the motor lane may be a non-physical lane line or a physical separator such as a guardrail or green belt.
The road marking information can be judged from a panoramic (street-view) image of the candidate lane retrieved from map software. The panoramic image is an image of the lane collected at a historical time point and reveals the lane type of the candidate lane: for example, whether the road has three or five lanes, the type of separation between motor and non-motor traffic, whether parking is prohibited, and so on. The traffic-flow information can be obtained by querying the real-time road conditions of the candidate lane in the map software, and the distance information by querying the candidate lane's position.
And step S50, acquiring user demand information, determining a target lane from a plurality of candidate lanes according to the user demand information and the road condition information, and feeding back the target lane.
The user demand information comprises any one of waiting safety, riding efficiency, and destination-arrival efficiency. Among the candidate lanes, one may be slightly farther from the user but unobstructed; another may offer many possible stopping points, enlarging the area where a taxi can stop so that a spot with few obstacles can be chosen, improving the blind user's waiting safety; and another may simply be the closest, letting a tired user board quickly. Depending on the same user's choices at different times and under different conditions, a target lane meeting the user's current demand can be selected. Specifically, the user demand information may be acquired by parsing the user's speech through voice interaction, or through separate demand keys arranged on the glasses. Further, first navigation information is generated based on the fed-back target lane and the user's real-time position, and the blind user is guided to the target lane by the first navigation information.
In a specific embodiment, the step S50 determining a target lane from a plurality of candidate lanes according to the user requirement information and the road condition information includes:
if the user demand information is the waiting safety degree, the target lane is determined from the candidate lanes whose road marking information indicates that the separation between the non-motor lane and the motor lane is a lane line; if several candidate lanes with non-physical separators exist, the one closest to the user is pushed.
If the user demand information is the riding efficiency, the candidate lane with the smallest distance information is taken as the target lane. Specifically, since roads differ in length and in the areas available for boarding, the distance may be calculated using, as the destination, the end point of each road closest to the request position, or the midpoint of the road in combination with its length, or a taxi taking area already determined within the candidate lane.
And if the user demand information is the destination-arrival efficiency, the candidate lane with the lowest road congestion is taken as the target lane. The road congestion information may be acquired and refreshed at a preset frequency.
In a particular embodiment, the method further comprises: and if the user requirement information is not detected, taking the candidate lane with the minimum distance information as the target lane.
In a specific embodiment, the method further includes step S60, and referring to fig. 2, the step S60 is disposed after the step S50 of feeding back the target lane, and specifically includes:
and step S161, determining the driving area of the target lane. The taxi taking area is an area where a user can wait and take a bus.
And S162, acquiring a field image corresponding to the taxi taking area. The scene image is an image which is shot by a camera on the blind person glasses and contains a part of or the whole taxi taking area after the user arrives near the taxi taking area.
And step S163, determining the taxi taking position within the taxi taking area according to the live image. The taxi taking position is an area, determined within the taxi taking area, that is safe for the blind user. Second navigation information is generated from the blind user's real-time position and the taxi taking position, and the user is guided to the taxi taking position to wait by playing the second navigation information as voice.
And step S164, determining a parking area according to the taxi taking position. The parking area is an area where a taxi can stop, formed from the taxi taking position according to preset parameters; after the blind user performs the hailing action, whether a vehicle has responded to the hailing instruction can be judged by whether an unloaded taxi is located within the parking area.
And S165, determining a riding mode according to the start information, the riding mode comprising real-time calling and preset (reserved) calling. To select the riding mode, two riding keys may be arranged on the glasses, one for real-time calling and one for preset calling. Alternatively, if the start information is generated from a voice instruction input by the user, an instruction such as "hello, take a taxi" represents real-time calling, while "hello, reserve a taxi" represents preset calling, in which case third-party ride-hailing software is accessed to call a vehicle. In a particular embodiment, the method further comprises: if the riding mode is a preset call, acquiring the voice instruction input by the user and identifying the destination information in it; and judging whether at least two pieces of destination information match, and if so, generating confirmation prompt information. When the user connects to a third-party ride-hailing platform through the glasses, homophones may occur in the voice input; the information for each matching destination is retrieved from the platform and fed back to the user, the prompt information including the road name, distance, and so on of each destination, so that the blind user can conveniently confirm the intended destination.
Step S166, if the riding mode is real-time calling and the real-time position of the user is located in the taxi taking position, executing a prompting step; the prompting step comprises the following steps: and identifying whether the empty taxi meeting the carrying condition exists in the target lane, and if so, generating second prompt information.
The second prompt information prompts the user to perform the hailing action, and the carrying condition means that the vehicle is travelling in the forward direction. When identifying whether an unloaded taxi exists in the target lane, the judgment range is determined first: in step S50 the target lane is fed back and first navigation information is generated from it, and the blind user, guided by the first navigation information, arrives at the target lane so as to stand on the right side of forward-travelling vehicles.
If the target lane is a single lane, empty taxis are identified among all vehicles driving in the same direction on the user's side of the road. If the target lane is a multi-lane road, referring to fig. 3, the road has 4 lanes and the user stands beside lane 1. Since the user cannot board from lane 3 or lane 4, vehicles in those lanes are excluded from the recognition range; and since vehicles in lane 2 drive in the direction opposite to lane 1, i.e. the reverse direction, it is only necessary to identify whether an empty taxi exists among the vehicles driving in lane 1.
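The lane-filtering reasoning of this paragraph can be sketched roughly as below. Lane records and field names are assumptions, and the rule of keeping only the curbside same-direction lane is a simplification of the fig. 3 example.

```python
def lanes_in_recognition_range(lanes, user_direction):
    """Keep only lanes whose traffic moves in the user's forward direction
    and that the user can actually board from (the curbside lane, taken
    here as the same-direction lane with the smallest index)."""
    same_direction = [l for l in lanes if l["direction"] == user_direction]
    if not same_direction:
        return []
    curbside = min(same_direction, key=lambda l: l["index"])
    return [curbside]
```

With the fig. 3 layout (lane 1 forward beside the user, lanes 2 to 4 reverse), only lane 1 remains in the recognition range.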
Step S167: after the second prompt information is fed back, judging whether an empty taxi has parked in the parking area; if not, continuing to execute the prompting step until an empty taxi parks in the parking area. If no taxi stops, either no empty taxi is currently on the road or the user's hailing was unsuccessful; empty taxis in the target lane are then identified again, and the second prompt information is regenerated to prompt the user to raise a hand.
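Step S167 is essentially a retry loop. A minimal sketch follows, with the detection, parking check, and prompt output abstracted as callables; the max_rounds cap is an added safety assumption, not part of the described method.

```python
def hail_until_parked(detect_empty_taxi, taxi_parked, prompt_user, max_rounds=20):
    """Repeat the prompting step: whenever an empty taxi is seen in the
    target lane, prompt the hailing action; stop once a taxi has parked
    in the parking area (or after max_rounds attempts)."""
    for _ in range(max_rounds):
        if taxi_parked():
            return True  # an empty taxi parked: stop prompting
        if detect_empty_taxi():
            prompt_user("Empty taxi in the target lane, please raise your hand")
    return False
```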
Further, if a parked taxi exists in the parking area, a boarding route is generated according to the taxi's position and the user's real-time position, and navigation voice is generated and fed back according to the boarding route. Guiding the user to the taxi by voice navigation improves convenience of use for blind users.
In a specific embodiment, the blind glasses are provided with an obstacle identification module. From the real-time images captured by the glasses, the module determines the types of obstacles around the blind user and the distance between each obstacle and the user, and generates obstacle-avoidance prompt information accordingly, so that the user can be prompted in real time to avoid obstacles while following the boarding route.
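A hypothetical sketch of the obstacle identification module's output stage: given obstacle types and distances already extracted from the real-time images, it emits one avoidance prompt per obstacle inside a warning radius, nearest first. The threshold and message wording are assumptions.

```python
def obstacle_prompts(detections, warn_distance_m=3.0):
    """Generate one avoidance message per obstacle within the warning
    radius, sorted nearest first (radius is an assumed parameter)."""
    near = [d for d in detections if d["distance_m"] <= warn_distance_m]
    near.sort(key=lambda d: d["distance_m"])
    return [f"{d['type']} {d['distance_m']:.1f} m ahead, please avoid" for d in near]
```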
In a particular embodiment, the method further comprises: after determining the parking area corresponding to the taxi-taking area, judging whether a decelerating taxi exists in the parking area, and if so, generating a third prompt instruction, where the third prompt instruction is used for prompting the user to stop the hailing action. A taxi decelerating near the taxi-taking position is probably about to stop, that is, it is stopping in response to the blind user's hailing action.
Because the separator types between the motor lane and the non-motor lane differ, the taxi-taking area of the target lane is formed differently. Specifically, determining the taxi-taking area of the target lane in step S161 comprises: if the road marking information of the target lane indicates that the separator between the non-motor lane and the motor lane is a solid divider, the taxi-taking area is a connected area in the solid divider; if the road marking information indicates that the separator is a lane line, the taxi-taking area comprises an area corresponding to the edge of the non-motor lane on the side away from the motor lane.
Specifically, the taxi-taking area of the target lane in step S161 may be determined by querying map software for a panoramic image of the target lane, or by analyzing live images captured by the blind glasses near the target lane.
If the road marking information of the target lane indicates that the separator between the non-motor lane and the motor lane is a solid divider, for example an isolation fence or a green belt, the taxi-taking area is a connected area within the solid divider (i.e., an opening in the fence). The connected area can be queried through map software and is defined as the taxi-taking area; non-connected areas cannot serve as taxi-taking areas because a blind user cannot cross a fence or a lawn. Referring to fig. 4, the taxi-taking area is a region whose reference parameters are the length of the connected area (parallel to the driving direction of the road) and the width of the solid divider (the width of the green belt or of the isolation fence).
If the road marking information of the target lane indicates that the separator between the non-motor lane and the motor lane is a lane line (marking line), the line does not obstruct a taxi driver's view, and when a taxi parks temporarily beside it the user can board directly; the taxi-taking area therefore comprises an area corresponding to the edge of the non-motor lane on the side away from the motor lane. Referring to fig. 4, with that edge denoted sideline L, the taxi-taking area is a region taking sideline L as its length and a preset width as its other parameter, covering both sides of sideline L.
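The two branches for determining the taxi-taking area can be condensed into one function. Geometry is simplified to a length-by-width rectangle, and the default widths are illustrative assumptions, not values from the patent.

```python
def taxi_taking_area(separator_type, opening_length=None, sideline_length=None,
                     divider_width=2.0, preset_width=1.5):
    """Return a simple length x width rectangle for the waiting area.

    'solid'     -> the opening (connected area) in the fence/green belt:
                   length of the opening x width of the divider.
    'lane_line' -> a strip along sideline L on the outer edge of the
                   non-motor lane: length of the sideline x preset width.
    """
    if separator_type == "solid":
        return {"length": opening_length, "width": divider_width}
    if separator_type == "lane_line":
        return {"length": sideline_length, "width": preset_width}
    raise ValueError(f"unknown separator type: {separator_type}")
```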
Besides static obstacles such as green plants, trees, and parked vehicles, a number of dynamic obstacles such as pedestrians may exist in the taxi-taking area. To make boarding easier for blind users, the position least affected by obstacles within the whole taxi-taking area is selected as the taxi-taking position, so that the blind user can wait for the vehicle safely.
In a specific embodiment, determining the taxi-taking position within the taxi-taking area based on the live image comprises: a segmentation step of generating at least one divided area in the taxi-taking area according to the live image; a screening step of determining the obstacle density information of each divided area, determining the divided area with the minimum obstacle density as the taxi-taking position, and feeding back the taxi-taking position; and judging whether an update condition is met, and if so, updating the live image according to the currently captured image and executing the segmentation step and the screening step again to update the taxi-taking position and feed back the updated position, where the update condition represents that the number of obstacles in the taxi-taking position is greater than a reference number.
The live image at a given moment comprises the images captured at that moment by all cameras on the smart blind glasses. Specifically, the live image contains the whole or part of the taxi-taking area, and the portion of the taxi-taking area within the shooting field of view is divided into at least one divided area, for example by dividing it equally.
The obstacle density information characterizes the number of obstacles in each divided area and is obtained by counting the obstacles within that area. In a connected area, static obstacles include marble isolation balls and the like, while dynamic obstacles include pedestrians, non-motor vehicles, and motor vehicles stopped temporarily while waiting for passengers to board.
For a taxi-taking area formed around sideline L, the sideline divides the area into an inner region and an outer region. The inner region, close to the motor lane, typically contains static obstacles such as roadside greening trees and other plants, or motor vehicles parked in roadside parking spaces; dynamic obstacles such as pedestrians appear in the outer region. Since green plants are usually planted uniformly along the whole area, the divided areas differ mainly in whether parked motor vehicles and/or walking pedestrians are present. Selecting the divided area with the minimum obstacle density as the taxi-taking position reduces the influence of obstacles on the blind user's boarding.
For a connected area, many users may board at a particular spot within it, so selecting the divided area with the minimum obstacle density as the taxi-taking position reduces interference from other pedestrians. If two or more divided areas share the minimum obstacle density, the divided area closest to the user's real-time position is selected as the taxi-taking position.
Specifically, the user's position changes while travelling, and the flow of dynamic obstacles in the taxi-taking area changes as well. If the number of obstacles in the taxi-taking position determined from the live image at the previous moment exceeds the reference number, that position may no longer be suitable for boarding. The update condition is then met: the live image is updated from the currently captured image, the segmentation step and the screening step are executed again, and the updated taxi-taking position is fed back, achieving updating of the taxi-taking position according to live conditions.
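The segmentation, screening, and update steps described above can be sketched as a small pipeline. Obstacles are reduced to points with a coordinate x along the road; the even split, the tie-break by distance to the user, and the reference-number check come from the text, while all names and defaults are assumptions.

```python
def segment_area(area_length, n_segments):
    """Segmentation step: evenly divide the taxi-taking area along the road."""
    step = area_length / n_segments
    return [(i * step, (i + 1) * step) for i in range(n_segments)]


def obstacle_density(segment, obstacles):
    """Count obstacles whose road coordinate falls inside the segment."""
    lo, hi = segment
    return sum(1 for o in obstacles if lo <= o["x"] < hi)


def pick_taxi_taking_position(segments, obstacles, user_x):
    """Screening step: segment with the fewest obstacles; ties broken by
    the segment whose centre is closest to the user's real-time position."""
    def key(seg):
        centre = (seg[0] + seg[1]) / 2
        return (obstacle_density(seg, obstacles), abs(centre - user_x))
    return min(segments, key=key)


def needs_update(position, obstacles, reference_number=3):
    """Update condition: the chosen position now holds more obstacles
    than the reference number (reference_number is an assumed default)."""
    return obstacle_density(position, obstacles) > reference_number
```

When `needs_update` is true, the caller refreshes the live image and reruns `segment_area` and `pick_taxi_taking_position` on the new detections.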
The above embodiments describe the assisted travel navigation method from the perspective of the method flow. Referring to fig. 5, the following embodiment describes an assisted travel navigation device from the perspective of virtual modules or virtual units, as detailed below.
An assisted travel navigation device comprising:
a first obtaining module 1001, configured to obtain start information, where the start information represents a riding instruction input by a user;
the second obtaining module 1002 is configured to obtain requested position information of a user, where the requested position information of the user represents position information of a position where the user inputs a riding command;
a first analysis module 1003 for determining at least one candidate lane according to the requested location information;
a second analysis module 1004, configured to determine road condition information of each candidate lane, where the road condition information includes road marking information, traffic flow information, and distance information;
where the road marking information represents the separator type between the non-motor lane and the motor lane in the candidate lane, the traffic flow information represents the congestion degree of the road, and the distance information represents the distance between the candidate lane and the requested position;
the navigation module 1005 is configured to acquire user demand information, determine a target lane from the multiple candidate lanes according to the user demand information and the road condition information, and feed back the target lane;
the user demand information comprises any one of waiting safety, riding efficiency and destination arriving efficiency.
In a specific embodiment, the navigation device further comprises a field analysis module for:
determining a taxi-taking area of the target lane;
acquiring a live image corresponding to the taxi-taking area;
determining a taxi-taking position in the taxi-taking area according to the live image;
determining a parking area according to the taxi taking position;
determining a riding mode according to the starting information, wherein the riding mode comprises real-time calling and reserved calling;
if the riding mode is real-time calling and the real-time position of the user is located in the taxi taking position, executing a prompting step; the prompting step comprises the following steps: identifying whether a no-load taxi meeting carrying conditions exists in the target lane, and if so, generating second prompt information;
and after the second prompt information is fed back, judging whether the unloaded taxi is parked in the parking area, if not, continuing to execute the prompt step until the unloaded taxi is parked in the parking area.
In a specific embodiment, the field analysis module, when determining the taxi taking area of the target lane, is specifically configured to:
if the road marking information of the target lane indicates that the separator type between the non-motor lane and the motor lane is a solid divider, the taxi-taking area is a connected area in the solid divider;
if the road marking information of the target lane indicates that the separator type between the non-motor lane and the motor lane is a lane line, the taxi-taking area comprises an area corresponding to the edge of the non-motor lane on the side away from the motor lane.
In a specific embodiment, the field analysis module, when determining the taxi-taking location within the taxi-taking area based on the field image, is specifically configured to:
generating at least one segmentation area in the taxi taking area according to the live image;
and determining the density information of the obstacles in each divided area, determining the divided area with the minimum density information of the obstacles as a taxi taking position, and feeding back the taxi taking position.
In a specific embodiment, the field analysis module is further configured to, after feeding back the taxi taking location: judging whether an updating condition is met, if so, updating the field image according to the currently acquired image, executing a segmentation step and a screening step to update the taxi taking position, and feeding back the updated taxi taking position; and the updating condition represents that the number of the obstacles in the taxi taking position is larger than the reference number.
In a particular embodiment, the field analysis module is further configured to: when a parked taxi exists in the parking area, a boarding route is generated according to the position of the taxi and the real-time position of the user, and navigation voice is generated and fed back according to the boarding route.
In a specific embodiment, when determining a target lane from a plurality of candidate lanes according to the user requirement information and the road condition information, the navigation module 1005 is specifically configured to:
when the user demand information is the waiting safety degree, determining the target lane from the candidate lanes whose road marking information indicates that the separator type between the non-motor lane and the motor lane is a lane line;
when the user demand information is the riding efficiency, taking the candidate lane with the minimum distance information as the target lane;
and when the user demand information is the destination arrival efficiency, taking the candidate lane with the lowest road congestion degree as the target lane.
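The navigation module's demand-to-lane mapping can be sketched as one selection rule per demand type. Field names are assumptions, and the distance tie-break in the waiting-safety branch is an added assumption, since the text does not specify how to choose among several lane-line candidates.

```python
def select_target_lane(candidates, demand):
    """Pick the target lane according to the user demand information."""
    if demand == "waiting_safety":
        # Prefer lanes separated from the non-motor lane only by a lane
        # line; among them pick the closest (assumed tie-break).
        safe = [c for c in candidates if c["separator"] == "lane_line"]
        pool = safe or candidates
        return min(pool, key=lambda c: c["distance_m"])
    if demand == "riding_efficiency":
        return min(candidates, key=lambda c: c["distance_m"])
    if demand == "arrival_efficiency":
        return min(candidates, key=lambda c: c["congestion"])
    raise ValueError(f"unknown demand: {demand}")
```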
The embodiment of the present application further describes an electronic device from the perspective of a physical apparatus. As shown in fig. 6, the electronic device 1100 includes a processor 110 and a memory 130, the processor 110 being connected to the memory 130, for example via a bus 120. Optionally, the electronic device 1100 may further include a transceiver 140. Note that in practical applications the number of transceivers 140 is not limited to one, and the structure of the electronic device 1100 does not limit the embodiments of the present application.
The processor 110 may be a CPU (central processing unit), a general-purpose processor, a DSP (digital signal processor), an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 110 may also be a combination of computing components, for example one or more microprocessors, or a DSP combined with a microprocessor.
Bus 120 may include a path that conveys information between the aforementioned components. The bus 120 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 120 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
The memory 130 may be a ROM (read-only memory) or other type of static storage device capable of storing static information and instructions; a RAM (random access memory) or other type of dynamic storage device capable of storing information and instructions; an EEPROM (electrically erasable programmable read-only memory); a CD-ROM or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like); a magnetic disk storage medium or other magnetic storage device; or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but it is not limited to these.
The memory 130 is used for storing application program codes for executing the scheme of the present application, and is controlled by the processor 110 to execute. The processor 110 is configured to execute application program code stored in the memory 130 to implement the aspects illustrated in the foregoing method embodiments.
Electronic devices include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers, and servers. The electronic device shown in fig. 6 is only an example and should not limit the functions or scope of use of the embodiments of the present application.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not restricted to a strict order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily in sequence; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present application, and such modifications and improvements shall also fall within the protection scope of the present application.

Claims (10)

1. An assisted travel navigation method is characterized by comprising the following steps:
acquiring starting information, wherein the starting information represents a riding command input by a user;
acquiring request position information of a user, wherein the request position information of the user represents position information of a position where the user inputs the riding command;
determining at least one candidate lane according to the requested position information;
determining road condition information of each candidate lane, wherein the road condition information comprises road marking information, traffic flow information and distance information;
wherein the road marking information characterizes the separator type between a non-motor lane and a motor lane in the candidate lane, the traffic flow information characterizes the congestion degree of the road, and the distance information characterizes the distance between the candidate lane and the requested position;
acquiring user demand information, determining a target lane from a plurality of candidate lanes according to the user demand information and the road condition information, and feeding back the target lane;
the user demand information comprises any one of waiting safety, riding efficiency and destination arriving efficiency.
2. The assisted travel navigation method according to claim 1, wherein after the feeding back of the target lane, the method further comprises:
determining a taxi-taking area of the target lane;
acquiring a live image corresponding to the taxi-taking area;
determining a taxi-taking position in the taxi-taking area according to the live image;
determining a parking area according to the taxi-taking position;
determining a riding mode according to the starting information, wherein the riding mode comprises real-time calling and reserved calling;
if the riding mode is real-time calling and the real-time position of the user is located at the taxi-taking position, executing a prompting step, the prompting step comprising: identifying whether an empty taxi meeting a carrying condition exists in the target lane, and if so, generating second prompt information;
and after the second prompt information is fed back, judging whether an empty taxi has parked in the parking area, and if not, continuing to execute the prompting step until an empty taxi parks in the parking area.
3. The assisted travel navigation method according to claim 2, wherein the determining of the taxi-taking area of the target lane comprises:
if the road marking information of the target lane indicates that the separator type between the non-motor lane and the motor lane is a solid divider, the taxi-taking area is a connected area in the solid divider;
and if the road marking information of the target lane indicates that the separator type between the non-motor lane and the motor lane is a lane line, the taxi-taking area comprises an area corresponding to the edge of the non-motor lane on the side away from the motor lane.
4. The assisted travel navigation method according to claim 2, wherein the determining of the taxi-taking position in the taxi-taking area according to the live image comprises:
a segmentation step of generating at least one divided area in the taxi-taking area according to the live image;
and a screening step of determining obstacle density information of each divided area, determining the divided area with the minimum obstacle density as the taxi-taking position, and feeding back the taxi-taking position.
5. The assisted travel navigation method according to claim 4, wherein after the feeding back of the taxi-taking position, the method further comprises:
judging whether an update condition is met, and if so, updating the live image according to the currently captured image, executing the segmentation step and the screening step to update the taxi-taking position, and feeding back the updated taxi-taking position; wherein the update condition characterizes that the number of obstacles in the taxi-taking position is greater than a reference number.
6. The assisted travel navigation method according to claim 2, further comprising:
if a parked taxi exists in the parking area, generating a boarding route according to the position of the taxi and the real-time position of the user, and generating and feeding back navigation voice according to the boarding route.
7. The assisted travel navigation method according to claim 1, wherein the determining of a target lane from the plurality of candidate lanes according to the user demand information and the road condition information comprises:
if the user demand information is the waiting safety degree, determining the target lane from the candidate lanes whose road marking information indicates that the separator type between the non-motor lane and the motor lane is a lane line;
if the user demand information is the riding efficiency, taking the candidate lane with the minimum distance information as the target lane;
and if the user demand information is the destination arrival efficiency, taking the candidate lane with the lowest road congestion degree as the target lane.
8. An assisted travel navigation device, comprising:
the first acquisition module is used for acquiring starting information, and the starting information represents a riding command input by a user;
the second acquisition module is used for acquiring the request position information of the user, wherein the request position information of the user represents the position information of the position where the user inputs the riding command;
a first analysis module for determining at least one candidate lane according to the requested location information;
the second analysis module is used for determining the road condition information of each candidate lane, the road condition information comprising road marking information, traffic flow information and distance information;
wherein the road marking information characterizes the separator type between a non-motor lane and a motor lane in the candidate lane, the traffic flow information characterizes the congestion degree of the road, and the distance information characterizes the distance between the candidate lane and the requested position;
the navigation module is used for acquiring user demand information, determining a target lane from a plurality of candidate lanes according to the user demand information and the road condition information, and feeding back the target lane;
the user demand information comprises any one of waiting safety, riding efficiency and destination arriving efficiency.
9. Glasses for the blind, characterized in that they comprise:
at least one camera;
at least one processor;
a memory;
at least one application, wherein the at least one application is stored in the memory and configured to be executed by the at least one processor, the at least one application configured to: executing the assisted travel navigation method according to any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed on a computer, causes the computer to execute the method of assisted travel navigation according to any one of claims 1 to 7.
CN202111667828.9A 2021-12-30 2021-12-30 Travel navigation assisting method and device, blind glasses and computer medium Active CN114323046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667828.9A CN114323046B (en) 2021-12-30 2021-12-30 Travel navigation assisting method and device, blind glasses and computer medium


Publications (2)

Publication Number Publication Date
CN114323046A true CN114323046A (en) 2022-04-12
CN114323046B CN114323046B (en) 2022-09-16

Family

ID=81020910


Country Status (1)

Country Link
CN (1) CN114323046B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008111842A (en) * 2007-11-12 2008-05-15 Navitime Japan Co Ltd Riding position guidance system and terminal, route search server, and program
CN103198653A (en) * 2013-04-23 2013-07-10 杭州九树网络科技有限公司 Intelligent taxi calling system and application method
CN109115237A (en) * 2018-08-27 2019-01-01 北京优酷科技有限公司 A kind of position recommended method and server of riding
CN109313846A (en) * 2017-03-02 2019-02-05 北京嘀嘀无限科技发展有限公司 System and method for recommending to get on the bus a little
CN110522617A (en) * 2019-09-05 2019-12-03 张超 Blind person's wisdom glasses
US20200333146A1 (en) * 2018-01-08 2020-10-22 Via Transportation, Inc. Assigning on-demand vehicles based on eta of fixed-line vehicles
CN111811528A (en) * 2019-12-12 2020-10-23 北京嘀嘀无限科技发展有限公司 Control area-based pick-up and delivery driving method and device, electronic equipment and storage medium
CN111861643A (en) * 2020-06-30 2020-10-30 北京嘀嘀无限科技发展有限公司 Riding position recommendation method and device, electronic equipment and storage medium
CN111862589A (en) * 2020-01-13 2020-10-30 北京嘀嘀无限科技发展有限公司 High-capacity lane determining method and device
CN113407871A (en) * 2021-06-21 2021-09-17 北京畅行信息技术有限公司 Boarding point recommendation method and device, electronic equipment and readable storage medium


Also Published As

Publication number Publication date
CN114323046B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
KR102524716B1 (en) Vehicle track prediction method and device, storage medium and terminal device
RU2719495C2 (en) Method and device for driving assistance
CN110692094B (en) Vehicle control apparatus and method for control of autonomous vehicle
JP4613881B2 (en) Parking guidance device
WO2015115159A1 (en) Automatic driving assistance device, automatic driving assistance method, and program
RU2746684C1 (en) Parking control method and parking control equipment
CN107430817A (en) Path searching apparatus, control method, program and storage medium
JP6221874B2 (en) Automatic driving support device, automatic driving support method and program
CN1737876A (en) Navigation device, method and programme for guiding way
CN108058707B (en) Information display device, information display method, and recording medium for information display program
CN110228485B (en) Driving assistance method, system, device and storage medium for fatigue driving
CN115497331B (en) Parking method, device and equipment and vehicle
CN114115204A (en) Management device, management system, management method, and storage medium
JP7233386B2 (en) Map update device, map update system, and map update method
CN113407871B (en) Get-on point recommendation method and device, electronic equipment and readable storage medium
CN114323046B (en) Travel navigation assisting method and device, blind glasses and computer medium
US11719553B2 (en) Spotfinder
JP7136538B2 (en) electronic device
CN111583584A (en) Intelligent walking stick safety warning system, warning method and intelligent walking stick
CN114987533A (en) Mobile body control system, control method, and storage medium
JP2007271550A (en) Route guide system and route guide method
JP2022129013A (en) Mobile body control system, mobile body, control method, and program
CN115131965B (en) Vehicle control method, device, system, electronic equipment and storage medium
JP7243669B2 (en) Autonomous driving system
JP7372365B2 (en) Control device, control method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant