CN113205088A - Obstacle image presentation method, electronic device, and computer-readable medium


Info

Publication number
CN113205088A
Authority
CN
China
Prior art keywords
obstacle
information
static
dynamic
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110759663.1A
Other languages
Chinese (zh)
Other versions
CN113205088B (en)
Inventor
张京
谷焓
倪凯
Current Assignee
Heduo Technology Guangzhou Co ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202110759663.1A
Publication of CN113205088A
Application granted
Publication of CN113205088B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation


Abstract

Embodiments of the present disclosure disclose an obstacle image presentation method, an electronic device, and a computer-readable medium. One embodiment of the method comprises: acquiring an image group shot by a vehicle-mounted camera; splicing each image in the image group to obtain a spliced image; inputting the spliced image into a preset visual detection model to obtain a dynamic obstacle detection information group, a static obstacle detection information group and travelable area detection information; adjusting the dynamic obstacle detection information group and the static obstacle detection information group to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group; generating a collision prediction information group based on the dynamic obstacle adjustment information group and the static obstacle adjustment information group; generating an obstacle display information group and a driver operation prompt information group based on the collision prediction information group and the travelable area detection information; and displaying the spliced image based on the obstacle display information group and the driver operation prompt information group. This embodiment can improve the safety of vehicle driving.

Description

Obstacle image presentation method, electronic device, and computer-readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method for displaying an image of an obstacle, an electronic device and a computer-readable medium.
Background
Obstacle image display is a technique for presenting the obstacles around a vehicle to the driver in image form. At present, the method generally adopted for displaying an obstacle image is as follows: first, an image captured by a vehicle-mounted camera is recognized to generate a detection frame for each obstacle; then, the image with the obstacle detection frames is displayed on the vehicle's display terminal.
However, when the obstacle image display is performed in the above manner, there are often the following technical problems:
The specific characteristics of an obstacle (for example, whether it is a dynamic obstacle or a static obstacle) are not identified, so the potential danger level of the obstacle relative to the current vehicle cannot be determined. As a result, a more detailed obstacle image cannot be displayed at the vehicle display terminal in time, and the displayed obstacle image cannot convey more detailed obstacle information to the user. This leaves a number of potential safety hazards and reduces the safety of vehicle driving.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an obstacle image presentation method, an electronic device, and a computer-readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method of presenting an image of an obstacle, the method comprising: acquiring an image group shot by a vehicle-mounted camera; splicing each image in the image group to obtain a spliced image; inputting the spliced image into a preset visual detection model to obtain a dynamic obstacle detection information group, a static obstacle detection information group and travelable area detection information; adjusting the dynamic obstacle detection information group and the static obstacle detection information group to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group; generating a collision prediction information group based on the dynamic obstacle adjustment information group and the static obstacle adjustment information group; generating an obstacle display information group and a driver operation prompt information group based on the collision prediction information group and the travelable region detection information; and displaying the spliced image based on the obstacle display information group and the driver operation prompt information group.
In a second aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a third aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: by the obstacle image display method of some embodiments of the present disclosure, specific characteristics of an obstacle (for example, whether the obstacle is a dynamic obstacle or a static obstacle) can be displayed, reducing potential safety hazards and improving the safety of vehicle driving. Specifically, the main potential safety hazards arise because the specific characteristics of the obstacle are not identified, so the potential danger level of the obstacle relative to the current vehicle cannot be determined; as a result, a more detailed obstacle image cannot be displayed at the vehicle display terminal in time, and the displayed obstacle image cannot convey more detailed obstacle information to the user. Based on this, the obstacle image display method of some embodiments of the present disclosure first addresses the case where specific characteristics of an obstacle cannot be identified by introducing a preset visual detection model, which can distinguish whether an obstacle is dynamic or static. Next, to improve the accuracy of the generated dynamic and static obstacle detection information, the dynamic obstacle detection information group and the static obstacle detection information group are adjusted. Finally, an obstacle presentation information group and a driver operation prompt information group are generated.
These prompts can be used to remind the driver to avoid different types of obstacles. Compared with the common method that only displays an image with obstacle detection frames, this method not only identifies and displays the specific characteristics of each obstacle but also gives corresponding driver operation prompts for obstacles with different characteristics. Driving accidents caused by driver inattention can thus be avoided, potential safety hazards are reduced, and the safety of vehicle driving is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a schematic view of an application scenario of the obstacle image presentation method of some embodiments of the present disclosure;
Fig. 2 is a flow diagram of some embodiments of an obstacle image presentation method according to the present disclosure;
Fig. 3 is a flow diagram of further embodiments of an obstacle image presentation method according to the present disclosure;
Fig. 4 is a schematic block diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic view of an application scenario of the obstacle image display method of some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may acquire an image group 102 captured by a vehicle-mounted camera. Next, the computing device 101 may stitch the respective images in the image group 102 to obtain a stitched image 103. Then, the computing device 101 may input the above-mentioned stitched image 103 into a preset visual detection model 104, resulting in a dynamic obstacle detection information set 105, a static obstacle detection information set 106, and travelable area detection information 107. Thereafter, the computing device 101 may adjust the dynamic obstacle detection information set 105 and the static obstacle detection information set 106 to obtain a dynamic obstacle adjustment information set 108 and a static obstacle adjustment information set 109. Then, the computing device 101 may generate the collision prediction information set 110 based on the above dynamic obstacle adjustment information set 108 and the above static obstacle adjustment information set 109. Then, the computing device 101 may generate the obstacle presentation information group 111 and the driver operation prompt information group 112 based on the above-described collision prediction information group 110 and the above-described travelable region detection information 107. Finally, the computing device 101 may display the stitched image 103 based on the obstacle display information group 111 and the driver operation prompt information group 112.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in any of the hardware devices listed above and may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is imposed here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of an obstacle image presentation method according to the present disclosure is shown. The process 200 of the obstacle image display method includes the following steps:
in step 201, an image group captured by a vehicle-mounted camera is acquired.
In some embodiments, the execution body of the obstacle image presentation method (e.g., the computing device 101 shown in fig. 1) may acquire the image group captured by the vehicle-mounted cameras in a wired or wireless manner. There may be a plurality of (e.g., 4) vehicle-mounted cameras. The image group represents the images of the vehicle's surroundings captured by the respective vehicle-mounted cameras at the same moment in time.
And step 202, splicing the images in the image group to obtain a spliced image.
In some embodiments, the execution body may stitch the images in the image group to obtain a stitched image. The stitching may be performed by combining the images of the image group into a single two-dimensional image viewed from above, which serves as the stitched image.
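The patent does not specify the stitching algorithm. Assuming each camera image has already been warped to the ground plane upstream (e.g., by a per-camera homography, which is not described in the source), compositing the warped tiles into a shared top-down canvas might look like this sketch; all names are illustrative:

```python
import numpy as np

def stitch_to_top_down(images, positions, canvas_shape=(400, 400, 3)):
    """Paste each already top-down-warped camera image onto a shared
    bird's-eye-view canvas at its camera's (row, col) offset.
    The per-camera warping itself is assumed to have happened upstream."""
    canvas = np.zeros(canvas_shape, dtype=np.uint8)
    for img, (row, col) in zip(images, positions):
        h, w = img.shape[:2]
        canvas[row:row + h, col:col + w] = img
    return canvas
```

A real system would additionally blend overlapping regions; this sketch simply pastes non-overlapping tiles.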
And 203, inputting the spliced image into a preset visual detection model to obtain a dynamic obstacle detection information group, a static obstacle detection information group and travelable area detection information.
In some embodiments, the execution body may input the stitched image into a preset visual detection model to obtain a dynamic obstacle detection information group, a static obstacle detection information group, and travelable area detection information. The preset visual detection model may be a pre-trained deep convolutional neural network or a Hopfield neural network. The dynamic obstacle detection information in the dynamic obstacle detection information group may include: an obstacle detection number, the name of the dynamic obstacle (e.g., a pedestrian, a bicycle, or a dog), the moving direction of the dynamic obstacle, the moving speed of the dynamic obstacle, and a frame coordinate value sequence of the dynamic obstacle. The static obstacle detection information in the static obstacle detection information group may include: an obstacle detection number, the name of the static obstacle (e.g., a stone, a manhole cover, a nail, or a road surface depression), the size of the static obstacle, and a frame coordinate value sequence of the static obstacle. The travelable region detection information may include a sequence of road coordinate values indicating the normally travelable road. For example, when a lane change is impossible, the travelable region detection information may express the current road on which the vehicle is located as a coordinate value sequence.
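The detection information fields listed above can be sketched as plain data structures. Field names, types, and the heading encoding are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DynamicObstacleDetection:
    detection_id: int                        # obstacle detection number
    name: str                                # e.g. "pedestrian", "bicycle", "dog"
    moving_direction: float                  # heading in radians (assumed encoding)
    speed: float                             # moving speed, m/s (assumed unit)
    frame_coords: List[Tuple[float, float]]  # frame coordinate value sequence

@dataclass
class StaticObstacleDetection:
    detection_id: int                        # obstacle detection number
    name: str                                # e.g. "stone", "manhole cover"
    size: float                              # scalar size (units unspecified)
    frame_coords: List[Tuple[float, float]]  # frame coordinate value sequence
```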
And 204, adjusting the dynamic obstacle detection information group and the static obstacle detection information group to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group.
In some embodiments, the execution body may adjust the dynamic obstacle detection information group and the static obstacle detection information group to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group. The dynamic obstacle detection information group and the static obstacle detection information group may be adjusted by the following steps:
in a first step, a set of obstacle information detected by the lidar may be acquired. The obstacle information in the obstacle information group may include an obstacle detection number or an obstacle detection number, and a frame coordinate value sequence of a dynamic obstacle or a static obstacle.
In the second step, the obstacle information having the same obstacle detection number and the dynamic obstacle detection information or the static obstacle detection information may be determined to be in a corresponding relationship.
In a third step, for each pair of corresponding coordinate values in the frame coordinate value sequences included in the obstacle information and in the corresponding dynamic obstacle detection information or static obstacle detection information, the average of the two values may be determined as the adjusted coordinate value, yielding an adjusted frame coordinate value sequence. In this way, the dynamic obstacle detection information group and the static obstacle detection information group can be adjusted to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group.
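Under one plausible reading of the averaging step above (each camera-derived coordinate is averaged with the lidar-derived coordinate for the same detection number), the adjustment might be sketched as follows; the function name is hypothetical:

```python
def fuse_frame_coords(vision_coords, lidar_coords):
    """Average each corresponding pair of frame coordinate values from the
    camera detection and the matched lidar detection (same detection number).
    Both inputs are sequences of (x, y) tuples of equal length."""
    return [((vx + lx) / 2.0, (vy + ly) / 2.0)
            for (vx, vy), (lx, ly) in zip(vision_coords, lidar_coords)]
```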
Step 205 generates a collision prediction information set based on the dynamic obstacle adjustment information set and the static obstacle adjustment information set.
In some embodiments, the execution body may generate the collision prediction information set based on the dynamic obstacle adjustment information set and the static obstacle adjustment information set. Wherein the speed value and the vehicle width value of the vehicle may be determined first. Then, for each dynamic obstacle adjustment information in the above dynamic obstacle adjustment information group, the following steps are performed:
the moving direction and the velocity value of the dynamic obstacle included in the dynamic obstacle adjustment information and the velocity value of the vehicle may be input to a preset vehicle kinematics model, so as to obtain a predicted collision duration of the vehicle and the dynamic obstacle. The predicted collision duration may be used to characterize that the vehicle will have a risk of collision with the dynamic obstacle characterized by the dynamic obstacle adjustment information after the collision duration elapses. And determining the predicted collision duration as collision prediction information corresponding to the dynamic obstacle adjustment information.
For each static obstacle adjustment information in the set of static obstacle adjustment information, performing the following steps:
the speed value and the width value of the vehicle can be input into the vehicle dynamics model to obtain the vehicle movement prediction coverage area. In response to determining that a distance value between any frame coordinate value in a frame coordinate value sequence of the static obstacle included in the static obstacle adjustment information and the movement prediction coverage area is less than or equal to a preset distance threshold (for example, 5 cm), it is determined that there is a collision risk between the vehicle and the static obstacle represented by the static obstacle adjustment information. Then a static risk indicator may be generated as collision prediction information corresponding to the static obstacle adjustment information described above.
And step 206, generating an obstacle display information group and a driver operation prompt information group based on the collision prediction information group and the travelable area detection information.
In some embodiments, the execution body may generate an obstacle display information group and a driver operation prompt information group based on the collision prediction information group and the travelable region detection information. The collision prediction information in the collision prediction information group may include a predicted collision duration or a static risk identifier.
If the collision prediction information in the collision prediction information group includes a predicted collision duration, the dynamic obstacle frame coordinate value sequence included in the dynamic obstacle adjustment information corresponding to that collision prediction information may be determined as the obstacle display information. In addition, a signal prompting the driver to decelerate and stop may be generated as the driver operation prompt information.
If the collision prediction information in the collision prediction information group includes a static risk identifier, the static obstacle frame coordinate value sequence included in the static obstacle adjustment information corresponding to the static risk identifier may be determined as the obstacle display information. In addition, vehicle driving direction adjustment instruction information may be generated as the driver operation prompt information, such that the driving coverage area of the vehicle lies within the travelable area indicated by the travelable area detection information and the distance value between any frame coordinate value in the static obstacle frame coordinate value sequence included in the static obstacle adjustment information and the predicted motion coverage area is greater than the preset distance threshold. In this way, the obstacle display information group and the driver operation prompt information group can be generated.
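The two branches of step 206 amount to a simple dispatch over the collision prediction information. A sketch, with dictionary keys and prompt labels that are illustrative rather than from the patent:

```python
def generate_prompts(collision_predictions):
    """Map each collision prediction to a pair of
    (obstacle display info, driver operation prompt), following the
    dynamic and static branches of step 206."""
    results = []
    for pred in collision_predictions:
        if "predicted_collision_duration" in pred:
            # dynamic branch: show the obstacle frame, prompt deceleration
            results.append((pred["dynamic_frame_coords"], "decelerate_and_stop"))
        elif pred.get("static_risk"):
            # static branch: show the obstacle frame, prompt a direction change
            results.append((pred["static_frame_coords"], "adjust_driving_direction"))
    return results
```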
And step 207, displaying the spliced image based on the obstacle display information group and the driver operation prompt information group.
In some embodiments, the execution body may display the stitched image based on the obstacle display information group and the driver operation prompt information group. Displaying the stitched image may consist of highlighting, in the displayed stitched image, the obstacle frame coordinate value sequence included in each piece of obstacle display information in the obstacle display information group. In addition, the deceleration-stop signal or vehicle driving direction adjustment instruction information in each piece of driver operation prompt information may be displayed at the position, in the stitched image, of the obstacle represented by the corresponding obstacle display information.
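A minimal sketch of the highlighting step, painting a one-pixel border around each obstacle's bounding box on the stitched image; the rendering style is an assumption, since the patent only says the frames are highlighted:

```python
import numpy as np

def highlight_frames(stitched, frame_coord_sequences, color=(0, 0, 255)):
    """Draw each obstacle's frame onto a copy of the stitched image by
    painting the border of its axis-aligned bounding box."""
    out = stitched.copy()
    for coords in frame_coord_sequences:
        xs = [int(x) for x, _ in coords]
        ys = [int(y) for _, y in coords]
        x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
        out[y0:y1 + 1, x0] = color   # left edge
        out[y0:y1 + 1, x1] = color   # right edge
        out[y0, x0:x1 + 1] = color   # top edge
        out[y1, x0:x1 + 1] = color   # bottom edge
    return out
```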
Optionally, the execution body may further issue a driving operation prompt sound according to the driver operation prompt information group. If the driver operation prompt information includes a signal prompting the driver to decelerate and stop, deceleration-and-stop voice prompt information may be generated. If the driver operation prompt information includes vehicle driving direction adjustment instruction information, direction adjustment voice prompt information may be generated. A voice broadcast may then be performed according to the voice prompt information generated for each piece of driver operation prompt information in the driver operation prompt information group.
As an example, the deceleration parking voice guidance information may be "please decelerate parking". Then, a voice prompt "please slow down to park" may be broadcast.
Optionally, the execution body may further perform the following steps:
the method comprises the steps of firstly, determining relative positioning information between the vehicle at the current moment and each piece of dynamic obstacle detection information in the dynamic obstacle detection information group to obtain a dynamic obstacle relative positioning information set. The relative positioning information may be a variation between a distance value between the dynamic obstacle detection information and the vehicle at a current time and a distance value at a previous time.
And secondly, determining relative positioning information between the vehicle at the current moment and each piece of static obstacle detection information in the static obstacle detection information group to obtain a static obstacle relative positioning information set. The relative positioning information may be a variation between a distance value between the static obstacle detection information and the vehicle at a current time and a distance value at a previous time.
And thirdly, performing motion compensation on the spliced image according to the dynamic obstacle relative positioning information set and the static obstacle relative positioning information set to obtain a motion compensation image. The dynamic obstacle relative positioning information set and the static obstacle relative positioning information set, and the stitched image may be input to a preset image filter to generate a motion compensated image.
And fourthly, marking the obstacle object on the motion compensation image according to the static obstacle adjustment information group and the dynamic obstacle adjustment information group to obtain an obstacle object marking image. The obstacle marking of the motion compensation image may be performed by marking an obstacle region represented by a frame coordinate value sequence included in the static obstacle adjustment information or the dynamic obstacle adjustment information in a highlight color. Therefore, the obstacle marking of the motion compensation image is completed, and an obstacle marking image is obtained.
And fifthly, displaying the obstacle marking image. The obstacle mark image may be displayed on a display terminal of the vehicle. So that the user can view obstacles around the vehicle and blind areas of the field of view in real time.
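The relative positioning information computed in the first two steps above is defined as the frame-to-frame change in each obstacle's distance to the vehicle. A minimal sketch, keyed by obstacle detection number (the dictionary representation is an assumption):

```python
def relative_positioning(dist_now, dist_prev):
    """For each obstacle detection number present at both times, return the
    change in the obstacle-to-vehicle distance between the previous frame
    and the current frame (negative values mean the obstacle is approaching)."""
    return {obstacle_id: dist_now[obstacle_id] - dist_prev[obstacle_id]
            for obstacle_id in dist_now if obstacle_id in dist_prev}
```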
The above embodiments of the present disclosure have the following advantages: by the obstacle image display method of some embodiments of the present disclosure, specific characteristics of an obstacle (for example, whether the obstacle is a dynamic obstacle or a static obstacle) can be displayed, reducing potential safety hazards and improving the safety of vehicle driving. Specifically, the main potential safety hazards arise because the specific characteristics of the obstacle are not identified, so the potential danger level of the obstacle relative to the current vehicle cannot be determined; as a result, a more detailed obstacle image cannot be displayed at the vehicle display terminal in time, and the displayed obstacle image cannot convey more detailed obstacle information to the user. Based on this, the obstacle image display method of some embodiments of the present disclosure first addresses the case where specific characteristics of an obstacle cannot be identified by introducing a preset visual detection model, which can distinguish whether an obstacle is dynamic or static. Next, to improve the accuracy of the generated dynamic and static obstacle detection information, the dynamic obstacle detection information group and the static obstacle detection information group are adjusted. Finally, an obstacle presentation information group and a driver operation prompt information group are generated.
These prompts can be used to remind the driver to avoid different types of obstacles. Compared with the common method that only displays an image with obstacle detection frames, this method not only identifies and displays the specific characteristics of each obstacle but also gives corresponding driver operation prompts for obstacles with different characteristics. Driving accidents caused by driver inattention can thus be avoided, potential safety hazards are reduced, and the safety of vehicle driving is improved.
With further reference to fig. 3, a flow 300 of further embodiments of an obstacle image presentation method is shown. The process 300 of the obstacle image display method includes the following steps:
in step 301, an image group captured by a vehicle-mounted camera is acquired.
And step 302, splicing the images in the image group to obtain a spliced image.
And 303, inputting the spliced image into a preset visual detection model to obtain a dynamic obstacle detection information group, a static obstacle detection information group and travelable area detection information.
In some embodiments, for the specific implementation and technical effects of steps 301 to 303, reference may be made to steps 201 to 203 in the embodiments corresponding to fig. 2, which are not described herein again.
Step 304: acquire a static obstacle distance information set and a dynamic obstacle distance information set output by an ultrasonic radar.
In some embodiments, the executing body of the obstacle image presentation method (e.g., computing device 101 shown in fig. 1) may acquire the static obstacle distance information set and the dynamic obstacle distance information set output by the ultrasonic radar. The static obstacle distance information in the static obstacle distance information set may include a second static obstacle detection distance value, and the dynamic obstacle distance information in the dynamic obstacle distance information set may include a second dynamic obstacle detection distance value. The second static obstacle detection distance value may represent the distance value, detected by the ultrasonic radar at the current time, between the vehicle and the static obstacle represented by the static obstacle detection information. The second dynamic obstacle detection distance value may represent the distance value, detected by the ultrasonic radar at the current time, between the vehicle and the dynamic obstacle represented by the dynamic obstacle detection information. Each piece of static obstacle distance information and each piece of dynamic obstacle distance information may further include an obstacle detection number.
In addition, the dynamic obstacle detection information in the dynamic obstacle detection information set may include a first dynamic obstacle detection distance value, and the static obstacle detection information in the static obstacle detection information group may include a first static obstacle detection distance value. The dynamic obstacle adjustment information in the dynamic obstacle adjustment information set may include a dynamic obstacle distance value, and the static obstacle adjustment information in the static obstacle adjustment information group may include a static obstacle distance value. The first dynamic obstacle detection distance value may represent the distance value between the dynamic obstacle represented by the dynamic obstacle detection information and the vehicle at the current time. The first static obstacle detection distance value may represent the distance value between the static obstacle represented by the static obstacle detection information and the vehicle at the current time. The dynamic obstacle distance value may represent the adjusted first dynamic obstacle detection distance value, and the static obstacle distance value may represent the adjusted first static obstacle detection distance value.
Step 305: fuse the first dynamic obstacle detection distance value included in each piece of dynamic obstacle detection information in the dynamic obstacle detection information set with the second dynamic obstacle detection distance value included in the matching piece of dynamic obstacle distance information in the dynamic obstacle distance information set, to generate the dynamic obstacle distance value included in the corresponding piece of dynamic obstacle adjustment information, so as to obtain a dynamic obstacle adjustment information set.
In some embodiments, the executing body may fuse the first dynamic obstacle detection distance value included in each piece of dynamic obstacle detection information in the dynamic obstacle detection information set with the second dynamic obstacle detection distance value included in the matching piece of dynamic obstacle distance information in the dynamic obstacle distance information set, to generate the dynamic obstacle distance value included in the corresponding dynamic obstacle adjustment information, so as to obtain a dynamic obstacle adjustment information set. Here, two pieces of information match when they include the same obstacle detection number. The fusion may be to determine the mean of the first dynamic obstacle detection distance value and the matched second dynamic obstacle detection distance value as the dynamic obstacle distance value of the dynamic obstacle adjustment information.
Step 306: fuse the first static obstacle detection distance value included in each piece of static obstacle detection information in the static obstacle detection information group with the second static obstacle detection distance value included in the matching piece of static obstacle distance information in the static obstacle distance information set, to generate the static obstacle distance value included in the corresponding piece of static obstacle adjustment information, so as to obtain a static obstacle adjustment information group.
In some embodiments, the executing body may fuse the first static obstacle detection distance value included in each piece of static obstacle detection information in the static obstacle detection information group with the second static obstacle detection distance value included in the matching piece of static obstacle distance information in the static obstacle distance information set, to generate the static obstacle distance value included in the corresponding static obstacle adjustment information, so as to obtain a static obstacle adjustment information group. Here, two pieces of information match when they include the same obstacle detection number. The fusion may be to determine the mean of the first static obstacle detection distance value and the matched second static obstacle detection distance value as the static obstacle distance value of the static obstacle adjustment information.
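The fusion of steps 305 and 306 can be sketched as follows. This is a minimal illustration only, assuming each piece of information is a dict keyed by the obstacle detection number; the field names are illustrative, not fixed by the disclosure:

```python
def fuse_distances(detections, radar_distances):
    """Fuse camera-detected distance values with ultrasonic-radar distance
    values. Records match when they carry the same obstacle detection
    number; the fused (adjusted) distance is the mean of the two values."""
    radar_by_id = {r["detection_number"]: r for r in radar_distances}
    adjustments = []
    for det in detections:
        radar = radar_by_id.get(det["detection_number"])
        if radar is None:
            continue  # no matching radar reading for this obstacle
        fused = (det["distance"] + radar["distance"]) / 2.0
        adjustments.append({"detection_number": det["detection_number"],
                            "distance": fused})
    return adjustments
```

The same routine applies to both the dynamic and the static obstacle groups, since only the distance fields differ.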
Step 307: generate a collision prediction information set based on the dynamic obstacle adjustment information set and the static obstacle adjustment information set.
In some embodiments, the executing body may generate the collision prediction information set based on the dynamic obstacle adjustment information set and the static obstacle adjustment information set. The static obstacle adjustment information in the static obstacle adjustment information group may include static obstacle feature information, and the dynamic obstacle adjustment information in the dynamic obstacle adjustment information set may include dynamic obstacle feature information. The static obstacle feature information may include: a static obstacle name, a static obstacle feature, and a static obstacle size. For example, [ human, still, 1.8 meters ]. The dynamic obstacle feature information may include: a dynamic obstacle name, a dynamic obstacle feature, a dynamic obstacle movement direction, and a dynamic obstacle movement speed. For example, [ dog, moving animal, north direction, 1 meter per second ].
First, predict the movement trajectory of the dynamic obstacle corresponding to each piece of dynamic obstacle adjustment information in the dynamic obstacle adjustment information group, to generate a dynamic obstacle movement trajectory and obtain a dynamic obstacle movement trajectory group. The movement trajectory of each dynamic obstacle may be predicted by a preset Markov model.
Second, predict the movement trajectory of the vehicle to obtain a predicted vehicle movement trajectory. The Markov model may likewise be used to predict the movement trajectory of the vehicle.
Third, for each dynamic obstacle movement trajectory in the dynamic obstacle movement trajectory group and the predicted vehicle movement trajectory, execute the following dynamic processing steps:
In a first substep, determine a predicted distance value between the vehicle and the dynamic obstacle corresponding to the dynamic obstacle movement trajectory within a predetermined time period, based on the predicted vehicle movement trajectory. The position coordinate value of the vehicle at each moment within the predetermined time period (for example, 3 seconds) may be determined from the speed value of the vehicle and the predicted vehicle movement trajectory. Then, the position coordinate value of the dynamic obstacle at each moment within the predetermined time period may be determined from the dynamic obstacle movement trajectory and the corresponding speed value. Next, the distance value between the vehicle and the dynamic obstacle at each moment may be determined. Finally, the minimum of the distance values over the different moments may be determined as the predicted distance value.
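The first substep can be sketched as below, assuming both trajectories have already been sampled at the same instants as (x, y) coordinate pairs. This is a simplification of the Markov-model prediction described above, with illustrative names only:

```python
import math

def predicted_min_distance(vehicle_track, obstacle_track):
    """Return the minimum distance between a predicted vehicle trajectory
    and a predicted dynamic-obstacle trajectory, where both tracks are
    lists of (x, y) positions sampled at the same moments."""
    return min(
        math.hypot(vx - ox, vy - oy)  # Euclidean distance at this moment
        for (vx, vy), (ox, oy) in zip(vehicle_track, obstacle_track)
    )
```

For instance, with a vehicle driving along the x-axis and an obstacle crossing its path, the function returns the closest approach over the prediction horizon.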
In a second substep, in response to determining that the predicted distance value is less than a preset safe distance threshold, determine the risk level of the dynamic obstacle as a first dynamic risk level. The preset safe distance threshold may be, for example, 3 meters. The first dynamic risk level may be used to indicate that the dynamic obstacle at risk of collision with the vehicle is a person or another vehicle.
In a third substep, in response to determining that the dynamic obstacle feature information included in the dynamic obstacle adjustment information corresponding to the dynamic obstacle movement trajectory meets a first preset safety condition, determine the risk level of the dynamic obstacle as a second dynamic risk level. The first preset safety condition may be that the predicted distance value corresponding to the dynamic obstacle movement trajectory is less than the preset safe distance threshold and the dynamic obstacle name included in the dynamic obstacle feature information is an animal, for example, a dog or a cat. The second dynamic risk level may be used to indicate that the dynamic obstacle at risk of collision with the vehicle is an animal.
In some optional implementations of some embodiments, the executing body generating the collision prediction information set based on the dynamic obstacle adjustment information set and the static obstacle adjustment information set may further include:
First, for each piece of static obstacle adjustment information in the static obstacle adjustment information group and the predicted vehicle movement trajectory, execute the following static processing steps:
In a first substep, in response to determining that the static obstacle feature information included in the static obstacle adjustment information and the predicted vehicle movement trajectory satisfy a second preset safety condition, determine the risk level of the static obstacle corresponding to the static obstacle adjustment information as a first static risk level. The second preset safety condition may be that the static obstacle feature included in the static obstacle feature information is a stationary person, and the minimum distance value between the vehicle movement trajectory and the position of the static obstacle represented by the static obstacle feature information is less than the preset safe distance threshold. The first static risk level may be used to indicate that a potential collision risk exists between the vehicle and the stationary person represented by the static obstacle feature information.
In a second substep, in response to determining that the static obstacle feature information included in the static obstacle adjustment information and the predicted vehicle movement trajectory satisfy a third preset safety condition, determine the risk level of the static obstacle corresponding to the static obstacle adjustment information as a second static risk level. The third preset safety condition may be that the static obstacle feature included in the static obstacle feature information is a stationary animal, and the minimum distance value between the vehicle movement trajectory and the position of the static obstacle represented by the static obstacle feature information is less than the preset safe distance threshold. The second static risk level may be used to indicate the potential risk that a stationary animal poses to the vehicle.
In a third substep, in response to determining that the static obstacle feature information included in the static obstacle adjustment information meets a fourth preset safety condition, determine the risk level of the static obstacle corresponding to the static obstacle adjustment information as a third static risk level. The fourth preset safety condition may be that the static obstacle feature included in the static obstacle feature information is an object (for example, a nail, a stone, or a manhole cover), and the static obstacle represented by the static obstacle feature information lies within the movement trajectory of the vehicle. The third static risk level may be used to represent a potential risk caused by a road surface defect located within the movement trajectory of the vehicle.
In a second step, determine the risk level corresponding to each piece of dynamic obstacle adjustment information in the dynamic obstacle adjustment information set and the risk level corresponding to each piece of static obstacle adjustment information in the static obstacle adjustment information group as collision prediction information, so as to generate a collision prediction information set. Each piece of collision prediction information in the collision prediction information group may represent either dynamic obstacle adjustment information and its corresponding risk level, or static obstacle adjustment information and its corresponding risk level.
Step 308: generate an obstacle display information group and a driver operation prompt information group based on the collision prediction information group and the travelable area detection information.
In some embodiments, the executing body generating an obstacle presentation information group and a driver operation prompt information group based on the collision prediction information group and the travelable region detection information may include:
In a first step, for each piece of collision prediction information in the collision prediction information group, execute the following generation steps:
In a first substep, in response to determining that the risk level corresponding to the collision prediction information is a first dynamic risk level, generate first presentation information and first operation prompt information. The first presentation information may include a dynamic obstacle name and a stop signal. The first operation prompt information may be a signal to detour left or right for avoidance. Specifically, for a person or a vehicle at the first dynamic risk level, the vehicle should decelerate to a stop or detour, thereby reducing the potential safety hazard of vehicle driving.
In a second substep, in response to determining that the risk level corresponding to the collision prediction information is a second dynamic risk level, generate first presentation information and second operation prompt information based on the travelable region detection information. The second operation prompt information may be a whistle prompt signal. Specifically, for an animal at the second dynamic risk level, the vehicle should decelerate to a stop and sound the horn to drive the animal away, thereby reducing the potential safety hazard of vehicle driving.
In a third substep, in response to determining that the risk level corresponding to the collision prediction information is a first static risk level, generate first presentation information and third operation prompt information. The third operation prompt information may be voice prompt information, for example, a voice prompt asking the pedestrian to move out of the way.
In a fourth substep, in response to determining that the risk level corresponding to the collision prediction information is a second static risk level, generate first presentation information and second operation prompt information. For a stationary animal at the second static risk level, the vehicle should decelerate and sound the horn to drive the animal away, which can reduce the potential safety hazard of vehicle driving.
In a fifth substep, in response to determining that the risk level corresponding to the collision prediction information is a third static risk level, generate second presentation information and fourth operation prompt information. The second presentation information may be a display of the distance value between the vehicle body and the obstacle. The fourth operation prompt information may be a prompt that a road surface defect lies ahead and a detour is suggested.
In a second step, determine the first presentation information or second presentation information corresponding to each piece of collision prediction information in the collision prediction information group as obstacle presentation information, and determine the first, second, third, or fourth operation prompt information corresponding to that piece of collision prediction information as driver operation prompt information, so as to obtain an obstacle presentation information group and a driver operation prompt information group.
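The mapping of step 308 from risk level to presentation and prompt information can be condensed into a lookup table. The strings below are paraphrases of the substeps above, not fixed wording from the disclosure:

```python
def presentation_and_prompt(risk_level):
    """Map a collision-prediction risk level to a
    (presentation information, driver operation prompt) pair."""
    table = {
        "first_dynamic": ("obstacle name + stop signal",
                          "detour left or right to avoid"),
        "second_dynamic": ("obstacle name + stop signal",
                           "sound the horn"),
        "first_static": ("obstacle name + stop signal",
                         "voice prompt: ask pedestrian to move"),
        "second_static": ("obstacle name + stop signal",
                          "sound the horn"),
        "third_static": ("distance from vehicle body to obstacle",
                         "road surface defect ahead, detour advised"),
    }
    return table[risk_level]
```

Iterating this function over the collision prediction information group yields the obstacle presentation information group and the driver operation prompt information group in one pass.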
Optionally, the executing body may further generate detailed operation information for static obstacle feature information whose risk level is the third static risk level.
In response to detecting that the static obstacle feature information whose risk level is the third static risk level includes the obstacle name "stone", the following steps are performed based on the relationship between the obstacle size included in the static obstacle feature information and the vehicle track:
Step one: if the stone width value is less than the vehicle wheel track, and the stone height value is less than the vehicle under-body height value after the vehicle turns on the air suspension mode, the generated detailed operation information may be: prompt the driver to turn on the air suspension mode.
Step two: if the stone width value is less than the vehicle wheel track but the stone height value is greater than or equal to the under-body height value after the air suspension mode is turned on, or if the stone width value is greater than the vehicle wheel track, the generated detailed operation information may be: prompt the driver to detour.
In response to detecting that the static obstacle feature information whose risk level is the third static risk level includes the obstacle name "road pothole", the generated detailed operation information may be: a pothole lies ahead; it is suggested to detour or turn on the air suspension mode.
In response to detecting that the static obstacle feature information whose risk level is the third static risk level includes an obstacle name such as "nail" or "step", the generated detailed operation information may be: prompt the driver to detour.
In response to detecting that the static obstacle feature information whose risk level is the third static risk level includes the obstacle name "manhole cover", the generated detailed operation information may be: prompt the driver to adjust the steering wheel angle so that the manhole cover passes directly under the vehicle, thereby reducing the potential safety hazard of vehicle driving.
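The stone-handling rules in steps one and two can be sketched as a small decision function. It assumes the stone width is compared against the vehicle's wheel track and the stone height against the under-body clearance with air suspension raised; all names and units (meters) are illustrative:

```python
def stone_advice(stone_width, stone_height, wheel_track, suspension_clearance):
    """Decide between raising the air suspension and detouring for a
    stone lying within the vehicle's track (all values in meters)."""
    if stone_width < wheel_track and stone_height < suspension_clearance:
        # stone fits between the wheels and under the raised body
        return "turn on air suspension mode"
    # stone too wide or too tall to straddle safely
    return "detour around the obstacle"
```

A stone 0.3 m wide and 0.1 m tall would yield the suspension prompt for a typical car, while a wider or taller stone yields the detour prompt.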
Step 309: display the spliced image based on the obstacle display information group and the driver operation prompt information group.
In some embodiments, the specific implementation manner and technical effects of step 309 may refer to step 207 in those embodiments corresponding to fig. 2, and are not described herein again.
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the process 300 of the obstacle image display method in some embodiments corresponding to fig. 3 embodies the steps of generating the collision prediction information set, the obstacle display information group, and the driver operation prompt information group. In this way, not only can the specific characteristics of obstacles be identified, but the danger level of each obstacle to the vehicle can be further determined, and different obstacle display information and driver operation prompts can be generated for different danger levels. More detailed obstacle images can therefore be displayed at the vehicle display terminal in time, and more detailed obstacle information can be conveyed to the user through the displayed obstacle image, further reducing potential safety hazards and improving the safety of vehicle driving.
Referring now to FIG. 4, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1) 400 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring an image group shot by a vehicle-mounted camera; splicing each image in the image group to obtain a spliced image; inputting the spliced image into a preset visual detection model to obtain a dynamic obstacle detection information group, a static obstacle detection information group and travelable area detection information; adjusting the dynamic obstacle detection information group and the static obstacle detection information group to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group; generating a collision prediction information set based on the dynamic obstacle adjustment information set and the static obstacle adjustment information set; generating an obstacle display information set and a driver operation prompt information set based on the collision prediction information set and the travelable region detection information; and displaying the spliced image based on the obstacle display information group and the driver operation prompt information group.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above-mentioned features, and should also cover other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept defined above, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. An obstacle image presentation method, comprising:
acquiring an image group shot by a vehicle-mounted camera;
splicing each image in the image group to obtain a spliced image;
inputting the spliced image into a preset visual detection model to obtain a dynamic obstacle detection information group, a static obstacle detection information group and travelable area detection information;
adjusting the dynamic obstacle detection information group and the static obstacle detection information group to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group;
generating a collision prediction information set based on the dynamic obstacle adjustment information set and the static obstacle adjustment information set;
generating an obstacle display information group and a driver operation prompt information group based on the collision prediction information group and the travelable region detection information;
and displaying the spliced image based on the obstacle display information group and the driver operation prompt information group.
2. The method of claim 1, wherein the method further comprises:
and sending out a driving operation prompt tone according to the driver operation prompt information group.
3. The method of claim 1, wherein the method further comprises:
determining relative positioning information between the vehicle at the current moment and each piece of dynamic obstacle detection information in the dynamic obstacle detection information group to obtain a dynamic obstacle relative positioning information set;
determining relative positioning information between the vehicle at the current moment and each static obstacle detection information in the static obstacle detection information group to obtain a static obstacle relative positioning information set;
performing motion compensation on the spliced image according to the dynamic obstacle relative positioning information set and the static obstacle relative positioning information set to obtain a motion compensation image;
according to the static obstacle adjustment information group and the dynamic obstacle adjustment information group, performing obstacle marking on the motion compensation image to obtain an obstacle marking image;
and displaying the obstacle marking image.
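Claim 3 ends by marking obstacles on the motion-compensated image. As a stand-in for actual image drawing, the sketch below rasterises hypothetical bounding boxes onto a character grid; a real implementation would draw rectangles on the compensated camera frame, and the box format and grid representation here are assumptions.

```python
def mark_obstacles(width, height, boxes, background=".", mark="#"):
    # Rasterise obstacle bounding boxes onto a character grid; claim 3's
    # obstacle-marking step would draw on the motion-compensated image instead.
    grid = [[background] * width for _ in range(height)]
    for x0, y0, x1, y1 in boxes:       # half-open pixel ranges
        for y in range(y0, y1):
            for x in range(x0, x1):
                grid[y][x] = mark
    return ["".join(row) for row in grid]

# One hypothetical 2x2 obstacle box on a 6x3 "image".
marked = mark_obstacles(6, 3, [(1, 0, 3, 2)])
```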
4. The method of claim 1, wherein the dynamic obstacle detection information in the dynamic obstacle detection information group comprises a first dynamic obstacle detection distance value, the static obstacle detection information in the static obstacle detection information group comprises a first static obstacle detection distance value, the dynamic obstacle adjustment information in the dynamic obstacle adjustment information group comprises a dynamic obstacle distance value, and the static obstacle adjustment information in the static obstacle adjustment information group comprises a static obstacle distance value; and
the adjusting the dynamic obstacle detection information group and the static obstacle detection information group to obtain a dynamic obstacle adjustment information group and a static obstacle adjustment information group includes:
acquiring a static obstacle distance information group and a dynamic obstacle distance information group output by an ultrasonic radar, wherein the static obstacle distance information in the static obstacle distance information group comprises a second static obstacle detection distance value, and the dynamic obstacle distance information in the dynamic obstacle distance information group comprises a second dynamic obstacle detection distance value;
fusing the first dynamic obstacle detection distance value included in each piece of dynamic obstacle detection information in the dynamic obstacle detection information group with the second dynamic obstacle detection distance value included in the matched dynamic obstacle distance information in the dynamic obstacle distance information group to generate a dynamic obstacle distance value included in the dynamic obstacle adjustment information, so as to obtain the dynamic obstacle adjustment information group;
and fusing the first static obstacle detection distance value included in each piece of static obstacle detection information in the static obstacle detection information group with the second static obstacle detection distance value included in the matched static obstacle distance information in the static obstacle distance information group to generate a static obstacle distance value included in the static obstacle adjustment information, so as to obtain the static obstacle adjustment information group.
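Claim 4 only states that the camera-derived and ultrasonic distance values are "fused". One common fusion rule, shown here purely as an illustrative assumption (the patent does not specify the weighting scheme, and the variance figures are invented), is an inverse-variance weighted average of the two estimates.

```python
def fuse_distance(camera_m: float, ultrasonic_m: float,
                  camera_var: float = 1.0, ultrasonic_var: float = 0.25) -> float:
    # Inverse-variance weighted fusion: the lower-variance sensor
    # (here assumed to be the ultrasonic radar at short range)
    # dominates the fused distance estimate.
    w_cam = 1.0 / camera_var
    w_ultra = 1.0 / ultrasonic_var
    return (w_cam * camera_m + w_ultra * ultrasonic_m) / (w_cam + w_ultra)

fused = fuse_distance(5.0, 4.0)   # camera says 5 m, ultrasonic says 4 m
```

With the assumed variances the ultrasonic reading carries four times the weight of the camera reading, so the fused value lands much closer to 4 m than to 5 m.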
5. The method of claim 1, wherein the static obstacle adjustment information in the static obstacle adjustment information group includes static obstacle feature information, and the dynamic obstacle adjustment information in the dynamic obstacle adjustment information group includes dynamic obstacle feature information; and
the generating a collision prediction information group based on the dynamic obstacle adjustment information group and the static obstacle adjustment information group includes:
predicting the moving track of the dynamic obstacle corresponding to each piece of dynamic obstacle adjustment information in the dynamic obstacle adjustment information group to generate a dynamic obstacle moving track, so as to obtain a dynamic obstacle moving track group;
predicting the moving track of the vehicle to obtain a predicted moving track of the vehicle;
for each dynamic obstacle moving track in the dynamic obstacle moving track group and the predicted moving track of the vehicle, performing the following dynamic processing steps:
determining, based on the predicted moving track of the vehicle, a predicted distance value between the vehicle and the dynamic obstacle corresponding to the dynamic obstacle moving track within a preset time length;
in response to determining that the predicted distance value is less than a preset safe distance threshold, determining that the risk level of the dynamic obstacle is a first dynamic risk level;
and determining the risk level of the dynamic obstacle to be a second dynamic risk level in response to determining that the dynamic obstacle feature information included in the dynamic obstacle adjustment information corresponding to the moving track of the dynamic obstacle meets a first preset safety condition.
6. The method of claim 5, wherein the generating a collision prediction information group based on the dynamic obstacle adjustment information group and the static obstacle adjustment information group further comprises:
for each static obstacle adjustment information in the static obstacle adjustment information group and the predicted moving track of the vehicle, performing the following static processing steps:
in response to determining that the static obstacle feature information included in the static obstacle adjustment information and the predicted moving track of the vehicle meet a second preset safety condition, determining the risk level of the static obstacle corresponding to the static obstacle adjustment information as a first static risk level;
in response to determining that the static obstacle feature information included in the static obstacle adjustment information and the predicted moving track of the vehicle meet a third preset safety condition, determining the risk level of the static obstacle corresponding to the static obstacle adjustment information as a second static risk level;
in response to determining that the static obstacle feature information included in the static obstacle adjustment information meets a fourth preset safety condition, determining that the risk level of the static obstacle corresponding to the static obstacle adjustment information is a third static risk level;
and generating a collision prediction information group by using the risk grade corresponding to each dynamic obstacle adjustment information in the dynamic obstacle adjustment information group as collision prediction information and using the risk grade corresponding to each static obstacle adjustment information in the static obstacle adjustment information group as collision prediction information.
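Claims 5 and 6 assign risk levels by comparing predicted trajectories against a distance threshold and several feature-based safety conditions. The sketch below implements only the distance-based branch of claim 5; the trajectory sampling scheme, the 2 m threshold, and all names are assumptions, and the feature-based conditions of claims 5 and 6 are omitted.

```python
import math

def min_predicted_distance(ego_track, obstacle_track):
    # Smallest Euclidean distance between two equally sampled (x, y) tracks.
    return min(math.dist(p, q) for p, q in zip(ego_track, obstacle_track))

def dynamic_risk_level(ego_track, obstacle_track, safe_m=2.0):
    # Claim 5: first dynamic risk level when the predicted distance drops
    # below the preset safe threshold, otherwise (in this simplified sketch)
    # the second dynamic risk level.
    return 1 if min_predicted_distance(ego_track, obstacle_track) < safe_m else 2

ego = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
crossing = [(2.0, 3.0), (2.0, 1.5), (2.0, 0.5)]   # cuts across the ego path
far_away = [(5.0, 5.0), (6.0, 5.0), (7.0, 5.0)]   # stays well clear
```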
7. The method of claim 6, wherein the generating an obstacle display information group and a driver operation prompt information group based on the collision prediction information group and the travelable region detection information comprises:
for each collision prediction information in the collision prediction information group, performing the following generating steps:
generating first display information and first operation prompt information in response to determining that the risk level corresponding to the collision prediction information is a first dynamic risk level;
generating first display information and second operation prompt information based on the travelable region detection information in response to determining that the risk level corresponding to the collision prediction information is a second dynamic risk level;
generating first display information and third operation prompt information in response to determining that the risk level corresponding to the collision prediction information is a first static risk level;
generating first display information and second operation prompt information in response to determining that the risk level corresponding to the collision prediction information is a second static risk level;
generating second display information and fourth operation prompt information in response to determining that the risk level corresponding to the collision prediction information is a third static risk level;
and determining first display information or second display information corresponding to each piece of collision prediction information in the collision prediction information group as obstacle display information, and determining first operation prompt information or second operation prompt information or third operation prompt information or fourth operation prompt information corresponding to the collision prediction information as driver operation prompt information to obtain an obstacle display information group and a driver operation prompt information group.
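Claim 7 reduces to a per-risk-level lookup: every collision prediction yields exactly one display entry and one driver prompt. The level keys and output labels below are hypothetical placeholders for the patent's "first/second display information" and four kinds of operation prompt information.

```python
# Hypothetical labels for the two kinds of display information and the
# four kinds of driver operation prompt information in claim 7.
DISPLAY = {"dyn-1": "first", "dyn-2": "first", "static-1": "first",
           "static-2": "first", "static-3": "second"}
PROMPT = {"dyn-1": "prompt-1", "dyn-2": "prompt-2", "static-1": "prompt-3",
          "static-2": "prompt-2", "static-3": "prompt-4"}

def present(collision_prediction_group):
    # One display entry and one driver prompt per collision prediction,
    # mirroring the case analysis of claim 7 as a table lookup.
    display = [DISPLAY[level] for level in collision_prediction_group]
    prompts = [PROMPT[level] for level in collision_prediction_group]
    return display, prompts

display, prompts = present(["dyn-1", "static-3", "static-2"])
```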
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
9. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110759663.1A CN113205088B (en) 2021-07-06 2021-07-06 Obstacle image presentation method, electronic device, and computer-readable medium


Publications (2)

Publication Number Publication Date
CN113205088A (en) 2021-08-03
CN113205088B CN113205088B (en) 2021-09-24

Family

ID=77022755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110759663.1A Active CN113205088B (en) 2021-07-06 2021-07-06 Obstacle image presentation method, electronic device, and computer-readable medium

Country Status (1)

Country Link
CN (1) CN113205088B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627364A (en) * 2021-08-16 2021-11-09 禾多科技(北京)有限公司 Road information display system, method, electronic device, and computer-readable medium
CN115147838A (en) * 2022-06-30 2022-10-04 小米汽车科技有限公司 Image processing method, image processing apparatus, vehicle, medium, and program product
CN116279454A (en) * 2023-01-16 2023-06-23 禾多科技(北京)有限公司 Vehicle body device control method, device, electronic apparatus, and computer-readable medium
CN116563817A (en) * 2023-04-14 2023-08-08 禾多科技(北京)有限公司 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170072851A1 (en) * 2015-09-16 2017-03-16 Sony Corporation System and method for generation of a preventive alert
CN107328410A (en) * 2017-06-30 2017-11-07 百度在线网络技术(北京)有限公司 Method and automobile computer for positioning automatic driving vehicle
CN109934164A (en) * 2019-03-12 2019-06-25 杭州飞步科技有限公司 Data processing method and device based on Trajectory Safety degree
CN109927719A (en) * 2017-12-15 2019-06-25 百度在线网络技术(北京)有限公司 A kind of auxiliary driving method and system based on barrier trajectory predictions
CN111469836A (en) * 2020-02-28 2020-07-31 广东中科臻恒信息技术有限公司 Obstacle avoidance method and device based on vehicle-mounted unit and road side unit, and storage medium
CN112733820A (en) * 2021-03-31 2021-04-30 禾多科技(北京)有限公司 Obstacle information generation method and device, electronic equipment and computer readable medium
CN112861833A (en) * 2021-04-26 2021-05-28 禾多科技(北京)有限公司 Vehicle lane level positioning method and device, electronic equipment and computer readable medium
CN112896083A (en) * 2021-04-06 2021-06-04 合肥工业大学 Driver and passenger protection method based on vehicle-mounted sensor


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xin Yu et al.: "Dynamic obstacle detection and representation method for driverless vehicles based on laser sensors", Robot (《机器人》) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627364A (en) * 2021-08-16 2021-11-09 禾多科技(北京)有限公司 Road information display system, method, electronic device, and computer-readable medium
CN115147838A (en) * 2022-06-30 2022-10-04 小米汽车科技有限公司 Image processing method, image processing apparatus, vehicle, medium, and program product
CN115147838B (en) * 2022-06-30 2023-08-29 小米汽车科技有限公司 Image processing method, device, vehicle, medium, and program product
CN116279454A (en) * 2023-01-16 2023-06-23 禾多科技(北京)有限公司 Vehicle body device control method, device, electronic apparatus, and computer-readable medium
CN116279454B (en) * 2023-01-16 2023-12-19 禾多科技(北京)有限公司 Vehicle body device control method, device, electronic apparatus, and computer-readable medium
CN116563817A (en) * 2023-04-14 2023-08-08 禾多科技(北京)有限公司 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN116563817B (en) * 2023-04-14 2024-02-20 禾多科技(北京)有限公司 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Also Published As

Publication number Publication date
CN113205088B (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN113205088B (en) Obstacle image presentation method, electronic device, and computer-readable medium
CN109739236B (en) Vehicle information processing method and device, computer readable medium and electronic equipment
US11060882B2 (en) Travel data collection and publication
US20170010679A1 (en) Communication between autonomous vehicle and external observers
CN112590813B (en) Method, device, electronic device and medium for generating information of automatic driving vehicle
US11971481B2 (en) Point cloud registration for lidar labeling
KR20210090624A (en) Data distribution systems, sensor devices and servers
CN110007752A (en) The connection of augmented reality vehicle interfaces
CN115339453B (en) Vehicle lane change decision information generation method, device, equipment and computer medium
US20230343108A1 (en) Systems and methods for detecting projection attacks on object identification systems
JP2023024857A (en) Road-to-vehicle cooperative information processing method, apparatus, system, electronic device, storage medium, and computer program
KR102144778B1 (en) System and method for providing real-time updated road information
JP7136538B2 (en) electronic device
US11908095B2 (en) 2-D image reconstruction in a 3-D simulation
WO2022113772A1 (en) Information processing device, information processing method, and information processing system
US11458841B2 (en) Display control apparatus, display control method, and computer-readable storage medium storing program
Nourbakhshrezaei et al. A novel context-aware system to improve driver’s field of view in urban traffic networks
CN114207685B (en) Autonomous vehicle interaction system
CN114056337A (en) Vehicle driving behavior prediction method, device and computer program product
CN114511834A (en) Method and device for determining prompt information, electronic equipment and storage medium
JP2022056153A (en) Temporary stop detection device, temporary stop detection system, and temporary stop detection program
CN112885087A (en) Method, apparatus, device and medium for determining road condition information and program product
US11634156B1 (en) Aerial view generation for vehicle control
US11628859B1 (en) Vehicle placement on aerial views for vehicle control
US11645038B1 (en) Augmented reality head-up display for audio event awareness

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Obstacle image display method, electronic equipment and computer-readable medium

Effective date of registration: 20230228

Granted publication date: 20210924

Pledgee: Bank of Shanghai Co.,Ltd. Beijing Branch

Pledgor: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.

Registration number: Y2023980033668

CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806

Patentee after: Heduo Technology (Guangzhou) Co.,Ltd.

Address before: 100099 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing

Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.