CN116563818A - Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium - Google Patents


Info

Publication number
CN116563818A
CN116563818A (application CN202310395943.8A)
Authority
CN
China
Prior art keywords
obstacle
looking
detection frame
obstacle detection
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310395943.8A
Other languages
Chinese (zh)
Other versions
CN116563818B (en)
Inventor
胡禹超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HoloMatic Technology Beijing Co Ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202310395943.8A priority Critical patent/CN116563818B/en
Publication of CN116563818A publication Critical patent/CN116563818A/en
Application granted granted Critical
Publication of CN116563818B publication Critical patent/CN116563818B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the disclosure discloses an obstacle information generation method, an obstacle information generation device, an electronic device and a computer readable medium. One embodiment of the method comprises the following steps: acquiring a historical obstacle length value, a historical obstacle width value, a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera; performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; generating an initial obstacle detection frame coordinate point group; adjusting each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set to generate an adjusted obstacle detection frame coordinate set; generating obstacle information by using the history obstacle length value, the history obstacle width value, and the adjusted obstacle detection frame coordinate set. The embodiment can improve the accuracy of the generated obstacle information.

Description

Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, an electronic device, and a computer readable medium for generating obstacle information.
Background
The obstacle information generation method is a technique for determining obstacle information in an image. Currently, when generating obstacle information (for example, when the obstacle is another vehicle, the obstacle information may be distance information of the other vehicle, speed information of the other vehicle, or the like), the following method is generally adopted: by performing obstacle detection on two road images of the same frame captured by two adjacent vehicle-mounted cameras (for example, a forward-looking camera and a right forward-looking camera) that photograph an obstacle vehicle, the obstacle information extracted from the two road images can complement each other, so that the situation in which the obstacle is displayed incompletely in a single road image is avoided. Thus, obstacle information can be generated through the geometric relationship between the obstacle and the vehicle-mounted cameras.
However, the inventors found that when obstacle information is generated in the above manner, the following technical problems often arise:
due to the limited common field of view between two adjacent vehicle-mounted cameras, the obstacle region in the two captured road images may be truncated at the image boundaries, and if the truncated portion of the obstacle region in the two road images is too large (for example, only the left edge line of the obstacle detection frame can be observed in the forward-looking road image, and only the right edge line of the obstacle head or tail detection frame can be observed in the right forward-looking road image), it is difficult to generate complete obstacle information from the information detected from the road images, thereby resulting in a decrease in the accuracy of the generated obstacle information.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not form the prior art that is already known to those of ordinary skill in the art in this country.
Disclosure of Invention
This portion of the disclosure is intended to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an obstacle information generation method, apparatus, electronic device, and computer-readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an obstacle information generating method, the method including: acquiring a historical obstacle length value, a historical obstacle width value, a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera; performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; generating an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information and a pre-generated obstacle course angle vector; adjusting each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set based on the historical obstacle length value, the historical obstacle width value, the forward looking obstacle detection information and the right forward looking obstacle detection information to generate an adjusted obstacle detection frame coordinate set; generating obstacle information by using the history obstacle length value, the history obstacle width value, and the adjusted obstacle detection frame coordinate set.
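The claimed steps can be arranged as a pipeline. The following Python sketch is purely illustrative: every function name, data shape, and placeholder value is an assumption for exposition and is not part of the disclosure.

```python
# Illustrative sketch of the claimed five-step pipeline; all names and
# returned placeholder values are assumptions, not the patented method.

def recognize(front_img, right_front_img):
    # Stand-in for the recognition step: detection info per camera.
    return {"left_edge": (1.0, 0.0, -100.0)}, {"right_edge": (1.0, 0.0, -50.0)}

def generate_initial_points(front_info, right_info, heading_vec):
    # Stand-in: three coordinate points of the initial detection frame.
    return [(100.0, 200.0), (120.0, 210.0), (140.0, 220.0)]

def adjust_points(points, length, width, front_info, right_info):
    # Stand-in: refine each point using the historical size prior.
    return [(x + 1.0, y) for x, y in points]

def generate_obstacle_info(length, width, adjusted):
    # Final obstacle information from the prior sizes and adjusted frame.
    return {"length": length, "width": width, "frame": adjusted}

def obstacle_info_pipeline(length, width, front_img, right_front_img, heading_vec):
    front_info, right_info = recognize(front_img, right_front_img)
    pts = generate_initial_points(front_info, right_info, heading_vec)
    adjusted = adjust_points(pts, length, width, front_info, right_info)
    return generate_obstacle_info(length, width, adjusted)
```

The stubs only fix the data flow between the five steps; each would be replaced by the corresponding processing described below.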
In a second aspect, some embodiments of the present disclosure provide an obstacle information generating apparatus, the apparatus including: an acquisition unit configured to acquire a history obstacle length value, a history obstacle width value, a forward-looking road image captured by a forward-looking camera of a current vehicle, and a right forward-looking road image captured by a right forward-looking camera; an identification processing unit configured to perform identification processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; a first generation unit configured to generate an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and the obstacle course angle vector generated in advance; an adjustment processing unit configured to perform adjustment processing on each of the initial obstacle detection frame coordinate points in the initial obstacle detection frame coordinate point group based on the history obstacle length value, the history obstacle width value, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information, to generate an adjusted obstacle detection frame coordinate group; and a second generation unit configured to generate obstacle information using the history obstacle length value, the history obstacle width value, and the adjusted obstacle detection frame coordinate set.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a storage device having one or more programs stored thereon, which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: by the obstacle information generation method of some embodiments of the present disclosure, the accuracy of the generated obstacle information may be improved. Specifically, the cause of the decrease in accuracy of the generated obstacle information is as follows: due to the limited common field of view between two adjacent vehicle-mounted cameras, the obstacle region in the two captured road images may be truncated at the image boundaries, and if the truncated portion of the obstacle region in the two road images is too large (for example, only the left edge line of the obstacle detection frame can be observed in the forward-looking road image, and only the right edge line of the obstacle head or tail detection frame can be observed in the right forward-looking road image), it is difficult to generate complete obstacle information from the information detected from the road images, thereby resulting in a decrease in the accuracy of the generated obstacle information. Based on this, the obstacle information generation method of some embodiments of the present disclosure first acquires a historical obstacle length value, a historical obstacle width value, a forward-looking road image captured by a forward-looking camera of the current vehicle, and a right forward-looking road image captured by a right forward-looking camera. Secondly, recognition processing is performed on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information. Then, an initial obstacle detection frame coordinate point group is generated based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and the obstacle course angle vector generated in advance.
Here, by introducing the obstacle course angle vector, fusion of the forward-looking obstacle detection information and the right forward-looking obstacle detection information can be facilitated. In addition, when it is difficult to fuse the detection information in the forward-looking road image and the right forward-looking road image, the generated initial obstacle detection frame coordinate point group can serve as initial information for generating the obstacle information, facilitating its generation. Then, based on the historical obstacle length value, the historical obstacle width value, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information, each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set is adjusted to generate an adjusted obstacle detection frame coordinate set. Through the adjustment processing, each initial obstacle detection frame coordinate point can be adjusted, so that the accuracy of the generated adjusted obstacle detection frame coordinates is improved. Moreover, because the historical obstacle length value and the historical obstacle width value are introduced, they can be combined with the forward-looking obstacle detection information and the right forward-looking obstacle detection information, further improving the accuracy of the generated adjusted obstacle detection frame coordinates. Finally, obstacle information is generated by using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set.
Thus, even if the truncated portion of the obstacle region in the two road images is too large, more accurate adjusted obstacle detection frame coordinates can be generated by using the historical obstacle length value, the historical obstacle width value, and the initial obstacle detection frame coordinate points. Further, these can be used to generate complete obstacle information and to improve the accuracy of the generated obstacle information.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of an obstacle information generation method according to the present disclosure;
fig. 2 is a schematic structural view of some embodiments of an obstacle information generating device according to the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of an obstacle information generation method according to the present disclosure. The obstacle information generation method comprises the following steps:
Step 101, acquiring a historical obstacle length value, a historical obstacle width value, a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera.
In some embodiments, the execution subject of the obstacle information generation method may acquire the historical obstacle length value, the historical obstacle width value, the forward-looking road image, and the right forward-looking road image in a wired manner or a wireless manner. The acquisition time points of the forward-looking road image and the right forward-looking road image may be the same. In addition, the same obstacle vehicle is present in both the acquired forward-looking road image and the right forward-looking road image. The historical obstacle length value and the historical obstacle width value may be the obstacle length value and the obstacle width value detected, for the above obstacle vehicle, in a historical frame. Here, the historical obstacle length value and the historical obstacle width value may serve as prior information of the obstacle vehicle, for improving the accuracy of generating the obstacle information.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other now known or later developed wireless connection means.
Step 102, performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information.
In some embodiments, the execution body may perform recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information. Wherein the forward-looking obstacle detection information may characterize obstacle information detected from the forward-looking road image. The right forward looking obstacle detection information may characterize obstacle information detected from the right forward looking road image. Here, the forward-looking obstacle detection information and the right forward-looking obstacle detection information may correspond to the same obstacle.
In some optional implementations of some embodiments, the executing body performs recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information, and may include the steps of:
first, obstacle detection is performed on the forward-looking road image to generate forward-looking obstacle detection information. Wherein, the forward looking obstacle detection information may include, but is not limited to, at least one of the following: and (3) a left line equation of a front-view obstacle full vehicle detection frame, a left line equation of a front-view obstacle head-tail detection frame and/or a front-view obstacle wheel grounding point coordinate set. The obstacle detection can be performed on the forward-looking road image through a preset detection algorithm, so that forward-looking obstacle detection information is generated. Second, the forward looking obstacle all-vehicle detection frame left line equation may be an equation of the all-vehicle detection frame left line detected from the forward looking road image. The equation is in a forward looking road image coordinate system of the forward looking road image. The forward looking obstacle head-to-tail detection frame left line equation may be an equation of a head-to-tail detection frame left line detected from a forward looking road image. The forward-looking obstacle wheel ground point coordinates may be coordinates of a contact point of the outside of the obstacle vehicle wheel with the ground, detected from the forward-looking road image. The forward-looking obstacle wheel-ground point coordinate set may be empty when no forward-looking obstacle wheel-ground point coordinates are detected. In addition, the detected equations can be equations with a range of values to characterize one edge of the obstacle detection frame.
As an example, the obstacle detection algorithm described above may include, but is not limited to, at least one of the following: an MRF (Markov Random Field) model, an SPP (Spatial Pyramid Pooling) model, an FCN (Fully Convolutional Network) model, and the like.
In practice, since the position of the obstacle in the forward-looking road image is not fixed, the forward-looking obstacle detection information may include some, or all, of the forward-looking obstacle full-vehicle detection frame left edge line equation, the forward-looking obstacle head-tail detection frame left edge line equation, and the forward-looking obstacle wheel grounding point coordinate set.
Secondly, obstacle detection is performed on the right forward-looking road image to generate right forward-looking obstacle detection information. The right forward-looking obstacle detection information may include, but is not limited to, at least one of the following: a right forward-looking obstacle head-tail detection frame left edge line equation, a right forward-looking obstacle head-tail detection frame right edge line equation, and/or a right forward-looking obstacle wheel grounding point coordinate set. The obstacle detection algorithm described above may likewise perform obstacle detection on the right forward-looking road image to generate the right forward-looking obstacle detection information. Here, the right forward-looking obstacle head-tail detection frame left edge line equation may be an equation of the left edge line of the head-tail detection frame detected from the right forward-looking road image. The right forward-looking obstacle head-tail detection frame right edge line equation may be an equation of the right edge line of the head-tail detection frame detected from the right forward-looking road image.
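Since each item of detection information "may include, but is not limited to" the listed fields, one natural way to model it is with optional fields. The following Python sketch uses assumed names, and represents an edge line equation by coefficients (a, b, c) of ax + by + c = 0; none of these representations are prescribed by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# An edge line in image coordinates as ax + by + c = 0 (assumed form),
# understood as restricted to a segment of the detection frame.
Line = Tuple[float, float, float]
Point = Tuple[float, float]

@dataclass
class FrontViewDetection:
    # Any field may be absent when the obstacle is truncated in the image.
    full_vehicle_left_edge: Optional[Line] = None
    head_tail_left_edge: Optional[Line] = None
    wheel_ground_points: List[Point] = field(default_factory=list)  # may be empty

@dataclass
class RightFrontViewDetection:
    head_tail_left_edge: Optional[Line] = None
    head_tail_right_edge: Optional[Line] = None
    wheel_ground_points: List[Point] = field(default_factory=list)
```

Downstream steps can then branch on which fields are populated, mirroring the "in response to determining that ... includes ..." conditions below.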
Step 103, generating an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information and the pre-generated obstacle course angle vector.
In some embodiments, the execution body may generate the initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and the pre-generated obstacle course angle vector.
In some optional implementations of some embodiments, the executing body may generate the initial obstacle detection frame coordinate point set based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and a pre-generated obstacle course angle vector, and may include the steps of:
and in the first step, in response to determining that the right forward looking obstacle detection information comprises a left side line equation of a forward looking obstacle full vehicle detection frame, selecting coordinates of the left side line equation of the forward looking obstacle full vehicle detection frame to generate obstacle course coordinates. The obstacle course coordinates may be in a forward-looking road image coordinate system of the forward-looking road image. Here, the coordinate selection may be randomly selected on a line segment where a left line equation of the full vehicle detection frame of the front view obstacle is located. The detection information of the right front view obstacle comprises a left side line equation of a full vehicle detection frame of the front view obstacle, and the left side line equation of the full vehicle detection frame of the front view obstacle can be represented by the detection of the full vehicle detection frame of the front view obstacle from the right front view road image. The obstacle course angle vector may be a unit vector of obstacle course.
In a second step, in response to determining that the right forward-looking obstacle detection information includes the right forward-looking obstacle head-tail detection frame right edge line equation, an obstacle head-tail detection frame right edge line vector corresponding to the right forward-looking obstacle head-tail detection frame right edge line equation is determined. That is, the unit vector of the right forward-looking obstacle head-tail detection frame right edge line equation may be determined and taken as the obstacle head-tail detection frame right edge line vector.
In a third step, coordinate selection is performed in the direction of the obstacle head-tail detection frame right edge line vector to generate an obstacle head-tail detection frame right edge line coordinate. The obstacle head-tail detection frame right edge line coordinate may be in the right forward-looking road image coordinate system of the right forward-looking road image. A coordinate may be randomly selected on the line segment on which the obstacle head-tail detection frame right edge line vector lies and taken as the obstacle head-tail detection frame right edge line coordinate. Here, the obstacle head-tail detection frame right edge line coordinate may characterize one vertex coordinate of the bottom surface of the frame obtained by projecting the obstacle three-dimensional detection frame into the right forward-looking road image coordinate system.
In a fourth step, in response to determining that the forward-looking obstacle detection information includes the forward-looking obstacle full-vehicle detection frame left edge line equation, the forward-looking obstacle head-tail detection frame left edge line equation, and the obstacle wheel grounding point coordinate set, the intersection point coordinate of the line connecting the obstacle wheel grounding point coordinates in the obstacle wheel grounding point coordinate set with the forward-looking obstacle full-vehicle detection frame left edge line equation is determined as a first forward-looking obstacle detection frame vertex coordinate. When the forward-looking obstacle head-tail detection frame left edge line equation is detected, the grounding point coordinates of the two wheels on the side face of the obstacle vehicle can be detected in the forward-looking road image. Thus, the obstacle wheel grounding point coordinate set may include two obstacle wheel grounding point coordinates, and the straight line on which the two obstacle wheel grounding point coordinates lie can be determined. Finally, the intersection point coordinate of this straight line and the forward-looking obstacle full-vehicle detection frame left edge line equation is determined as the first forward-looking obstacle detection frame vertex coordinate. Here, the first forward-looking obstacle detection frame vertex coordinate may characterize another vertex coordinate of the bottom surface of the frame obtained by projecting the obstacle three-dimensional detection frame into the forward-looking road image coordinate system.
In a fifth step, the intersection point coordinate of the line connecting the obstacle wheel grounding point coordinates in the obstacle wheel grounding point coordinate set with the forward-looking obstacle head-tail detection frame left edge line equation is determined as a second forward-looking obstacle detection frame vertex coordinate. The second forward-looking obstacle detection frame vertex coordinate may characterize a third vertex coordinate of the bottom surface of the frame obtained by projecting the obstacle three-dimensional detection frame into the forward-looking road image coordinate system.
In a sixth step, the obstacle head-tail detection frame right edge line coordinate, the first forward-looking obstacle detection frame vertex coordinate, and the second forward-looking obstacle detection frame vertex coordinate are each determined as an initial obstacle detection frame coordinate point, to obtain the initial obstacle detection frame coordinate point group.
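The geometric operations used in the steps above — taking a unit vector along a detected edge, sampling a point on it, and intersecting the wheel grounding-point line with an edge line — can be sketched as follows. The representation of an edge line by coefficients (a, b, c) of ax + by + c = 0, and all function names, are illustrative assumptions.

```python
import math
import random

def unit_vector(p0, p1):
    # Unit direction vector of the segment from p0 to p1.
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

def sample_point_on_segment(p0, p1, t=None):
    # Pick a point on the segment; t in [0, 1], random when not given,
    # mirroring the "randomly selected on the line segment" steps.
    if t is None:
        t = random.random()
    return (p0[0] + t * (p1[0] - p0[0]), p0[1] + t * (p1[1] - p0[1]))

def line_through(p0, p1):
    # Coefficients (a, b, c) of the line ax + by + c = 0 through two points,
    # e.g. the line through the two wheel grounding point coordinates.
    a = p1[1] - p0[1]
    b = p0[0] - p1[0]
    c = -(a * p0[0] + b * p0[1])
    return (a, b, c)

def intersect(l1, l2):
    # Intersection of two lines given as (a, b, c); None if parallel.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)
```

For instance, intersecting the line through two wheel grounding points with a detected left edge line yields a detection frame vertex coordinate as in the fourth and fifth steps.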
Step 104, adjusting each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group based on the historical obstacle length value, the historical obstacle width value, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information to generate an adjusted obstacle detection frame coordinate group.
In some embodiments, the execution body may perform adjustment processing on each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set based on the historical obstacle length value, the historical obstacle width value, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information to generate an adjusted obstacle detection frame coordinate set.
In some optional implementations of some embodiments, the executing body performs an adjustment process on each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set based on the historical obstacle length value, the historical obstacle width value, the forward looking obstacle detection information, and the right forward looking obstacle detection information to generate an adjusted obstacle detection frame coordinate set, and may include the following steps:
and a first step of constructing a first constraint equation based on the obstacle course coordinate and the left line equation of the forward-looking obstacle whole-vehicle detection frame in response to determining that the forward-looking obstacle detection information includes the left line equation of the forward-looking obstacle whole-vehicle detection frame. First, the unit vector of the left line equation of the forward-looking obstacle whole-vehicle detection frame may be determined as the forward-looking obstacle whole-vehicle detection frame left line unit vector. The first constraint equation may then be constructed by the following formula:
$$E_1 = \left\| \left( \mathrm{Proj}_F(P_1) - p_h \right) \times u_l \right\|_2$$

where $E_1$ represents the first constraint equation; $P_1$ represents the initial first position coordinate obtained by back-projecting the obstacle course coordinate into the map coordinate system, i.e., the coordinate of the upper-left corner of the bottom surface of the obstacle three-dimensional frame in the map coordinate system; $p_h$ represents the obstacle course coordinate in the forward-looking road image coordinate system; $u_l$ represents the left line unit vector of the forward-looking obstacle whole-vehicle detection frame in the forward-looking road image coordinate system; $\left\| \cdot \right\|_2$ represents the 2-norm; $\mathrm{Proj}_F(\cdot)$ represents a preset projection function for converting coordinates into the forward-looking road image coordinate system; and $\times$ represents the cross product.
Here, by constructing the first constraint equation, the first position coordinate can be constrained so that, after being projected from the map coordinate system back into the forward-looking road image, it remains close to the line determined by the obstacle course coordinate and the left line unit vector.
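The geometric role of the first constraint, keeping a back-projected point on an observed image line, can be illustrated with a 2D cross-product residual. This is a sketch of the structure only; the names and the sample values are hypothetical, not the patent's exact formulation.

```python
def point_on_line_residual(projected, anchor, unit_dir):
    """Absolute 2D cross product of (projected - anchor) with the line's unit
    direction vector; it is zero exactly when `projected` lies on the line
    through `anchor` with direction `unit_dir`."""
    dx, dy = projected[0] - anchor[0], projected[1] - anchor[1]
    return abs(dx * unit_dir[1] - dy * unit_dir[0])

# Hypothetical values: a vertical left side line anchored at the course coordinate.
anchor = (100.0, 200.0)   # obstacle course coordinate in the image
unit_dir = (0.0, 1.0)     # left line unit vector (vertical line)

on_line = point_on_line_residual((100.0, 350.0), anchor, unit_dir)   # 0.0
off_line = point_on_line_residual((103.0, 350.0), anchor, unit_dir)  # 3.0 (3 px off the line)
```

The second constraint has the same point-on-line structure, with the projection taken into the right forward-looking image instead.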
And a second step of constructing a second constraint equation based on the right side line vector of the obstacle head-tail detection frame and the right edge line coordinate of the obstacle head-tail detection frame in response to determining that the right forward-looking obstacle detection information includes the right line equation of the right forward-looking obstacle head-tail detection frame. Wherein the second constraint equation may be constructed by the following formula:
$$E_2 = \left\| \left( \mathrm{Proj}_R(P_2) - p_r \right) \times u_r \right\|_2$$

where $E_2$ represents the second constraint equation; $\mathrm{Proj}_R(\cdot)$ represents a preset projection function for converting coordinates into the right forward-looking road image coordinate system; $P_2$ represents the initial second position coordinate obtained by back-projecting the right edge line coordinate of the obstacle head-tail detection frame into the map coordinate system, i.e., the coordinate of the lower-right corner of the bottom surface of the obstacle three-dimensional frame in the map coordinate system; $p_r$ represents the right edge line coordinate of the obstacle head-tail detection frame in the right forward-looking road image coordinate system; $u_r$ represents the right side line vector of the obstacle head-tail detection frame in the right forward-looking road image coordinate system; and $\times$ represents the cross product.
And thirdly, in response to determining that the forward-looking obstacle detection information includes the left line equation of the forward-looking obstacle whole-vehicle detection frame, the left line equation of the forward-looking obstacle head-tail detection frame and the obstacle wheel grounding point coordinate set, constructing a third constraint equation based on the historical obstacle length value, the obstacle course angle vector, and the lower-left corner vertex coordinate corresponding to the lower-left corner position of the obstacle detection frame on the left line equation of the forward-looking obstacle head-tail detection frame. First, the lowest coordinate on the line segment where the left line equation of the forward-looking obstacle head-tail detection frame is located may be determined as the lower-left corner vertex coordinate of the lower-left corner position of the obstacle detection frame. Second, the third constraint equation can be constructed by the following formula:
$$E_3 = \left\| P_1 - \left( P_3 + L_h \, d \right) \right\|_2$$

where $E_3$ represents the third constraint equation; $L_h$ represents the historical obstacle length value; $d$ represents the obstacle course angle vector in the map coordinate system; $P_1$ represents the initial first position coordinate in the map coordinate system; and $P_3$ represents the initial third position coordinate obtained by back-projecting the lower-left corner vertex coordinate into the map coordinate system.
And a fourth step of constructing a fourth constraint equation based on the historical obstacle width value and the lower-left corner vertex coordinate. Wherein the fourth constraint equation may be constructed by the following formula:
$$E_4 = \left| \left\| P_2 - P_3 \right\|_2 - W_h \right|$$

where $E_4$ represents the fourth constraint equation; $W_h$ represents the historical obstacle width value; and $P_2$ and $P_3$ represent the initial second and third position coordinates in the map coordinate system.
And fifthly, constructing a fifth constraint equation based on each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set. Wherein the fifth constraint equation may be constructed by the following formula:
$$E_5 = \left( P_1 - P_3 \right)^{\mathsf{T}} \cdot \left( P_2 - P_3 \right)$$

where $E_5$ represents the fifth constraint equation; $P_1$, $P_2$ and $P_3$ represent the initial first, second and third position coordinates in the map coordinate system; $(\cdot)^{\mathsf{T}}$ represents the transpose; and $\cdot$ represents the dot product.
And a sixth step of adjusting each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set based on the first constraint equation, the second constraint equation, the third constraint equation, the fourth constraint equation, and/or the fifth constraint equation to generate an adjusted obstacle detection frame coordinate set. Wherein, each adjusted obstacle detection frame coordinate in the adjusted obstacle detection frame coordinate set may be a coordinate in a map coordinate system. Next, the adjustment process may be performed for each initial obstacle detection frame coordinate point in the above initial obstacle detection frame coordinate point group by the following formula:
$$\left( P_1^{*},\; P_2^{*},\; P_3^{*} \right) = \underset{P_1,\,P_2,\,P_3}{\arg\min} \left( E_1 + E_2 + E_3 + E_4 + E_5 \right)$$

where $P_1^{*}$ represents the adjusted first position coordinate in the map coordinate system; $P_2^{*}$ represents the adjusted second position coordinate in the map coordinate system; $P_3^{*}$ represents the adjusted third position coordinate in the map coordinate system; and $\arg\min$ represents the minimum objective function.
Specifically, in the solving process, the variables may be adjusted subject to the five constraint equations. For example, the obstacle course coordinate in the first constraint equation and the right edge line coordinate of the obstacle head-tail detection frame in the second constraint equation can be used to adjust the initial first position coordinate and the initial second position coordinate according to the constraint conditions. Thereby, the adjusted coordinates satisfy the conditions of the five constraint equations as well as the condition of the minimum objective function. Thus, more accurate first, second and third position coordinates can be obtained.
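The prior-based part of this adjustment (the third, fourth and fifth constraints) can be sketched as a single objective over the three bottom-face corners. The functional forms below are assumptions reconstructed from the description (length along the heading, width between the rear corners, orthogonal adjacent edges); the heading direction, corner roles and sample dimensions are all hypothetical.

```python
import math

def objective(p1, p2, p3, length, width):
    """Prior-based terms of the adjustment objective. p1, p2, p3 are the
    bottom-face corners in map coordinates (assumed roles: front-left,
    rear-right, rear-left); the heading is assumed to point along +x."""
    heading = (1.0, 0.0)  # obstacle course angle vector (assumed)
    # Third constraint: front-left corner sits `length` ahead of the rear-left corner.
    e3 = math.hypot(p1[0] - (p3[0] + length * heading[0]),
                    p1[1] - (p3[1] + length * heading[1]))
    # Fourth constraint: the rear edge should have the prior width.
    e4 = abs(math.hypot(p2[0] - p3[0], p2[1] - p3[1]) - width)
    # Fifth constraint: adjacent edges of a rectangle are orthogonal (dot product -> 0).
    e5 = abs((p1[0] - p3[0]) * (p2[0] - p3[0]) + (p1[1] - p3[1]) * (p2[1] - p3[1]))
    return e3 + e4 + e5

# A perfect 4.5 m x 1.8 m rectangle makes every prior-based term vanish.
ideal = objective((4.5, 0.0), (0.0, 1.8), (0.0, 0.0), 4.5, 1.8)  # -> 0.0
```

In a full solver, this objective would be combined with the two image-reprojection constraints and minimized over the three corner positions, for instance with a generic nonlinear least-squares routine.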
The above formulas and their related contents serve as an invention point of the embodiments of the present disclosure, and may further solve the technical problem of reduced accuracy of the generated obstacle information. The reason why the accuracy of the generated obstacle information decreases is as follows: due to the common field of view between two adjacent vehicle-mounted cameras, the obstacle region in the two captured road images may be truncated in the images; if the truncated portion of the obstacle region in the two road images is too large (for example, only the left line of the obstacle detection frame can be observed in the forward-looking road image, and only the right line of the obstacle head or tail detection frame can be observed in the right forward-looking road image), it is difficult to generate complete obstacle information from the information detected in the road images, thereby reducing the accuracy of the generated obstacle information. If the above factors are addressed, the accuracy of the obstacle information can be improved.
To achieve this, first, the information detected in the forward-looking road image and the right forward-looking road image is finely divided, so that corresponding steps are performed for the recognition of different information and the constraint equations are constructed appropriately. Thus, the adaptability of the obstacle information generation method can be improved. Second, in the case that some information (for example, the forward-looking obstacle wheel grounding point coordinate set) cannot be detected in practice, the constraint equations can be adaptively adjusted in the subsequent construction according to the information actually recognized, so that each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group can still be adjusted. Specifically, the first constraint equation and the second constraint equation may be used to constrain the difference between the vertex coordinates of the obstacle three-dimensional detection frame, after projection into the image coordinate system, and the coordinates of the corresponding positions in the image coordinate system to be as small as possible. The third constraint equation and the fourth constraint equation may be used to constrain the distances between the initial obstacle detection frame coordinate points using the prior information, thereby adjusting the initial obstacle detection frame coordinate points and improving the accuracy of the adjusted obstacle detection frame coordinates. In addition, considering that the bottom surface of the obstacle three-dimensional frame is rectangular, the fifth constraint equation can be used to further constrain the coordinate positions between the vertex coordinates in the map coordinate system corresponding to the initial obstacle detection frame coordinate points.
Thus, the minimum objective function can be used to improve the accuracy of the generated adjusted obstacle detection frame coordinates. Further, this can be used to improve the accuracy of the generated obstacle information.
And 105, generating obstacle information by using the historical obstacle length value, the historical obstacle width value and the adjusted obstacle detection frame coordinate set.
In some embodiments, the execution body may generate the obstacle information using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set.
In some optional implementations of some embodiments, the executing body may generate the obstacle information using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set, and may include:
and generating obstacle information based on the upper-left corner vertex coordinate and the adjusted obstacle detection frame coordinate set in response to determining that the forward-looking obstacle detection information includes the upper-left corner vertex coordinate corresponding to the upper-left corner position of the obstacle detection frame on the left line equation of the forward-looking obstacle head-tail detection frame. That the forward-looking obstacle detection information includes this upper-left corner vertex coordinate indicates that the coordinate of the upper-left corner position of the obstacle head-tail detection frame has been observed. Then, since the obstacle three-dimensional detection frame is a cuboid, the height value of the obstacle three-dimensional frame can be generated using a similar-triangle method. Meanwhile, the three vertex coordinates of the bottom surface of the obstacle three-dimensional frame, namely the adjusted obstacle detection frame coordinates in the adjusted obstacle detection frame coordinate set, can be used to determine the equations of all vertex coordinates and all edges of the obstacle three-dimensional detection frame as the obstacle information.
In some optional implementations of some embodiments, the executing body generates the obstacle information using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set, and may further include the steps of:
and generating obstacle information based on the historical obstacle length value, the historical obstacle width value and the adjusted obstacle detection frame coordinate set in response to determining that the forward-looking obstacle detection information does not include the upper-left corner vertex coordinate corresponding to the upper-left corner position of the obstacle detection frame on the left line equation of the forward-looking obstacle head-tail detection frame. That the forward-looking obstacle detection information does not include this upper-left corner vertex coordinate indicates that the upper-left corner vertex coordinate was not detected in the forward-looking road image. Therefore, the historical obstacle length value and the historical obstacle width value can be used as prior values, and the equations of the vertex coordinates and edges of the obstacle three-dimensional detection frame can be determined based on the adjusted obstacle detection frame coordinates in the adjusted obstacle detection frame coordinate set as the obstacle information.
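Because the bottom surface of the obstacle three-dimensional frame is rectangular, its fourth bottom corner follows from the three adjusted corners by the parallelogram rule. The sketch below assumes corner roles (front-left, rear-right, shared rear-left) for illustration; the function name and sample coordinates are hypothetical.

```python
def fourth_corner(p1, p2, p3):
    """Fourth corner of a rectangle whose other corners are p1 and p2,
    with p3 the corner adjacent to both (parallelogram rule: p4 = p1 + p2 - p3)."""
    return (p1[0] + p2[0] - p3[0], p1[1] + p2[1] - p3[1])

# Hypothetical adjusted bottom-face corners in map coordinates (metres).
p4 = fourth_corner((4.5, 0.0), (0.0, 1.8), (0.0, 0.0))  # -> (4.5, 1.8)
```

With all four bottom corners and a height value, every vertex and edge equation of the cuboid detection frame can be written out as the obstacle information.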
Alternatively, the executing body may further send the obstacle information to a target display terminal for display. In addition, the above implementation may detect only the one obstacle nearest to the right front of the current vehicle in the road image. In the actual obstacle information generation process, the cameras of the current vehicle, such as a left forward-looking camera, can be configured as required. Thereafter, the above implementation may be adjusted accordingly to generate the obstacle information, which is not specifically limited here.
The above embodiments of the present disclosure have the following beneficial effects: by the obstacle information generation method of some embodiments of the present disclosure, the accuracy of the generated obstacle information may be improved. Specifically, the cause of the decrease in accuracy of the generated obstacle information is as follows: due to the common field of view between two adjacent vehicle-mounted cameras, there are cases where the obstacle region in the two captured road images is truncated in the images; if the truncated portion of the obstacle region in the two road images is too large (for example, only the left line of the obstacle detection frame can be observed in the forward-looking road image, and only the right line of the obstacle head or tail detection frame can be observed in the right forward-looking road image), it is difficult to generate complete obstacle information from the information detected in the road images, thereby reducing the accuracy of the generated obstacle information. Based on this, the obstacle information generation method of some embodiments of the present disclosure first acquires a historical obstacle length value, a historical obstacle width value, a forward-looking road image captured by a forward-looking camera of the current vehicle, and a right forward-looking road image captured by a right forward-looking camera. Second, the forward-looking road image and the right forward-looking road image are subjected to recognition processing to generate forward-looking obstacle detection information and right forward-looking obstacle detection information. Then, an initial obstacle detection frame coordinate point group is generated based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and the pre-generated obstacle course angle vector.
Here, by introducing the obstacle course angle vector, fusion of the forward-looking obstacle detection information and the right forward-looking obstacle detection information is facilitated. In addition, when it is difficult to fuse the detection information in the forward-looking road image and the right forward-looking road image, the generated initial obstacle detection frame coordinate point group can serve as initial information for generating the obstacle information, facilitating its generation. Then, based on the historical obstacle length value, the historical obstacle width value, the forward-looking obstacle detection information and the right forward-looking obstacle detection information, each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group is adjusted to generate an adjusted obstacle detection frame coordinate set. Through the adjustment processing, each initial obstacle detection frame coordinate point can be adjusted, improving the accuracy of the generated adjusted obstacle detection frame coordinates. Moreover, because the historical obstacle length value and the historical obstacle width value are introduced, they can be combined with the forward-looking obstacle detection information and the right forward-looking obstacle detection information, further improving the accuracy of the generated adjusted obstacle detection frame coordinates. Finally, obstacle information is generated using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set.
Thus, even if the truncated portion of the obstacle region in the two road images is too large, more accurate adjusted obstacle detection frame coordinates can be generated by using the historical obstacle length value, the historical obstacle width value, and the initial obstacle detection frame coordinate points. Further, these can be used to generate complete obstacle information and to improve the accuracy of the generated obstacle information.
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an obstacle information generating device, which correspond to those method embodiments shown in fig. 1, and which are particularly applicable in various electronic apparatuses.
As shown in fig. 2, the obstacle information generating apparatus 200 of some embodiments includes: an acquisition unit 201, an identification processing unit 202, a first generation unit 203, an adjustment processing unit 204, and a second generation unit 205. Wherein the acquiring unit 201 is configured to acquire a history obstacle length value, a history obstacle width value, a forward-looking road image captured by a forward-looking camera of the current vehicle, and a right forward-looking road image captured by a right forward-looking camera; an identification processing unit 202 configured to perform identification processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; a first generation unit 203 configured to generate an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and the obstacle course angle vector generated in advance; an adjustment processing unit 204 configured to perform adjustment processing on each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group based on the history obstacle length value, the history obstacle width value, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information, to generate an adjusted obstacle detection frame coordinate group; the second generation unit 205 is configured to generate obstacle information using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set.
It will be appreciated that the elements described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting benefits described above for the method are equally applicable to the apparatus 200 and the units contained therein, and are not described in detail herein.
Referring now to fig. 3, a schematic diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means 301 (e.g., a central processing unit, a graphics processor, etc.) that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that, in some embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be embodied in the apparatus; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a historical obstacle length value, a historical obstacle width value, a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera; performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; generating an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information and a pre-generated obstacle course angle vector; adjusting each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point set based on the historical obstacle length value, the historical obstacle width value, the forward looking obstacle detection information and the right forward looking obstacle detection information to generate an adjusted obstacle detection frame coordinate set; generating obstacle information by using the history obstacle length value, the history obstacle width value, and the adjusted obstacle detection frame coordinate set.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes an acquisition unit, an identification processing unit, a first generation unit, an adjustment processing unit, and a second generation unit. The names of these units do not constitute limitations on the unit itself in some cases, and the acquisition unit may also be described as "a unit that acquires a history obstacle length value, a history obstacle width value, a forward-looking road image taken by a forward-looking camera of the current vehicle, and a right forward-looking road image taken by a right forward-looking camera", for example.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description presents only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. An obstacle information generation method, comprising:
acquiring a historical obstacle length value, a historical obstacle width value, a forward-looking road image captured by a forward-looking camera of a current vehicle, and a right forward-looking road image captured by a right forward-looking camera;
performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information;
generating an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information and a pre-generated obstacle course angle vector;
based on the historical obstacle length value, the historical obstacle width value, the forward-looking obstacle detection information and the right forward-looking obstacle detection information, adjusting each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group to generate an adjusted obstacle detection frame coordinate set;
and generating obstacle information by using the historical obstacle length value, the historical obstacle width value and the adjusted obstacle detection frame coordinate set.
2. The method of claim 1, wherein the method further comprises:
sending the obstacle information to a target display terminal for display.
3. The method of claim 1, wherein the performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information comprises:
performing obstacle detection on the forward-looking road image to generate forward-looking obstacle detection information, wherein the forward-looking obstacle detection information comprises at least one of the following: a forward-looking obstacle full-vehicle detection frame left edge line equation, a forward-looking obstacle head-and-tail detection frame left edge line equation, and a forward-looking obstacle wheel grounding point coordinate set;
performing obstacle detection on the right forward-looking road image to generate right forward-looking obstacle detection information, wherein the right forward-looking obstacle detection information comprises at least one of the following: a right forward-looking obstacle head-and-tail detection frame left edge line equation, a right forward-looking obstacle head-and-tail detection frame right edge line equation, and a right forward-looking obstacle wheel grounding point coordinate set.
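As a concrete reading of the two detection-information records above, the sketch below models them with optional fields, since each record is only required to comprise "at least one of" the listed items. The field names and the `(a, b, c)` line representation are assumptions for illustration, not part of the claim.

```python
# Hypothetical containers for the two per-camera detection records.
# A line is assumed to be (a, b, c) with a*x + b*y + c = 0 in image coordinates.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Line = Tuple[float, float, float]
Point = Tuple[float, float]

@dataclass
class ForwardObstacleDetection:
    full_frame_left_line: Optional[Line] = None        # full-vehicle frame, left edge
    head_tail_left_line: Optional[Line] = None         # head/tail frame, left edge
    wheel_ground_points: Optional[List[Point]] = None  # wheel grounding points

@dataclass
class RightForwardObstacleDetection:
    head_tail_left_line: Optional[Line] = None         # head/tail frame, left edge
    head_tail_right_line: Optional[Line] = None        # head/tail frame, right edge
    wheel_ground_points: Optional[List[Point]] = None  # wheel grounding points
```

Downstream steps can then branch on which fields are present, mirroring the "in response to determining that ... comprises" clauses of the dependent claims.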
4. The method of claim 3, wherein the generating an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and a pre-generated obstacle course angle vector comprises:
in response to determining that the forward-looking obstacle detection information comprises a forward-looking obstacle full-vehicle detection frame left edge line equation, performing coordinate selection on the forward-looking obstacle full-vehicle detection frame left edge line equation to generate obstacle course coordinates, wherein the obstacle course coordinates are located in a forward-looking road image coordinate system of the forward-looking road image;
in response to determining that the right forward-looking obstacle detection information comprises a right forward-looking obstacle head-and-tail detection frame right edge line equation, determining an obstacle head-and-tail detection frame right edge line vector corresponding to the right forward-looking obstacle head-and-tail detection frame right edge line equation;
performing coordinate selection in the direction of the obstacle head-and-tail detection frame right edge line vector to generate obstacle head-and-tail detection frame right edge line coordinates, wherein the obstacle head-and-tail detection frame right edge line coordinates are located in a right forward-looking road image coordinate system of the right forward-looking road image;
in response to determining that the forward-looking obstacle detection information comprises a forward-looking obstacle full-vehicle detection frame left edge line equation, a forward-looking obstacle head-and-tail detection frame left edge line equation and an obstacle wheel grounding point coordinate set, determining an intersection point coordinate of a connecting line of the obstacle wheel grounding point coordinates in the obstacle wheel grounding point coordinate set with the forward-looking obstacle full-vehicle detection frame left edge line equation as a first forward-looking obstacle detection frame vertex coordinate;
determining an intersection point coordinate of the connecting line of the obstacle wheel grounding point coordinates in the obstacle wheel grounding point coordinate set with the forward-looking obstacle head-and-tail detection frame left edge line equation as a second forward-looking obstacle detection frame vertex coordinate;
and determining the obstacle head-and-tail detection frame right edge line coordinates, the first forward-looking obstacle detection frame vertex coordinate and the second forward-looking obstacle detection frame vertex coordinate respectively as initial obstacle detection frame coordinate points, to obtain an initial obstacle detection frame coordinate point group.
5. The method of claim 4, wherein the adjusting each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group based on the historical obstacle length value, the historical obstacle width value, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information to generate an adjusted obstacle detection frame coordinate set comprises:
in response to determining that the forward-looking obstacle detection information comprises a forward-looking obstacle full-vehicle detection frame left edge line equation, constructing a first constraint equation based on the obstacle course coordinates and the forward-looking obstacle full-vehicle detection frame left edge line equation;
in response to determining that the right forward-looking obstacle detection information comprises a right forward-looking obstacle head-and-tail detection frame right edge line equation, constructing a second constraint equation based on the obstacle head-and-tail detection frame right edge line vector and the obstacle head-and-tail detection frame right edge line coordinates;
in response to determining that the forward-looking obstacle detection information comprises a forward-looking obstacle full-vehicle detection frame left edge line equation, a forward-looking obstacle head-and-tail detection frame left edge line equation and an obstacle wheel grounding point coordinate set, constructing a third constraint equation based on the historical obstacle length value, the obstacle course angle vector and a lower-left corner vertex coordinate corresponding to the lower-left corner position of the obstacle detection frame on the forward-looking obstacle head-and-tail detection frame left edge line equation;
constructing a fourth constraint equation based on the historical obstacle width value and the lower-left corner vertex coordinate;
constructing a fifth constraint equation based on each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group;
and performing adjustment processing on each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group based on the first constraint equation, the second constraint equation, the third constraint equation, the fourth constraint equation and/or the fifth constraint equation to generate an adjusted obstacle detection frame coordinate set, wherein each adjusted obstacle detection frame coordinate in the adjusted obstacle detection frame coordinate set is a coordinate in a map coordinate system.
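The claim does not give the five constraint equations in closed form, so the sketch below only illustrates the general shape of such an adjustment step: stack whatever scalar constraint residuals are available and refine the initial coordinate points by damped Gauss-Newton least squares. Every function name, the numerical-Jacobian approach, and the residual forms are assumptions, not the patented method.

```python
# Illustration of constraint-based point adjustment (not the patented method):
# residual_fns are scalar functions of the flattened coordinate vector, and the
# points are moved so all residuals approach zero in a least-squares sense.
import numpy as np

def adjust_points(points, residual_fns, iters=25, damping=1e-6):
    x = np.asarray(points, dtype=float).ravel()
    for _ in range(iters):
        r = np.array([f(x) for f in residual_fns])
        # numerical Jacobian of the stacked residual vector
        J = np.zeros((r.size, x.size))
        eps = 1e-6
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (np.array([f(xp) for f in residual_fns]) - r) / eps
        # damped Gauss-Newton step
        dx = np.linalg.solve(J.T @ J + damping * np.eye(x.size), -J.T @ r)
        x = x + dx
    return [tuple(p) for p in x.reshape(-1, 2)]

# Example: pin a single point to the hypothetical constraints x = 4.5, y = 1.8
# (stand-ins for a length and a width constraint).
adjusted = adjust_points([(0.0, 0.0)],
                         [lambda v: v[0] - 4.5, lambda v: v[1] - 1.8])
```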
6. The method of claim 5, wherein the generating obstacle information using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set comprises:
and generating obstacle information based on the upper-left corner vertex coordinates and the adjusted obstacle detection frame coordinate set in response to determining that the forward-looking obstacle detection information comprises upper-left corner vertex coordinates corresponding to the upper-left corner position of the obstacle detection frame on the forward-looking obstacle head-and-tail detection frame left edge line equation.
7. The method of claim 6, wherein the generating obstacle information using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set further comprises:
and generating obstacle information based on the historical obstacle length value, the historical obstacle width value and the adjusted obstacle detection frame coordinate set in response to determining that the forward-looking obstacle detection information does not include upper-left corner vertex coordinates corresponding to the upper-left corner position of the obstacle detection frame on the forward-looking obstacle head-and-tail detection frame left edge line equation.
8. An obstacle information generating device comprising:
an acquisition unit configured to acquire a historical obstacle length value, a historical obstacle width value, a forward-looking road image captured by a forward-looking camera of a current vehicle, and a right forward-looking road image captured by a right forward-looking camera;
an identification processing unit configured to perform identification processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information;
a first generation unit configured to generate an initial obstacle detection frame coordinate point group based on the forward-looking obstacle detection information, the right forward-looking obstacle detection information, and a pre-generated obstacle course angle vector;
an adjustment processing unit configured to perform adjustment processing on each initial obstacle detection frame coordinate point in the initial obstacle detection frame coordinate point group based on the historical obstacle length value, the historical obstacle width value, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information, to generate an adjusted obstacle detection frame coordinate set;
and a second generation unit configured to generate obstacle information using the historical obstacle length value, the historical obstacle width value, and the adjusted obstacle detection frame coordinate set.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-7.
CN202310395943.8A 2023-04-14 2023-04-14 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium Active CN116563818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310395943.8A CN116563818B (en) 2023-04-14 2023-04-14 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Publications (2)

Publication Number Publication Date
CN116563818A true CN116563818A (en) 2023-08-08
CN116563818B CN116563818B (en) 2024-02-06

Family

ID=87488883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310395943.8A Active CN116563818B (en) 2023-04-14 2023-04-14 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Country Status (1)

Country Link
CN (1) CN116563818B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046743A (en) * 2019-11-21 2020-04-21 新奇点企业管理集团有限公司 Obstacle information labeling method and device, electronic equipment and storage medium
CN111222579A (en) * 2020-01-09 2020-06-02 北京百度网讯科技有限公司 Cross-camera obstacle association method, device, equipment, electronic system and medium
US20210374439A1 (en) * 2020-05-29 2021-12-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Obstacle detection method and device, apparatus, and storage medium
CN113963330A (en) * 2021-10-21 2022-01-21 京东鲲鹏(江苏)科技有限公司 Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN114241448A (en) * 2021-12-31 2022-03-25 深圳市镭神智能***有限公司 Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
CN114802261A (en) * 2022-04-21 2022-07-29 合众新能源汽车有限公司 Parking control method, obstacle recognition model training method and device
CN114943952A (en) * 2022-06-13 2022-08-26 北京易航远智科技有限公司 Method, system, device and medium for obstacle fusion under multi-camera overlapped view field
CN115009305A (en) * 2022-06-29 2022-09-06 北京易航远智科技有限公司 Narrow road passing processing method and narrow road passing processing device
CN115527283A (en) * 2022-09-21 2022-12-27 华南农业大学 Inspection platform and inspection method in cage chicken house
CN115540894A (en) * 2022-12-02 2022-12-30 广汽埃安新能源汽车股份有限公司 Vehicle trajectory planning method and device, electronic equipment and computer readable medium
CN115817463A (en) * 2023-02-23 2023-03-21 禾多科技(北京)有限公司 Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Han Zhiwei: "Research on Trajectory Planning Algorithms for Unmanned Surface Vehicles in Complex Obstacle Environments", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN116563818B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN112598762B (en) Three-dimensional lane line information generation method, device, electronic device, and medium
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN112733820B (en) Obstacle information generation method and device, electronic equipment and computer readable medium
CN113256742B (en) Interface display method and device, electronic equipment and computer readable medium
CN113255619B (en) Lane line recognition and positioning method, electronic device, and computer-readable medium
CN114399588B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN115326099B (en) Local path planning method and device, electronic equipment and computer readable medium
CN115540894B (en) Vehicle trajectory planning method and device, electronic equipment and computer readable medium
CN115817463B (en) Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium
CN116164770B (en) Path planning method, path planning device, electronic equipment and computer readable medium
CN116182878B (en) Road curved surface information generation method, device, equipment and computer readable medium
CN113269168B (en) Obstacle data processing method and device, electronic equipment and computer readable medium
CN112232451B (en) Multi-sensor data fusion method and device, electronic equipment and medium
CN116311155A (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN115468578B (en) Path planning method and device, electronic equipment and computer readable medium
CN116563818B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN114723640B (en) Obstacle information generation method and device, electronic equipment and computer readable medium
CN116563817B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN116630436B (en) Camera external parameter correction method, camera external parameter correction device, electronic equipment and computer readable medium
CN116740682B (en) Vehicle parking route information generation method, device, electronic equipment and readable medium
CN114863025B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN116188583B (en) Method, device, equipment and computer readable medium for generating camera pose information
CN116229417A (en) Obstacle distance information generation method, device, equipment and computer readable medium
CN114494428B (en) Vehicle pose correction method and device, electronic equipment and computer readable medium
CN116259037A (en) Guideboard distance information generation method, apparatus, device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant