CN110443819B - Method and device for detecting track of monorail train - Google Patents


Info

Publication number: CN110443819B
Application number: CN201810414889.6A
Authority: CN (China)
Prior art keywords: track, image, information, radar, picture image
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110443819A
Inventor: 鲁星星
Current Assignee: BYD Co Ltd (the listed assignees may be inaccurate)
Original Assignee: BYD Co Ltd
Application filed by BYD Co Ltd
Priority to CN201810414889.6A
Publication of CN110443819A; application granted; publication of CN110443819B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Train Traffic Observation, Control, And Security (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for detecting the track of a monorail train. The method comprises the following steps: obtaining reflector distance information for a target detection area, the reflector distance information being measured by a radar; acquiring picture image information of the target detection area; performing information fusion on the reflector distance information and the picture image information to obtain track edge information in the picture image; and detecting obstacles in the picture image according to the track edge information. The invention uses the radar and the camera simultaneously to acquire information about the target track area, is affected by environment, weather and the like to a far smaller extent than detection methods based on camera images alone, and achieves good detection accuracy on both straight and sharply curved track. In addition, because the monorail is erected at height, obstacle detection is more prone to interference from the background; the invention addresses this specific technical problem well.

Description

Method and device for detecting track of monorail train
Technical Field
The invention relates to the technical field of rail transit, in particular to a method and a device for detecting a rail of a monorail train.
Background
The monorail train system is a rail transit system in which train operation is supported by a single rail erected at a certain height; because the rail is generally well above the ground, far above the head-up viewing angle of pedestrians, it is also vividly called a "cloud rail". The monorail track can be erected in the central median of a road or over a narrow street, so it does not monopolize the road surface, while the transport capacity it provides can approach that of a subway system. Compared with an urban underground rail transit system, the monorail train has the advantages of strong climbing capability, small turning radius, adaptability to various terrains, low noise, low track construction cost, short construction period and the like, and is a relatively cost-effective means of transport in the current technical environment. As a supplement and alternative to underground rail traffic, it is being adopted by an increasing number of cities.
In order to enhance driving safety, a train is required to perceive its driving environment, and obstacle detection is an essential link. Currently, in the field of rail transit, a common practice is to collect image information of the rail ahead of the train with a camera, and to identify the rail and obstacles near it using image processing techniques. For example, since a railway rail has a substantially uniform surface gray scale in the monitored image, exhibits strong edge characteristics, and appears as a long, continuous straight line, it can be identified by detecting that straight line. However, in outdoor operation the track image is easily affected by the environment, weather changes and the like, and detection accuracy is difficult to guarantee under uneven illumination, fog, rain and similar conditions.
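As a rough, runnable illustration of the straight-line criterion described above (a sketch of the conventional camera-only approach, not of the invention's method; the function name and residual threshold are assumptions), one can fit a line to candidate edge pixels and check whether every pixel stays close to it:

```python
def fits_straight_line(points, max_residual=2.0):
    """Least-squares fit of x = a*y + b over edge pixels (x, y); returns True
    when every pixel lies within max_residual pixels of the fitted line,
    i.e. the edge looks like a long, continuous straight rail."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    sxy = sum((p[1] - mean_y) * (p[0] - mean_x) for p in points)
    syy = sum((p[1] - mean_y) ** 2 for p in points)
    a = sxy / syy if syy else 0.0
    b = mean_x - a * mean_y
    return all(abs(p[0] - (a * p[1] + b)) <= max_residual for p in points)

# A straight rail edge passes the test; a curved one fails it.
straight = [(100 + 0.5 * y, y) for y in range(0, 200, 10)]
curved = [(100 + 0.002 * y * y, y) for y in range(0, 200, 10)]
```

It is exactly this kind of test that breaks down on branched or curved track, which is the shortcoming the following paragraphs discuss.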
Further, even when track recognition based on straight-line detection is performed, an accurate result cannot be obtained where the track branches or curves. Compared with the double-track railways used for long-distance traffic, the track of a monorail train for urban traffic is characterized by many more curves. To adapt to the complex road environment of urban areas, and to current and future urban planning, many monorail lines are constructed, or planned to be constructed, above medians on the road surface, above green zones around residential quarters or office buildings, and so on, and often need to bypass particular existing buildings; many areas of the track therefore do not exhibit a continuous straight-line shape. As a result, existing track identification methods based on detecting straight edges in the image cannot achieve satisfactory results on monorail track.
In addition, since a monorail train operates on an aerial track, pedestrians and objects on the ground appear in large numbers in the background of its images, making accurate track recognition more critical than for ground or underground rail vehicles (such as railway trains or subways). Once the track information is not accurately identified, road background information outside the track cannot be effectively distinguished from information inside the track, which easily causes recognition errors; in particular, ground information that belongs to the background is, with high probability, mistakenly identified as an obstacle. When this happens it is difficult to tell whether a rail obstacle is actually present or was recognized by mistake, which greatly troubles driving control. If a detected obstacle is ignored with a certain probability, the consequences in the case of a real obstacle may be too severe to tolerate; if the train stops every time a vehicle or pedestrian outside the track is erroneously judged to be an obstacle, its normal running is seriously affected.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art described above. To this end, a track detection method and device capable of effectively detecting a cloud-rail track are provided.
In order to achieve the above object, an embodiment according to a first aspect of the present invention proposes a track detection method for a monorail train, comprising: obtaining reflector distance information for a target detection area, the reflector distance information being measured by a radar; acquiring picture image information of the target detection area; performing information fusion on the reflector distance information and the picture image information to obtain track edge information in the picture image; and detecting an obstacle in the picture image according to the track edge information.
In some embodiments, the acquisition times of the acquired reflector distance information of the target detection area and the acquired picture image information of the target detection area are synchronized.
In some embodiments, the information fusion of the reflector distance information and the picture image information to obtain the track edge information in the picture image comprises: obtaining a radar image according to the reflector distance information, and identifying a dividing point of the track edge in the radar image; mapping pixel points corresponding to the segmentation points in the radar image into the picture image through coordinate transformation to obtain edge points of the track in the picture image; and obtaining the edge information of the track according to the edge points in the picture image.
In some embodiments, the information fusion of the reflector distance information and the picture image information to obtain the track edge information in the picture image comprises: obtaining a radar image according to the reflector distance information, and identifying a dividing point of the track edge in the radar image; obtaining track edge information in the radar image according to the dividing points; and mapping pixel points corresponding to the track edge in the radar image into the picture image through coordinate transformation to obtain the edge information of the track in the picture image.
In some embodiments, identifying the segmentation points of the track edges in the radar image comprises: acquiring pixel points whose values undergo a step change, relative to neighboring pixel points, larger than a predefined threshold, and taking these pixel points as segmentation points.
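On a single azimuth scanline, this criterion can be sketched as follows (the scanline values and the threshold are synthetic, for illustration only):

```python
def find_segmentation_points(distances, threshold):
    """Return indices i where the radar distance value steps by more than
    `threshold` between neighboring samples i and i+1; such step changes
    mark candidate division points between track surface and background."""
    return [i for i in range(len(distances) - 1)
            if abs(distances[i + 1] - distances[i]) > threshold]

# Synthetic scanline across the track: background ground at ~18 m,
# track surface at ~10 m; the two jumps are the track edges.
scanline = [18.0, 18.1, 18.0, 10.2, 10.1, 10.0, 10.1, 17.9, 18.0]
edges = find_segmentation_points(scanline, threshold=4.0)
```

The threshold is what excludes small surface irregularities while keeping the large track-to-ground height jump.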
In some embodiments, mapping pixel points in the radar image into the picture image through coordinate transformation includes: acquiring a first mapping relation f1 between the coordinate system R of the radar and the world coordinate system W; acquiring a second mapping relation f2 between the world coordinate system W and the coordinate system C of the camera; acquiring a third mapping relation f3 between the coordinate system C of the camera and the picture image; and mapping pixel points in the radar image into the picture image through coordinate transformation according to the first mapping relation f1, the second mapping relation f2 and the third mapping relation f3.
In some embodiments, the obstacle detection according to the edge information of the track includes: performing image mask operation on the picture image according to the edge information of the track to obtain a target track area; and carrying out obstacle detection in the target track area.
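The mask-then-detect idea can be sketched with a toy two-dimensional grid standing in for the picture image; the per-row edge columns are assumed to come from the fused track-edge information:

```python
def mask_track_region(image, left_edge, right_edge):
    """Zero out everything outside [left_edge[r], right_edge[r]] in each row,
    keeping only the target track area for subsequent obstacle detection."""
    out = []
    for r in range(len(image)):
        row = [v if left_edge[r] <= c <= right_edge[r] else 0
               for c, v in enumerate(image[r])]
        out.append(row)
    return out

# 3x5 toy image; the track occupies columns 1..3 (narrowing at the bottom).
image = [[1, 2, 3, 4, 5],
         [1, 2, 3, 4, 5],
         [1, 2, 3, 4, 5]]
left = [1, 1, 2]
right = [3, 3, 3]
masked = mask_track_region(image, left, right)
```

Obstacle detection then only has to consider non-zero pixels, so background traffic below the elevated track is never examined.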
By using the track detection method of the monorail train, the radar and the camera are used to acquire information about the target track area. On the one hand, because the radar has high penetrability and good precision and can readily obtain the distance, direction, speed and other information of a target, it is affected by environment, weather and the like to a far smaller extent than a detection method based on camera images; on the other hand, detecting the track edge from radar-measured distances is free from limitations of track shape, and good detection precision is obtained on straight as well as sharply curved track.
In addition, for the specific technical problem that the monorail is erected at high altitude and obstacle detection is therefore more prone to background interference, the sensor fusion method of the invention accurately determines the track area from the radar information and then identifies obstacles in that area from the image information, giving higher stability and good robustness, effectively preventing false detections and missed detections, improving driving efficiency and guaranteeing driving safety.
An embodiment according to a second aspect of the present invention provides a track detection device of a monorail train, comprising: a camera module for acquiring picture image information of a target detection area; a radar module for scanning the target detection area and acquiring the reflector distance information of the target detection area; and a central control module for controlling the camera module and the radar module, performing information fusion on the acquired picture image and the reflector distance information to obtain track edge information in the picture image, and detecting an obstacle in the picture image according to the track edge information.
In some embodiments, the central control module comprises: the signal sampling control unit is used for controlling the camera module and the radar module to acquire information of a target detection area; the first image processing unit is used for representing the distance information of the reflecting object as a radar image and identifying the dividing points of the track edge in the radar image; the coordinate mapping unit is used for mapping pixel points in the radar image into the picture image through coordinate transformation to obtain corresponding mapping pixel points; the second image processing unit is used for obtaining the edge information of the track in the radar image according to the segmentation points or obtaining the edge information of the track in the picture image according to the corresponding mapping pixel points of the segmentation points; and a third image processing unit for performing obstacle detection in the picture image according to the edge information of the rail.
In some embodiments, the signal sampling control unit controls the radar module and the camera module to synchronously acquire the reflector distance information and the picture image information of the target detection area.
In some embodiments, the information fusion of the acquired picture image and the reflector distance information to obtain the track edge information in the picture image includes: calling a first image processing unit, and identifying a segmentation point of a track edge in a radar image; calling a coordinate mapping unit, and mapping pixel points corresponding to the segmentation points in the radar image into the picture image through coordinate transformation to obtain edge points of the track in the picture image; and calling a second image processing unit to obtain the edge information of the track in the picture image according to the edge points.
In some embodiments, the information fusion of the acquired picture image and the reflector distance information to obtain the track edge information in the picture image includes: calling a first image processing unit, and identifying a segmentation point of a track edge in a radar image; calling a second image processing unit to obtain track edge information in the radar image according to the dividing points; and calling a coordinate mapping unit, and mapping pixel points corresponding to the track edge in the radar image into the picture image through coordinate transformation to obtain the edge information of the track in the picture image.
In some embodiments, the first image processing unit identifying a segmentation point of a track edge in the radar image comprises: acquiring pixel points whose values undergo a step change, relative to neighboring pixel points, larger than a predefined threshold, and taking these pixel points as segmentation points.
In some embodiments, the coordinate mapping unit maps the pixel points in the radar image into the picture image through coordinate transformation, including: acquiring a first mapping relation f1 between the coordinate system R of the radar and the world coordinate system W; acquiring a second mapping relation f2 from the world coordinate system W to the coordinate system C of the camera; acquiring a third mapping relation f3 between the coordinate system C of the camera and the picture image; and mapping pixel points in the radar image into the picture image through coordinate transformation according to the first mapping relation f1, the second mapping relation f2 and the third mapping relation f3 to obtain edge points.
In some embodiments, the third image processing unit performing obstacle detection in the picture image according to the edge information of the rail includes: performing an image mask operation on the picture image according to the edge information of the track to obtain a target track area; and carrying out obstacle detection in the target track area.
The track detection device of the monorail train provided by the invention acquires information about the target track area using the radar and the camera simultaneously. On the one hand, because the radar has high penetrability and good precision and can readily obtain the distance, direction, speed and other information of a target, it is affected by environment, weather and the like to a far smaller extent than a detection method based on camera images; on the other hand, detecting the track edge from radar-measured distances is free from limitations of track shape, and good detection precision is obtained on straight as well as sharply curved track.
In addition, aiming at the specific technical problem that the monorail is erected at high altitude and the obstacle detection is more prone to be interfered by the background, the sensor fusion method disclosed by the invention is used for accurately determining the track area based on the radar information and then identifying the obstacle in the track area according to the image information, so that higher stability and high robustness can be obtained, false detection and missing detection can be effectively prevented, the driving efficiency is improved, and the driving safety is guaranteed.
Embodiments according to the third aspect of the present invention provide a non-transitory computer readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the method according to the first aspect of the present invention.
The non-transitory computer-readable storage medium according to embodiments of the third aspect of the present invention has similar advantageous effects to the embodiments of the method and apparatus according to the first and second aspects of the present invention, and thus, detailed descriptions thereof are omitted.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow diagram of a method of track detection of a monorail car in accordance with one embodiment of the present invention;
FIG. 2 is a schematic diagram of a coordinate position relationship of a method of track detection of a monorail car in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a radar coordinate system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a camera coordinate system according to an embodiment of the invention;
FIG. 5 is a schematic diagram of the relationship between a pixel coordinate system and an image plane coordinate system according to an embodiment of the invention;
FIG. 6 is a schematic diagram of the identification principle of the division points;
FIG. 7 is a schematic flow chart of a method of track detection of a monorail car in accordance with another embodiment of the present invention;
FIG. 8 is a schematic flow chart of a method of track detection of a monorail car in accordance with yet another embodiment of the present invention;
FIG. 9 is a block diagram of a track detection device of the monorail train in accordance with an embodiment of the present invention;
fig. 10 is a block diagram of a central control module according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In overview, the method uses a radar and a camera simultaneously to acquire information: the radar acquires the distance information of reflectors in a target detection area, and the camera acquires the picture image information of the target detection area; information fusion is performed on the radar image and the picture image to obtain track edge information in the picture image; and obstacle detection is performed according to the track edge information. A better detection effect is obtained through the fusion of different types of sensor information.
The method and apparatus of embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method of track detection of a monorail train in accordance with one embodiment of the present invention. The track detection method of the monorail train comprises steps S110 to S140.
In step S110, the reflector distance information of the target detection area is acquired, the reflector distance information being measured by radar. Generally, the radar may be a vehicle-mounted laser radar. The accuracy of the radar can be selected comprehensively according to the specific running requirements of the train, such as the maximum running speed and the required obstacle detection accuracy. For example, a millimeter-wave radar may be selected.
The information collected by the radar is the distance and corresponding angle information from the radar to the reflection point from the signal transmitting end, so that the distance information of the reflection object corresponding to each point on the radar image can be directly obtained. Details of the radar image and the camera view image and their mapping relationship will be described later with reference to fig. 2 to 6.
In step S120, picture image information of the target detection area is acquired, typically by a camera mounted on the vehicle. The type and performance parameters of the camera may likewise be selected according to the specific operating requirements of the train.
Steps S110 and S120 need not be executed in a strict order, and their order may be switched. However, since in most obstacle-detection scenarios the train is moving, i.e. both the camera and the radar are in motion, it is preferable, to facilitate the later data fusion, that the camera and the radar record the acquisition time whenever they acquire information, so that when their acquisitions are asynchronous the coordinate conversion can be corrected using the acquisition times and the train speed. To simplify the calculation, reduce the complexity of the coordinate conversion and obtain more accurate results, in some embodiments the acquisition times of the reflector distance information and of the picture image information of the target detection area are synchronized; that is, the camera and the radar are controlled to acquire information synchronously. In this way, the relative position relationship between the camera and the radar origin is the same at each acquisition.
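The asynchronous case above (correcting coordinates via acquisition time and train speed) can be sketched as follows, under the simplifying assumption that the train moves at constant speed along the world X axis; all names and numbers are illustrative:

```python
def compensate_radar_point(x_w, t_radar, t_camera, train_speed):
    """Shift a radar-derived world X coordinate to where the point would
    appear at the camera's acquisition time (constant-speed assumption).
    As the train moves forward, a fixed target appears closer."""
    dt = t_camera - t_radar          # seconds between the two samples
    return x_w - train_speed * dt

# Radar sampled 0.05 s before the camera, train running at 20 m/s:
x_at_camera_time = compensate_radar_point(50.0, 1.00, 1.05, 20.0)
```

With synchronized acquisition this correction degenerates to the identity (dt = 0), which is why synchronization simplifies the coordinate conversion.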
In step S130, the reflector distance information and the picture image information are fused to obtain track edge information in the picture image.
Information fusion of the reflector distance information and the picture image mainly comprises identifying the edge of the track using the reflector distance information, and performing obstacle detection in the picture image according to the track edge information.
The specific representation forms of the radar collected information and the image information and the mapping relationship between the two will be described in detail with reference to fig. 2 to 6.
Fig. 2 is a schematic diagram of a coordinate position relationship of a track detection method of a monorail train according to an embodiment of the present invention. Wherein the radar can be mounted most forward of the locomotive of the train to obtain an unobstructed field of view, and relatively close to the track. And the camera can be arranged at the position, closest to the direction of the locomotive, of the roof of the train so as to obtain a better shooting angle and range. In addition, for the convenience of modeling calculation, the radar and the camera are horizontally arranged so as to ensure that the cross sections of the radar and the camera are parallel to the track surface and the ground.
In fig. 2, S and Q denote the train top surface and the track surface, respectively. O_r is the radar mounting point and O_c is the camera mounting point. With O_r and O_c as coordinate origins, a radar coordinate system O_r X_r Y_r Z_r and a camera coordinate system O_C X_C Y_C Z_C are established, respectively. A world coordinate system O_W X_W Y_W Z_W is established with a point O_W on the track surface as the origin. The OXZ planes of the three coordinate systems coincide, i.e. the projections of O_r and O_c onto the O_W X_W Y_W plane both fall on the O_W X_W coordinate axis. Typically, both the radar and the camera are mounted on the central axis of the train, and the world coordinate origin O_W is also taken as a point on the center line of the track, which facilitates the calculation. The installation height of the radar is denoted h_1 and that of the camera h_0; O_W and O_c are at a distance d_0 in the X-axis direction, and O_W and O_r at a distance d_1.
Fig. 3 is a schematic view of a radar coordinate system according to an embodiment of the invention. In radar applications, the target is usually determined by a spherical coordinate system, and the coordinates of the target 20 at an arbitrary position P in space can be described by the following three variables:
(1) Distance R: the straight-line distance OP from the radar origin O to the target 20;
(2) Azimuth angle α: the angle between the positive direction of the X axis and OD, the projection on the horizontal plane (OXY) of the line OP connecting the radar origin O and the point P;
(3) Elevation angle β: the angle between the line OP connecting the radar origin O and the point P and its projection OD on the horizontal plane.
The information collected by the radar is actually a series of combinations of (α, β, R) in a spherical coordinate system. In the present disclosure, the information collected by the radar is also described by "radar image", which is an abstract and generalized image concept, and is defined for convenience of explaining the method of the present invention, and is slightly different from the image used for display. In the radar image, the coordinates of each point are an azimuth angle alpha and an elevation angle beta, and the value of each pixel point is a distance R. It can be seen that each pixel point in the radar image is also described by (α, β, R). That is, correlation, projective transformation, and the like of pixel points in the radar image may have the same form as the spherical coordinate expression.
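The (α, β, R) description converts to Cartesian coordinates in the radar frame in the standard spherical-to-Cartesian way, with the angle conventions of Fig. 3; a minimal sketch:

```python
import math

def radar_to_cartesian(alpha, beta, r):
    """Convert one radar return (azimuth alpha, elevation beta, range r),
    angles in radians, to Cartesian (x, y, z) in the radar frame."""
    x = r * math.cos(beta) * math.cos(alpha)
    y = r * math.cos(beta) * math.sin(alpha)
    z = r * math.sin(beta)
    return x, y, z

# A target 10 m away, dead ahead (alpha = 0), at 30 degrees elevation:
x, y, z = radar_to_cartesian(0.0, math.radians(30), 10.0)
```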
For an arbitrary position P in space, the first mapping f1 of the coordinate system R of the radar and the world coordinate system W is as follows:
X_W = R·cosβ·cosα + d_1
Y_W = R·cosβ·sinα
Z_W = R·sinβ + h_1        (1)
As can be seen in FIG. 2, the relationship between a point P(X_W, Y_W, Z_W) in space and the camera coordinate system (X_C, Y_C, Z_C), i.e. the second mapping f2 from the world coordinate system W to the camera coordinate system C, is as follows:
X_C = Y_W
Y_C = h_0 - Z_W
Z_C = X_W + d_0        (2)
FIG. 4 is a schematic diagram of the camera coordinate system, showing the projection relationship between the camera and the image plane coordinate system according to an embodiment of the present invention. O_C X_C Y_C is a cross section of the camera coordinate system, OXY is the image plane coordinate system of the picture photographed by the camera, M(x, y) is the projection onto the image plane of a point M(x_C, y_C, z_C) in space, and f is the camera focal length. According to the triangle similarity principle:
x = f·x_C / z_C,    y = f·y_C / z_C
arranging into a matrix form, and for any point:
z_C·[x, y, 1]^T = [f 0 0 0; 0 f 0 0; 0 0 1 0]·[x_C, y_C, z_C, 1]^T        (3)
Fig. 5 shows the relationship between the picture pixel coordinate system and the image plane coordinate system. In FIG. 5, O_0 uv is the pixel coordinate system and O_1 xy is the image plane coordinate system; O_1, the intersection point of the optical axis of the camera with the image plane, is the origin of the image plane coordinate system, with pixel coordinates (u_0, v_0) in the pixel coordinate system. Let d_x and d_y be the physical lengths (in mm) of each pixel in the x-axis and y-axis directions of the image plane, respectively. The two coordinate systems then have the transformation relationship:
u = x / d_x + u_0
v = y / d_y + v_0
arranging into a matrix form:
[u, v, 1]^T = [1/d_x 0 u_0; 0 1/d_y v_0; 0 0 1]·[x, y, 1]^T        (4)
A third mapping f3 from the camera coordinate system to pixel points in the picture image can be determined according to equations (3) and (4). It should be noted that camera calibration has relatively mature solutions in computer vision and related fields, and much open-source code is available; these existing camera calibration methods can be used in the present invention to obtain the third mapping relationship f3. For example, calibration can be performed using the Camera Calibration Toolbox in MATLAB.
Based on the above equations (1) - (4), mapping from coordinate points in the radar coordinate system (or pixel points in the radar image) to pixel positions in the camera image can be realized.
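The mapping chain of equations (1)-(4) can be sketched as follows. The extrinsic matrices (R_rw, t_rw, R_wc, t_wc) and the intrinsic parameters (f, dx, dy, u0, v0) below are illustrative assumptions for the sketch, not values from the patent; in practice they come from extrinsic and camera calibration.

```python
import numpy as np

def radar_to_pixel(p_radar, R_rw, t_rw, R_wc, t_wc, f, dx, dy, u0, v0):
    """Map a 3-D point in the radar frame to (u, v) pixel coordinates."""
    p_world = R_rw @ p_radar + t_rw   # f1: radar -> world, eq. (1)
    p_cam = R_wc @ p_world + t_wc     # f2: world -> camera, eq. (2)
    x = f * p_cam[0] / p_cam[2]       # eq. (3): perspective projection
    y = f * p_cam[1] / p_cam[2]
    u = x / dx + u0                   # eq. (4): image plane -> pixel grid
    v = y / dy + v0
    return u, v

# Example: radar and camera frames coincide with the world frame.
I = np.eye(3)
z = np.zeros(3)
u, v = radar_to_pixel(np.array([0.0, 0.0, 10.0]), I, z, I, z,
                      f=0.008, dx=1e-5, dy=1e-5, u0=320.0, v0=240.0)
# A point on the optical axis lands at the principal point (u0, v0).
print(u, v)  # -> 320.0 240.0
```

As a sanity check, a point on the optical axis must project to the principal point regardless of the focal length and pixel pitch.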
Fig. 6 is a schematic diagram illustrating the principle of identifying the segmentation points. Since the monorail track is elevated at a certain height in the air (a typical height is about 4 meters), the invention uses the height difference between the track and the ground to determine the edge of the track, i.e., the segmentation points between the track and the surrounding space.

Referring to fig. 6, which shows a cross section of the radar scan, O is the radar center position, Q is the track surface, G is the ground, H is the track surface height, β is the radar elevation angle, and P is a segmentation point on one side of the track surface. Obviously, there is a significant difference in height between points lying on the track surface and background objects lying outside the track.
That is, for the two points adjacent to a segmentation point, one on the side close to the track and one on the side away from the track, the distances of the two points from the radar origin differ markedly. Numerically, this distance difference can be approximated by

Δd ≈ H / sin β.

Furthermore, because the radar is positioned on the center line of the track, the segmentation points on the two sides of the track are distributed on either side of the radar's central axis, and the points belonging to each side can be separated by judging the azimuth angle. That is, the points in the radar image at which the distance value changes stepwise as the azimuth angle changes are candidate segmentation points between the track and the surrounding space. In addition, since the track is high and has an obvious height difference from objects in the surrounding environment (for example, pedestrians or vehicles below the track), the condition for judging a segmentation point is that the step change in the pixel value (the distance from the corresponding point to the radar origin) is greater than a predefined threshold. This excludes objects with steep edges that may be present in the background outside the track, such as buses or the flower beds of green belts. Optionally, the distance values in the neighborhood of a segmentation point can be further examined: the distance value of the segmentation point is compared with the reference distance from the radar origin to the track surface at the corresponding elevation angle, and points that deviate substantially from this reference distance (also called interference points) are eliminated.
Therefore, in some embodiments of the method for detecting the track of a monorail train of the present invention, identifying the segmentation points of the track edges in the radar image comprises: acquiring pixel points whose values undergo, within their neighborhood, a step change larger than a predefined threshold, and taking these pixel points as segmentation points.
Step changes include both rising and falling changes. For example, when a point on the track surface is taken as a segmentation point, the distance within its neighborhood increases after the step change (e.g., it becomes the ground-to-radar distance); conversely, when a point on a reflector outside the track surface (for example, the ground) is taken as a segmentation point, the distance after the step change decreases (it becomes the track-to-radar distance). Therefore, when eliminating interference points, the shorter of the two distances on either side of the step change can be compared with the reference distance.
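The step-change rule above, combined with the interference-point check against the reference distance, can be sketched as a scan over one azimuth row of radar range samples. The range profile, threshold, and reference distance below are made-up illustrative values, not data from the patent.

```python
import numpy as np

def find_split_points(ranges, threshold, ref_dist, ref_tol):
    """Indices where the range steps by more than `threshold`, keeping only
    steps whose near (track) side lies within `ref_tol` of the reference
    radar-to-track distance `ref_dist` (interference-point rejection)."""
    points = []
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > threshold:
            # The shorter side of the step is the track-surface side.
            near = min(ranges[i], ranges[i - 1])
            if abs(near - ref_dist) <= ref_tol:
                points.append(i)
    return points

# Track surface ~5 m from the radar, ground beyond the track edges ~9 m.
profile = np.array([9.1, 9.0, 5.1, 5.0, 5.0, 5.1, 9.0, 9.2])
print(find_split_points(profile, threshold=2.0, ref_dist=5.0, ref_tol=0.5))
# -> [2, 6]  (one segmentation point on each side of the radar axis)
```

The two detected indices straddle the flat track-surface run, matching the azimuth-angle separation described above.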
Furthermore, to filter out irrelevant information from distant and surrounding regions, the azimuth and elevation angles can be restricted to a set range rather than scanning the entire three-dimensional space in front of the radar, thereby improving data quality.
The realization of information fusion mainly comprises obtaining the segmentation points between the track and the ground from the radar image, determining the approximate position of the track, and mapping the track position obtained in the radar image into the picture image. Specifically, there are two embodiments, depending on whether the track edge information is determined in the radar image or in the picture image, as shown in FIG. 7 and FIG. 8.
FIG. 7 is a schematic flow chart of a method of track detection of a monorail car in accordance with another embodiment of the present invention. In this embodiment, steps S210, S220, and S240 may be implemented in a manner similar to steps S110, S120, and S140 in the embodiment in fig. 1, and are not described again. The information fusion of the reflector distance information and the picture image information to obtain the track edge information in the picture image may further include steps S231 to S233.
In step S231, a division point of the track edge is identified in the radar image.
In step S232, pixel points corresponding to the segmentation points in the radar image are mapped to the picture image through coordinate transformation, so as to obtain edge points of the track in the picture image.
In step S233, edge information of the track is obtained from the edge points in the screen image.
FIG. 8 is a schematic flow chart of a method of track detection of a monorail car in accordance with yet another embodiment of the present invention. In this embodiment, steps S310, S320, and S340 may be implemented in a manner similar to steps S110, S120, and S140 in the embodiment in fig. 1, and are not described again. The information fusion of the reflector distance information and the picture image information to obtain the track edge information in the picture image may further include steps S331 to S333.
In step S331, a division point of a track edge is identified in the radar image.
In step S332, orbit edge information is obtained in the radar image from the division points.
In step S333, pixel points corresponding to the track edge in the radar image are mapped to the picture image through coordinate transformation, so as to obtain edge information of the track in the picture image.
The embodiments corresponding to fig. 7 and 8 differ in whether the track edge information is determined directly from the segmentation points in the radar image, or the segmentation points are first mapped to edge points in the picture image and the track edge information is then determined from those edge points. Both ultimately determine the edge information of the track in the picture image, and either form may be chosen.
In contrast, the embodiment shown in fig. 7 is more intuitive in effect, since the track edge is confirmed in the picture image. In addition, because the track appears in the picture image as a continuous curve, the shape of the track edge can be restored from the discrete points by methods such as curve fitting or interpolation. Therefore, the following description takes as an example mapping the segmentation points to edge points in the picture image and determining the edge information from those edge points.
Mapping pixel points in the radar image into the picture image through coordinate transformation, according to the above description in conjunction with fig. 2 to 5, may include: acquiring a first mapping relation f1 between a coordinate system R of the radar and a world coordinate system W; acquiring a second mapping relation f2 from the world coordinate system W to the coordinate system C of the camera; acquiring a third mapping relation f3 from the camera coordinate system to pixel points in the picture image; and mapping the pixel points in the radar image into the picture image through coordinate transformation according to the first mapping relation f1, the second mapping relation f2 and the third mapping relation f3.
It is to be noted that, whether the segmentation points are mapped to edge points, or the track edge information in the radar image is mapped to track edge information in the camera's picture image, the substance is a mapping from a pixel set in the radar image to a pixel set in the picture image, and it can therefore be realized by formulas (1)-(4).
The segmentation points are mapped into the picture image coordinate system through the conversion relation between the radar coordinate system and the picture image coordinate system. The resulting edge points are a number of discrete points, which by themselves cannot delineate the track finely and are sometimes mixed with interference points. Therefore, further data processing may be performed to obtain accurate track edge information.
Since the track can be approximately regarded as a continuous curve, in some embodiments an accurate segmentation line is obtained by curve fitting all the segmentation points. Moreover, one characteristic of a monorail train track is that it is not straight but often contains curves, so a spline interpolation of degree greater than two yields a good fitting effect. For example, a cubic B-spline curve may be used for the fitting. Specific curve fitting procedures are available in the related art in various mathematical methods and computer programs; for example, the corresponding toolbox in MATLAB can be used. Those skilled in the art can implement the specific data fitting process according to the curve fitting approach taught by the present invention (i.e., the chosen form of the fitted curve), so a detailed description is omitted here.
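The fitting step above can be sketched as follows. The patent suggests a spline of degree greater than two (e.g., a cubic B-spline, as available in MATLAB's toolboxes or SciPy's `splprep`); to stay self-contained, this sketch substitutes a cubic least-squares polynomial via numpy, and the edge points are illustrative, not from the patent.

```python
import numpy as np

# Discrete edge points: column position u of the track edge at each image row v.
v = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # row index in the picture image
u = 100 + 2.0 * v + 0.5 * v**2                 # a gently curving edge

coeffs = np.polyfit(v, u, deg=3)               # cubic least-squares fit
u_fit = np.polyval(coeffs, v)                  # smoothed, continuous edge curve

# On this noiseless quadratic data the cubic fit recovers the points exactly.
print(np.allclose(u_fit, u, atol=1e-6))  # -> True
```

In a real system the same fit would be evaluated over all image rows to turn the sparse, possibly noisy edge points into a continuous segmentation line.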
In step S140, obstacle detection is performed on the picture image based on the track edge information. Various image recognition algorithms in the related art may be employed to detect whether an obstacle exists in the area within the track edges in the picture image. For example, an image mask operation can be performed on the picture image according to the edge information of the track to obtain a target track area, and obstacle detection can then be carried out in the target track area.
In the related art, existing rail obstacle detection methods for railways and subways in particular can be applied in this step. After the track edge information is determined, the presence or absence of obstacles in the track area is judged; for a monorail, obstacle detection based on image recognition more easily achieves good results because the track has no background interference from sleepers.
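The image mask operation mentioned above can be sketched as keeping, in each image row, only the pixels between the fitted left and right edge columns. The tiny image and per-row edge arrays below are illustrative assumptions.

```python
import numpy as np

def mask_track_area(image, left_edge, right_edge):
    """Zero every pixel outside [left_edge[r], right_edge[r]] in each row,
    leaving only the target track area for subsequent obstacle detection."""
    masked = np.zeros_like(image)
    for r in range(image.shape[0]):
        lo, hi = left_edge[r], right_edge[r]
        masked[r, lo:hi + 1] = image[r, lo:hi + 1]
    return masked

img = np.arange(20).reshape(4, 5)   # a 4x5 stand-in for the picture image
left = np.array([1, 1, 2, 2])       # left track edge column per row
right = np.array([3, 3, 4, 4])      # right track edge column per row
out = mask_track_area(img, left, right)
print(int(out.sum()))  # -> 120 (only in-track pixel values survive)
```

Obstacle detection then runs on `out`, so anything outside the track boundary cannot trigger a false detection.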
By using the track detection method of the monorail train, the radar and the camera are used simultaneously to acquire information of the target track area. On the one hand, because radar has high penetrability and good precision and can readily obtain information such as the distance, direction and speed of a target, it is far less affected by the environment, weather and the like than a detection method based on camera images alone; on the other hand, detecting the track edge from radar-measured distances is not limited by the track shape, so good detection precision can be obtained whether the track is straight or more complex.
In addition, aiming at the specific technical problem that the monorail track is suspended high in the air so that obstacle detection is more prone to background interference, the sensor fusion method provided by the invention accurately determines the track area from the radar information and then identifies obstacles in that area from the image information. The method therefore has higher stability and good robustness, effectively prevents false detections and missed detections, improves driving efficiency, and safeguards driving safety.
In order to better implement the track detection method of the monorail train, the embodiment of the second aspect of the invention provides a track detection device for a monorail train. For details of the implementation of the functions and actions of each module, reference may be made to the implementation of the corresponding steps in the above method, which are not repeated here. For the device embodiments, since they substantially correspond to the method embodiments, reference is made to the partial description of the method embodiments for points not exhaustively described.
Fig. 9 is a block diagram showing the structure of a track detection device for a monorail train in accordance with an embodiment of the present invention. The track detection device 100 of the monorail train comprises a camera module 110, a radar module 120 and a central control module 130.
The camera module 110 is used to acquire image information of a target detection area.
The radar module 120 is configured to scan a target detection area and obtain distance information of a reflection object in the target detection area.
The central control module 130 is configured to control the camera module and the radar module to work, perform information fusion on the acquired picture image and the radar image, identify a track in the picture image, and perform track obstacle detection.
Fig. 10 is a block diagram of a central control module according to an embodiment of the present invention. The central control module 130 includes a signal sampling control unit 131, a first image processing unit 132, a coordinate mapping unit 133, a second image processing unit 134, and a third image processing unit 135.
The signal sampling control unit 131 is configured to control the camera module and the radar module to acquire information of a target detection area. In some embodiments, the signal sampling control unit 131 controls the camera module 110 and the radar module 120 to synchronously acquire the object distance information and the picture image information of the target detection area.
The first image processing unit 132 is used to identify the segmentation points of the track edges in the radar image. Identifying segmentation points for the rail edges in the radar image may include: and acquiring pixel points with the value of the pixel points in the neighborhood having step change larger than a predefined threshold, and taking the pixel points as segmentation points. For details, reference is made to the description of the method embodiment in connection with fig. 6.
The coordinate mapping unit 133 is configured to map pixel points in the radar image to the picture image through coordinate transformation, so as to obtain corresponding mapped pixel points. The coordinate mapping may include: acquiring a first mapping relation f1 between a coordinate system R of the radar and a world coordinate system W; acquiring a second mapping relation f2 from the world coordinate system W to the coordinate system C of the camera; acquiring a third mapping relation f3 between the coordinate system C of the camera and the picture image; and mapping pixel points in the radar image into the picture image through coordinate transformation according to the first mapping relation f1, the second mapping relation f2 and the third mapping relation f3 to obtain edge points. For details, reference is made to the description of the method embodiment in connection with figs. 2 to 5.
The second image processing unit 134 is configured to obtain edge information of the track in the radar image according to the segmentation points, or to obtain edge information of the track in the picture image according to the mapped pixel points corresponding to the segmentation points.

The third image processing unit 135 is configured to perform obstacle detection in the picture image based on the edge information of the track.
Corresponding to the method embodiments described in conjunction with figs. 7 and 8, the central control module performs information fusion on the acquired picture image and the reflector distance information to obtain the track edge information in the picture image, which can be done in either of two ways.
The first mode comprises the following steps: the first image processing unit 132 is invoked to identify the segmentation points of the track edges in the radar image. A coordinate mapping unit 133 is called to map pixel points corresponding to the segmentation points in the radar image into the picture image through coordinate transformation, so as to obtain edge points of the track in the picture image; and calling the second image processing unit 134 to obtain the edge information of the track in the picture image according to the edge points.
The second way includes: calling a first image processing unit 132 to identify a segmentation point of the track edge in the radar image; calling a second image processing unit 134 to obtain track edge information in the radar image according to the segmentation points; and calling a coordinate mapping unit 133 to map pixel points corresponding to the track edge in the radar image into the picture image through coordinate transformation, so as to obtain the edge information of the track in the picture image.
The third image processing unit 135 performs obstacle detection according to the edge information of the track in the picture image, including: performing an image mask operation on the picture image according to the edge information of the track to obtain a target track area; and performing obstacle detection in the target track area.
The track detection device of the monorail train provided by the invention uses the radar and the camera simultaneously to acquire information of the target track area. On the one hand, because radar has high penetrability and good precision and can readily obtain information such as the distance, direction and speed of a target, it is far less affected by the environment, weather and the like than a detection method based on camera images alone; on the other hand, detecting the track edge from radar-measured distances is not limited by the track shape, so good detection precision can be obtained whether the track is straight or more complex.
In addition, aiming at the specific technical problem that the monorail track is suspended high in the air so that obstacle detection is more prone to background interference, the sensor fusion approach provided by the invention accurately determines the track area from the radar information and then identifies obstacles in that area from the image information. The device therefore has higher stability and good robustness, effectively prevents false detections and missed detections, improves driving efficiency, and safeguards driving safety.
Also proposed in some embodiments of the present invention is a non-transitory computer readable storage medium having stored thereon executable instructions which, when executed on a processor, perform the method of track detection of a monorail train described in the embodiments of the first aspect of the present invention. When the track detection of the train is implemented by a separate device, the storage medium may be provided as a part of that device; alternatively, when the data processing of the track detection is realized by the train central control system, the storage medium may be arranged on the train central control system.
The specific implementation of the storage medium can be obtained from the corresponding embodiment of the method or apparatus of the present invention, and has similar beneficial effects to the corresponding method or apparatus of the present invention, and will not be described herein again.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more, for example, two, three, etc., unless specifically defined otherwise.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (9)

1. A method for detecting a track of a monorail car, comprising:
obtaining the distance information of a reflecting object in a target detection area, wherein the distance information of the reflecting object is measured by a radar;
acquiring picture image information of a target detection area;
performing information fusion on the reflector distance information and the picture image information to obtain track edge information in the picture image; and
in the picture image, carrying out obstacle detection according to the track edge information;
the information fusion of the reflector distance information and the picture image information to obtain the track edge information in the picture image comprises the following steps:
obtaining a radar image according to the reflector distance information, and identifying segmentation points of the track edge in the radar image; mapping pixel points corresponding to the segmentation points in the radar image into the picture image through coordinate transformation to obtain edge points of the track in the picture image; and obtaining edge information of the track in the picture image according to the edge points; or,
obtaining a radar image according to the reflector distance information, and identifying a dividing point of the track edge in the radar image; obtaining track edge information in the radar image according to the dividing points; mapping pixel points corresponding to the track edges in the radar image into the picture image through coordinate transformation to obtain edge information of the tracks in the picture image;
wherein identifying the segmentation points of the track edges in the radar image comprises: acquiring pixel points whose values undergo, within their neighborhood, a step change larger than a predefined threshold, and taking these pixel points as segmentation points.
2. The method for detecting a track of a monorail car of claim 1,
the acquired reflector distance information of the target detection area and the acquired acquisition time of the picture image information of the target detection area are synchronized.
3. The method for detecting the track of the monorail train as claimed in claim 1, wherein the step of mapping pixel points in the radar image into the picture image through coordinate transformation comprises:
acquiring a first mapping relation f1 between a coordinate system R of the radar and a world coordinate system W;
acquiring a second mapping relation f2 between the world coordinate system W and the coordinate system C of the camera;
acquiring a third mapping relation f3 between the coordinate system C of the camera and the picture image; and
and mapping pixel points in the radar image into the picture image through coordinate transformation according to the first mapping relation f1, the second mapping relation f2 and the third mapping relation f 3.
4. The track detection method of a monorail train as defined in claim 1, wherein performing obstacle detection based on edge information of the track comprises:
performing image mask operation on the picture image according to the edge information of the track to obtain a target track area; and
and carrying out obstacle detection in the target track area.
5. A rail detection device for a monorail car, comprising:
the camera module is used for acquiring picture image information of a target detection area;
the radar module is used for scanning a target detection area and acquiring the distance information of a reflecting object in the target detection area; and
the central control module is used for controlling the camera module and the radar module to work, and performing information fusion on the acquired picture image and the reflector distance information to obtain track edge information in the picture image; detecting obstacles in the picture image according to the track edge information;
wherein the central control module comprises: the signal sampling control unit is used for controlling the camera module and the radar module to acquire information of a target detection area; the first image processing unit is used for representing the distance information of the reflecting object as a radar image and identifying the dividing point of the track edge in the radar image; the coordinate mapping unit is used for mapping pixel points in the radar image into the picture image through coordinate transformation to obtain corresponding mapping pixel points; the second image processing unit is used for obtaining the edge information of the track in the radar image according to the segmentation points or obtaining the edge information of the track in the picture image according to the corresponding mapping pixel points of the segmentation points; a third image processing unit for performing obstacle detection in the picture image based on the edge information of the rail;
performing information fusion on the acquired picture image and the reflector distance information to obtain track edge information in the picture image, wherein the track edge information comprises:
calling the first image processing unit to identify segmentation points of the track edge in the radar image; calling the coordinate mapping unit to map pixel points corresponding to the segmentation points in the radar image into the picture image through coordinate transformation to obtain edge points of the track in the picture image; and calling the second image processing unit to obtain edge information of the track in the picture image according to the edge points; or,
calling a first image processing unit, and identifying a segmentation point of a track edge in a radar image; calling a second image processing unit to obtain track edge information in the radar image according to the dividing points; calling a coordinate mapping unit, and mapping pixel points corresponding to the track edge in the radar image into the picture image through coordinate transformation to obtain the edge information of the track in the picture image;
wherein the first image processing unit identifying the segmentation points of the track edge in the radar image comprises: acquiring pixel points whose values undergo, within their neighborhood, a step change larger than a predefined threshold, and taking these pixel points as segmentation points.
6. The apparatus for detecting a track of a monorail car of claim 5,
and the signal sampling control unit controls the radar module and the camera module to synchronously acquire the distance information of the reflecting object and the image information of the target detection area.
7. The apparatus for detecting a monorail train track according to claim 5, wherein said coordinate mapping unit maps pixel points in the radar image into the screen image by coordinate transformation, and comprises:
acquiring a first mapping relation f1 between a coordinate system R of the radar and a world coordinate system W;
acquiring a second mapping relation f2 from the world coordinate system W to the coordinate system C of the camera;
acquiring a third mapping relation f3 between the coordinate system C of the camera and the picture image; and
mapping pixel points in the radar image into the picture image through coordinate transformation according to the first mapping relation f1, the second mapping relation f2, and the third mapping relation f3, to obtain the edge points.
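The f1-f3 chain amounts to composing a radar-to-world rigid transform, a world-to-camera rigid transform, and a pinhole projection. A minimal sketch, with identity extrinsics and a made-up intrinsic matrix standing in for real radar-camera calibration results:

```python
import numpy as np

# Hypothetical calibration parameters for illustration only.
R_rw, t_rw = np.eye(3), np.zeros(3)     # f1: radar frame R -> world frame W
R_wc, t_wc = np.eye(3), np.zeros(3)     # f2: world frame W -> camera frame C
K = np.array([[800.0,   0.0, 320.0],    # f3: camera frame C -> pixel
              [  0.0, 800.0, 240.0],    # coordinates (pinhole intrinsics)
              [  0.0,   0.0,   1.0]])

def radar_to_pixel(p_radar: np.ndarray) -> np.ndarray:
    """Map a 3-D point in the radar frame to (u, v) pixel coordinates."""
    p_world = R_rw @ p_radar + t_rw     # apply f1
    p_cam = R_wc @ p_world + t_wc       # apply f2
    uvw = K @ p_cam                     # apply f3 (homogeneous projection)
    return uvw[:2] / uvw[2]             # perspective divide
```

With identity extrinsics, a point 2 m straight ahead of the radar projects to the assumed principal point (320, 240).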
8. The apparatus for detecting the track of a monorail train according to claim 5, wherein the performing, by the third image processing unit, of obstacle detection based on the edge information of the track in the picture image comprises:
performing an image mask operation on the picture image according to the edge information of the track, to obtain a target track area; and
carrying out obstacle detection within the target track area.
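The mask operation of claim 8 can be sketched as zeroing every pixel outside the per-row track bounds; representing the edge information as left/right column indices per row is an assumption for illustration:

```python
import numpy as np

def mask_track_region(image: np.ndarray, left_edge, right_edge) -> np.ndarray:
    """Keep only pixels between left_edge[r] and right_edge[r] in each row,
    zeroing the rest, so obstacle detection runs only inside the track area."""
    masked = np.zeros_like(image)
    for r, (lo, hi) in enumerate(zip(left_edge, right_edge)):
        masked[r, lo:hi + 1] = image[r, lo:hi + 1]  # copy the in-track span
    return masked
```

Detection within the resulting target track area then ignores everything outside the fused track edges, which reduces false alarms from trackside clutter.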
9. A non-transitory computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method for detecting the track of a monorail train according to any one of claims 1-4.
CN201810414889.6A 2018-05-03 2018-05-03 Method and device for detecting track of monorail train Active CN110443819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810414889.6A CN110443819B (en) 2018-05-03 2018-05-03 Method and device for detecting track of monorail train


Publications (2)

Publication Number Publication Date
CN110443819A (en) 2019-11-12
CN110443819B (en) 2022-04-15

Family

ID=68427849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810414889.6A Active CN110443819B (en) 2018-05-03 2018-05-03 Method and device for detecting track of monorail train

Country Status (1)

Country Link
CN (1) CN110443819B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178215B (en) * 2019-12-23 2024-03-08 深圳成谷科技有限公司 Sensor data fusion processing method and device
CN111832411B (en) * 2020-06-09 2023-06-27 北京航空航天大学 Method for detecting obstacle in track based on fusion of vision and laser radar
CN111461088B (en) * 2020-06-17 2020-09-08 长沙超创电子科技有限公司 Rail transit obstacle avoidance system based on image processing and target recognition
CN113050654A (en) * 2021-03-29 2021-06-29 中车青岛四方车辆研究所有限公司 Obstacle detection method, vehicle-mounted obstacle avoidance system and method for inspection robot
CN113406642B (en) * 2021-08-18 2021-11-02 长沙莫之比智能科技有限公司 Rail obstacle identification method based on millimeter wave radar
CN115320669A (en) * 2022-08-31 2022-11-11 南京慧尔视智能科技有限公司 Method, device, equipment and medium for detecting railway coming car based on radar map

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102303605A (en) * 2011-06-30 2012-01-04 中国汽车技术研究中心 Multi-sensor information fusion-based collision and departure pre-warning device and method
CN102696060A (en) * 2009-12-08 2012-09-26 丰田自动车株式会社 Object detection apparatus and object detection method
CN103150786A (en) * 2013-04-09 2013-06-12 北京理工大学 Non-contact type unmanned vehicle driving state measuring system and measuring method
CN104331910A (en) * 2014-11-24 2015-02-04 沈阳建筑大学 Track obstacle detection system based on machine vision
CN105480227A (en) * 2015-12-29 2016-04-13 大连楼兰科技股份有限公司 Information fusion method based on infrared radar and video image in active driving technique
CN105667518A (en) * 2016-02-25 2016-06-15 福州华鹰重工机械有限公司 Lane detection method and device


Also Published As

Publication number Publication date
CN110443819A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110443819B (en) Method and device for detecting track of monorail train
US11200433B2 (en) Detection and classification systems and methods for autonomous vehicle navigation
JP6714688B2 (en) System and method for matching road data objects to generate and update an accurate road database
US11573090B2 (en) LIDAR and rem localization
US20210061306A1 (en) Systems and methods for identifying potential communication impediments
CN107851125B9 (en) System and method for two-step object data processing through vehicle and server databases to generate, update and transmit accurate road characteristics databases
JP2022166185A (en) Crowdsourcing and distributing sparse map and lane measurements for autonomous vehicle navigation
US8818026B2 (en) Object recognition device and object recognition method
JP5157067B2 (en) Automatic travel map creation device and automatic travel device.
JP2021530388A (en) Methods and systems for detecting railroad obstacles based on rail segmentation
WO2018060313A1 (en) Methods and systems for generating and using localisation reference data
US20220035378A1 (en) Image segmentation
US20220012509A1 (en) Overhead-view image generation device, overhead-view image generation system, and automatic parking device
CN106909152A (en) A kind of automobile-used context aware systems and automobile
US20240199006A1 (en) Systems and Methods for Selectively Decelerating a Vehicle
US20210208282A1 (en) Detection device and detection system
CN113743171A (en) Target detection method and device
JP3857698B2 (en) Driving environment recognition device
JP2021117048A (en) Change point detector and map information delivery system
US20190244041A1 (en) Traffic signal recognition device
CN114930123A (en) System and method for detecting traffic lights
JP2004355139A (en) Vehicle recognition system
JP2019146012A (en) Imaging apparatus
KR102316818B1 (en) Method and apparatus of updating road network
JP2021056075A (en) Driving controller for self-driving vehicles, target for stopping, and driving control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant