KR20160125803A - Apparatus for defining an area of interest, apparatus for detecting an object in an area of interest, and method for defining an area of interest


Info

Publication number
KR20160125803A
KR20160125803A · KR1020150056779A
Authority
KR
South Korea
Prior art keywords
information
road
image
area
lane
Prior art date
Application number
KR1020150056779A
Other languages
Korean (ko)
Inventor
안성용
곽기호
민지홍
석주일
Original Assignee
국방과학연구소
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 국방과학연구소
Priority to KR1020150056779A
Publication of KR20160125803A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06K 9/00791
    • G06K 9/00798
    • G06T 5/002

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an apparatus for extracting an area, an apparatus for detecting an object, and a method for extracting the area. More specifically, to solve a problem of conventional object detection technology, the present invention extracts an object detection area so that an object is detected on the road while excluding from detection any area where an object is unlikely to exist. The present invention can thereby reduce both the amount of calculation and the time required to detect the object.

Description

FIELD OF THE INVENTION. The present invention relates to a region extracting apparatus, an object detecting apparatus, and a region extracting method.

More particularly, the present invention relates to a region extracting apparatus, an object detecting apparatus, and a region extracting method for extracting an object detection region so that an object can be detected only in an area where it is likely to exist on the road.

In a road environment, autonomous driving of a ground unmanned vehicle means recognizing the given lane markings and lanes and following a planned route within the recognized lane, as opposed to simply reaching a destination in the shortest time or at the least cost while avoiding obstacles. In such a road environment, lane recognition is the most basic requirement, and in particular a function for detecting a static or dynamic object existing on the road and then avoiding it or stopping is indispensable. To detect an object on the road, separately from lane recognition, it is important to first select from the sensor information only the information that is estimated to be an object. This selection guarantees real-time processing by reducing the amount of computation, and it secures recognition accuracy by narrowing the range of information subjected to object recognition, thereby lowering the probability of false positives.

Most conventional techniques focus on performing the most accurate object detection possible by searching or matching the entire sensor information. However, to carry out autonomous driving on a vehicle, what is needed is not a precise object detection technique but one that is fast, computationally light, and has a low probability of false detection even when errors occur. Among conventional object detection techniques, those that extract a region of interest from an image assume that the characteristics of a paved road region (asphalt or cement) are constant while the characteristics of other regions vary greatly; such techniques have the disadvantage of failing in the varied environments found around the moving object and off the road. Conventional techniques using a distance sensor must determine the position of an object from all of the acquired distance information, judging it against a vehicle model or checking the image at that position; they have the disadvantage of being unable to judge information that does not fit the reference model.

Accordingly, the present invention has been made to overcome the limitations of conventional object detection technology, and it is an object of the present invention to provide a region extracting apparatus, an object detecting apparatus, and a region extracting method capable of extracting an object detection region so that an object is detected only in an area of the road where it is judged likely to exist.

To achieve this object, the region extracting apparatus disclosed in this specification extracts an object detection region on the road.

According to an aspect of the present invention, there is provided a region extracting apparatus including an input unit for receiving image information of a road and lane recognition information obtained by recognizing a lane of the road, and a control unit for extracting, based on the image information and the lane recognition information, a detection subject area in which an object is judged to exist.

In one embodiment, the image information may include a video image of the road and three-dimensional distance information, expressed in three-dimensional coordinates, obtained by measuring distances on the road.

In one embodiment, the video image may be generated by a camera that photographs the road, and the three-dimensional distance information may be generated by a three-dimensional distance sensor that measures distances on the road.

In one embodiment, the lane recognition information may be information obtained by recognizing a lane of the road on which a moving object including the region extracting apparatus is traveling.

In one embodiment, the control unit may fuse the video image of the road with the lane recognition information, select the portion of the video image corresponding to the lane recognition information, and generate selection information for the selected portion.

In one embodiment, the control unit may remove a visual effect reflected in the video image before selecting the portion of the video image corresponding to the lane recognition information.

In one embodiment, the control unit may fuse the selection information with the three-dimensional distance information, which expresses the measured distances of the road in three-dimensional coordinates, and extract the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area.

In one embodiment, the control unit may select a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generate a ground model of the road based on the sample area, and remove the portion corresponding to the ground model from the detection subject area.

In one embodiment, the control unit may remove noise information included in the detection subject area.

In one embodiment, the control unit may generate extraction information for the detection subject area and transmit the extraction information to an apparatus for detecting an object on the road.

In one embodiment, the control unit may convert the extraction information into image-information form so that the detection subject area is reflected in the video image.

To achieve the above object, the object detecting apparatus disclosed in this specification is included in a moving object and detects an object on the road on which the moving object travels.

According to an aspect of the present invention, there is provided an object detecting apparatus comprising: an image input unit that obtains image information including a video image of the road and three-dimensional distance information obtained by measuring distances on the road; a lane recognition unit that recognizes a lane of the road and generates lane recognition information; a region extracting unit that extracts, based on the image information and the lane recognition information, a detection subject area in which the object is judged to exist; and an object recognition unit that recognizes the object on the road based on the detection subject area.

In one embodiment, the image input unit may include a camera for photographing the road to generate the video image, and a three-dimensional distance sensor for generating the three-dimensional distance information by measuring distances on the road.

In one embodiment, the lane recognition unit can recognize the lane of the road on which the mobile body is traveling, and generate the lane recognition information.

In one embodiment, the region extracting unit may be the region extracting apparatus described above.

In one embodiment, the region extracting unit may fuse the video image and the lane recognition information, select the portion of the video image corresponding to the lane recognition information, and generate selection information for the selected portion.

In one embodiment, the region extracting unit may fuse the three-dimensional distance information and the selection information, and extract the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area.

In one embodiment, the region extracting unit may select a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generate a ground model of the road based on the sample area, and remove the portion corresponding to the ground model as well as any noise information included in the detection subject area.

In addition, to achieve the above object, the region extracting method disclosed in this specification is a method by which a region extracting apparatus included in a moving object extracts an object detection target region.

According to an aspect of the present invention, there is provided a region extracting method comprising: receiving a video image of the road on which a moving object is traveling, three-dimensional distance information obtained by measuring distances on the road, and lane recognition information obtained by recognizing the lane of the road; fusing the video image and the lane recognition information; selecting the portion of the video image corresponding to the lane recognition information and generating selection information; fusing the three-dimensional distance information and the selection information; and extracting the portion of the three-dimensional distance information corresponding to the selection information as a detection subject area.

In one embodiment, the step of selecting the portion of the video image corresponding to the lane recognition information and generating the selection information may include selecting the portion of the video image corresponding to the lane recognition information, removing the visual effect appearing in the selected portion, and generating the selection information.

In one embodiment, the step of removing the visual effect appearing in the selected portion may remove the visual effect by performing an inverse perspective mapping on the selected portion using a predetermined matrix.

In one embodiment, the step of extracting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area may include selecting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area, selecting a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generating a ground model of the road based on the sample area, removing the portion corresponding to the ground model from the selected detection subject area, removing noise information included in the selected detection subject area, and extracting the detection subject area.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract a detection target area in which an object on the road is judged to exist, making it possible to exclude from detection any area in which an object is unlikely to exist.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract a detection target area in which an object is judged to exist on the road and exclude from detection any area in which an object is unlikely to exist, thereby reducing the amount of calculation and the time consumed for detection.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract a detection target area in which an object is judged to exist on the road and exclude from detection any area in which an object is unlikely to exist, thereby reducing the false detection rate.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract an object detection area and exclude from detection any area in which an object is unlikely to exist, thereby improving the accuracy of detection.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the configuration of the region extracting apparatus disclosed in this specification.
FIG. 2 is a configuration diagram 1 showing the configuration of an object detecting apparatus including the region extracting apparatus disclosed in this specification.
FIG. 3 is a configuration diagram 2 showing the configuration of an object detecting apparatus including the region extracting apparatus disclosed in this specification.
FIG. 4 is an exemplary view showing an example of a video image according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 5 is an exemplary view showing an example of a road according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 6 is an exemplary diagram illustrating the concept and an example of the inverse perspective transformation according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 7 is an exemplary view showing an example of three-dimensional distance information according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 8 is a flowchart showing the extraction process of the detection subject area according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 9A is an exemplary view 1-A for explaining an example of a detection subject area extraction result according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 9B is an exemplary view 1-B showing an example of the detection subject area extraction result according to the example shown in FIG. 9A.
FIG. 10A is an exemplary view 2-A for explaining an example of a detection subject area extraction result according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 10B is an exemplary view 2-B showing an example of the detection subject area extraction result according to the example shown in FIG. 10A.
FIG. 11 is a block diagram showing the configuration of the object detecting apparatus disclosed in this specification.
FIG. 12 is a flowchart showing the sequence of the region extracting method disclosed in this specification.
FIG. 13 is a flowchart 1 showing a sequence according to an embodiment of the region extracting method disclosed in this specification.
FIG. 14 is a flowchart 2 showing a sequence according to an embodiment of the region extracting method disclosed in this specification.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification can be applied to a moving object traveling on a road, to its region extracting method, and to its object detecting method. However, the technology disclosed in this specification is not limited thereto and may be applied to any travel-related device to which the technical idea of the invention can be applied, for example a navigation device or a traveling device included in a moving vehicle on the road, and to the region extracting method and object detecting method thereof. In particular, the present invention can be applied to autonomous navigation systems for ground unmanned vehicles, to region extracting and detecting apparatuses for autonomous navigation, and to region-of-interest extraction technology.

It is noted that the technical terms used herein are used only to describe specific embodiments and are not intended to limit the scope of the technology disclosed herein. Unless defined otherwise in this specification, the technical terms used herein should be interpreted as they are generally understood by those skilled in the art to which the presently disclosed subject matter belongs, and should not be construed in an excessively broad or an excessively narrow sense. In addition, if a technical term used in this specification is an erroneous term that does not accurately express the concept of the disclosed technology, it should be understood as replaced by a technical term that a person skilled in the art can correctly understand. Also, the general terms used in this specification should be interpreted according to their dictionary definitions or in context, and should not be construed in an excessively narrow sense.

Also, singular forms as used herein include plural referents unless the context clearly dictates otherwise. In this specification, terms such as "comprising" or "including" should not be construed as necessarily including all of the elements or steps described in the specification; some elements or steps may be absent, or additional elements or steps may be included.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS. Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, in which like reference numerals denote like or similar elements and redundant descriptions thereof are omitted.

Further, in describing the technology disclosed in this specification, a detailed description of related art is omitted where it is determined that it may obscure the gist of the disclosed technology. In addition, it should be noted that the attached drawings are provided only for easy understanding of the concept of the disclosed technology and should not be construed as limiting its idea.

Hereinafter, the region extracting apparatus and the object detecting apparatus disclosed in this specification will be described with reference to FIGS. 1 to 11.


First, a region extracting apparatus (hereinafter referred to as an extracting apparatus) disclosed in this specification will be described.

The extraction device extracts an object detection area on the road.

As shown in FIG. 1, the extraction device 200 includes an input unit 210 for receiving image information of the road and lane recognition information obtained by recognizing the lane of the road, and a control unit 220 for extracting a detection subject area in which an object is judged to exist.

The extraction device 200 may be included in a moving body that travels on the road.

The moving body may mean a motor vehicle, a bicycle, or a motorcycle running on the road.

The moving body may also refer to a flying object flying low over the road, or a hovering object capable of floating above the ground and traveling along the road.

The moving object may further include, in addition to the extraction device 200, one or more other components for object detection on the road.

For example, the moving object may include a camera 110 for photographing the road, a three-dimensional distance sensor 120 for measuring distances on the road, a lane recognition device 130 for recognizing the lane of the road on which the moving object is traveling, and an object recognition device 300 for recognizing an object on the road based on the detection subject area extracted by the extraction device 200.

The extracting apparatus 200 may be included in the object detecting apparatus 400 included in the moving object, as shown in FIG.

The object detecting apparatus 400 may be included in the moving object to detect an object on the road.

The object detecting apparatus 400 may include the extraction device 200 and the object recognition device 300, which recognizes an object on the road based on the extraction result of the extraction device 200.

As shown in FIG. 3, the extraction device 200 may also be included in an object detection device 400 that further includes the camera 110, the three-dimensional distance sensor 120, and the lane recognition device 130.

That is, the object detecting apparatus 400 may include the extraction device 200 and the object recognition device 300 as shown in FIG. 2, and may further include the camera 110, the three-dimensional distance sensor 120, and the lane recognition device 130 as shown in FIG. 3.

The object detecting apparatus 400 will be described later.

In the extraction device 200, the input unit 210 may receive the image information and the lane recognition information for extracting the detection subject area.

The input unit 210 may receive the data of the image information and the lane recognition information for extracting the detection subject area from the other components included in the moving object.

The input unit 210 may receive the image information and the lane recognition information from the camera 110, the three-dimensional distance sensor 120, and the lane recognition device 130.

The image information may include a video image of the road and three-dimensional distance information, expressed in three-dimensional coordinates, obtained by measuring distances on the road.

The video image may be generated by the camera 110 photographing the road.

As shown in FIG. 4, the video image may be an image of the road on which the moving body 10 is traveling, taken by the camera 110.

The three-dimensional distance information may be the information on the basis of which the object detecting apparatus 400 detects an object existing on the road.

That is, the detection of an object existing on the road may be performed on the basis of the presence or absence of an object indicated by the three-dimensional distance information.

The three-dimensional distance information may be generated by a three-dimensional distance sensor 120 that measures the distance of the road.

That is, the input unit 210 may receive the image from the camera 110, and receive the three-dimensional distance information from the three-dimensional distance sensor 120.

The lane recognition information may be information obtained by recognizing a lane of the road on which the moving object including the extraction device 200 is traveling.

The lane recognition information may be information obtained by recognizing the lane of the road on which the moving body 10 is traveling.

For example, as shown in FIG. 5, if the road on which the moving body 10 is traveling is a two-way four-lane road, the lane recognition information may indicate that the road on which the moving body 10 is traveling is a two-way four-lane road.

As shown in FIG. 5, there may be objects that affect the traveling of the moving body 10, for example other moving bodies 10a to 10d running on the road, in the lanes of the road being traveled.

That is, since objects that directly affect the traveling of the moving body 10 exist in the lanes of the road being traveled, the area within the lanes of the road on which the moving body 10 travels is used as the basis for extracting the detection subject area.

The lane recognition information may be generated by the lane recognition device 130, which recognizes the lane of the road on which the moving object 10 is traveling.

That is, the input unit 210 may receive the lane recognition information from the lane recognition device 130.

The lane recognition device 130 may acquire an image of the road in order to recognize the lane of the road on which the moving body 10 is traveling, and may perform image processing such as cropping or resizing on the acquired image.

The lane recognition device 130 may perform image processing on the acquired image to generate the lane recognition information.

The lane recognition device 130 may include image acquiring means for acquiring the image of the road on which generation of the lane recognition information is based, and image processing means for performing image processing on the acquired image.

The image acquiring means may be a sensing device distinct from the camera 110 and the three-dimensional distance sensor 120.

The input unit 210 may transmit the received image information and lane recognition information to the control unit 220.

The control unit 220 can extract the detection subject area in which an object is judged to exist, based on the image information and the lane recognition information received from the input unit 210.

The control unit 220 may select the portion of the video image corresponding to the lane recognition information by fusing the lane recognition information with the video image of the road.

The control unit 220 may fuse the video image shown in FIG. 4 with the lane recognition information to select, from the background displayed in the video image, the lane corresponding to the lane recognition information.

For example, for a moving object 10 traveling on a one-way two-lane road, the video image displays a background including the one-way two-lane road being traveled and its surrounding environment; the control unit may fuse this video image with the lane recognition information, in which the lane of the one-way two-lane road being traveled has been recognized, and select the portion of the video image corresponding to the lanes of the one-way two-lane road.

The control unit 220 may select the portion of the video image corresponding to the lane recognition information and generate selection information for the selected portion.

That is, the selection information may correspond to the lane portion of the road on which the moving object 10 is traveling, out of the background in the video image.

The selection information may be information serving as a basis of the detection subject area.

The selection information may be image data information for a portion selected from the image.

The selection information may be data information including coordinate information of an image of a portion selected from the image.

That is, the control unit 220 may fuse the video image and the lane recognition information, select from the background displayed in the video image the portion corresponding to the lane recognition information, and thereby select the data for the lane portion of the road on which the moving body 10 is traveling.
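As one illustration of this fusion step, the following is a minimal Python sketch, assuming the lane recognition information arrives as a polygon of pixel coordinates; the mask-based selection, the function name, and the example values are hypothetical, not the patent's specified implementation.

import numpy as np
from matplotlib.path import Path

def select_lane_portion(image: np.ndarray, lane_polygon: np.ndarray):
    """Return the pixel data and coordinates inside the recognized lane polygon.

    image        : (H, W, 3) video image from the camera.
    lane_polygon : (K, 2) pixel vertices of the recognized lane region
                   (hypothetical form of the lane recognition information).
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pixel_coords = np.column_stack([xs.ravel(), ys.ravel()])
    # Point-in-polygon test: which pixels fall inside the recognized lane.
    inside = Path(lane_polygon).contains_points(pixel_coords).reshape(h, w)
    selection_pixels = image[inside]          # image data for the selection
    selection_coords = np.argwhere(inside)    # coordinate information
    return selection_pixels, selection_coords

# Hypothetical usage: a trapezoidal lane region in a 720x1280 image.
img = np.zeros((720, 1280, 3), dtype=np.uint8)
poly = np.array([[520, 400], [760, 400], [1000, 700], [280, 700]])
pixels, coords = select_lane_portion(img, poly)

This matches the two forms of selection information described above: image data for the selected portion and the coordinate information of that portion.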

When selecting the portion of the video image corresponding to the lane recognition information to generate the selection information, the control unit 220 may first remove the visual effect reflected in the video image.

The visual effect may mean a perspective effect reflected in the video image when it is photographed by the camera 110.

The visual effect may mean a perspective effect reflected in the video image according to at least one of the internal parameters of the camera 110 and the mounting position, height, and angle of the camera 110.

The control unit 220 may remove the visual effect reflected on the image before fusing the image and the lane recognition information.

That is, the controller 220 may fuse the image and the lane recognition information with the visual effect reflected on the image removed.

The control unit 220 may also remove the visual effect reflected in the video image after fusing the video image and the lane recognition information.

That is, the control unit 220 may remove the visual effect reflected on the video image in a state where the lane recognition information is fused to the video image.

The control unit 220 may perform an inverse perspective mapping on the video image using a predetermined matrix to remove the visual effect.

The inverse perspective transformation may be a computation method for the image data.

The inverse perspective transformation may be a method of calculating the actual scale of the road captured in the video image by removing the perspective effect reflected in the image.

The inverse perspective transformation may be a method of calculating the actual scale of the road captured in the video image by inversely converting the scale in the video image, distorted by the perspective effect, into the actual scale.

As shown in FIG. 6, the length of each side of a certain portion of the video image (the trapezoid formed by points p1 to p4) is inversely converted into the actual scale using the predetermined matrix, so that the portion can be displayed on the coordinate plane shown on the right according to the recovered actual scale.

The predetermined matrix may be a matrix for inversely transforming the scale of the video image into the actual scale.

The predetermined matrix may be a homography matrix for inversely transforming the scale of the video image into the actual scale.

The predetermined matrix may be a matrix of parameters for the camera 110 that captured the image.

The predetermined matrix may be a matrix including one or more parameters among the intrinsic parameters of the sensor included in the camera 110 and the mounting position, height, and angle of the camera 110.

The predetermined matrix may be set according to the intrinsic parameters of the sensor included in the camera 110 and the mounting position, height, and angle of the camera 110.
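As background on how such a matrix could be set from camera parameters, the following is a minimal sketch using the standard result that, for points on a planar road (z = 0), the projection collapses to a 3x3 homography; the function name is hypothetical and the construction shown is one common choice, not the patent's specified derivation.

import numpy as np

def ground_homography(K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a pixel-to-ground homography from camera parameters.

    For ground-plane points (x, y, 0), projection reduces to K @ [r1 r2 t],
    so its inverse maps pixels back to metric ground coordinates.
    K : 3x3 intrinsic matrix; R, t : camera rotation (3x3) and translation (3,).
    """
    H_ground_to_pixel = K @ np.column_stack([R[:, 0], R[:, 1], t])
    # Usable as the predetermined matrix H_pm in the equations below.
    return np.linalg.inv(H_ground_to_pixel)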

The inverse perspective transformation can be expressed as Equation (1) below.

[Equation 1]

m_i = H_pm · p_i

Here, m_i denotes a point resulting from removal of the visual effect, H_pm denotes the homography matrix used as the predetermined matrix, and p_i denotes an original point of the video image before the visual effect is removed.

m_i and p_i may be expressed in the form of coordinates or vectors in the image, and can be represented as (x'_i, y'_i) and (x_i, y_i), respectively.

Accordingly, Equation (1) above can be expressed as Equation (2) below.

&Quot; (2) "

Figure pat00002

That is, the result (x'_i, y'_i) with the visual effect removed can be obtained by applying H_pm to the original (x_i, y_i).

In the example of FIG. 6, the points p1 to p4 in the video image form a trapezoid because of the visual effect, but at the actual scale the corresponding points m1 to m4 on the right-hand coordinate plane lie at (+width) m and (-width) m laterally with respect to the reference origin of the moving object 10, and at distances of (t1) m and (t2) m ahead of the origin.

That is, by performing the inverse perspective transformation on any portion of the video image, the actual scale, including the actual width of and distance to that portion, can be determined.

The control unit 220 may perform the inverse perspective transformation on the entire video image by regarding the road captured in the image as a plane.
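As an illustration of Equation (2), the following is a minimal Python sketch of the inverse perspective mapping step. The homography values and image points are hypothetical placeholders, not values from the patent; in practice H_pm would be derived from the camera's intrinsic parameters and its mounting position, height, and angle, as described above.

import numpy as np

def inverse_perspective_mapping(H_pm: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Map image points p_i to ground-plane points m_i via m_i = H_pm · p_i.

    H_pm   : 3x3 homography matrix (hypothetical calibration result).
    points : (N, 2) array of pixel coordinates (x_i, y_i).
    Returns an (N, 2) array of metric ground-plane coordinates (x'_i, y'_i).
    """
    n = points.shape[0]
    # Lift pixel coordinates to homogeneous form: (x, y) -> (x, y, 1).
    homogeneous = np.hstack([points, np.ones((n, 1))])
    # Apply the homography to every point at once.
    mapped = (H_pm @ homogeneous.T).T
    # Divide by the third (scale) component to return to 2-D coordinates.
    return mapped[:, :2] / mapped[:, 2:3]

# Example with an assumed homography and the trapezoid p1..p4 of FIG. 6.
H_pm = np.array([[0.02, 0.0, -6.4],
                 [0.0, 0.05, -12.0],
                 [0.0, 0.002, 1.0]])   # hypothetical values for illustration
p = np.array([[300.0, 400.0], [340.0, 400.0],
              [420.0, 470.0], [220.0, 470.0]])
m = inverse_perspective_mapping(H_pm, p)  # metric (x', y') per Equation (2)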

The control unit 220 may fuse the selection information with the three-dimensional distance information, which expresses the measured distances of the road in three-dimensional coordinates, and extract the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area.

As shown in FIG. 7, the three-dimensional distance information may be information indicating, in three-dimensional coordinate form, the result of measuring the distances of the road with the three-dimensional distance sensor 120.

The detection subject area may be an area to be extracted by the extraction device 200.

The detection subject area may be an area that is extracted by the extraction device 200 and then subjected to detection by the object detection device 400.

The detection subject area may refer, according to the selection information, to the lanes of the road on which the moving object 10 is traveling.

That is, the area where objects likely to disturb the traveling of the moving body 10 exist, namely the area covering the lanes of the road on which the moving body 10 is traveling, can be extracted as the detection subject area.

In other words, the control unit 220 fuses the three-dimensional distance information with the selection information so that the portion of the three-dimensional distance information corresponding to the selection information can be extracted as the detection subject area.

Specifically, the control unit 220 fuses the three-dimensional distance information, which represents the road in three-dimensional coordinates, with the selection information, in which the lane being traveled by the moving object 10 has been selected in the video image, and then selects and extracts the portion of the three-dimensional distance information corresponding to the lanes of the road being traveled as the detection subject area.

That is, by fusing the three-dimensional distance information representing the road in three-dimensional coordinates with the selection information corresponding to the lanes of the road being traveled in the video image, the portion corresponding to the lanes of the road on which the moving body 10 is traveling, where an object obstructing its traveling is most likely to exist, is selected and extracted as the detection subject area.

The selection of the detection subject area may be performed through the expression given in Equation (3) below.

&Quot; (3) "

Figure pat00003

Here, x, y, and z may denote coordinates on the three-dimensional distance information, respectively.

x may be the left/right direction on the road with respect to the moving body 10, y the forward/backward direction, and z the up/down direction.

That is, in the x and y directions, the area between the left lane boundary and the right lane boundary, within the range from the minimum distance of interest (d_min) to the maximum distance (d_max), corresponding to the selection information, can be selected; in the z direction, the area between the smallest height value (z_min) and the largest height value (z_max) can be selected.
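A minimal sketch of this gating step follows, assuming the point cloud is already expressed in the vehicle frame described above; the bound values, array names, and function name are hypothetical illustrations of Equation (3), not values given in the patent.

import numpy as np

def select_detection_region(points: np.ndarray,
                            x_left: float, x_right: float,
                            d_min: float, d_max: float,
                            z_min: float, z_max: float) -> np.ndarray:
    """Keep only 3-D distance points inside the lane corridor of Equation (3).

    points : (N, 3) array of (x, y, z) in the vehicle frame
             (x: left/right, y: forward/backward, z: up/down).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = ((x >= x_left) & (x <= x_right) &   # inside left/right lane bounds
            (y >= d_min) & (y <= d_max) &      # within the distance of interest
            (z >= z_min) & (z <= z_max))       # within the height band
    return points[mask]

# Hypothetical example: a 3.5 m wide lane, from 2 m to 40 m ahead.
cloud = np.random.uniform(-20, 40, size=(1000, 3))
roi = select_detection_region(cloud, -1.75, 1.75, 2.0, 40.0, -0.5, 3.0)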

When extracting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area, the control unit 220 may, as shown in FIG. 7, select a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generate a ground model of the road based on the sample area, and remove the portion corresponding to the ground model from the selected detection subject area.

The control unit 220 may generate the ground model to exclude the ground surface of the road from the detection subject area.

The ground model may mean a model of the ground of the road on which the moving body 10 is running.

The ground model may be a three-dimensional model of the ground surface of the road.

Since the ground surface of the road is the surface on which the moving body 10 travels and is not an element that interferes with its traveling, the ground model is generated so that the ground is not reflected in the detection subject area, and the portion corresponding to the ground model is removed from the selected detection subject area.

The controller 220 may generate the ground model for the entire ground surface of the road based on the sample area that is a part of the front area of the road.

Since the ground surface of the road can be regarded as planar, the ground model for the entire ground surface of the road can be generated based on the sample area.

The control unit 220 can generate the ground model using an average value of the ground height of the road.

For example, the average value of the height of the ground of the sample region can be calculated, and the ground model can be generated accordingly.

The control unit 220 can also generate the ground model through RANdom SAmple Consensus (RANSAC), a technique for estimating the parameters of a mathematical model by iterative operations on a data set that contains outliers.

The control unit 220 may generate a plane model of the ground through the RANSAC process and use that plane model as the ground model.
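A minimal sketch of a RANSAC plane fit over the sample area follows, as one way the ground model described above could be built; the iteration count, threshold, and function names are hypothetical choices, not parameters specified in the patent.

import numpy as np

def ransac_ground_plane(points: np.ndarray, iterations: int = 100,
                        threshold: float = 0.1):
    """Fit a plane n·p + d = 0 to sample-area points, tolerating outliers.

    Returns the unit normal n and offset d of the best plane found.
    """
    rng = np.random.default_rng(0)
    best_inliers, best_plane = 0, (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iterations):
        # 1. Randomly sample three points and form a candidate plane.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal.dot(p1)
        # 2. Count points within the distance threshold of the plane.
        inliers = np.sum(np.abs(points @ normal + d) < threshold)
        # 3. Keep the plane supported by the most inliers.
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane

def remove_ground(points: np.ndarray, normal: np.ndarray, d: float,
                  threshold: float = 0.1) -> np.ndarray:
    """Drop points that lie on the fitted ground plane."""
    return points[np.abs(points @ normal + d) >= threshold]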

While extracting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area, the control unit 220 may also remove the noise information included in the selected detection subject area.

The noise information may be information on elements that do not interfere with the traveling of the moving object 10 in the detection subject area.

For example, it may be information about objects located above the height of the moving object 10, such as traffic lights, streetlights, mileposts, and signs, or about overhead structures such as roadside trees, tunnel ceilings, and the ceilings of building entrances.

Since noise elements such as those in the above example do not obstruct the traveling of the moving object 10, the portion corresponding to the noise information is removed from the selected detection subject area so that it is not reflected in the detection subject area.

By removing the portions corresponding to the ground model and the noise information from the selected detection subject area, the control unit 220 finally extracts the detection subject area in which an object on the road is judged to exist.

The extraction of the detection subject area by the extraction device 200, as described above, may be performed as shown in FIG. 8.

Referring to FIG. 8, the extraction process of the detection subject area will be described in brief as follows.

1) The video image and the lane recognition information are fused, and the inverse perspective transformation is performed on the video image to remove the visual effect reflected in it.

2) The selection information is generated according to 1) and fused with the three-dimensional distance information.

3) The selection information is merged with the three-dimensional distance information, and a portion corresponding to the selection information among the three-dimensional distance information is selected as the detection subject region.

4) The ground model is generated based on the three-dimensional distance information, and a portion corresponding to the ground model is removed from the selected detection target area.

5) The noise information is removed from the selected detection subject area.

6) Finally, the detection subject area is extracted.
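The six steps above can be summarized in code. The following is a minimal sketch that chains the hypothetical helpers introduced earlier (inverse_perspective_mapping, select_detection_region, ransac_ground_plane, remove_ground); it illustrates the flow of FIG. 8 under assumed parameters, and is not the patent's implementation.

import numpy as np

def extract_detection_area(image_lane_points: np.ndarray,
                           H_pm: np.ndarray,
                           cloud: np.ndarray,
                           d_min: float = 2.0, d_max: float = 40.0,
                           z_min: float = -0.5, z_max: float = 3.0) -> np.ndarray:
    """Chain steps 1) to 6) of FIG. 8 using the helpers sketched earlier."""
    # 1) Fuse the video image with the lane recognition result and remove
    #    the perspective effect from the selected lane points.
    lane_metric = inverse_perspective_mapping(H_pm, image_lane_points)
    # 2)-3) Treat the metric lane boundaries as the selection information
    #    and gate the three-dimensional distance information with them.
    x_left, x_right = lane_metric[:, 0].min(), lane_metric[:, 0].max()
    region = select_detection_region(cloud, x_left, x_right,
                                     d_min, d_max, z_min, z_max)
    # 4) Fit the ground model on a near-front sample strip and remove it.
    sample = region[region[:, 1] < d_min + 5.0]
    if len(sample) >= 3:
        normal, d = ransac_ground_plane(sample)
        region = remove_ground(region, normal, d)
    # 5) Overhead noise above the height band was already excluded by the
    #    z gate in step 3; further noise rules would be applied here.
    # 6) The remaining points form the extracted detection subject area.
    return region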

Examples of detection subject areas extracted by the above-described process are shown in FIGS. 9 and 10.

FIGS. 9A and 10A show examples of video images, taken by the camera 110, of roads on which the moving body 10 is traveling, and FIGS. 9B and 10B show examples of the detection subject area extraction results for FIGS. 9A and 10A, respectively.

As shown in FIG. 9A, when moving body A 10a is traveling ahead of the moving body 10 in the left lane of the one-way lanes of the road being traveled, and moving body B 10b is traveling ahead of moving body A 10a in the same lane as the moving body 10, the extraction device 200 can extract a detection subject area in which moving body A 10a is recognized in the left lane and moving body B 10b is recognized ahead in the right lane.

As shown in FIG. 10A, when moving body A 10a is traveling ahead in the same lane among the one-way lanes of the road being traveled, and moving body B 10b is traveling ahead of moving body A 10a in the right lane, the extraction device 200 can extract a detection subject area in which moving body A 10a is recognized in the same lane and moving body B 10b is recognized in the right lane.

After finally extracting the detection subject area, the control unit 220 may generate extraction information for the detection subject area and transmit it to an apparatus for detecting an object on the road.

The control unit 220 may transmit the extraction information to the object recognition device 300.

The object recognition device 300 may detect an object existing in the detection subject area based on the transmitted extraction information.

The control unit 220 may convert the extraction information into image-information form so that the detection subject area is reflected in the video image.

The control unit 220 may convert the extraction information into image-information form so that the detection subject area can be reflected and displayed on the monitoring screen of an apparatus for detecting an object on the road.

The conversion into image-information form can be performed by the operation expressed in Equation (4) below.

&Quot; (4) "

Figure pat00004

Here, "image" on the left denotes the image information, "intrinsic" denotes the internal parameters of the camera 110 or of a display device, including the camera 110, on which the screen is displayed, [R|t] denotes the rotation and translation relating the coordinate systems, and "LIDAR" denotes the extraction information.

That is, the extraction information is converted into image-information form through Equation (4) and reflected on the screen of the camera 110 or of a display device including the camera 110 that displays the photographed road, so that the detection subject area can be displayed on the screen of that display device.
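A minimal sketch of Equation (4) follows: projecting extracted 3-D points into the camera image so the detection subject area can be overlaid on the display. The intrinsic and extrinsic values shown are hypothetical placeholders, not calibration data from the patent.

import numpy as np

def project_to_image(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                     lidar_points: np.ndarray) -> np.ndarray:
    """Apply image = intrinsic · [R|t] · LIDAR (Equation (4)).

    K            : 3x3 camera intrinsic matrix.
    R, t         : rotation (3x3) and translation (3,) from LIDAR to camera.
    lidar_points : (N, 3) extracted detection-area points.
    Returns (N, 2) pixel coordinates.
    """
    # [R|t] as a 3x4 matrix applied to homogeneous 3-D points.
    Rt = np.hstack([R, t.reshape(3, 1)])
    homogeneous = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    projected = (K @ Rt @ homogeneous.T).T
    # Perspective division yields pixel coordinates.
    return projected[:, :2] / projected[:, 2:3]

# Hypothetical calibration: identity rotation, sensor offset 1.2 m.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
pixels = project_to_image(K, np.eye(3), np.array([0.0, -1.2, 0.0]),
                          np.array([[1.0, 0.0, 10.0]]))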

Hereinafter, the object detecting apparatus 400 (hereinafter referred to as a detecting apparatus) will be described.

The detection device 400 may be the object detection device 400 described above.

The detection device 400 may include the extraction device 200 described above.

Hereinafter, the description of the extraction device 200 will be omitted, and an embodiment of the detection device 400 will be mainly described.

The detection device 400 may be included in the moving body 10 running on the road.

The detection device 400 is included in the moving object 10 and detects an object on the road on which the moving object 10 travels.

As shown in FIG. 11, the detection device 400 includes an image input unit 100 for obtaining image information including the video image of the road and three-dimensional distance information obtained by measuring distances on the road, a lane recognition unit 130 for recognizing the lane of the road and generating lane recognition information, a region extracting unit 200 for extracting, based on the image information and the lane recognition information, a detection subject area in which an object is judged to exist, and an object recognition unit 300 for recognizing the object on the road based on the detection subject area.

The image input unit 100 may be means for obtaining the image information including the image image and the three-dimensional distance information.

The image input unit 100 may acquire the image information and transmit it to the region extracting unit 200.

The image input unit 100 may include a camera 110 for photographing the road and generating the video image, and a three-dimensional distance sensor 120 for measuring distances on the road and generating the three-dimensional distance information.

The camera 110 and the three dimensional distance sensor 120 may be the camera 110 and the three dimensional distance sensor 120 described above in the extraction device 200.

That is, the detection device 400 may be an apparatus including the camera 110 and the three-dimensional distance sensor 120 described in the extraction device 200.

The lane recognition unit 130 may be means for recognizing the lane of the road on which the moving body 10 is traveling and generating the lane recognition information.

The lane recognition unit 130 may generate the lane recognition information and transmit it to the region extracting unit 200.

The lane recognition unit 130 may be the lane recognition device 130 described in the extraction device 200.

That is, the detection device 400 may be an apparatus including the lane recognition device 130 described in the extraction device 200.

The image input unit 100 and the lane recognition unit 130 may be internal components included in the detection device 400, or external components configured separately outside the detection device 400.

The region extracting unit 200 may extract the detection subject area based on the image information transmitted from the image input unit 100 and the lane recognition information transmitted from the lane recognition unit 130.

The region extracting unit 200 may extract the detection subject area and transmit it to the object recognition unit 300.

The region extracting unit 200 may be the extracting apparatus 200 described above.

That is, the detection device 400 may be an apparatus including the extraction device 200 described above.

That is, the detection device 400 can detect an object on the road based on the detection subject area extracted by the extraction device 200 described above.

The detection subject region extraction process of the region extraction unit 200 may be performed in the same manner as the detection subject region extraction process of the extraction device 200 described above, and a detailed description thereof will be omitted.

The region extracting unit 200 may fuse the video image and the lane recognition information, select the portion of the video image corresponding to the lane recognition information, and generate selection information for the selected portion.

The region extracting unit 200 may fuse the three-dimensional distance information and the selection information to select a portion corresponding to the selection information among the three-dimensional distance information as the detection subject region.

The region extracting unit 200 may select a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generate a ground model of the road based on the sample area, and remove from the selected detection subject area the portion corresponding to the ground model as well as the noise information included in it.

By removing the portions corresponding to the ground model and the noise information from the selected detection subject area, the region extracting unit 200 can extract the final detection subject area.

The object recognition unit 300 may be means for recognizing an object on the road based on the detection subject area extracted by the region extracting unit 200.

That is, the object recognition unit 300 can recognize an object existing in the detection subject area.

The object recognition unit 300 may be the object recognition apparatus 300 previously described in the extraction apparatus 200.

That is, the detection apparatus 400 may be an apparatus including the object recognition apparatus 300 explained in the extraction apparatus 200.

The object recognition unit 300 may include means for displaying the recognition result externally or acting on the recognition result.

The detection device 400 may include the configuration described above to detect an object existing on the road on which the moving body 10 is traveling.

Hereinafter, the region extracting method (hereinafter referred to as the extraction method) disclosed in this specification will be described with reference to FIGS. 12 to 14.

12 is a flowchart showing the sequence of the region extraction method disclosed in this specification.

13 is a flow chart 1 showing the sequence according to an embodiment of the region extraction method disclosed herein.

14 is a flow chart 2 showing the sequence according to an embodiment of the region extraction method disclosed herein.

The extraction method may be a method performed by a region extracting apparatus included in a moving object.

The extraction method may be a method by which the region extracting apparatus included in the moving object extracts an object detection area.

The extraction method may be an area extraction method of the extraction apparatus 200 described above.

The extraction method may be an area extraction method of the control unit 220 of the extraction apparatus 200 described above.

The extraction method may be an area extraction method of the region extraction unit 200 of the detection apparatus 400 described above.

That is, the extraction method can be applied to the extraction device 200 and the detection device 400 described above.

Hereinafter, description overlapping with that of the extraction device 200 and the detection device 400 above will be omitted, and the process of performing the extraction method will be mainly described.

As shown in FIG. 12, the extraction method includes receiving the video image of the road on which the moving object is traveling, the three-dimensional distance information obtained by measuring distances on the road, and the lane recognition information obtained by recognizing the lane (S10); fusing the video image and the lane recognition information (S20); selecting the portion of the video image corresponding to the lane recognition information and generating the selection information (S30); fusing the three-dimensional distance information and the selection information (S40); and extracting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area (S50).

In the receiving step (S10), the video image of the road on which the moving object is traveling, the three-dimensional distance information obtained by measuring distances on the road, and the lane recognition information may be received from the camera, the three-dimensional distance sensor, and the lane recognition device included in the moving object, respectively.

The step of fusing the video image and the lane recognition information (S20) may fuse them so that the portion of the video image corresponding to the lane recognition information can be selected.

The step of selecting the portion of the video image corresponding to the lane recognition information and generating the selection information (S30) may select, from the fused video image and lane recognition information, the portion corresponding to the lane recognition information and generate the selection information for the selected portion.

As shown in FIG. 13, step S30 may include selecting the portion of the video image corresponding to the lane recognition information (S31), removing the visual effect appearing in the selected portion (S32), and generating the selection information (S33).

In step S31, the portion corresponding to the lane recognition information may be selected from the background displayed in the video image by fusing the video image and the lane recognition information.

In step S32, the visual effect appearing in the selected portion, namely the perspective effect reflected in the video image, may be removed.

Step S32 may remove the visual effect by performing the inverse perspective mapping on the selected portion using the predetermined matrix.

In step S33, the selection information may be generated for the selected portion from which the visual effect has been removed, for fusion with the three-dimensional distance information.

The step of fusing the three-dimensional distance information and the selection information (S40) may fuse them so that the portion of the three-dimensional distance information corresponding to the selection information can be extracted as the detection subject area.

The step of extracting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area (S50) may extract that portion from the fused three-dimensional distance information and selection information.

As shown in FIG. 14, step S50 may include selecting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area (S51), selecting a part of the front area of the road indicated by the three-dimensional distance information as a sample area (S52), generating a ground model of the road based on the sample area (S53), removing the portion corresponding to the ground model from the selected detection subject area (S54), removing the noise information included in the selected detection subject area (S55), and extracting the detection subject area (S56).

In the step (S51) of selecting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject region, the portion corresponding to the selection information may be selected from among the portions indicated by the three-dimensional distance information.

In the step (S52) of selecting a part of the front area of the road indicated by the three-dimensional distance information as a sample area, the sample area serving as the basis for generating the ground model, which is used to exclude the ground surface of the road from the detection subject region, may be selected.

In the step S52 of selecting a part of the front area of the road indicated by the three-dimensional distance information as the sample area, an arbitrary area of the front area of the road may be selected as the sample area.
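
In vehicle coordinates (x forward, y left, z up), selecting such a sample area can be as simple as a box test directly ahead of the moving body, as in the sketch below; the 2-8 m by +-1.5 m window is an arbitrary illustrative choice, not a value from the disclosure.

```python
import numpy as np

# Illustrative point cloud in vehicle coordinates: x forward, y left, z up.
points = np.random.uniform(-20.0, 20.0, (5000, 3))

# Take a small patch of road directly ahead as the sample area (S52).
in_sample = (points[:, 0] > 2.0) & (points[:, 0] < 8.0) & (np.abs(points[:, 1]) < 1.5)
sample = points[in_sample]
```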

The step S53 of generating the ground model of the road based on the sample area may generate the ground model for the entire ground of the road based on the sample area.

The step (S53) of generating the ground model of the road based on the sample area may generate the ground model using the average value of the ground height of the road over the sample area.

The step (S53) of generating the ground model of the road based on the sample area may generate the ground model using the RANdom SAmple Consensus (RANSAC) method, a technique for estimating the parameters of a mathematical model through iterative computation from a data set that includes outliers.
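
A minimal RANSAC plane fit over the sample area might look like the following sketch; the iteration count and the 5 cm inlier threshold are illustrative assumptions, not parameters from the disclosure.

```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.05, seed=0):
    """Fit a plane n.p + d = 0 to points containing outliers via RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iters):
        # Draw a minimal sample of three points and derive a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal = normal / norm
        d = -normal @ p0
        # Count points within the distance threshold of the candidate plane.
        inliers = np.abs(points @ normal + d) < threshold
        # Keep the candidate supported by the largest consensus set.
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Illustrative usage on a mostly flat "road" sample with mild noise.
sample = np.random.uniform(-1.0, 1.0, (500, 3))
sample[:, 2] *= 0.02
(plane_normal, plane_d), ground_inliers = ransac_plane(sample)
```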

In the step (S54) of removing the portion corresponding to the ground model from the selected detection subject region, the portion corresponding to the ground model may be removed from the selected detection subject region in order to exclude the ground surface of the road from the detection subject region.
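
Given a fitted ground plane, removing the ground reduces to a point-to-plane distance test. A sketch assuming the (normal, d) pair produced by the RANSAC step above, with a 10 cm margin chosen purely for illustration:

```python
import numpy as np

# Illustrative inputs: candidate detection points and an estimated ground plane.
points = np.random.uniform(-10.0, 10.0, (2000, 3))
plane_normal = np.array([0.0, 0.0, 1.0])  # stand-in for the RANSAC estimate
plane_d = 0.0

# Drop every point lying within 10 cm of the estimated ground plane (S54).
dist = np.abs(points @ plane_normal + plane_d)
non_ground = points[dist > 0.10]
```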

In the step (S55) of removing the noise information included in the selected detection subject region, the portion corresponding to the noise information, that is, information on elements that do not disturb the traveling of the moving body (10), may be removed from the selected detection subject region so that it is not reflected in the detection subject region.
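
Noise of this kind (isolated returns from dust, spray, or sensor artifacts) is typically sparse, so one simple heuristic, not prescribed by the disclosure, is to discard points in sparsely occupied voxels. A sketch with illustrative voxel size and count threshold:

```python
import numpy as np

# Illustrative input: ground-removed points from the detection subject region.
points = np.random.uniform(-10.0, 10.0, (2000, 3))
voxel = 0.2   # voxel edge length in meters (assumed)
min_pts = 3   # voxels with fewer points than this are treated as noise

# Hash each point to its voxel and keep points in well-populated voxels (S55).
keys = np.floor(points / voxel).astype(np.int64)
_, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
denoised = points[counts[inverse] >= min_pts]
```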

In the step (S56) of extracting the detection subject region, the detection subject region from which the portions corresponding to the ground model and the noise information have been removed may be finally extracted as the region in which the object is to be detected.

The embodiments of the region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification can be applied to region extracting apparatuses, object detecting apparatuses, region extracting methods of region extracting apparatuses, and object detecting methods of object detecting apparatuses.

The embodiments of the area extracting apparatus, the object detecting apparatus, and the area extracting method disclosed in this specification can also be applied to all travel-related apparatuses, such as navigation apparatuses and traveling apparatuses included in a moving body traveling on a road, as well as to the area extraction and object detection methods of such apparatuses.

The embodiments of the region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification are particularly applicable to autonomous traveling apparatuses for the autonomous driving of ground unmanned vehicles, and to region extracting and detecting apparatuses for autonomous traveling, and can be put to practical use in these technical fields.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract the detection subject region in which an object on the road is judged to exist, thereby making it possible to exclude from detection the areas in which an object is unlikely to exist.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract the detection subject region in which an object on the road is judged to exist and exclude from detection the areas in which an object is unlikely to exist, thereby reducing the amount of computation required for object detection and the time consumed by detection.

The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract the detection subject region in which an object on the road is judged to exist and exclude from detection the areas in which an object is unlikely to exist, thereby reducing the false detection rate and improving the accuracy of detection.

It will be apparent to those skilled in the art that various modifications and changes can be made in the present invention without departing from the spirit or scope of the invention as defined by the appended claims, and such modifications should be considered to fall within the scope of the following claims.

10: moving object 10a to 10d: moving objects A to D
100: image input unit 110: camera
120: three-dimensional distance sensor 130: lane recognition device (lane recognition unit)
200: area extracting apparatus (area extracting unit) 210: input unit
220: control unit 300: object recognition device (object recognition unit)
400: object detection device

Claims (21)

An area extracting apparatus for extracting an object detection region on a road, the apparatus comprising:
an input unit for receiving image information of the road and lane recognition information in which the lane of the road is recognized; and
a control unit for extracting a detection subject region in which the object is judged to exist, based on the image information and the lane recognition information.
The apparatus according to claim 1,
wherein the image information includes a video image of the road and three-dimensional distance information obtained by measuring the distance of the road.
3. The apparatus of claim 2,
wherein the video image is generated by a camera photographing the road, and
the three-dimensional distance information is generated by a three-dimensional distance sensor measuring the distance of the road.
The apparatus according to claim 1,
wherein the lane recognition information is information obtained by recognizing the lane of the road on which the moving object including the area extracting apparatus is traveling.
The apparatus according to claim 1,
wherein the control unit fuses a video image obtained by photographing the road with the lane recognition information, selects a portion of the video image corresponding to the lane recognition information, and generates selection information for the selected portion.
6. The apparatus of claim 5,
wherein the control unit removes the visual effect reflected in the video image before selecting the portion of the video image corresponding to the lane recognition information.
7. The apparatus of claim 5,
wherein the control unit fuses the selection information with three-dimensional distance information, which is obtained by measuring the distance of the road and represents the road in three dimensions, and extracts the portion of the three-dimensional distance information corresponding to the selection information as the detection subject region.
8. The apparatus of claim 7,
wherein the control unit selects a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generates a ground model of the road based on the sample area, and removes the portion corresponding to the ground model from the detection subject region.
9. The apparatus of claim 7,
wherein the control unit removes the noise information included in the detection subject region.
10. The apparatus of claim 7,
wherein the control unit generates extraction information for the detection subject region and delivers the extraction information to an apparatus for detecting an object on the road.
11. The apparatus of claim 10,
wherein the control unit converts the extraction information into a video information form so that the detection subject region is reflected in the video image.
An object detecting apparatus for detecting an object on a road on which a moving object is traveling, the apparatus comprising:
an image input unit for acquiring image information including a video image of the road and three-dimensional distance information obtained by measuring the distance of the road in three dimensions;
a lane recognition unit for recognizing the lane of the road and generating lane recognition information;
an area extracting unit for extracting a detection subject region in which the object is judged to exist, based on the image information and the lane recognition information; and
an object recognition unit for recognizing the object on the road based on the detection subject region.
13. The apparatus of claim 12,
wherein the image input unit includes:
a camera for photographing the road to generate the video image; and
a three-dimensional distance sensor for measuring the distance of the road to generate the three-dimensional distance information.
14. The apparatus of claim 12,
wherein the lane recognition unit generates the lane recognition information by recognizing the lane of the road on which the moving object is traveling.
15. The apparatus of claim 12,
wherein the area extracting unit fuses the video image and the lane recognition information, selects a portion of the video image corresponding to the lane recognition information, and generates selection information for the selected portion.
16. The apparatus of claim 15,
wherein the area extracting unit fuses the three-dimensional distance information and the selection information to extract the portion of the three-dimensional distance information corresponding to the selection information as the detection subject region.
17. The apparatus of claim 16,
wherein the area extracting unit selects a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generates a ground model of the road based on the sample area, removes the portion corresponding to the ground model from the detection subject region, and removes the noise information included in the detection subject region.
An area extraction method for extracting an object detection region, performed by an area extraction apparatus included in a moving object, the method comprising:
receiving a video image of the road on which the moving object is traveling, three-dimensional distance information obtained by measuring the distance of the road, and lane recognition information in which the lane being traveled by the moving object is recognized;
fusing the video image and the lane recognition information;
selecting a portion of the video image corresponding to the lane recognition information to generate selection information;
fusing the three-dimensional distance information and the selection information; and
extracting, as a detection subject region, the portion of the three-dimensional distance information corresponding to the selection information.
19. The method of claim 18,
wherein the step of selecting the portion of the video image corresponding to the lane recognition information to generate the selection information comprises:
selecting the portion of the video image corresponding to the lane recognition information;
removing the visual effect appearing in the selected portion; and
generating the selection information.
20. The method of claim 19,
wherein the step of removing the visual effect appearing in the selected portion comprises performing inverse perspective mapping on the selected portion using a predetermined matrix to remove the visual effect.
21. The method of claim 18,
wherein the step of extracting, as the detection subject region, the portion of the three-dimensional distance information corresponding to the selection information comprises:
selecting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject region;
selecting a part of the front area of the road indicated by the three-dimensional distance information as a sample area;
generating a ground model of the road based on the sample area;
removing the portion corresponding to the ground model from the selected detection subject region;
removing the noise information included in the selected detection subject region; and
extracting the detection subject region.
KR1020150056779A 2015-04-22 2015-04-22 Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest KR20160125803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150056779A KR20160125803A (en) 2015-04-22 2015-04-22 Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150056779A KR20160125803A (en) 2015-04-22 2015-04-22 Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest

Publications (1)

Publication Number Publication Date
KR20160125803A true KR20160125803A (en) 2016-11-01

Family

ID=57484983

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150056779A KR20160125803A (en) 2015-04-22 2015-04-22 Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest

Country Status (1)

Country Link
KR (1) KR20160125803A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190001668A (en) * 2017-06-28 2019-01-07 현대모비스 주식회사 Method, apparatus and system for recognizing driving environment of vehicle
US10679377B2 (en) 2017-05-04 2020-06-09 Hanwha Techwin Co., Ltd. Object detection system and method, and computer readable recording medium
KR20210009032A (en) * 2019-07-16 2021-01-26 홍범진 Kit device for automatic truck car and control method
KR102398084B1 (en) * 2021-02-19 2022-05-16 (주)오토노머스에이투지 Method and device for positioning moving body through map matching based on high definition map by using adjusted weights according to road condition


Similar Documents

Publication Publication Date Title
CN107272021B (en) Object detection using radar and visually defined image detection areas
EP3732657B1 (en) Vehicle localization
JP7461720B2 (en) Vehicle position determination method and vehicle position determination device
EP3519770B1 (en) Methods and systems for generating and using localisation reference data
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
JP6672212B2 (en) Information processing apparatus, vehicle, information processing method and program
JP6464673B2 (en) Obstacle detection system and railway vehicle
CN108692719B (en) Object detection device
CN106289159B (en) Vehicle distance measurement method and device based on distance measurement compensation
US11841434B2 (en) Annotation cross-labeling for autonomous control systems
JP6450294B2 (en) Object detection apparatus, object detection method, and program
KR102428765B1 (en) Autonomous driving vehicle navigation system using the tunnel lighting
JP6524529B2 (en) Building limit judging device
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
EP3324359B1 (en) Image processing device and image processing method
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
JP6552448B2 (en) Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection
US11151729B2 (en) Mobile entity position estimation device and position estimation method
KR20160125803A (en) Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest
JP2023029441A (en) Measuring device, measuring system, and vehicle
JP2007011994A (en) Road recognition device
JP2018073275A (en) Image recognition device
JP2018084492A (en) Self-position estimation method and self-position estimation device
WO2018225480A1 (en) Position estimating device
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application