KR20160125803A - Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest - Google Patents
- Publication number
- KR20160125803A (application number KR1020150056779A)
- Authority
- KR
- South Korea
- Prior art keywords
- information
- road
- image
- area
- lane
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06K9/00791
- G06K9/00798
- G06T5/002
Abstract
Description
The present invention relates to a region extracting apparatus, an object detecting apparatus, and a region extracting method, and more particularly, to a region extracting apparatus, an object detecting apparatus, and a region extracting method for extracting an object detection region on a road.
In a road environment, autonomous driving of a ground unmanned vehicle means recognizing the given lanes and lane markings and following a planned route within the recognized lane, rather than simply moving around obstacles in the shortest time or at the least cost. In such an environment, lane recognition must be performed at the most basic level, and in particular a function for detecting, and then avoiding or stopping for, static and dynamic objects existing on the road is indispensable. To detect an object on the road, separately from lane recognition, it is important to first select from the sensor information only the information estimated to belong to an object. This process reduces the amount of computation, which guarantees real-time processing, and narrows the range of information subjected to object recognition, which secures recognition accuracy by lowering the probability of false positives.
Most conventional techniques focus on performing the most accurate possible object detection by searching or matching the entire sensor information. For autonomous driving on a vehicle, however, what is needed is not a precise object detection technique but one that is fast, computationally light, and unlikely to produce false detections even when errors occur. Among conventional region-of-interest extraction techniques, image-based methods exploit the fact that a road region paved with asphalt or cement has nearly constant characteristics in the image while other regions vary strongly; they have the disadvantage of being sensitive to moving objects and to the various environments outside the road. Conventional techniques using a distance sensor must determine the position of an object from all of the acquired distance information, judge it against a vehicle model, or verify it against the image at that position, and have the disadvantage that information not matching the reference cannot be judged.
Accordingly, the present invention has been made to overcome the limitations of the conventional object detection technology, and it is an object of the present invention to provide a region extracting apparatus, an object detecting apparatus, and a region extracting method capable of extracting an object detection area so that objects are detected only in the area of the road where an object is judged likely to exist.
In order to solve the above-described problems, the region extracting apparatus disclosed in this specification extracts an object detection area on the road.
According to an aspect of the present invention, there is provided an area extracting apparatus including an input unit for receiving image information of a road and lane recognition information obtained by recognizing a lane of the road, and a control unit for extracting, based on the image information and the lane recognition information, a detection subject area in which an object is judged to exist.
In one embodiment, the image information may include a video image of the road and three-dimensional distance information obtained by measuring the distance of the road and expressed in three dimensions.
In one embodiment, the video image is generated by a camera that photographs the road, and the three-dimensional distance information may be generated by a three-dimensional distance sensor that measures the distance of the road.
In one embodiment, the lane identification information may be information obtained by recognizing a lane on a traveling road of a moving object including the area extracting apparatus.
In one embodiment, the control unit may fuse the video image of the road with the lane recognition information, select the portion of the image corresponding to the lane recognition information, and generate selection information for the selected portion.
In one embodiment, the controller may remove a visual effect reflected on the image before selecting a portion corresponding to the lane recognition information in the image.
In one embodiment, the control unit may fuse the three-dimensional distance information, represented in three-dimensional coordinates by measuring the distance of the road, with the selection information, and extract the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area.
In one embodiment, the control unit may select a part of the area in front of the road indicated by the three-dimensional distance information as a sample area, generate a ground model of the road based on the sample area, and remove the portion of the detection subject area corresponding to the ground model.
In one embodiment, the control unit may remove noise information included in the detection subject area.
In one embodiment, the control unit may generate extraction information for the detection subject area and transmit the extraction information to an apparatus for detecting an object on the road.
In one embodiment, the control unit may convert the extracted information into a video information form so that the detection subject region is reflected in the video.
In order to solve the above-described problems, the object detecting apparatus disclosed in this specification is included in a moving object and detects an object on the road on which the moving object travels.
According to an aspect of the present invention, there is provided an object detecting apparatus comprising: an image input unit that obtains image information including a video image of the road and three-dimensional distance information obtained by measuring the distance of the road; a lane recognition unit that recognizes a lane of the road and generates lane recognition information; a region extracting unit that extracts, based on the image information and the lane recognition information, a detection subject area in which the object is judged to exist; and an object recognition unit that recognizes the object on the road based on the detection subject area.
In one embodiment, the image input unit may include a camera for photographing the road to generate the image, and a three-dimensional distance sensor for generating the three-dimensional distance information by measuring the distance of the road.
In one embodiment, the lane recognition unit can recognize the lane of the road on which the mobile body is traveling, and generate the lane recognition information.
In one embodiment, the region extracting unit may be the region extracting apparatus described above.
In one embodiment, the region extracting unit may fuse the video image and the lane recognition information, and select the portion of the video image corresponding to the lane recognition information to generate selection information for the selected portion.
In one embodiment, the region extracting unit may extract a portion corresponding to the selection information among the three-dimensional distance information into the detection subject region by fusing the three-dimensional distance information and the selection information.
In one embodiment, the region extracting unit may select a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generate a ground model of the road based on the sample area, remove the portion of the detection subject area corresponding to the ground model, and remove the noise information included in the detection subject area.
In addition, in order to solve the above-described problems, the region extracting method disclosed in this specification is a method of extracting an object detection target region of a region extracting apparatus included in a moving object.
According to an aspect of the present invention, there is provided an area extracting method comprising: receiving a video image of the road on which a moving object is traveling, three-dimensional distance information obtained by measuring the distance of the road, and lane recognition information obtained by recognizing the lane of the road; fusing the video image and the lane recognition information and selecting the portion of the video image corresponding to the lane recognition information to generate selection information; and fusing the three-dimensional distance information with the selection information and extracting the portion of the three-dimensional distance information corresponding to the selection information as a detection subject area.
In one embodiment, the step of selecting the portion of the video image corresponding to the lane recognition information to generate the selection information may include selecting the portion of the video image corresponding to the lane recognition information, removing the visual effect reflected in the selected portion, and generating the selection information.
In one embodiment, the step of removing the visual effect reflected in the selected portion may remove the visual effect by performing an inverse perspective mapping on the selected portion using a predetermined matrix.
In one embodiment, the step of extracting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area may include selecting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject area, selecting a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generating a ground model of the road based on the sample area and removing the portion corresponding to the ground model, removing the noise information included in the selected detection subject area, and extracting the detection subject area.
The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract the detection subject area in which an object on the road is judged to exist, and can thereby exclude from detection the areas in which an object is unlikely to exist.
By excluding such areas from detection, the amount of computation and the time consumed for detection can be reduced.
By excluding such areas from detection, the false detection rate can be reduced.
By excluding such areas from detection, the accuracy of detection can be improved.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the configuration of the region extracting apparatus disclosed in this specification.
FIG. 2 is a first block diagram showing the configuration of an object detecting apparatus including the region extracting apparatus disclosed in this specification.
FIG. 3 is a second block diagram showing the configuration of an object detecting apparatus including the region extracting apparatus disclosed in this specification.
FIG. 4 is an exemplary view showing an example of a video image according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 5 is an exemplary view showing an example of a road according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 6 is an exemplary diagram illustrating the concept and an example of the inverse perspective transformation according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 7 is an exemplary view showing an example of three-dimensional distance information according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 8 is a flowchart showing the extraction process of the detection subject area according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 9A is an exemplary view for explaining an example of a detection subject area extraction result according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 9B is an exemplary view showing the detection subject area extraction result for the example shown in FIG. 9A.
FIG. 10A is an exemplary view for explaining another example of a detection subject area extraction result according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 10B is an exemplary view showing the detection subject area extraction result for the example shown in FIG. 10A.
FIG. 11 is a block diagram showing the configuration of the object detecting apparatus disclosed in this specification.
FIG. 12 is a flowchart showing the sequence of the region extracting method disclosed in this specification.
FIG. 13 is a flowchart showing the sequence according to an embodiment of the region extracting method disclosed in this specification.
FIG. 14 is a flowchart showing the sequence according to another embodiment of the region extracting method disclosed in this specification.
The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification can be applied to a moving object traveling on a road, to its region extracting apparatus and object detecting apparatus, and to its region extracting and object detecting methods. However, the technology disclosed in this specification is not limited thereto, and may be applied to any travel-related device to which the technical idea of the present invention can be applied, for example a navigation device or a traveling device included in a moving vehicle traveling on the road, and to their region extraction and object detection methods. In particular, the present invention can be applied to an autonomous navigation system for ground unmanned vehicles, to a region extracting apparatus and a detection apparatus for autonomous navigation, and to region-of-interest extraction technology.
It is noted that the technical terms used herein are used only to describe specific embodiments and are not intended to limit the scope of the technology disclosed herein. The technical terms used herein should be interpreted as they are generally understood by those skilled in the art to which the presently disclosed technology belongs, unless otherwise defined in this specification, and should not be construed in an excessively broad or an excessively narrow sense. In addition, when a technical term used in this specification is an erroneous term that does not accurately express the concept of the disclosed technology, it should be replaced with a technical term that can be correctly understood by a person skilled in the art. Also, the general terms used in this specification should be interpreted according to their dictionary definitions or in context, and should not be construed in an excessively narrow sense.
Also, the singular forms used herein include plural referents unless the context clearly dictates otherwise. In this specification, terms such as "comprising" or "including" should not be construed as necessarily including all of the elements or steps described in the specification; some of those elements or steps may not be included, or additional elements or steps may be included.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals denote like or similar elements, and redundant description thereof will be omitted.
Further, in the description of the technology disclosed in this specification, a detailed description of related arts will be omitted if it is determined that the gist of the technology disclosed in this specification may be obscured. In addition, it should be noted that the attached drawings are only for easy understanding of the concept of the technology disclosed in the present specification, and should not be construed as limiting the idea of the technology by the attached drawings.
Hereinafter, the region extracting apparatus and the object detecting apparatus disclosed in this specification will be described with reference to FIGS. 1 to 11.
FIG. 1 is a block diagram showing the configuration of the region extracting apparatus disclosed in this specification.
FIG. 2 is a first configuration diagram showing the configuration of an object detecting apparatus including the region extracting apparatus disclosed in this specification.
FIG. 3 is a second configuration diagram showing the configuration of an object detecting apparatus including the region extracting apparatus disclosed in this specification.
FIG. 4 is an exemplary view showing an example of a video image according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 5 is an exemplary view showing an example of a road according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 6 is an exemplary diagram illustrating the concept and an example of the inverse perspective transformation according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 7 is an exemplary view showing an example of three-dimensional distance information according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 8 is a flowchart showing the extraction process of the detection subject area according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 9A is exemplary view 1-A for explaining an example of a detection subject area extraction result according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 9B is exemplary view 1-B showing the detection subject area extraction result for the example shown in FIG. 9A.
FIG. 10A is exemplary view 2-A for explaining another example of a detection subject area extraction result according to an embodiment of the region extracting apparatus disclosed in this specification.
FIG. 10B is exemplary view 2-B showing the detection subject area extraction result for the example shown in FIG. 10A.
FIG. 11 is a block diagram showing the configuration of the object detecting apparatus disclosed in this specification.
First, a region extracting apparatus (hereinafter referred to as an extracting apparatus) disclosed in this specification will be described.
The extraction device extracts an object detection area on the road.
1, the
The
The moving body may mean a motor vehicle, a bicycle, or a motorcycle running on the road.
The moving body may also refer to a flying object for low-flying the road, or a floating object capable of floating on the ground and running on the road.
The moving object may further include one or more other configurations for object detection on the road, including the
For example, a
The extracting
The
The
3, the
That is, the
The
In the
The
The
The image information may include a video image of the road and three-dimensional distance information obtained by measuring the distance of the road and expressed in three dimensions.
The video image may be generated by the
As shown in FIG. 4, the image may be an image taken by the
The three-dimensional distance information may be information on which the
That is, the detection of an object existing on the road may be performed on the basis of a result of presence or absence of an object indicated by the three-dimensional distance information.
The three-dimensional distance information may be generated by a three-dimensional distance sensor 120 that measures the distance of the road.
That is, the
The lane identification information may be information obtained by recognizing a lane on a traveling road of the moving object including the
The lane identification information may be information obtained by recognizing a lane of a road on which the moving
For example, as shown in FIG. 5, if the road in which the moving
As shown in Fig. 5, there may be an object that affects the traveling of the moving
That is, in the lane of the road during traveling, objects that directly affect the traveling of the moving
The lane identification information may be generated by the
That is, the
The
The
The
The image acquiring unit may be a sensing device that is distinguished from the
The
The
The
The
For example, in the moving
The
That is, the selection information may be a lane portion of a road in which the moving
The selection information may be information serving as a basis of the detection subject area.
The selection information may be image data information for a portion selected from the image.
The selection information may be data information including coordinate information of an image of a portion selected from the image.
That is, the
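As an illustrative sketch of generating such selection information (not part of the claimed apparatus), the image pixels lying between the recognized left and right lane boundaries can be collected as coordinate data; the per-row boundary functions below are assumptions introduced only for this example.

```python
# Sketch: select the image region between the recognized lane boundaries.
# left_bound(r) and right_bound(r) give the lane-boundary columns on row r;
# in a real system they would come from the lane recognition information.

def select_lane_pixels(width, height, left_bound, right_bound):
    """Return (row, col) coordinates of pixels inside the lane region."""
    return [(r, c) for r in range(height) for c in range(width)
            if left_bound(r) <= c <= right_bound(r)]

# A straight-ahead lane: boundaries are vertical lines at columns 2 and 5.
coords = select_lane_pixels(8, 3, lambda r: 2, lambda r: 5)
print(len(coords))  # 3 rows x 4 columns = 12 selected pixels
```

The result is coordinate-based selection information of the kind described above, ready to be fused with the three-dimensional distance information.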
The
The visual effect may mean a perspective effect that is reflected on the image according to the photographing of the
The visual effect may mean a perspective effect that is reflected in the image image according to at least one of an internal parameter of the
The
That is, the
The
That is, the
The
The inverse perspective transformation may be a computation method applied to the image data.
The inverse perspective transformation may be a method of calculating the actual scale of the photographed road by removing the perspective effect reflected in the image.
The inverse perspective transformation may be a method of calculating the actual scale of the photographed road by inversely converting the scale in the video image, which is distorted by the perspective effect, into the actual scale.
As shown in FIG. 6, the length of each side of a certain portion of the image (the trapezoid formed by points p1 to p4) is inversely transformed using the predetermined matrix, so that the portion can be displayed on a coordinate plane, as shown on the right, according to the inversely converted actual scale.
The predetermined matrix may be a matrix for inversely transforming the scale of the video image into the actual scale.
The predetermined matrix may be a homography matrix for inversely transforming the scale of the video image into the actual scale.
The predetermined matrix may be a matrix of parameters for the
The predetermined matrix may be a matrix including one or more parameters of intrinsic parameters of sensors included in the
The predetermined matrix may be set according to intrinsic parameters of a sensor included in the
The inverse perspective transformation can be expressed as Equation 1 below.
[Equation 1]
m_i = H_pm · p_i
Here, m_i denotes the result from which the visual effect has been removed, H_pm denotes the homography matrix that is the predetermined matrix, and p_i denotes the original point of the image before the visual effect is removed.
m_i and p_i may be expressed in the form of coordinates or vectors of the image, and can be represented as (x'_i, y'_i) and (x_i, y_i), respectively.
Accordingly, Equation 1 above can be expressed as Equation 2 below.
[Equation 2]
(x'_i, y'_i) = H_pm · (x_i, y_i)
That is, the result with the visual effect removed, (x'_i, y'_i), can be obtained by multiplying the original point (x_i, y_i) by H_pm.
As shown in FIG. 6, points p1 to p4 in the video image appear in a trapezoidal shape because of the visual effect, but at the actual scale they correspond to points m1 to m4 on the right coordinate plane, located at (+width) m and (-width) m laterally with respect to the origin of the moving object's reference point, and at distances (t_1) m and (t_2) m ahead of the origin.
That is, by performing the inverse perspective transformation on any portion of the image, the actual scale of that portion, including its actual width and distance, can be grasped.
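As an illustrative sketch of Equations 1 and 2, a homography is applied to an image point in homogeneous coordinates; the matrices here are placeholders, since a real H_pm comes from camera calibration, not from this specification.

```python
# Sketch of inverse perspective mapping (Equation 1): m_i = H_pm * p_i.
# H is a 3x3 homography applied in homogeneous coordinates.

def apply_homography(H, point):
    """Map an image point (x, y) through H and dehomogenize."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)  # (x'_i, y'_i) on the ground plane

# The identity homography leaves points unchanged (a trivial sanity check);
# a calibrated H_pm would map the trapezoid p1..p4 to the rectangle m1..m4.
H_identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(apply_homography(H_identity, (320.0, 240.0)))  # (320.0, 240.0)
```

The division by w is what removes the perspective effect: points lower in the image (nearer the camera) and higher in the image (farther away) are rescaled onto a common metric ground plane.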
The
The
As shown in FIG. 7, the three-dimensional distance information may be information indicating the result of measuring the distance of the road by the three-dimensional distance sensor 120 in a three-dimensional coordinate form.
The detection subject area may be an area to be extracted by the
The region to be detected may be an area to be extracted by the
The detection subject area may refer to a lane of a road of a road in which the moving
That is, a region where an object likely to disturb the traveling of the moving
The
The
That is, the three-dimensional distance information representing the road in the form of three-dimensional coordinates and the selection information corresponding to the lane of the road in which the moving
The selection of the detection subject area may be performed through the expression shown in Equation 3 below.
[Equation 3]
x_left ≤ x ≤ x_right, min ≤ y ≤ max, z_min ≤ z ≤ z_max
Here, x, y, and z may denote coordinates on the three-dimensional distance information, respectively.
X may be a left / right direction on the road with respect to the moving
That is, in the x and y directions, the area inside the left and right lane boundaries corresponding to the selection information, within the range from the minimum distance of interest (min) to the maximum distance (max), can be selected, and in the z direction, the area within the range between the smallest and the largest height value (z) can be selected.
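The range check described above can be sketched as a simple filter over the three-dimensional points; the lane bounds, distance range, and height band below are illustrative assumptions rather than values from this specification.

```python
# Sketch of the detection-region selection in Equation 3: keep only the
# 3-D points inside the lane's left/right bounds (x), the distance of
# interest (y), and a plausible height band (z).

def select_detection_region(points, x_left, x_right, y_min, y_max, z_min, z_max):
    """Filter (x, y, z) tuples to the region of interest on the road."""
    return [(x, y, z) for (x, y, z) in points
            if x_left <= x <= x_right
            and y_min <= y <= y_max
            and z_min <= z <= z_max]

cloud = [(-0.5, 10.0, 0.3),   # inside the lane, ahead, low: kept
         (-4.0, 10.0, 0.3),   # outside the left lane bound: dropped
         (0.2, 80.0, 0.3),    # beyond the maximum distance: dropped
         (0.2, 10.0, 5.0)]    # above the height band: dropped
print(select_detection_region(cloud, -1.5, 1.5, 1.0, 50.0, -0.5, 3.0))
# [(-0.5, 10.0, 0.3)]
```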
The
The
The ground model may mean that the ground of the road in which the moving
The ground model may be a three-dimensional model of the ground surface of the road.
Since the ground surface of the road is a distance traveled by the moving
The
Since the ground surface of the road can be considered planar, a ground model for the entire ground surface of the road can be generated based on the sample area.
The
For example, the average value of the height of the ground of the sample region can be calculated, and the ground model can be generated accordingly.
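That averaging step can be sketched as follows; modeling the ground as a plane at the mean height of a near sample area and then dropping points close to it. The sample bounds and tolerance are illustrative assumptions, not values from this specification.

```python
# Sketch of ground removal: estimate the road surface as a flat plane at
# the mean height of the sample area in front of the vehicle, then drop
# points within a tolerance of that plane so only candidate objects remain.

def remove_ground(points, sample_y_max=5.0, tolerance=0.15):
    """Drop (x, y, z) points near the estimated ground plane of the road."""
    sample = [z for (_, y, z) in points if 0.0 <= y <= sample_y_max]
    ground_z = sum(sample) / len(sample)  # flat-ground assumption
    return [p for p in points if abs(p[2] - ground_z) > tolerance]

cloud = [(0.0, 2.0, 0.02), (0.5, 3.0, -0.02),  # ground hits in the sample area
         (0.2, 12.0, 0.01),                     # ground farther ahead: removed
         (0.3, 15.0, 1.20)]                     # an object: survives
print(remove_ground(cloud))  # [(0.3, 15.0, 1.2)]
```

A more robust implementation would fit the plane with a least-squares or RANSAC estimate instead of a plain mean, but the principle of removing the portion corresponding to the ground model is the same.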
The
The
The
The noise information may be information on elements that do not interfere with the traveling of the moving
For example, it may be information about an object located above the height of the
Since the noise elements as shown in the above example are not obstructive to the traveling of the moving
The
The extraction of the detection subject region of the
Referring to FIG. 8, the extraction process of the detection subject area will be described in brief as follows.
1) The video image and the lane recognition information are fused, and the inverse perspective transformation is performed on the video image to remove the visual effect reflected in it.
2) The selection information is generated according to 1), and is fused with the three-dimensional distance information.
3) The portion of the three-dimensional distance information corresponding to the selection information is selected as the detection subject area.
4) The ground model is generated based on the three-dimensional distance information, and a portion corresponding to the ground model is removed from the selected detection target area.
5) The noise information is removed from the selected detection subject area.
6) Finally, the detection subject area is extracted.
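Steps 1) to 6) can be sketched end to end as a single function; every threshold below (lane bounds, range of interest, ground tolerance, noise ceiling) is an illustrative assumption introduced only for this example.

```python
# High-level sketch of the extraction process 1)-6): select the lane
# region, estimate and remove the ground, discard overhead noise, and
# return the remaining points as the detection subject area.

def extract_detection_region(cloud, lane_x=(-1.5, 1.5), y_range=(1.0, 50.0),
                             ground_tol=0.15, noise_ceiling=3.0):
    """Return 3-D points judged to belong to the detection subject area."""
    # Steps 1)-3): keep points inside the recognized lane and range of interest.
    region = [(x, y, z) for (x, y, z) in cloud
              if lane_x[0] <= x <= lane_x[1] and y_range[0] <= y <= y_range[1]]
    # Step 4): model the ground as the mean height of the near sample area.
    near = [z for (_, y, z) in region if y <= 5.0]
    ground_z = sum(near) / len(near) if near else 0.0
    region = [p for p in region if p[2] - ground_z > ground_tol]
    # Step 5): treat points far above the vehicle's height as noise.
    region = [p for p in region if p[2] - ground_z <= noise_ceiling]
    return region  # step 6): the extracted detection subject area

cloud = [(0.0, 2.0, 0.0), (0.3, 3.0, 0.02),   # ground in the sample area
         (0.2, 20.0, 1.0),                     # an object ahead: kept
         (0.2, 20.0, 8.0),                     # overhead structure: noise
         (5.0, 20.0, 1.0)]                     # outside the lane: dropped
print(extract_detection_region(cloud))  # [(0.2, 20.0, 1.0)]
```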
An example of the detection subject area extracted by the above-described process is shown in FIGS. 9 and 10.
9A and 10A show an example of a video image of a road in which the
As shown in Fig. 9A, the
As shown in Fig. 10A, in the lane of the one way lane of the road in which the moving
After the
The
The
The
The
The conversion into the image information form can be performed by the operation shown in Equation 4 below.
[Equation 4]
image = intrinsic × extrinsic × (x, y, z, 1)^T
Here, image on the left-hand side represents the image information, intrinsic represents an internal parameter matrix of the camera, extrinsic represents an external parameter matrix between the sensors, and (x, y, z, 1) is a point of the extraction information in homogeneous coordinates.
The extraction information is reflected on the
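As a sketch of the Equation 4 projection, a three-dimensional point of the extraction information is mapped into pixel coordinates; the intrinsic and extrinsic matrices below are illustrative stand-ins for real calibration values, which this specification does not give.

```python
# Sketch of Equation 4: image = intrinsic * extrinsic * (x, y, z, 1).
# The 3-D point is first brought into the camera frame by the extrinsic
# matrix, then projected through the intrinsic matrix and dehomogenized.

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def project(intrinsic, extrinsic, point3d):
    """Map a 3-D point (x, y, z) to pixel coordinates (u, v)."""
    x_cam = matvec(extrinsic, list(point3d) + [1.0])  # world -> camera frame
    u, v, w = matvec(intrinsic, x_cam)                # camera -> image plane
    return (u / w, v / w)

K = [[500.0, 0.0, 320.0],    # assumed focal length and principal point
     [0.0, 500.0, 240.0],
     [0.0, 0.0, 1.0]]
Rt = [[1.0, 0.0, 0.0, 0.0],  # identity pose: camera at the world origin
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0]]
print(project(K, Rt, (0.0, 0.0, 10.0)))  # a point 10 m ahead -> image center
```

In this way the detection subject area expressed in three-dimensional coordinates can be reflected back into the video image.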
Hereinafter, the object detecting apparatus 400 (hereinafter referred to as a detecting apparatus) will be described.
The detection apparatus 400 may detect an object on the road based on the detection subject region extracted as described above.
As shown in FIG. 11, the detection apparatus 400 may include the image input unit 100, the lane recognition unit 130, the area extraction unit 200, and the object recognition unit 300.
Hereinafter, the description of the parts overlapping with the area extraction apparatus 200 described above will be omitted.
The image input unit 100 may include the camera 110, which photographs the road to generate the video image, and the three-dimensional distance sensor 120, which measures the distance of the road to generate the three-dimensional distance information.
The lane recognition unit 130 may recognize the lane of the road on which the moving body 10 is traveling and generate the lane recognition information.
The area extraction unit 200 may extract the detection subject region, in which an object is determined to exist, based on the image information and the lane recognition information.
That is, the area extraction unit 200 may fuse the video image and the lane recognition information, select the portion of the video image corresponding to the lane recognition information, and generate the selection information for the selected portion.
The area extraction unit 200 may then fuse the three-dimensional distance information and the selection information and extract the portion of the three-dimensional distance information corresponding to the selection information as the detection subject region.
The detection subject region extraction process of the area extraction unit 200 may be the same as that of the area extraction apparatus 200 described above.
That is, the area extraction unit 200 may select a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generate the ground model of the road based on the sample area, remove the portion corresponding to the ground model from the detection subject region, and remove the noise information included in the detection subject region.
The object recognition unit 300 may recognize an object on the road based on the extracted detection subject region.
Hereinafter, the region extracting method (hereinafter referred to as an extracting method) disclosed in this specification will be described with reference to FIGS. 12 to 14.
FIG. 12 is a flowchart showing the sequence of the region extraction method disclosed in this specification.
FIG. 13 is a flowchart showing the detailed sequence of the step of generating the selection information (S30).
FIG. 14 is a flowchart showing the detailed sequence of the step of extracting the detection subject region (S50).
The extraction method may be an area extraction method performed by an area extraction device included in a moving object, that is, a method of extracting an object detection area.
The extraction method may be an area extraction method of the area extraction apparatus 200 described above.
The extraction method may also be an area extraction method of the object detection apparatus 400 described above.
That is, the extraction method can be applied to both the area extraction apparatus 200 and the object detection apparatus 400.
Hereinafter, description of the parts overlapping with those already described will be omitted.
As shown in FIG. 12, the extraction method may include receiving a video image of the road on which the moving object is traveling, three-dimensional distance information obtained by measuring the distance of the road, and lane recognition information in which the moving object recognizes the lane in which it is traveling (S10), fusing the video image and the lane recognition information (S20), selecting the portion corresponding to the lane recognition information from the video image to generate selection information (S30), fusing the three-dimensional distance information and the selection information (S40), and extracting the portion of the three-dimensional distance information corresponding to the selection information as a detection subject region (S50).
In the receiving step (S10), the video image of the road on which the moving object is traveling, the three-dimensional distance information obtained by measuring the distance of the road, and the lane recognition information may be received from the camera, the three-dimensional distance sensor, and the lane recognition device included in the moving object, respectively.
The step (S20) of fusing the video image and the lane recognition information may fuse them so that a portion corresponding to the lane recognition information can be selected from the video image.
The step (S30) of generating the selection information may select the portion corresponding to the lane recognition information from the fused video image and generate the selection information for the selected portion.
As shown in FIG. 13, the step (S30) of generating the selection information may include selecting a portion corresponding to the lane recognition information from the video image (S31), removing the visual effect displayed on the selected portion (S32), and generating the selection information (S33).
In the selecting step (S31), the video image and the lane recognition information may be fused, and the portion corresponding to the lane recognition information among the background displayed in the video image may be selected.
The step (S32) of removing the visual effect may remove the visual effect appearing on the selected portion, that is, the perspective effect reflected in the video image.
The step (S32) of removing the visual effect may perform inverse perspective mapping on the selected portion using a predetermined matrix to remove the visual effect.
The step (S33) of generating the selection information may generate, for the selected portion from which the visual effect has been removed, the selection information to be fused with the three-dimensional distance information.
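The inverse perspective mapping of step S32 can be sketched as follows. The 3×3 homography H standing in for the "predetermined matrix" is a hypothetical value; in practice it would come from the camera's calibration against the road plane.

```python
import numpy as np

# Hypothetical 'predetermined matrix': a homography H mapping image pixels
# to a bird's-eye (top-down) road plane, removing the perspective effect.
H = np.array([[0.1, 0.0,   -32.0],
              [0.0, 0.4,  -120.0],
              [0.0, 0.002,   0.4]])

def inverse_perspective(pixels_uv):
    """Apply H to (N, 2) pixel coordinates and return top-down
    road-plane coordinates after perspective division."""
    n = len(pixels_uv)
    homog = np.hstack([pixels_uv, np.ones((n, 1))])  # to homogeneous coords
    mapped = homog @ H.T                             # x' = H x for each row
    return mapped[:, :2] / mapped[:, 2:3]            # perspective division
```

Applied to the pixels of the selected lane portion, this yields a top-down view in which lane width no longer shrinks with distance, so the selection information can be fused with the three-dimensional distance information on a common ground plane.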
The step (S40) of fusing the three-dimensional distance information and the selection information may fuse them so that the portion of the three-dimensional distance information corresponding to the selection information can be extracted as the detection subject region.
In the step (S50) of extracting the detection subject region, the three-dimensional distance information and the selection information may be fused, and the portion of the three-dimensional distance information corresponding to the selection information may be extracted as the detection subject region.
As shown in FIG. 14, the step (S50) of extracting the detection subject region may include selecting the portion of the three-dimensional distance information corresponding to the selection information as the detection subject region (S51), selecting a part of the front area of the road indicated by the three-dimensional distance information as a sample area (S52), generating a ground model of the road based on the sample area (S53), removing the portion corresponding to the ground model from the selected detection subject region (S54), removing the noise information included in the selected detection subject region (S55), and extracting the detection subject region (S56).
In the selecting step (S51), the portion corresponding to the selection information among the portions indicated by the three-dimensional distance information may be selected as the detection subject region.
In the step (S52) of selecting the sample area, the sample area may be selected as a basis for generating the ground model used to exclude the ground surface of the road from the detection subject region.
In the step S52 of selecting a part of the front area of the road indicated by the three-dimensional distance information as the sample area, an arbitrary area of the front area of the road may be selected as the sample area.
The step S53 of generating the ground model of the road based on the sample area may generate the ground model for the entire ground of the road based on the sample area.
The step S53 of generating the ground model of the road based on the sample area may generate the ground model as an average value of the ground heights of the road on the sample area.
The step (S53) of generating the ground model based on the sample area may generate the ground model using RANSAC (RANdom SAmple Consensus), a technique for estimating the parameters of a mathematical model by repetitive operations from a data set that contains false information.
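A minimal sketch of such a RANSAC ground-model fit over the sample-area points follows. The plane parameterization z = a·x + b·y + c, the iteration count, and the inlier tolerance are illustrative assumptions, not values from the patent.

```python
import numpy as np

def ransac_ground_plane(points, iters=100, tol=0.05, rng=None):
    """Fit a ground plane z = a*x + b*y + c to (N, 3) sample-area points
    with RANSAC: repeatedly solve an exact fit through 3 random points and
    keep the candidate that the most points agree with, which rejects
    'false information' (outliers) inside the sample area."""
    if rng is None:
        rng = np.random.default_rng(0)
    A_all = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        try:
            coeffs = np.linalg.solve(A_all[idx], points[idx, 2])  # exact 3-point fit
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample, try again
        resid = np.abs(A_all @ coeffs - points[:, 2])
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = coeffs, inliers
    return best  # (a, b, c) of the best-supported plane
```

Points within the tolerance of the returned plane can then be treated as the ground surface and removed from the detection subject region in step S54.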
In the step (S54) of removing the portion corresponding to the ground model, the portion of the selected detection subject region corresponding to the ground model may be removed in order to exclude the ground surface of the road from the detection subject region.
In the step (S55) of removing the noise information, the portion of the selected detection subject region corresponding to the noise information, which is information on elements that do not obstruct the traveling of the moving body 10, may be removed so that it is not reflected in the detection subject region.
In the extracting step (S56), the detection subject region from which the portions corresponding to the ground model and the noise information have been removed may be finally extracted as the region in which an object is to be detected.
Embodiments of the region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification can be applied to a region extracting apparatus and an object detecting apparatus included in a moving body, and to the area extracting method and the object detecting method thereof.
Embodiments of the area extracting apparatus, the object detecting apparatus, and the area extracting method disclosed in this specification can also be applied to all traveling-related apparatuses, such as navigation apparatuses and traveling apparatuses included in a moving body traveling on a road, and to the area extraction methods and object detection methods thereof.
Embodiments of the region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification are particularly applicable to autonomous traveling apparatuses for the autonomous traveling of ground unmanned vehicles, and can be practically applied to technical fields such as region extraction and object detection for autonomous traveling.
The region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification extract the detection subject region in which an object on the road is judged to exist, thereby making it possible to exclude from detection the areas in which an object is unlikely to exist.
By extracting the detection subject region in which an object on the road is judged to exist and excluding from detection the areas in which an object is unlikely to exist, the region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification can reduce the amount of calculation and the time consumed for detection.
By extracting the detection subject region in which an object on the road is judged to exist and excluding from detection the areas in which an object is unlikely to exist, the region extracting apparatus, the object detecting apparatus, and the region extracting method disclosed in this specification can also reduce the erroneous detection rate and improve the accuracy of detection.
It will be apparent to those skilled in the art that various modifications and changes can be made in the present invention without departing from the spirit or scope of the invention as defined by the appended claims, and such modifications and changes should be considered to fall within the scope of the following claims.
10: Moving body
100: image input unit 110: camera
120: three-dimensional distance sensor 130: lane recognition device (lane recognition section)
200: area extracting apparatus (area extracting unit) 210: input unit
220: control unit 300: object recognition device (object recognition unit)
400: object detection device
Claims (21)
An input unit for receiving image information of the road and lane recognition information for recognizing the lane of the road; And
And a control unit for extracting a detection subject region in which an object is determined to exist, based on the image information and the lane recognition information.
Wherein the image information includes a video image of the road and three-dimensional distance information obtained by measuring the distance of the road.
Wherein the video image is generated by a camera for photographing the road, and
the three-dimensional distance information is generated by a three-dimensional distance sensor for measuring the distance of the road.
The lane identification information includes:
Information obtained by recognizing a lane of the road on which the moving object including the area extracting device is traveling.
Wherein the control unit fuses the lane identification information with a video image obtained by photographing the road, selects a portion corresponding to the lane identification information from the video image, and generates selection information for the selected portion.
Wherein the control unit removes a visual effect reflected in the video image before selecting the portion corresponding to the lane identification information from the video image.
Wherein the control unit fuses three-dimensional distance information, obtained by measuring the distance of the road and represented in three dimensions, with the selection information, and extracts a portion of the three-dimensional distance information corresponding to the selection information as the detection subject region.
Wherein the control unit selects a part of a front area of the road indicated by the three-dimensional distance information as a sample area, generates a ground model of the road based on the sample area, and removes a portion corresponding to the ground model from the detection subject region.
Wherein the control unit removes noise information included in the detection subject region.
Wherein the control unit generates extraction information for the detection subject region and delivers the extraction information to an apparatus for detecting an object on the road.
Wherein the control unit converts the extraction information into an image information form so that the detection subject region is reflected in the video image.
An image input unit for acquiring image information including a video image of the road and three-dimensional distance information representing the distance of the road in three dimensions;
A lane recognition unit for recognizing a lane of the road and generating lane identification information;
An area extracting unit for extracting a detection subject region in which an object is determined to exist, based on the image information and the lane recognition information; And
And an object recognition unit for recognizing an object on the road based on the detection subject area.
Wherein the image input unit comprises:
A camera for photographing the road to generate the video image; And
And a three-dimensional distance sensor for measuring the distance of the road and generating the three-dimensional distance information.
The lane recognizing unit,
And the lane identification information is generated by recognizing a lane of a road on which the moving object is traveling.
Wherein the area extracting unit fuses the video image and the lane recognition information, selects a portion corresponding to the lane recognition information from the video image, and generates selection information for the selected portion.
Wherein the area extracting unit fuses the three-dimensional distance information and the selection information to extract a portion of the three-dimensional distance information corresponding to the selection information as the detection subject region.
Wherein the area extracting unit selects a part of the front area of the road indicated by the three-dimensional distance information as a sample area, generates a ground model of the road based on the sample area, removes a portion corresponding to the ground model from the detection subject region, and removes noise information included in the detection subject region.
Receiving a video image of a road on which a moving body is traveling, three-dimensional distance information obtained by measuring the distance of the road, and lane recognition information in which the moving body recognizes a lane in which it is traveling;
Fusing the video image and the lane recognition information;
Selecting a portion corresponding to the lane recognition information from the video image to generate selection information;
Fusing the three-dimensional distance information and the selection information; And
And extracting, as the detection subject region, a portion corresponding to the selection information among the three-dimensional distance information.
The step of selecting the portion corresponding to the lane recognition information from the video image to generate the selection information comprises:
Selecting a portion of the video image corresponding to the lane recognition information;
Removing the visual effect displayed on the selected portion; And
And generating the selection information.
The step of removing the visual effect displayed on the selected portion may include:
And performing an inverse perspective mapping on the selected portion using a predetermined matrix to remove the visual effect.
The step of extracting, as the detection subject region, a portion corresponding to the selection information among the three-dimensional distance information comprises:
Selecting a portion of the three-dimensional distance information corresponding to the selection information as the detection subject region;
Selecting a part of the front area of the road indicated by the three-dimensional distance information as a sample area;
Generating a ground model of the road based on the sample area;
Removing a portion corresponding to the ground model in the selected detection subject area;
Removing noise information included in the selected detection subject area; And
And extracting the detection subject region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150056779A KR20160125803A (en) | 2015-04-22 | 2015-04-22 | Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150056779A KR20160125803A (en) | 2015-04-22 | 2015-04-22 | Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160125803A true KR20160125803A (en) | 2016-11-01 |
Family
ID=57484983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150056779A KR20160125803A (en) | 2015-04-22 | 2015-04-22 | Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160125803A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190001668A (en) * | 2017-06-28 | 2019-01-07 | 현대모비스 주식회사 | Method, apparatus and system for recognizing driving environment of vehicle |
US10679377B2 (en) | 2017-05-04 | 2020-06-09 | Hanwha Techwin Co., Ltd. | Object detection system and method, and computer readable recording medium |
KR20210009032A (en) * | 2019-07-16 | 2021-01-26 | 홍범진 | Kit device for automatic truck car and control method |
KR102398084B1 (en) * | 2021-02-19 | 2022-05-16 | (주)오토노머스에이투지 | Method and device for positioning moving body through map matching based on high definition map by using adjusted weights according to road condition |
- 2015-04-22 KR KR1020150056779A patent/KR20160125803A/en not_active Application Discontinuation
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107272021B (en) | Object detection using radar and visually defined image detection areas | |
EP3732657B1 (en) | Vehicle localization | |
JP7461720B2 (en) | Vehicle position determination method and vehicle position determination device | |
EP3519770B1 (en) | Methods and systems for generating and using localisation reference data | |
EP3283843B1 (en) | Generating 3-dimensional maps of a scene using passive and active measurements | |
JP6672212B2 (en) | Information processing apparatus, vehicle, information processing method and program | |
JP6464673B2 (en) | Obstacle detection system and railway vehicle | |
CN108692719B (en) | Object detection device | |
CN106289159B (en) | Vehicle distance measurement method and device based on distance measurement compensation | |
US11841434B2 (en) | Annotation cross-labeling for autonomous control systems | |
JP6450294B2 (en) | Object detection apparatus, object detection method, and program | |
KR102428765B1 (en) | Autonomous driving vehicle navigation system using the tunnel lighting | |
JP6524529B2 (en) | Building limit judging device | |
KR101880185B1 (en) | Electronic apparatus for estimating pose of moving object and method thereof | |
EP3324359B1 (en) | Image processing device and image processing method | |
KR102006291B1 (en) | Method for estimating pose of moving object of electronic apparatus | |
JP6552448B2 (en) | Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection | |
US11151729B2 (en) | Mobile entity position estimation device and position estimation method | |
KR20160125803A (en) | Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest | |
JP2023029441A (en) | Measuring device, measuring system, and vehicle | |
JP2007011994A (en) | Road recognition device | |
JP2018073275A (en) | Image recognition device | |
JP2018084492A (en) | Self-position estimation method and self-position estimation device | |
WO2018225480A1 (en) | Position estimating device | |
CN111553342B (en) | Visual positioning method, visual positioning device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |