WO2016207987A1 - Image processing device, autonomous movement device, and self-position estimation method - Google Patents

Image processing device, autonomous movement device, and self-position estimation method Download PDF

Info

Publication number
WO2016207987A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
camera
cover
mobile device
unit
Prior art date
Application number
PCT/JP2015/068115
Other languages
French (fr)
Japanese (ja)
Inventor
秋山 靖浩 (Yasuhiro Akiyama)
義崇 平松 (Yoshitaka Hiramatsu)
Original Assignee
株式会社日立製作所 (Hitachi, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 (Hitachi, Ltd.)
Priority to PCT/JP2015/068115 (published as WO2016207987A1)
Priority to JP2017524325A (granted as JP6469223B2)
Publication of WO2016207987A1

Links

Images

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00–G01C19/00
    • G01C21/26 — Navigation; Navigational instruments not provided for in groups G01C1/00–G01C19/00, specially adapted for navigation in a road network
    • G01C21/28 — Navigation; Navigational instruments not provided for in groups G01C1/00–G01C19/00, specially adapted for navigation in a road network, with correlation of data from several navigational instruments

Definitions

  • The present invention relates to an image processing device, an autonomous mobile device that moves autonomously to a destination using image processing, and a self-position estimation method using image processing.
  • A boarding-type autonomous mobile device has been developed that travels autonomously to a destination while estimating its own position using a satellite positioning system or sensors such as range sensors, on public roads where no infrastructure such as travel rails or guide markers is installed.
  • Satellite positioning systems are known to be capable of meter-accuracy position observation.
  • On the other hand, a phenomenon called multipath may occur, in which radio waves transmitted from a satellite are reflected and diffracted by buildings, the ground surface, and so on, and are received over multiple transmission paths.
  • When multipath occurs, a large positioning error results.
  • In addition, satellite radio wave reception may be interrupted, making positioning of the current position impossible. In either case, accurate self-position estimation can no longer be performed, and continued operation of the autonomous mobile device becomes difficult.
  • When positioning of the current position becomes impossible, the boarding-type autonomous mobile device of the present invention analyzes the surrounding environment image captured by the camera and performs self-position estimation by image processing.
  • Patent Document 1 (JP 2012-249082 A) describes a waterproof dome attached so as to cover a camera lens, with part of the waterproof dome stored in a cover so that it stays clean. When the captured image is observed and a portion dirtied by raindrops or the like is detected, the waterproof dome is rotated so that a clean dome surface faces the camera lens, preventing raindrops, dirt, and the like from appearing in the captured image.
  • However, since the method of Patent Document 1 presupposes that a completely clean dome surface can be secured, it cannot be applied outdoors. Because raindrops and snowflakes keep falling continuously in rainy weather and the like, if dirty locations are detected and the dome is rotated repeatedly, no clean surface will remain and image processing will become impossible. Thus, Patent Document 1 does not consider processing images seen through a dirty dome surface, and is therefore not robust for sensing in bad weather such as rain.
  • The present invention was made in view of the above problems. Its purpose is to provide an image processing apparatus capable of suitably estimating self-position in surrounding-environment sensing using camera images even in bad weather such as rain, snowfall, or strong wind, and an autonomous mobile device using the same.
  • The present application includes a plurality of means for solving the above problems. As one example, an image processing apparatus comprises: a camera that captures images; a cover that covers the lens of the camera and transmits visible light; a drive unit that rotates the cover about the shooting direction of the camera as an axis; an extraction unit that extracts feature points from the images; a motion vector calculation unit that tracks the movement of the plurality of feature points extracted from a plurality of temporally consecutive images and calculates motion vectors of the feature points; and a detection unit that uses the motion vectors to discriminate between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detects the first feature points.
  • Alternatively, an autonomous mobile device that moves autonomously to a preset destination comprises: a storage unit in which map information and a feature point map are stored; a route setting unit that sets a travel route to a destination using the map information and an input destination; a camera that captures images; a cover that covers the lens of the camera and transmits visible light; a drive unit that rotates the cover about the shooting direction of the camera as an axis; an extraction unit that extracts feature points from the images; a motion vector calculation unit that tracks the movement of the plurality of feature points extracted from a plurality of temporally consecutive images and calculates motion vectors of the feature points; a detection unit that uses the motion vectors to discriminate between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detects the first feature points; and a position estimation unit that estimates the self-position of the autonomous mobile device using the first feature points and the feature point map.
  • Alternatively, a method for estimating the self-position of a mobile device comprises: a first step of setting a travel route to a destination using map information stored in a storage unit and an input destination; a second step of receiving an image captured by a camera; a third step of extracting feature points from the image; a fourth step of calculating motion vectors of the feature points using the plurality of feature points extracted from a plurality of images; a fifth step of discriminating, using drive information of a cover that covers the lens of the camera and the motion vectors, between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detecting the first feature points; and a sixth step of estimating the self-position of the mobile device using a feature point map stored in the storage unit and the first feature points.
  • Block diagram of the image processing apparatus mounted on the autonomous mobile device. Diagram showing the self-position estimation method by image processing.
  • Diagram showing an example of the appearance of the autonomous mobile device. Diagram showing an example of an LED lighting pattern.
  • Diagram showing another example of an LED lighting pattern. Diagram showing yet another example of an LED lighting pattern.
  • FIG. 1 is an example of a block diagram of an image processing apparatus 100 mounted on a boarding-type autonomous mobile device (hereinafter, autonomous mobile device).
  • The autonomous mobile device of this embodiment can reach a destination designated by the passenger by automatic driving.
  • The autonomous mobile device includes the image processing device 100, which performs the arithmetic processing needed to determine a travel route to the destination without passenger operation and to drive that route automatically.
  • During autonomous movement, the current traveling position is determined using radio waves received by the satellite positioning system 111, and the traveling mechanism 114 is instructed to trace the set travel route correctly.
  • A satellite positioning system measures the current position using radio waves from artificial satellites, for example the radio waves of GPS (Global Positioning System) satellites.
  • Satellite positioning systems are known to be capable of meter-accuracy position observation.
  • On the other hand, a phenomenon called multipath may occur, in which radio waves transmitted from a satellite are reflected and diffracted by buildings, the ground surface, and so on, and are received over multiple transmission paths.
  • When multipath occurs, a large positioning error results.
  • In addition, satellite radio wave reception may be interrupted, making positioning of the current position impossible.
  • When positioning of the current position becomes impossible, the boarding-type autonomous mobile device performs self-position estimation by image processing using the surrounding environment image captured by the camera.
  • To acquire the surrounding environment image, an infrared-compatible fisheye camera 108 (hereinafter, fisheye camera) capable of capturing a 360-degree hemispherical image is used.
  • The device also includes an input device 112 with which the passenger designates a destination and issues instructions such as pausing or interrupting travel to the autonomous mobile device, and a display unit 113 for notifying the passenger of surrounding map information, planned travel route information, the current position, and so on.
  • The image processing apparatus 100 includes a central processing unit 101 that executes operations based on predetermined algorithms, a non-volatile storage device 102 that stores programs describing the operation procedures so that the central processing unit 101 operates according to those algorithms, and a memory 103 for temporarily storing intermediate results while the central processing unit 101 executes those operations.
  • The storage device 102 contains: a map information storage unit 107 that stores feature point information of surrounding environment images taken in fine weather (hereinafter, a feature point map) associated with map information of the travel target area and position information on the map; a travel route setting unit 105 that sets a travel route to the destination instructed by the passenger with reference to the map information; an autonomous movement control unit 104 that performs automatic travel while positioning the device using GPS satellite radio waves; and a self-position estimation unit 106 that determines the self-position by image processing using surrounding environment images instead of GPS satellite radio waves.
  • The feature point map in the map information storage unit 107 is feature point information of surrounding environment images acquired in fine weather.
  • However, since the autonomous mobile device of this embodiment travels outdoors, the weather is not always fine. When traveling in bad weather, raindrops, snow, mud stains, and the like appear in the images captured by the infrared fisheye camera 108.
  • In self-position estimation by image processing, the current position is identified by matching the feature points detected from fine-weather surrounding environment images against the feature points detected from the surrounding environment image at the time of travel. Consequently, if dirt such as raindrops appears in an image in bad weather, feature point detection errors can occur and the current position may not be detected correctly.
  • To avoid such feature point detection errors, a lens cover and a plurality of LEDs are attached to the fisheye camera as a rain countermeasure, and the lens cover is rotated while the LEDs illuminate it.
  • By rotating the lens cover, the feature points of the surrounding environment image (first feature points) and the feature points of dirt such as raindrops (second feature points) can be distinguished by the difference between the motion vectors calculated from the continuous motion of the feature points in the input images; only the first feature points of the surrounding environment image are extracted and matched against the feature point map.
  • In this embodiment, performing such image processing in the self-position estimation unit greatly reduces errors in outdoor self-position estimation even in weather, such as rain, in which the camera easily gets dirty.
  • FIG. 2 is a diagram showing the self-position estimation method by image processing according to the present invention.
  • The figure shows images captured by the fisheye camera.
  • The fisheye camera is installed so that its imaging center axis faces the zenith and so that the body of the autonomous mobile device 200 does not appear in the captured image; the device itself therefore never appears in actual images.
  • However, the positional relationship between the surrounding structures 202 and the autonomous mobile device 200 is as shown in FIGS. 2(a) and 2(b).
  • To aid understanding, the figures therefore deliberately superimpose the device itself on the image captured by the fisheye camera.
  • FIG. 2(a) is a diagram showing the feature point map creation method.
  • The feature point map is created by driving the autonomous mobile device 200 manually along the travel path 201 in advance, calculating feature points 203 from the captured images, and saving them while associating each feature point with coordinates on a separately prepared 2D map in which coordinate information is recorded.
  • The feature point map and the 2D map are stored in the map information storage unit 107.
  • The feature points 203 detected from the captured images are mainly corner portions of structures such as buildings and signs present in the image.
  • The feature point map is thus data in which the feature points 203 of the various structures 202 appearing in the captured images are stored in association with coordinates on the 2D map; a minimal data-layout sketch follows.
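  • The following sketch shows one plausible in-memory layout for such a feature point map; the field names and the descriptor type are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeaturePoint:
    descriptor: bytes   # local appearance around the detected corner (illustrative)
    map_x: float        # x coordinate on the separately prepared 2D map
    map_y: float        # y coordinate on the 2D map

# The feature point map is the collection of all such entries gathered
# during the manual mapping run in fine weather.
FeaturePointMap = List[FeaturePoint]
```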
  • In addition, motion vectors of the feature points 203 of the structures 202 and motion vectors of the feature points of moving objects such as vehicles and pedestrians are calculated, and from the difference between these motion vectors, that is, the difference in moving speed, only the feature points 203 of structures 202 that are effective for map creation are extracted. For example, a pedestrian moves at about 5 km/h, a vehicle at about 50 km/h, and the autonomous mobile device of this embodiment at about 30 km/h, so the discrimination is easy to perform.
  • The feature point detection method is not particularly limited, and a known technique may be used.
  • Here, a technique such as Harris corner detection is used.
  • Harris corner detection is a processing method that analyzes the luminance value distribution of an image; it is based on the observation that if the first derivative (difference) is large in one direction the location is an edge, and if it is large in multiple directions it is a corner.
  • The method is not limited to Harris corner detection; another method capable of detecting corner portions of the structures 202 may be used. A minimal detection sketch follows.
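  • As a rough illustration only (the patent does not prescribe an implementation), the following sketch detects Harris corners with OpenCV; the threshold and window parameters are assumptions chosen for readability.

```python
import cv2
import numpy as np

def detect_corners(image_bgr, k=0.04, thresh_ratio=0.01):
    """Return (x, y) pixel coordinates of Harris corners in a frame."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # blockSize: neighborhood for the gradient covariance matrix;
    # ksize: Sobel aperture used for the first derivatives.
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=k)
    # Keep pixels whose corner response exceeds a fraction of the maximum.
    ys, xs = np.where(response > thresh_ratio * response.max())
    return np.stack([xs, ys], axis=1)
```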
  • FIG. 2(b) is a diagram showing the feature points detected during travel.
  • FIG. 2(c) is a diagram showing the feature points in the map.
  • During travel, the set 205 of feature points 204 of nearby structures detected from the captured image is matched against a feature point set 206 of some portion of the feature point map, and it is determined whether the two feature point sets (204, 206) match. If they match, the coordinate information of the matching portion is acquired from the feature point map to determine the current position. If they do not match, matching against other portions of the feature point map continues until a match is found. If no match is obtained after repeating the matching a predetermined number of times or over a predetermined range of the map, the matching process is aborted for that captured image and treated as an error (with notification to the display unit 113, etc.). A matching-loop sketch follows.
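  • As a non-authoritative sketch of the matching loop just described, the helper below tries map sections until one matches or a retry budget is exhausted; `score_fn`, the section type, and the thresholds are hypothetical.

```python
def locate(detected_set, map_sections, score_fn, match_thresh=0.8, max_attempts=100):
    """Match the detected feature set against sections of the feature point map.

    Returns the map coordinates of the first matching section, or None if the
    retry budget is exhausted (the caller then reports an error, e.g. on the
    display unit).
    """
    for attempts, section in enumerate(map_sections):
        if attempts >= max_attempts:
            break  # predetermined number of matching attempts exceeded
        if score_fn(detected_set, section.points) >= match_thresh:
            return section.map_coordinates  # current position on the 2D map
    return None  # no match: abort processing for this captured image
```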
  • FIG. 3 is a block diagram of the feature point detection process in fine weather.
  • Feature point detection in fine weather consists of the following processing blocks: a feature point detection unit 300 that detects the feature points of surrounding structures from the captured image; a feature point matching unit 301 that matches the detected feature point set of the structures against the feature point sets of a feature point map 303 created and stored in advance; and a self-position calculation unit 302 that acquires from the feature point map 303 the real-space coordinates of the location where the two feature point sets match and normalizes them to the current position on the travel control map managed inside the autonomous mobile device.
  • The calculated self-position information is sent to the autonomous movement control unit 104 and reflected in the traveling control process.
  • FIG. 4 is a diagram showing an example of a captured image while traveling in rain.
  • In rainy weather, raindrops 400 and 401 adhere to the lens cover.
  • In this case, both the feature points 402 and 403 of the structure 406 and the feature points 404 and 405 of the raindrops 400 and 401 may be detected. Since the feature points 404 and 405 of the raindrops 400 and 401 are noise, performing the matching against the feature point map as-is causes the self-position to be detected erroneously or not detected at all.
  • FIG. 5 is a diagram showing the difference between the movement trajectories of structure feature points and raindrop-generated feature points.
  • A feature point of a structure (hereinafter, an effective feature point or first feature point) must be distinguished from a feature point generated by a raindrop (hereinafter, an invalid feature point or second feature point); the second feature points generated by raindrops need to be canceled.
  • FIG. 5(a) shows an example of the movement pattern of effective feature points.
  • Here, the autonomous mobile device is traveling in the arrow direction 511.
  • In this case, the effective feature points move along trajectories that draw gentle arcs from front to rear in the traveling direction, as shown in FIG. 5(a).
  • FIG. 5(b) is an example of the movement pattern of invalid feature points. Since the lens cover attached to the fisheye camera keeps rotating while the autonomous mobile device travels, raindrops adhering to the lens cover move along trajectories that circle within the image.
  • Thus, the movement patterns of the feature point trajectories are analyzed to discriminate effective feature points from invalid feature points.
  • The discrimination means tracks each feature point across at least two captured images, calculates the motion vector of the feature point, and compares it with predetermined movement conditions to discriminate whether it shows the movement pattern of an effective feature point or that of an invalid feature point; a tracking sketch follows.
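  • As an illustrative sketch only (the patent does not prescribe a particular tracker), feature points can be tracked between consecutive frames with pyramidal Lucas-Kanade optical flow, yielding the per-point motion vectors compared above; the inputs are 8-bit grayscale frames.

```python
import cv2
import numpy as np

def feature_motion_vectors(prev_gray, cur_gray, prev_pts):
    """Track feature points between two consecutive frames and return the
    surviving start positions together with their motion vectors."""
    p0 = np.asarray(prev_pts, dtype=np.float32).reshape(-1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    ok = status.ravel() == 1          # keep only successfully tracked points
    start = p0.reshape(-1, 2)[ok]
    end = p1.reshape(-1, 2)[ok]
    return start, end - start         # positions and per-point motion vectors
```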
  • FIG. 6 is a diagram showing the block configuration of the feature point detection process of the self-position estimation unit 106. Processing blocks identical to those in FIG. 1 and FIG. 3 are given the same reference numerals.
  • The fisheye camera 108 is fitted with a lens cover 414, and includes a rotation mechanism 412 and a rotation control unit 413 for rotating the lens cover 414.
  • It further includes an LED irradiation mechanism 410 and an irradiation control unit 411 so that raindrops adhering to the lens cover 414 and the feature points caused by those raindrops can be recognized accurately during image processing.
  • The feature point detection process includes: the feature point detection unit 300; a feature point tracking unit 403 that tracks the motion of feature points across a plurality of images including the input image; a motion vector calculation unit 404 that calculates motion vectors from the feature point tracking; an invalid feature point determination unit 405 that detects invalid feature points caused by raindrop adhesion by analyzing the movement patterns of the calculated motion vectors; an effective feature point extraction unit 406 that cancels, among the feature points acquired by the feature point detection unit 402, the raindrop-caused invalid feature points determined by the invalid feature point determination unit 405, acquiring only the effective feature points present on structures; a feature point matching unit 301 that performs matching against the feature point map 303; and a self-position calculation unit 302 that calculates the self-position using the feature point map 303.
  • The calculated self-position information is sent to the autonomous movement control unit 104 and reflected in the traveling control process.
  • FIG. 7 is a diagram showing the rotation of the lens cover attached to the fisheye camera.
  • The lens cover is attached so as to cover the entire lens of the fisheye camera, and is rotated in a fixed direction about the imaging center point of the fisheye camera.
  • The rotation direction is, for example, the arrow direction 704 in the figure.
  • The rotation direction may also be the opposite of the illustrated arrow direction 704, or the illustrated direction and its opposite may be switched periodically. In either case, because feature point tracking and the movement patterns of the feature point trajectories are evaluated, the direction in which the lens cover was rotating at the time of each input captured image is known in advance; the sketch below shows the displacement this implies.
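  • Since the commanded rotation is known per frame, the displacement that a point fixed to the cover should exhibit can be predicted and compared with the measured motion vector. The sketch below assumes the idealization that a raindrop's image point rotates rigidly about the imaging center point.

```python
import numpy as np

def expected_cover_displacement(pt, center, omega):
    """Displacement a cover-attached point should show between two frames:
    a rotation by omega radians (sign = commanded direction) about the
    imaging center point."""
    c, s = np.cos(omega), np.sin(omega)
    r = np.asarray(pt, dtype=float) - np.asarray(center, dtype=float)
    rotated = np.array([c * r[0] - s * r[1], s * r[0] + c * r[1]])
    return rotated - r
```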
  • FIG. 8 is a diagram showing an outline of the method for discriminating structure feature points from raindrop-generated feature points.
  • FIG. 8(a) is a diagram showing an example of feature points detected from a captured image. The autonomous mobile device is assumed to be moving in the arrow direction 800.
  • The detected feature points include a feature point 802 of the surrounding structure 801 and a feature point 804 caused by the raindrop 803.
  • FIG. 8(b) is a diagram showing an example of the movement trajectory of the raindrop's feature point when the lens cover is rotated in a predetermined direction. Since the lens cover rotates about the imaging center point of the fisheye camera, the feature point 804 of the raindrop 803 also moves so as to draw a rotation trajectory 805 about the imaging center point.
  • FIG. 8(c) is a diagram showing an example of the movement trajectory of the structure's feature point when the lens cover is rotated in a predetermined direction.
  • The trajectory 806 of the feature point 802 of the structure 807 draws a gentle arc from front to rear with respect to the movement direction of the autonomous mobile device, unaffected by the rotation of the lens cover.
  • FIG. 8(d) shows an example of the movement from the start point to the end point of the feature point trajectory 805 of the raindrop 803.
  • The feature point trajectory 805 of the raindrop 803 draws a rotation trajectory from start point to end point.
  • FIG. 8(e) is an example of the movement from the start point to the end point of the feature point trajectory of the structure.
  • The trajectory 806 of the feature point 802 of the structure draws a gentle arc from the front start point to the rear end point with respect to the movement direction of the autonomous mobile device.
  • In this way, structure feature points and raindrop-generated feature points have different movement trajectories, so effective feature points and invalid feature points are discriminated by analyzing the movement patterns of the feature point trajectories.
  • The discrimination means tracks each feature point across at least two captured images, calculates the motion vector of the feature point, and compares it with predetermined movement conditions to discriminate whether it shows the movement pattern of an effective feature point or that of an invalid feature point.
  • FIG. 9 is a diagram showing an example of the appearance of the fisheye camera.
  • The fisheye camera in the figure is an example that combines a fisheye lens 900 with a camera body 901 incorporating an image sensor.
  • The fisheye lens 900 is fitted with a lens cover 902 fixed to a cover base 905, and further includes a rotation mechanism 903 for rotating the lens cover. Rotational energy 907 generated by a motor (not shown) or the like is transmitted from the rotation mechanism 903 to the cover base 905, rotating (908) the lens cover 902 fixed to the base.
  • LEDs 904, used to accurately recognize during image processing the raindrops adhering to the lens cover 902 and the feature points they cause, are attached to the cover base portion inside the lens cover. A single LED 904 may be used, but arranging a plurality of LEDs so as to surround the fisheye lens and illuminate the entire lens cover evenly from the inside is more effective, making it easier to distinguish effective feature points from invalid feature points.
  • Since the fisheye lens 900 and the camera body 901 are fixed to the autonomous mobile device body via a camera base 906, the fisheye lens 900 and the camera body 901 do not rotate.
  • In the figure, the LEDs 904 are fixed on the fisheye lens frame so that the LEDs 904 themselves do not rotate; however, the LEDs 904 may instead be fixed to the cover base 905 so that they rotate together with the lens cover 902.
  • FIG. 10 is a diagram showing an example of the appearance of the autonomous mobile device.
  • The autonomous mobile device 1000 includes a space 1005 in which a passenger 1006 can ride in a seated posture.
  • The fisheye camera and a GPS antenna 1002 are attached to the roof of the autonomous mobile device 1000.
  • A radar sensor 1003 for detecting and avoiding obstacles ahead during travel may also be attached to the front of the main body (not shown in FIG. 1). This allows a route that automatically avoids obstacles to be selected, improving the safety of autonomous travel.
  • Wheels 1004 are provided for traveling movements such as forward movement, backward movement, right turn, and left turn.
  • FIG. 11 is a diagram showing an example of the lighting pattern of LEDs fixed on the frame 1100 of the fisheye lens 1101 when a plurality of LEDs are provided (viewed from the fisheye lens side).
  • FIG. 11(a) shows the pattern in which all LEDs 1102 are lit.
  • FIG. 11(b) shows the pattern in which all LEDs 1103 are turned off.
  • The LEDs are switched between on and off at predetermined time intervals.
  • The switching cycle between all-on and all-off may always be the same or may vary, and the time intervals of all-on and all-off may be set irregularly so that they differ from each other.
  • FIG. 12 is a diagram showing another example of the LED lighting pattern (viewed from the fisheye lens side).
  • FIG. 12(a) shows a state in which all LEDs 1200 are lit.
  • FIG. 12(b) shows a state in which the left-half LEDs 1201 are lit.
  • FIG. 12(c) shows a state in which the right-half LEDs 1202 are lit.
  • FIG. 12(d) shows a state in which the upper-half LEDs 1203 are lit.
  • FIG. 12(e) shows a state in which the lower-half LEDs 1204 are lit.
  • In addition to the effect described for FIG. 11, the LED lighting position changes periodically, so the region where the irradiated light is reflected by raindrops also changes periodically in synchronization with the LED lighting. By observing this periodic change in the luminance value of a raindrop, the locations where raindrops exist can be determined more reliably; a correlation sketch follows this list.
  • The LED lighting portions may be divided further to increase the number of lighting patterns, with each lighting location changed periodically.
  • The switching cycle of each lighting location may always be the same or may differ.
  • The lighting order of the lighting locations may be fixed, or they may be lit randomly in different orders.
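  • As an illustrative sketch only, the periodic-luminance cue can be tested by correlating the mean brightness near a feature point with the known LED on/off sequence; the normalized-correlation threshold is an assumption.

```python
import numpy as np

def luminance_tracks_led(brightness_series, led_on_series, min_corr=0.6):
    """Return True if a feature point's local brightness follows the LED
    lighting pattern (suggesting a raindrop). brightness_series: mean
    luminance near the point, one value per frame; led_on_series: LED
    state (1/0) per frame."""
    b = np.asarray(brightness_series, dtype=float)
    l = np.asarray(led_on_series, dtype=float)
    b = b - b.mean()                      # remove DC component of each series
    l = l - l.mean()
    denom = np.linalg.norm(b) * np.linalg.norm(l)
    if denom == 0.0:
        return False                      # no variation to correlate
    return float(b @ l) / denom >= min_corr
```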
  • FIG. 13 is a diagram showing yet another example of the LED lighting pattern (viewed from the fisheye lens side).
  • FIG. 13(a) shows a state in which the LEDs 1300 are lit one after another counterclockwise.
  • FIG. 13(b) shows a state in which the LEDs 1301 are lit one after another clockwise.
  • The switching cycle of the individual LED lighting locations may always be the same or may differ, and the time intervals of the individual lighting locations may be set irregularly so that they differ from one another.
  • The lighting order may always repeat the pattern of FIG. 13(a) or FIG. 13(b), or the patterns of FIG. 13(a) and FIG. 13(b) may be performed alternately.
  • FIG. 14 is a diagram illustrating an example of how a raindrop is imaged when the LEDs are directed at a lens cover to which the raindrop has adhered.
  • FIG. 14(a) is a diagram showing the path of the LED irradiation light.
  • The light emitted from the LED 1403 travels while diffusing toward the lens cover and is refracted inside the raindrop 1404 adhering to the lens cover 1402 (refraction: the change in a light wave's traveling direction at the boundary between different media, following Snell's law). Part of this light reaches the camera image sensor 1401.
  • FIG. 14(b) is a diagram illustrating the LED irradiation direction and the luminance state of the raindrop observed in the captured image. Because surface tension gives the raindrop 1406 a spherical surface, the light refracted inside the raindrop tends to be partially concentrated at a particular location, depending on the irradiation direction 1407 of the LED light and its angle of incidence. For example, with LED irradiation from the left side as shown in the figure, the refracted light concentrates on the right-side portion 1408 of the raindrop, so the captured image shows the raindrop moving with the luminance value of its right-side portion 1408 elevated.
  • FIG. 15 is a diagram showing the processing flow of the self-position estimation unit in the image processing apparatus of the present invention.
  • A user of the autonomous mobile device sets a destination with the input device 112 based on the information shown on the display unit 113 (S1).
  • The travel route setting unit 105 sets a travel route from the current position to the input destination (S2).
  • The image processing apparatus reads the feature point map from the map information storage unit 107 (S3).
  • The lens cover rotation mechanism 109 starts rotating the lens cover (S4), the LED irradiation mechanism 110 starts LED irradiation (S5), and movement of the autonomous mobile device is started under the control of the traveling mechanism 114 (S6).
  • It is then determined whether the destination has been reached (S7).
  • If not, an image captured by the camera 108 is input (S8), the feature point detection unit 300 extracts feature points from the input image (S9), and the extracted feature point information (A) is temporarily stored in the memory 103.
  • The feature point tracking unit 403 performs feature point tracking using the feature point information of the previous input image and the feature point information (A) of the current image (S10), and the motion vector calculation unit 404 calculates the motion vectors of the feature points from the tracking result (S11).
  • The invalid feature point determination unit 405 determines from the obtained motion vectors whether each feature point is an invalid feature point (S12), and the invalid feature point information (B) is temporarily stored in the memory 103.
  • The effective feature point extraction unit 406 subtracts (removes) the invalid feature point information (B) from the feature point information (A) to obtain the effective feature point information (C) (S13).
  • The feature point matching unit 301 performs matching between the feature point map and the effective feature points (S14), the self-position calculation unit 302 identifies the current position (S15), the travel route setting unit 105 updates the travel route (S16), and the process returns to the destination-arrival check of step S7.
  • When the destination is reached, the autonomous mobile device is stopped (S17), LED irradiation is stopped (S18), lens cover rotation is stopped (S19), and the processing flow ends. A condensed sketch of this loop follows.
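  • The following condensed sketch restates steps S1-S19 in code form; `dev` is a hypothetical facade over the units of FIG. 1 and FIG. 6, and every method name is illustrative rather than from the patent.

```python
def self_position_estimation_loop(dev):
    """Condensed, illustrative restatement of the flow S1-S19."""
    dev.set_destination()                 # S1: input device 112
    dev.set_route()                       # S2: travel route setting unit 105
    fp_map = dev.load_feature_map()       # S3: map information storage unit 107
    dev.start_cover_rotation()            # S4: rotation mechanism 109
    dev.start_led()                       # S5: LED irradiation mechanism 110
    dev.start_moving()                    # S6: traveling mechanism 114
    while not dev.arrived():              # S7: destination-arrival check
        img = dev.capture()               # S8
        pts = dev.detect_features(img)                  # S9:  feature info (A)
        vecs = dev.motion_vectors(pts)                  # S10-S11: tracking
        invalid = dev.invalid_features(vecs)            # S12: raindrop info (B)
        valid = [p for p in pts if p not in invalid]    # S13: effective info (C)
        pos = dev.match_and_localize(valid, fp_map)     # S14-S15
        dev.update_route(pos)                           # S16
    dev.stop_moving()                     # S17
    dev.stop_led()                        # S18
    dev.stop_cover_rotation()             # S19
```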
  • FIG. 16 is a diagram illustrating the invalid feature point determination method (S12).
  • First, the motion vector of a feature point is input (S161), and it is determined whether the motion vector matches the motion vector of the lens cover rotation (S162). If they do not match, the input feature point is judged to be a feature point of a surrounding structure and is set as an effective feature point (S167). If they match, it is next determined whether the LED irradiation mechanism is ON (S163). If the LED irradiation mechanism is OFF or absent, the point is set as an invalid feature point caused by a raindrop (S166).
  • If the LED irradiation mechanism is ON, the period of the luminance value change near the feature point is calculated (S164), and it is determined whether that period coincides with the change period of the LED irradiation pattern (S165). If they do not coincide, the input feature point is judged to be a feature point of a surrounding structure and is set as an effective feature point (S167). If they coincide, the input feature point is judged to be a feature point caused by a raindrop or the like and is set as an invalid feature point (S166). A sketch of this decision follows.
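  • As a minimal sketch of the decision S161-S167 (the tolerance and helper inputs are assumptions: `expected_cover_vec` would come from the commanded rotation, and the correlation helper is the one sketched earlier):

```python
import numpy as np

def is_effective_feature_point(motion_vec, expected_cover_vec, led_active,
                               brightness_series=None, led_series=None,
                               tol=2.0):
    """Return True for an effective (structure) point, False for an
    invalid (raindrop) point, following S161-S167."""
    # S162: does the measured vector match the lens cover rotation?
    diff = np.asarray(motion_vec, dtype=float) - np.asarray(expected_cover_vec, dtype=float)
    if np.linalg.norm(diff) > tol:
        return True                       # S167: effective feature point
    # S163: LED irradiation mechanism present and ON?
    if not led_active:
        return False                      # S166: invalid (raindrop) point
    # S164-S165: does local brightness follow the LED irradiation pattern?
    if brightness_series is None or led_series is None:
        return True                       # no luminance evidence: keep as effective
    if luminance_tracks_led(brightness_series, led_series):
        return False                      # S166: invalid feature point
    return True                           # S167: effective feature point
```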
  • FIG. 17 is a diagram showing the processing flow of another self-position estimation unit in the image processing apparatus of the present invention.
  • In the flow described so far, the image processing apparatus of the present invention always rotates the lens cover regardless of the weather, and performs LED irradiation whenever an LED irradiation mechanism is provided. This is because in fine or cloudy weather no raindrops adhere to the lens cover, so feature points in the captured image are not detected erroneously, and there is no problem even if the lens cover is always rotated and the LEDs always lit. There is also an advantage in apparatus control in that there is no need to discriminate between rainy and fine weather to switch operation modes.
  • Alternatively, the processing flow after the destination-arrival check (S7) may be replaced as follows.
  • Whether the current weather is rainy may be determined using a rain detection device such as a raindrop sensor, or by raindrop detection through image processing.
  • An image processing approach is to judge the weather as rainy when invalid feature points are detected in S11, since their presence implies raindrops; a dispatch sketch follows.
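  • As an illustrative sketch only, per-frame mode selection in the FIG. 17 flow could look like the following; all method names are hypothetical.

```python
def process_frame(dev, fp_map):
    """Pick the per-frame processing mode based on a rain decision
    (raindrop sensor, or invalid feature points found by image processing)."""
    img = dev.capture()
    if dev.raining():
        return dev.rain_mode_localize(img, fp_map)   # flow of FIG. 15 (S8-S16)
    return dev.clear_sky_localize(img, fp_map)       # flow of FIG. 18 (S181-S185)
```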
  • FIG. 18 is a diagram showing the self-position estimation processing flow in clear-sky mode (S173).
  • A captured image is input from the camera (S181), and feature points are extracted from the input image (S182).
  • The extracted feature point information is matched against the feature point map (S183), the current position is identified (S184), the travel route is updated (S185), and the process returns to the destination-arrival check (S7).
  • As described above, the image processing apparatus described in this embodiment comprises: the camera 108 that captures images; the covers 414 and 902 that cover the lens 900 of the camera and transmit visible light; the drive units 412 and 413 that rotate the cover about the shooting direction of the camera as an axis; the extraction unit 300 that extracts feature points from the images; the motion vector calculation unit 404 that tracks the movement of the plurality of feature points extracted from a plurality of temporally consecutive images and calculates motion vectors of the feature points; and the detection unit 406 that uses the motion vectors to discriminate between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detects the first feature points.
  • The autonomous mobile device described in this embodiment comprises: the storage unit 107 in which map information and the feature point map are stored; the route setting unit 105 that sets a travel route to the destination using the map information and an input destination; the camera 108 that captures images; the covers 414 and 902 that cover the lens 900 of the camera and transmit visible light; the drive units 412 and 413 that rotate the cover about the shooting direction of the camera as an axis; the extraction unit 300 that extracts feature points from the images; the motion vector calculation unit 404 that tracks the movement of the plurality of feature points extracted from a plurality of temporally consecutive images and calculates motion vectors of the feature points; the detection unit 406 that uses the motion vectors to discriminate between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detects the first feature points; and the position estimation unit 106 that estimates the self-position of the autonomous mobile device using the first feature points and the feature point map 303.
  • The self-position estimation method described in this embodiment comprises: a first step (S1, S2) of setting a travel route to a destination using the map information stored in the storage unit 107 and an input destination; a second step (S8) of receiving an image captured by the camera; a third step (S9) of extracting feature points from the image; a fourth step (S11) of calculating motion vectors of the feature points using the plurality of feature points extracted from a plurality of images; a fifth step (S13) of discriminating, using drive information of the cover that covers the lens of the camera and the motion vectors, between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detecting the first feature points; and a sixth step (S15) of estimating the self-position of the mobile device using the feature point map 303 stored in the storage unit 107 and the first feature points.
  • The present invention is not limited to the embodiments described above, and various modifications can be made without departing from the spirit of the invention.
  • The above embodiments have been described in detail for ease of understanding of the present invention, and the invention is not necessarily limited to configurations having all of the described elements.
  • Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • Each of the configurations, functions, processing units, processing means, and the like described above may be realized partly or wholly in hardware, for example by designing them as integrated circuits.
  • Information such as programs, data, and files that implement the functions described above may be placed in a memory or storage device, or on a recording medium.
  • 100: image processing apparatus, 101: central processing unit, 102: storage device, 103: memory, 104: autonomous movement control unit, 105: travel route setting unit, 106: self-position estimation unit, 107: map information storage unit, 108: fisheye camera, 109: lens cover rotation mechanism, 110: LED, 111: satellite positioning system, 112: input device, 113: display unit, 114: traveling mechanism.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Studio Devices (AREA)
  • Stroboscope Apparatuses (AREA)
  • Accessories Of Cameras (AREA)

Abstract

Provided is an image processing device characterized by having the following: a camera that captures images; a cover that covers a lens of the camera and transmits visible light; a driving unit that rotates the cover around an axis defined by the imaging direction of the camera; an extraction unit that extracts characteristic points from the images; a movement vector calculation unit that tracks the movement of the plurality of characteristic points extracted using the plurality of images that are temporally sequential and calculates a movement vector of the characteristic points; and a detection unit that uses the movement vector in order to distinguish between first characteristic points that move due to movement of the camera and second characteristic points that move due to rotation of the cover, thereby detecting the first characteristic points.

Description

Image processing device, autonomous mobile device, and self-position estimation method
The present invention relates to an image processing device, an autonomous mobile device that moves autonomously to a destination using image processing, and a self-position estimation method using image processing.
A boarding-type autonomous mobile device has been developed that travels autonomously to a destination while estimating its own position using a satellite positioning system or sensors such as range sensors, on public roads where no infrastructure such as travel rails or guide markers is installed.
Satellite positioning systems are known to be capable of meter-accuracy position observation. On the other hand, a phenomenon called multipath may occur, in which radio waves transmitted from a satellite are reflected and diffracted by buildings, the ground surface, and so on, and are received over multiple transmission paths. When multipath occurs, a large positioning error results. In addition, satellite radio wave reception may be interrupted, making positioning of the current position impossible. In either case, accurate self-position estimation can no longer be performed, and continued operation of the autonomous mobile device becomes difficult.
When positioning of the current position becomes impossible, the boarding-type autonomous mobile device of the present invention analyzes the surrounding environment image captured by the camera and performs self-position estimation by image processing.
Patent Document 1: JP 2012-249082 A
According to Patent Document 1, in a waterproof dome attached so as to cover a camera lens, part of the waterproof dome is stored in a cover so that it stays clean. The document describes a method in which, when the captured image is observed and a portion dirtied by raindrops or the like is detected, the waterproof dome is rotated so that a clean dome surface faces the camera lens, preventing raindrops, dirt, and the like from appearing in the captured image.
However, since the method of Patent Document 1 presupposes that a completely clean dome surface can be secured, it cannot be applied outdoors. Because raindrops and snowflakes keep falling continuously in rainy weather and the like, if dirty locations are detected and the dome is rotated repeatedly, no clean surface will remain and image processing will become impossible. Thus, Patent Document 1 does not consider processing images seen through a dirty dome surface, and is therefore not robust for sensing in bad weather such as rain.
The present invention was made in view of the above problems. Its purpose is to provide an image processing apparatus capable of suitably estimating self-position in surrounding-environment sensing using camera images even in bad weather such as rain, snowfall, or strong wind, and an autonomous mobile device using the same.
In order to solve the above problems, for example, the configurations described in the claims are adopted. The present application includes a plurality of means for solving the problems. As one example, an image processing apparatus comprises: a camera that captures images; a cover that covers the lens of the camera and transmits visible light; a drive unit that rotates the cover about the shooting direction of the camera as an axis; an extraction unit that extracts feature points from the images; a motion vector calculation unit that tracks the movement of the plurality of feature points extracted from a plurality of temporally consecutive images and calculates motion vectors of the feature points; and a detection unit that uses the motion vectors to discriminate between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detects the first feature points.
Alternatively, an autonomous mobile device that moves autonomously to a preset destination comprises: a storage unit in which map information and a feature point map are stored; a route setting unit that sets a travel route to a destination using the map information and an input destination; a camera that captures images; a cover that covers the lens of the camera and transmits visible light; a drive unit that rotates the cover about the shooting direction of the camera as an axis; an extraction unit that extracts feature points from the images; a motion vector calculation unit that tracks the movement of the plurality of feature points extracted from a plurality of temporally consecutive images and calculates motion vectors of the feature points; a detection unit that uses the motion vectors to discriminate between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detects the first feature points; and a position estimation unit that estimates the self-position of the autonomous mobile device using the first feature points and the feature point map.
Alternatively, a method for estimating the self-position of a mobile device comprises: a first step of setting a travel route to a destination using map information stored in a storage unit and an input destination; a second step of receiving an image captured by a camera; a third step of extracting feature points from the image; a fourth step of calculating motion vectors of the feature points using the plurality of feature points extracted from a plurality of images; a fifth step of discriminating, using drive information of a cover that covers the lens of the camera and the motion vectors, between first feature points, which move due to movement of the camera, and second feature points, which move due to rotation of the cover, and detecting the first feature points; and a sixth step of estimating the self-position of the mobile device using a feature point map stored in the storage unit and the first feature points.
According to the present invention, errors in the self-position estimation process can be reduced even if raindrops, snow, mud stains, and the like appear in the captured image.
Brief description of the drawings:
Block diagram of the image processing apparatus mounted on the autonomous mobile device.
Diagram showing the self-position estimation method by image processing.
Block diagram of the feature point detection process in fine weather.
Diagram showing an example of a captured image while traveling in rain.
Diagram showing the difference between the movement trajectories of structure feature points and raindrop-generated feature points.
Diagram showing the block configuration of the feature point detection process of the self-position estimation unit.
Diagram showing an image captured by the fisheye camera.
Diagram outlining the method for discriminating structure feature points from raindrop-generated feature points.
Diagram showing an example of the appearance of the fisheye camera.
Diagram showing an example of the appearance of the autonomous mobile device.
Diagram showing an example of an LED lighting pattern.
Diagram showing another example of an LED lighting pattern.
Diagram showing yet another example of an LED lighting pattern.
Diagram showing an example of how a raindrop is imaged under LED irradiation.
Diagram showing the processing flow of the self-position estimation unit in the image processing apparatus.
Diagram showing the method of invalid feature point determination.
Diagram showing the processing flow of another self-position estimation unit in the image processing apparatus.
Diagram showing the self-position estimation processing flow in clear-sky mode.
(First embodiment)
FIG. 1 is an example of a block diagram of an image processing apparatus 100 mounted on a boarding-type autonomous mobile device (hereinafter, autonomous mobile device).
The autonomous mobile device of this embodiment can reach a destination designated by the passenger by automatic driving. The autonomous mobile device includes the image processing device 100, which performs the arithmetic processing needed to determine a travel route to the destination without passenger operation and to drive that route automatically. During autonomous movement, the current traveling position is determined using radio waves received by the satellite positioning system 111, and the traveling mechanism 114 is instructed to trace the set travel route correctly. A satellite positioning system measures the current position using radio waves from artificial satellites, for example the radio waves of GPS (Global Positioning System) satellites.
Satellite positioning systems are known to be capable of meter-accuracy position observation. On the other hand, a phenomenon called multipath may occur, in which radio waves transmitted from a satellite are reflected and diffracted by buildings, the ground surface, and so on, and are received over multiple transmission paths. When multipath occurs, a large positioning error results. In addition, satellite radio wave reception may be interrupted, making positioning of the current position impossible. When the current position cannot be measured, the boarding-type autonomous mobile device performs self-position estimation by image processing using the surrounding environment image captured by the camera. To acquire the surrounding environment image, an infrared-compatible fisheye camera 108 (hereinafter, fisheye camera) capable of capturing a 360-degree hemispherical image is used.
In addition, the device includes an input device 112 with which the passenger designates a destination and issues instructions such as pausing or interrupting travel to the autonomous mobile device, and a display unit 113 for notifying the passenger of surrounding map information, planned travel route information, the current position, and so on.
The image processing apparatus 100 includes a central processing unit 101 that executes operations based on predetermined algorithms, a non-volatile storage device 102 that stores programs describing the operation procedures so that the central processing unit 101 operates according to those algorithms, and a memory 103 for temporarily storing intermediate results while the central processing unit 101 executes those operations.
The storage device 102 contains: a map information storage unit 107 that stores feature point information of surrounding environment images taken in fine weather (hereinafter, a feature point map) associated with map information of the travel target area and position information on the map; a travel route setting unit 105 that sets a travel route to the destination instructed by the passenger with reference to the map information; an autonomous movement control unit 104 that performs automatic travel while positioning the device using GPS satellite radio waves; and a self-position estimation unit 106 that determines the self-position by image processing using surrounding environment images instead of GPS satellite radio waves.
 The feature point map in the map information storage unit 107 is feature point information from surrounding-environment images acquired in fine weather. However, because the autonomous mobile device of this embodiment travels outdoors, the weather is not always fine. When traveling in bad weather, raindrops, snow, mud stains, and the like appear in the images taken by the infrared fisheye camera 108. In self-position estimation by image processing, the current position is identified by matching the feature points detected from the fine-weather surrounding-environment images against the feature points detected from the surrounding-environment image at the time of travel. Consequently, if contamination such as raindrops appears in a bad-weather image, feature point detection errors occur and the current position may not be detected correctly.
 To avoid these feature point detection errors, a lens cover and a plurality of LEDs are attached to the fisheye camera as a countermeasure against rain, and the lens cover is rotated while the LEDs illuminate it. Because the cover rotates, the feature points of the surrounding environment (first feature points) and the feature points of contamination such as raindrops (second feature points) can be distinguished by the difference between their motion vectors, calculated from the continuous movement of feature points in the input images; only the first feature points are then extracted and matched against the feature point map. In this embodiment, by performing such image processing in the self-position estimation unit, errors in outdoor self-position estimation can be greatly reduced even in weather, such as rain, in which contamination easily adheres to the camera.
 FIG. 2 illustrates the self-position estimation method by image processing of the present invention, showing images taken with the fisheye camera. The fisheye camera is installed with its imaging center axis pointing at the zenith and positioned so that the body of the autonomous mobile device 200 does not appear in the captured image; the device itself therefore never appears in actual images. The positional relationship between the surrounding structures 202 and the autonomous mobile device 200 is as shown in FIGS. 2(a) and 2(b). To aid understanding, the figure deliberately superimposes the device on the image captured by the fisheye camera.
 FIG. 2(a) shows how the feature point map is created. In advance, the autonomous mobile device 200 is driven manually along the traveling path 201; feature points 203 are computed from the captured images and stored as the feature point map, each associated with coordinates on a separately prepared 2D map in which coordinate information is recorded. The feature point map and the 2D map are stored in the map information storage unit 107. The feature points 203 detected from the captured images are mainly corner portions of structures in the image, such as buildings and signs.
 The feature point map is thus data in which the feature points 203 of the various structures 202 appearing in the captured images are detected, aggregated, and stored in association with coordinates on the 2D map.
 Of course, vehicles, pedestrians, and the like that happen to pass by during the preliminary run may appear in the images. In this case, the motion vectors of the feature points 203 of the structures 202 and the motion vectors of the feature points of moving objects such as vehicles and pedestrians are calculated, and only the feature points 203 of the structures 202, which are useful for creating the map, are extracted based on the difference between the motion vectors, that is, the difference in moving speed. For example, a pedestrian moves at about 5 km/h and a vehicle at about 50 km/h, while the autonomous mobile device of this embodiment moves at about 30 km/h, so the difference in moving speed makes the two easy to separate.
 The feature point detection method is not particularly limited, and a known technique may be used. This embodiment uses a technique such as Harris corner detection, a processing method that analyzes the brightness distribution of the image and classifies a pixel based on the knowledge that if the first derivative (difference) is large in one direction it lies on an edge, and if it is large in multiple directions it lies on a corner. However, the method is not limited to Harris corner detection; any other method capable of detecting the corner portions of the structures 202 may be used.
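 As a concrete illustration only (not the patented implementation), the following is a minimal sketch of such a corner-based detector using OpenCV's Harris function; the function name, threshold, and parameter values are illustrative assumptions.

```python
# Minimal Harris-corner feature detector; thresholds are illustrative.
import cv2
import numpy as np

def detect_feature_points(image_bgr, block_size=2, ksize=3, k=0.04, thresh=0.01):
    """Return an (N, 2) array of (x, y) corner points, e.g. building/sign corners."""
    gray = np.float32(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY))
    response = cv2.cornerHarris(gray, block_size, ksize, k)
    # Keep pixels whose corner response exceeds a fraction of the maximum.
    ys, xs = np.where(response > thresh * response.max())
    return np.stack([xs, ys], axis=1)
```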
 FIG. 2(b) shows the feature points detected during travel, and FIG. 2(c) shows the feature points in the map.
 The set 205 of feature points 204 of nearby structures detected from the captured image is matched against a feature point set 206 for some portion of the feature point map, and it is determined whether the two feature point sets (205, 206) match. If they match, the coordinate information of the matched location is obtained from the feature point map and the current position is identified. If they do not match, matching against another portion of the feature point map continues until a match is found. If no match is obtained after a predetermined number of matching attempts or over a predetermined range of the map, matching for that captured image is aborted and treated as an error (for example, a notification on the display unit 113).
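 A minimal sketch of this matching loop is shown below, assuming the feature point map is a list of (feature point set, 2D map coordinates) entries; the nearest-neighbor inlier test and all thresholds are illustrative assumptions, not the patented matching method.

```python
# Illustrative set-matching loop over candidate map locations.
import numpy as np

def match_against_map(detected, feature_point_map, inlier_ratio=0.8, tol=3.0,
                      max_attempts=1000):
    for attempt, (map_points, coords) in enumerate(feature_point_map):
        if attempt >= max_attempts:
            break  # give up and report an error, as in the text
        # Fraction of detected points that have a map point within `tol` pixels.
        d = np.linalg.norm(detected[:, None, :] - map_points[None, :, :], axis=2)
        inliers = (d.min(axis=1) < tol).mean()
        if inliers >= inlier_ratio:
            return coords  # matched location on the 2D map
    return None  # matching failed -> notify via the display unit
```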
 FIG. 3 is a block diagram of the feature point detection processing in fine weather.
 Feature point detection in fine weather consists of the following processing blocks: a feature point detection unit 300 that detects feature points of surrounding structures from the captured image; a feature point matching unit 301 that matches the structure feature point set against the feature point set of the feature point map 303 created and stored in advance; and a self-position calculation unit 302 that obtains from the feature point map 303 the real-space coordinates of the location where the two feature point sets matched and normalizes them to the current position on the travel control map managed inside the autonomous mobile device. The calculated self-position information is sent to the autonomous travel control unit 104 and reflected in the travel control processing.
 FIG. 4 shows an example of a captured image while traveling in rain.
 Bad weather includes rain, snowfall, strong wind, and so on. Under each of these conditions, raindrops, snow, or dust and debris blown by the wind adhere to the lens cover. The following description assumes rainy conditions; by the nature of the method of the present invention, the same effect can be obtained for snowfall and strong wind.
 As shown in the figure, raindrops 400 and 401 adhere to the lens cover in rainy weather. If feature point detection is performed on a captured image in this state, both the feature points 402 and 403 of the structure 406 and the feature points 404 and 405 of the raindrops 400 and 401 may be detected. Since the feature points 404 and 405 of the raindrops are noise, matching them against the feature point map as-is causes false self-position detection or makes position detection impossible.
 FIG. 5 shows the difference between the movement trajectories of structure feature points and feature points caused by raindrops.
 As explained with FIG. 4, the feature points of structures (hereinafter, valid feature points or first feature points) must be discriminated from the feature points caused by raindrops (hereinafter, invalid feature points or second feature points), and the second feature points caused by raindrops must be canceled.
 FIG. 5(a) is an example of the movement pattern of valid feature points. Assume the autonomous mobile device is traveling in the arrow direction 511. When the fisheye images are observed continuously, the valid feature points move along trajectories that draw gentle arcs from front to rear relative to the direction of travel, as shown in the figure.
 FIG. 5(b), on the other hand, is an example of the movement pattern of invalid feature points. Because the lens cover attached to the fisheye camera rotates continuously while the autonomous mobile device travels, raindrops adhering to the cover move along trajectories that circle within the image.
 Since valid and invalid feature points thus follow different trajectories, the movement pattern of each feature point trajectory is analyzed to discriminate between them. The discrimination tracks each feature point across at least two captured images, calculates its motion vector, and compares the vector with predetermined movement conditions to decide whether it follows the movement pattern of a valid feature point or that of an invalid one.
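 A hedged sketch of this tracking-and-discrimination step follows, using pyramidal Lucas-Kanade optical flow from OpenCV; the rotation test about the image center, the per-frame cover rotation angle cover_omega, and the tolerance are assumptions made for illustration.

```python
# Track feature points between consecutive frames and label each one by
# whether its motion matches the cover's rotation about the image center.
import cv2
import numpy as np

def classify_points(prev_gray, cur_gray, points, center, cover_omega, tol=0.2):
    pts = points.astype(np.float32).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    valid, invalid = [], []
    for p0, p1, ok in zip(pts.reshape(-1, 2), nxt.reshape(-1, 2), status.ravel()):
        if not ok:
            continue
        # Angular displacement about the imaging center between the two frames.
        a0 = np.arctan2(p0[1] - center[1], p0[0] - center[0])
        a1 = np.arctan2(p1[1] - center[1], p1[0] - center[0])
        dtheta = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi
        if abs(dtheta - cover_omega) < tol:
            invalid.append(p1)   # moves with the rotating cover -> raindrop
        else:
            valid.append(p1)     # moves with the vehicle -> structure
    return np.array(valid), np.array(invalid)
```

A point whose angular step about the imaging center matches the cover's known rotation is treated as moving with the cover (a raindrop); everything else is treated as scene structure.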
 FIG. 6 shows the block configuration of the feature point detection processing of the self-position estimation unit 106. Processing blocks identical to those in FIGS. 1 and 3 carry the same numbers, and their description is omitted.
 A lens cover 414 is attached to the fisheye camera 108, together with a rotation control unit 413 and a rotation mechanism 412 for rotating it. An LED irradiation mechanism (410) and an irradiation control unit 411 are also provided so that raindrops adhering to the lens cover 414, and their feature points, can be recognized accurately during image processing.
 The processing blocks further comprise the feature point detection unit 300; a feature point tracking unit 403 that tracks the movement of feature points across multiple images including the input image; a motion vector calculation unit 404 that computes motion vectors from the tracking; an invalid feature point determination unit 405 that analyzes the movement patterns of the computed motion vectors and detects invalid feature points caused by raindrop adhesion; a valid feature point extraction unit 406 that cancels, from the feature points obtained by the feature point detection unit 300, the raindrop-induced invalid feature points obtained by the invalid feature point determination unit 405 and retains only the valid feature points lying on structures; the feature point matching unit 301 that performs matching against the feature point map 303; and the self-position calculation unit 302 that calculates the self-position using the feature point map 303. The calculated self-position information is sent to the autonomous travel control unit 104 and reflected in the travel control processing.
 FIG. 7 shows how the lens cover attached to the fisheye camera rotates.
 The lens cover is attached so as to cover the entire lens of the fisheye camera and is rotated in a fixed direction about the imaging center point of the camera. The rotation direction is, for example, the arrow direction 704 in the figure; it may instead be the opposite direction, or the two directions may be switched periodically. In any case, to track feature points and judge the movement patterns of their trajectories, the direction in which the lens cover was rotating at the time of each input image must be known in advance.
 FIG. 8 outlines the method of discriminating structure feature points from feature points caused by raindrops.
 FIG. 8(a) shows an example of feature points detected from a captured image, assuming the autonomous mobile device is moving in the arrow direction 800. The detected feature points are a mixture of feature points 802 of the surrounding structure 801 and feature points 804 caused by the raindrop 803.
 FIG. 8(b) shows an example of the movement trajectory of a raindrop feature point when the lens cover is rotated in a given direction. Because the lens cover rotates about the imaging center point of the fisheye camera, the feature point 804 of the raindrop 803 also moves so as to draw a rotation trajectory 805 about the imaging center point.
 FIG. 8(c) shows an example of the movement trajectory of a structure feature point when the lens cover is rotated in a given direction. The trajectory 806 of the feature point 802 of the structure 807 is unaffected by the cover rotation and draws a gentle arc from front to rear relative to the direction of movement of the autonomous mobile device.
 FIG. 8(d) is an example of the movement of the raindrop feature point trajectory 805 from start point to end point: the feature point of the raindrop 803 traces a rotational path from start to end.
 FIG. 8(e) is an example of the movement of a structure feature point trajectory from start point to end point: the trajectory 806 of the structure feature point 802 draws a gentle arc from a start point ahead of the device to an end point behind it, relative to the direction of movement.
 As the figure shows, structure feature points and raindrop-induced feature points follow different trajectories, so valid and invalid feature points are discriminated by analyzing the movement patterns of the trajectories. The discrimination tracks each feature point across at least two captured images, calculates its motion vector, and compares the vector with predetermined movement conditions to decide whether it follows the valid or the invalid movement pattern.
 FIG. 9 shows an example of the appearance of the fisheye camera.
 The fisheye camera in the figure combines a fisheye lens 900 with a camera body 901 containing an image sensor. The fisheye lens 900 is fitted with a lens cover 902 fixed to a cover pedestal 905, and a rotation mechanism 903 is provided to rotate the cover. Rotational energy 907, generated by a motor (not shown) or the like, is transmitted from the rotation mechanism 903 to the cover pedestal 905, rotating (908) the lens cover 902 fixed to the pedestal.
 LEDs (904) are mounted on the cover pedestal portion inside the lens cover so that raindrops adhering to the lens cover 902, and their feature points, can be recognized accurately during image processing. A single LED (904) would suffice, but it is more effective for discriminating valid from invalid feature points to arrange several LEDs so that they surround the fisheye lens and illuminate the entire lens cover evenly from the inside.
 The fisheye lens 900 and the camera body 901 are fixed to the autonomous mobile device body via a camera pedestal 906, so they do not rotate.
 In the example of the figure, the LEDs (904) are fixed on the fisheye lens frame so that the LEDs themselves do not rotate; alternatively, the LEDs (904) may be fixed to the cover pedestal 905 so that they rotate together with the lens cover 902.
 FIG. 10 shows an example of the appearance of the autonomous mobile device.
 The autonomous mobile device 1000 has a space 1005 in which a passenger 1006 can ride in a seated posture. The fisheye camera and a GPS antenna 1002 are mounted on the roof of the autonomous mobile device 1000. A radar sensor 1003 for detecting and avoiding obstacles ahead during travel may also be mounted on the front of the body (not shown in FIG. 1); this allows a route that avoids obstacles to be selected automatically, improving the safety of autonomous travel. The device also has wheels 1004 for traveling movements such as forward, reverse, right turn, and left turn.
 FIG. 11 shows an example of the lighting pattern of the LEDs fixed on the frame 1100 of the fisheye lens 1101 when a plurality of LEDs are provided (viewed from the fisheye lens side).
 By varying the LED illumination periodically with a predetermined lighting pattern, raindrops and their feature points can be recognized accurately. The LED light does not reach the background structures but does strike the raindrops; this is exploited to create artificially a change in the light reflected by the raindrops (that is, a change in brightness value), and observing that change discriminates structure feature points from raindrop feature points. In other words, a raindrop can be judged to exist wherever the brightness value of an observed region changes in synchronization with the LED lighting pattern. Feature points that overlap or lie near a region of changing brightness are canceled as raindrop feature points (invalid feature points). An LED wavelength close to infrared makes raindrops easier to distinguish even at night, so the discrimination effect is higher than with white light.
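 The synchronization test could be sketched as follows; the patch-brightness time series, the zero-lag correlation test, and the threshold are illustrative assumptions rather than the patented procedure.

```python
# Judge whether the brightness around a feature point flickers in step with
# the known LED on/off pattern (True -> likely a raindrop reflection).
import numpy as np

def synchronized_with_led(brightness_series, led_pattern, min_corr=0.7):
    """brightness_series: mean patch brightness per frame around one point.
    led_pattern: 1.0 for frames where the LEDs were on, 0.0 where off."""
    b = np.asarray(brightness_series, dtype=float)
    p = np.asarray(led_pattern, dtype=float)
    b = (b - b.mean()) / (b.std() + 1e-9)
    p = (p - p.mean()) / (p.std() + 1e-9)
    corr = float(np.mean(b * p))  # normalized cross-correlation at zero lag
    return corr >= min_corr       # True -> invalid (raindrop) feature point
```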
 FIG. 11(a) is the pattern with all LEDs (1102) lit.
 FIG. 11(b) is the pattern with all LEDs (1103) turned off. All-on and all-off are switched at predetermined time intervals. The switching period may always be the same or may vary, and the on and off intervals may be set irregularly so that they differ from each other.
 FIG. 12 shows another example of the LED lighting pattern (viewed from the fisheye lens side).
 In this lighting pattern, the lit LEDs are divided into several groups and the lit position is changed periodically. In FIG. 12(a) all LEDs (1200) are lit; in FIG. 12(b) the left half (1201) is lit; in FIG. 12(c) the right half (1202); in FIG. 12(d) the upper half (1203); and in FIG. 12(e) the lower half (1204).
 In addition to the effect described with FIG. 11, this pattern has a further effect: because the LED lighting position changes periodically, the region where the illumination reflects off a raindrop also changes periodically in synchronization with the lighting. By observing this periodic change in the raindrop's brightness value, the locations where raindrops exist can be determined more reliably.
 As a further variation, an all-off pattern may be added to those in the figure, or the lit portion may be subdivided more finely to increase the number of lighting patterns, cycling through the lit positions periodically. The switching period of each lit position may always be the same or may differ, the time intervals may be set irregularly so that each differs, and the lighting order may repeat a fixed sequence or light the positions randomly in varying order.
 FIG. 13 shows yet another example of the LED lighting pattern (viewed from the fisheye lens side).
 Here the LEDs are lit one by one in sequence, moving the lit position so as to trace a circle.
 FIG. 13(a) shows the LEDs (1300) lighting one by one in counterclockwise order; FIG. 13(b) shows the LEDs (1301) lighting one by one in clockwise order.
 The switching period of the individual LEDs may always be the same or may differ, and the time intervals may be set irregularly so that each differs. As for the lighting order, the pattern of FIG. 13(a) or FIG. 13(b) may be repeated continuously, or the two patterns may be alternated.
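 A minimal controller sketch for such a circular pattern is given below; set_led is a hypothetical GPIO-style function, and the LED count and switching period are assumptions.

```python
# Cycle individual LEDs in a circle, as in FIG. 13.
import itertools
import time

def run_circular_pattern(set_led, num_leds=8, period_s=0.05, clockwise=True):
    order = list(range(num_leds)) if clockwise else list(reversed(range(num_leds)))
    for current in itertools.cycle(order):
        for i in range(num_leds):
            set_led(i, i == current)  # light exactly one LED at a time
        time.sleep(period_s)          # the switching period may also be varied
```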
 FIG. 14 shows an example of how a raindrop is imaged when the LEDs illuminate a lens cover to which the raindrop has adhered.
 FIG. 14(a) shows the path of the LED light. Light emitted from the LED (1403) travels toward the lens cover while spreading, is refracted inside the raindrop 1404 adhering to the lens cover 1402 (following Snell's law, by which a light wave changes direction at the boundary between different media), and is reflected. Part of the reflected light reaches the camera image sensor 1401.
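 For reference, the refraction mentioned above obeys Snell's law; writing $n_1$ and $n_2$ for the refractive indices of air and water and $\theta_1$, $\theta_2$ for the angles of incidence and refraction measured from the surface normal:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2$$

With $n_1 \approx 1.0$ for air and $n_2 \approx 1.33$ for water, light entering the drop bends toward the normal, which is why the refracted light concentrates at a particular spot on the drop, as described next.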
 FIG. 14(b) shows an example of the LED irradiation direction and the brightness state of a raindrop observed in the captured image. Because the surface of the raindrop 1406 is spherical due to surface tension, the light refracted inside the raindrop tends to concentrate partially at a particular spot, depending on the LED irradiation direction 1407 and the angle of incidence. For example, when the LED illumination comes from the left as in the figure, the refracted light concentrates in the right-hand raindrop portion 1408. The captured image therefore shows the right-hand raindrop portion 1408 with an elevated brightness value.
 FIG. 15 shows the processing flow of the self-position estimation unit in the image processing device of the present invention.
 First, the user of the autonomous mobile device sets a destination with the input device 112, based on information shown on the display unit 113 (S1). The travel route setting unit 105 sets a travel route from the current position to the entered destination (S2). The image processing device reads the feature point map from the map information storage unit 107 (S3). The lens cover rotation mechanism 109 then starts rotating the lens cover (S4), the LED irradiation mechanism 110 starts LED illumination (S5), and the traveling mechanism 114 starts moving the autonomous mobile device (S6). Next, the device checks whether the destination has been reached (S7). If not, the image taken by the camera 108 is input (S8), the feature point detection unit 300 extracts feature points from the input image (S9), and the extracted feature point information (A) is stored temporarily in the memory 103. The feature point tracking unit 403 then performs feature point tracking using the feature point information of the previous input image and the feature point information (A) of the current image (S10), and the motion vector calculation unit 404 computes feature point motion vectors from the tracking result (S11). For each computed motion vector, the invalid feature point determination unit 405 judges whether the feature point is invalid (S12), and the invalid feature point information (B) is stored temporarily in the memory 103.
 The valid feature point extraction unit 406 then subtracts (removes) the invalid feature point information (B) from the feature point information (A) to obtain valid feature point information (C) (S13). Next, the feature point matching unit 301 matches the valid feature points against the feature point map (S14), the self-position calculation unit 302 identifies the current position (S15), and the travel route setting unit 105 updates the travel route and returns to the destination-arrival check of step S7 (S16).
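 Putting steps S1 to S19 together, a hedged sketch of the control loop might look as follows; every attribute of the hypothetical units object stands in for one of the numbered units in the text and is an assumption, not an API defined by the patent.

```python
# Hedged sketch of the S1-S19 control loop of FIG. 15.
def autonomous_drive(destination, units):
    route = units.route_setter.set_route(units.gps.position(), destination)  # S1-S2
    feature_map = units.map_store.load_feature_point_map()                   # S3
    units.cover.start_rotation()                                             # S4
    units.leds.start_irradiation()                                           # S5
    units.drive.start()                                                      # S6
    prev_feats = None
    while not units.drive.arrived(route):                                    # S7
        image = units.camera.capture()                                       # S8
        feats = units.detector.extract(image)                                # S9: (A)
        if prev_feats is not None:
            tracks = units.tracker.track(prev_feats, feats)                  # S10
            vectors = units.vectors.compute(tracks)                          # S11
            invalid = units.judge.invalid_points(vectors)                    # S12: (B)
            valid = units.extractor.subtract(feats, invalid)                 # S13: (C)
            location = units.matcher.match(valid, feature_map)               # S14
            position = units.locator.locate(location)                        # S15
            route = units.route_setter.update(route, position)               # S16
        prev_feats = feats
    units.drive.stop()                                                       # S17
    units.leds.stop_irradiation()                                            # S18
    units.cover.stop_rotation()                                              # S19
```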
 If the destination-arrival check (S7) finds that the destination has been reached, the autonomous mobile device stops traveling (S17), LED illumination stops (S18), lens cover rotation stops (S19), and the processing flow ends.
 FIG. 16 shows the method of the invalid feature point determination (S12).
 In the invalid feature point determination, the motion vector of a feature point is input (S161), and it is judged whether the vector matches the motion vector of the lens cover rotation (S162). If it does not match, the feature point is judged to belong to a surrounding structure and is set as a valid feature point (S167). If it matches, it is first checked whether the LED irradiation mechanism is ON (S163). If the LED irradiation mechanism is OFF or absent, the point is set as an invalid feature point attributed to raindrops or the like (S166). If the LED irradiation mechanism is ON, the brightness-change period near the feature point is calculated (S164), and it is judged whether that period matches the change period of the LED lighting pattern (S165). If it does not match, the feature point is judged to belong to a surrounding structure and is set as a valid feature point (S167); if it matches, the feature point is judged to be caused by a raindrop or the like and is set as an invalid feature point (S166).
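 The decision of FIG. 16 reduces to a short function; vec_match and sync_test stand for the rotation-match and LED-synchronization tests sketched earlier and are assumptions.

```python
# Mirror of the FIG. 16 decision tree.
def is_invalid_feature_point(vec, cover_vec, led_on, brightness_series,
                             led_pattern, vec_match, sync_test):
    if not vec_match(vec, cover_vec):                  # S162: differs from cover
        return False                                   # S167: valid (structure)
    if not led_on:                                     # S163: no LED available
        return True                                    # S166: invalid (raindrop)
    return sync_test(brightness_series, led_pattern)   # S164-S167
```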
 FIG. 17 shows the processing flow of another self-position estimation unit in the image processing device of the present invention.
 As described so far, the image processing device of the present invention always rotates the lens cover regardless of the weather and, when an LED irradiation mechanism is provided, also performs LED illumination. In fine or cloudy weather no raindrops adhere to the lens cover, so feature points are not falsely detected, and always rotating the cover and illuminating the LEDs causes no problem. There is also the control advantage that no switching of operation modes between rain and fine weather is required.
 On the other hand, if switching between rainy and fine weather becomes necessary, for example to save power, the processing flow after the destination-arrival check (S7) may be replaced as follows.
 Whether the current weather is rainy is judged (S171). If it is raining, the self-position estimation processing that identifies valid feature points using lens cover rotation and LED illumination (the flow of S8 to S16 in FIG. 15) is executed (S172). If the weather is fine, self-position estimation without lens cover rotation or LED illumination is executed (S173).
 Whether the current weather is rainy may be judged with a rain-detection device such as a raindrop sensor, or by raindrop detection through image processing. An example of the latter is to judge that it is raining when invalid feature points are detected in the determination step (S12), since their presence implies raindrops.
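 As a sketch, the S171 mode decision could combine both signals; the threshold and all names are illustrative assumptions.

```python
# Choose between the FIG. 15 (rain) flow and the FIG. 18 (fine weather) flow.
def select_mode(rain_sensor_wet, recent_invalid_count, invalid_thresh=1):
    if rain_sensor_wet or recent_invalid_count >= invalid_thresh:
        return "rain_mode"   # rotate cover + LED irradiation (FIG. 15 flow)
    return "clear_mode"      # skip rotation and irradiation (FIG. 18 flow)
```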
 FIG. 18 shows the self-position estimation processing flow in the fine-weather mode (S173).
 In the fine-weather mode, self-position estimation is executed without lens cover rotation or LED illumination.
 A captured image is input from the camera (S181) and feature points are extracted from it (S182). The extracted feature point information is matched against the feature point map (S183), the current position is identified (S184), the travel route is updated (S185), and the flow returns to the destination-arrival check (S7).
 In summary, the image processing device described in this embodiment comprises a camera 108 that captures images; covers 414 and 902 that cover the camera lens 900 and transmit visible light; drive units 412 and 413 that rotate the cover about the imaging direction of the camera; an extraction unit 300 that extracts feature points from the images; a motion vector calculation unit 404 that tracks the movement of feature points extracted from temporally consecutive images and computes their motion vectors; and a detection unit 406 that uses the motion vectors to discriminate between first feature points that move due to camera movement and second feature points that move due to cover rotation, and detects the first feature points.
 In this way, by providing a cover over the lens and exploiting the fact that the cover and the camera move differently, feature points that constitute noise can be removed.
 The autonomous mobile device described in this embodiment likewise comprises a storage unit 107 in which map information and the feature point map are stored; a route setting unit 105 that sets a travel route to the destination using the map information and an input destination; the camera 108; the covers 414 and 902; the drive units 412 and 413; the extraction unit 300; the motion vector calculation unit 404; the detection unit 406 that detects the first feature points; and a position estimation unit 106 that estimates the self-position of the autonomous mobile device using the first feature points and the feature point map 303.
 By removing noise feature points in this way, self-position estimation using image processing alone is possible even in rain, enabling outdoor movement.
 Further, the self-position estimation method described in this embodiment has a first step of setting a travel route to a destination using map information stored in the storage unit 107 and an input destination (S1, S2); a second step of receiving an image captured by the camera (S8); a third step of extracting feature points from the image (S9); a fourth step of computing feature point motion vectors using feature points extracted from multiple images (S11); a fifth step of discriminating, using drive information of the cover covering the camera lens and the motion vectors, between first feature points that move due to camera movement and second feature points that move due to cover rotation, and detecting the first feature points (S13); and a sixth step of estimating the self-position of the mobile device using the feature point map 303 stored in the storage unit 107 and the first feature points (S15).
 By removing noise feature points in this way, self-position estimation using image processing alone is possible even in rain.
 The present invention is not limited to the embodiments described above, and various modifications are possible without departing from its spirit. For example, the above embodiment is described in detail for ease of understanding and is not necessarily limited to configurations having all the described elements. Part of the configuration of one embodiment may be replaced with that of another, the configuration of another embodiment may be added to that of one embodiment, and parts of each embodiment's configuration may have other configurations added, deleted, or substituted.
 Each of the above configurations, functions, processing units, processing means, and the like may be realized partly or wholly in hardware, for example by designing them as integrated circuits. Although realization in software implementing each function has mainly been described, the programs, data, files, and other information realizing each function can be placed not only in memory but also on a recording device such as a hard disk or SSD (Solid State Drive), or on a recording medium such as an IC card, SD card, or DVD, and can be downloaded and installed via a wireless network or the like as necessary.
100 ... image processing device
101 ... central processing unit
102 ... storage device
103 ... memory
104 ... autonomous travel control unit
105 ... travel route setting unit
106 ... self-position estimation unit
107 ... map information storage unit
108 ... fisheye camera
109 ... lens cover rotation mechanism
110 ... LED irradiation mechanism
111 ... satellite positioning system
112 ... input device
113 ... display unit
114 ... traveling mechanism

Claims (8)

  1.  An image processing device comprising:
     a camera that captures images;
     a cover that covers a lens of the camera and transmits visible light;
     a drive unit that rotates the cover about the imaging direction of the camera;
     an extraction unit that extracts feature points from the images;
     a motion vector calculation unit that tracks the movement of a plurality of the feature points extracted using a plurality of temporally consecutive images and calculates motion vectors of the feature points; and
     a detection unit that uses the motion vectors to discriminate between first feature points that move due to movement of the camera and second feature points that move due to rotation of the cover, and detects the first feature points.
  2.  The image processing device according to claim 1, wherein the detection unit further detects the first feature points using brightness values of the feature points.
  3.  The image processing device according to claim 2, further comprising a plurality of light sources arranged around the lens that irradiate light from the inside of the cover toward the cover, wherein the light sources emit light at a preset period.
  4.  An autonomous mobile device that moves autonomously to a preset destination, comprising:
     a storage unit in which map information and a feature point map are stored;
     a route setting unit that sets a travel route to the destination using the map information and an input destination;
     a camera that captures images;
     a cover that covers a lens of the camera and transmits visible light;
     a drive unit that rotates the cover about the imaging direction of the camera;
     an extraction unit that extracts feature points from the images;
     a motion vector calculation unit that tracks the movement of a plurality of the feature points extracted using a plurality of temporally consecutive images and calculates motion vectors of the feature points;
     a detection unit that uses the motion vectors to discriminate between first feature points that move due to movement of the camera and second feature points that move due to rotation of the cover, and detects the first feature points; and
     a position estimation unit that estimates the self-position of the autonomous mobile device using the first feature points and the feature point map.
  5.  The autonomous mobile device according to claim 4, wherein a light source that irradiates light from the inside of the cover toward the cover is arranged around the lens.
  6.  The autonomous mobile device according to claim 5, wherein the light source comprises a plurality of LEDs and emits light at a preset period.
  7.  A method for estimating the self-position of a mobile device, comprising:
     a first step of setting a travel route to a destination using map information stored in a storage unit and an input destination;
     a second step of receiving an image captured by a camera;
     a third step of extracting feature points from the image;
     a fourth step of calculating motion vectors of the feature points using a plurality of the feature points extracted from a plurality of the images;
     a fifth step of discriminating, using drive information of a cover that covers a lens of the camera and the motion vectors, between first feature points that move due to movement of the camera and second feature points that move due to rotation of the cover, and detecting the first feature points; and
     a sixth step of estimating the self-position of the mobile device using a feature point map stored in the storage unit and the first feature points.
  8.  The self-position estimation method according to claim 7, wherein the fifth step further detects the first feature points using brightness values of the feature points.