WO2017168899A1 - Information processing method and information processing device - Google Patents
Information processing method and information processing device
- Publication number
- WO2017168899A1 (PCT/JP2016/088890)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature point
- information
- information processing
- feature
- point list
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/3811—Point data, e.g. Point of Interest [POI]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
- G01C21/383—Indoor data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2113—Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3863—Structures of map data
- G01C21/3867—Geometry of map features, e.g. shape points, polygons or for simplified maps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the present disclosure relates to an information processing method and an information processing apparatus.
- the present disclosure proposes an information processing method and an information processing apparatus capable of providing more accurate position information according to the real world environment.
- the processor generates a feature point list in which the three-dimensional coordinates of the feature points detected from the observation information collected around the unit area are associated with the local feature amounts of the feature points.
- There is provided an information processing apparatus including a calculation unit that extracts feature points and local feature amounts related to the feature points from acquired image information, and a communication unit that acquires a feature point list based on collected observation information. The calculation unit performs self-position estimation based on the local feature amounts and the feature point list, and the feature point list includes the three-dimensional coordinate positions of the feature points associated with the unit area including the observation point of the observation information, together with the local feature amounts related to those feature points.
- There is also provided an information processing apparatus including a communication unit that receives observation information collected around a unit area, and a list generation unit that generates a feature point list in which the three-dimensional coordinates of feature points detected from the observation information are associated with the local feature amounts of those feature points.
- SLAM (Simultaneous Localization and Mapping)
- SLAM is a technique for simultaneously performing self-position estimation and environment map generation.
- it is possible to perform the above self-position estimation and environment map generation by extracting feature points from observation information acquired from a sensor such as a camera and tracking the feature points.
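- As an illustrative aside, not part of the patent text, the extract-and-track step described above can be sketched with standard OpenCV calls. A minimal assumed pipeline follows; the frame file names are placeholders and the parameter values are assumptions:

```python
import cv2

# Two consecutive camera frames (placeholder file names).
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect up to 500 Shi-Tomasi corners as feature points.
points = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)

# Track each point into the next frame with pyramidal Lucas-Kanade optical
# flow; status[i] == 1 means the i-th point was tracked successfully.
tracked, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, points, None)
n_tracked = int(status.sum())
print(f"tracked {n_tracked} of {len(points)} feature points")
```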
- An apparatus for performing self-position estimation using feature points will be described with a specific example.
- An example of such a device is an autonomous vehicle.
- the autonomous driving vehicle can recognize the surrounding environment from information acquired by various sensors, and can realize autonomous traveling according to the recognized environment. At this time, in order to realize appropriate operation control, it is required to perform self-position estimation with high accuracy.
- FIGS. 1A and 1B are diagrams for explaining recognition of the surrounding environment by an autonomous driving vehicle.
- FIG. 1A is a diagram schematically showing the surrounding environment of the autonomous vehicle AV1 in the real world.
- The moving objects DO1 to DO4 may be objects that move dynamically, and are shown as vehicles, a bicycle, and pedestrians in the example of FIG. 1A.
- The still objects SO1 to SO4 may be objects that do not move autonomously, and are shown as a traffic light, a signboard, street trees, and a building in the example of FIG. 1A.
- FIG. 1B is a diagram schematically showing an ambient environment recognized by the autonomous driving vehicle AV1.
- the autonomous driving vehicle AV1 can recognize the surrounding environment based on feature points detected based on observation information including image information, for example. Therefore, in FIG. 1B, the moving objects DO1 to DO4 and the stationary objects SO1 to SO4 are represented by sets of feature points detected by the autonomous driving vehicle AV1, respectively.
- the autonomous driving vehicle AV1 can recognize the surrounding environment by tracking the feature point of the detected object based on the information from the mounted camera or the like.
- If the self-driving vehicle AV1 performs self-position estimation based on feature points with low reliability, a difference may arise between the estimated position and the actual position. In this case, the autonomous driving vehicle AV1 cannot perform appropriate driving control, and an accident may result. For this reason, more accurate self-position estimation is required from the viewpoint of ensuring safety.
- Feature point reliability: specific examples of self-position estimation using feature points and of apparatuses that perform it have been described above.
- the information processing method and the information processing apparatus according to the present disclosure are conceived by focusing on the reliability of feature points as described above, and generate a recommended feature point list based on the reliability of feature points. It is possible.
- One feature of the recommended feature point list according to the present disclosure is that it also includes local feature amounts of feature points.
- Feature points with high reliability may be feature points that can be observed from more observation points in the unit area. That is, a feature point with high reliability in the present disclosure can be said to be a feature point with a strong record of actual observations.
- feature points with high reliability in the present disclosure may be feature points that are not easily confused with other feature points in projection error minimization and feature point tracking described later.
- FIG. 2 is a conceptual diagram for explaining the reliability of the feature points according to the present disclosure.
- FIG. 2 shows feature points F1 and F2 observed in the unit area A1 and observation points L1 to L4.
- the arrows extending from the observation points L1 to L4 to the feature points F1 or F2 indicate that the feature points F1 or F2 can be observed from the respective observation points L1 to L4.
- the example shown in FIG. 2 shows that both the feature points F1 and F2 can be observed from the observation points L1 and L2, and only the feature point F1 can be observed from the observation points L3 and L4. At this time, in the present disclosure, it can be said that the feature point F1 has higher reliability than the feature point F2.
- feature points that can be observed from more observation points can be defined as highly reliable feature points.
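- A minimal sketch of this reliability notion, using a hypothetical observation log modeled on FIG. 2 (the feature and observation point IDs are illustrative):

```python
from collections import Counter

# Hypothetical observation log modeled on FIG. 2: F1 is observable from
# L1 to L4, while F2 is observable only from L1 and L2.
observations = [
    ("L1", "F1"), ("L1", "F2"),
    ("L2", "F1"), ("L2", "F2"),
    ("L3", "F1"),
    ("L4", "F1"),
]

# Reliability proxy: the number of distinct observation points per feature.
counts = Counter(feature for _, feature in observations)
ranked = sorted(counts, key=counts.get, reverse=True)
print(ranked)  # ['F1', 'F2'], so F1 is the more reliable feature point
```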
- FIGS. 3A to 3C are diagrams for explaining the reliability of the feature points affected by the environmental situation.
- FIGS. 3A to 3C may be, for example, images acquired by an in-vehicle camera of an autonomous driving vehicle, taken from the same observation point and from the same viewpoint.
- FIG. 3A shows an example of an image taken in a clear state.
- A feature point F3 related to a building and a feature point F4 related to a high-rise tower are shown in the image. That is, it can be seen that both feature points F3 and F4 can be observed at the observation point where the image shown in FIG. 3A is acquired.
- FIG. 3B shows an example of an image taken in rainy weather.
- the feature point F3 related to the building is shown in the image.
- However, the top of the high-rise tower is blocked by clouds, and the feature point F4 is not observed. That is, at the observation point where the image shown in FIG. 3B is acquired, it can be seen that only the feature point F3 can be observed in rainy weather.
- FIG. 3C shows an example of an image taken at night.
- the feature point F4 related to the high-rise tower is shown in the image.
- the feature point F3 is not observed. That is, at the observation point where the image shown in FIG. 3C is acquired, it can be seen that only the feature point F4 can be observed at night.
- a recommended feature point list according to the environmental state may be generated.
- it is possible to generate a recommended feature point list for each environmental state such as clear weather, rainy weather, or nighttime. This makes it possible to provide highly reliable feature point information according to the environmental state.
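- As a hedged illustration of such per-environment lists, keyed storage might look like the following; the area ID, environment labels, and feature IDs are assumptions, not values from the disclosure:

```python
# Hypothetical store of recommended feature point lists keyed by
# (unit_area_id, environment); IDs and labels are illustrative only.
recommended_lists = {
    ("area_42", "clear"): ["F3", "F4"],
    ("area_42", "rain"):  ["F3"],  # tower top hidden by clouds (FIG. 3B)
    ("area_42", "night"): ["F4"],  # only the lit tower visible (FIG. 3C)
}

def lookup(area_id: str, environment: str):
    return recommended_lists.get((area_id, environment), [])

print(lookup("area_42", "night"))  # ['F4']
```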
- FIG. 4 is a diagram for explaining observation of feature points in the indoor space.
- FIG. 4 may be an example of an image taken at an airport, for example.
- a feature point F5 related to a plant is shown in the image.
- In indoor spaces, similar landscapes often continue due to the structure of the building.
- Many buildings have the same structure even on different floors.
- the characteristics of the object related to the feature point may be identified by performing object recognition or the like from the observed information.
- the reliability of the feature points according to the present disclosure has been described.
- The above devices may include, for example, an automatic driving AI that controls an autonomous driving vehicle, a navigation device, and an HMD (Head Mounted Display) that provides functions related to virtual reality and augmented reality based on self-position estimation.
- The various devices described above can perform self-position estimation with higher accuracy by using the recommended feature point list generated by the information processing method according to the present disclosure. Further, since the recommended feature point list according to the present disclosure includes the local feature amounts related to the feature points, the processing load related to feature amount extraction can be reduced. Furthermore, the list makes it easy to look up the recommended feature points in reverse from the current position.
- The system according to the present embodiment includes an information processing server 10, a plurality of information processing devices 20a and 20b, and a moving body 30. The information processing server 10 and the information processing apparatuses 20a and 20b are connected via the network 40 so that they can communicate with each other.
- the information processing server 10 may be an information processing device that generates a feature point list based on observation information collected around a unit area. As described above, the information processing server 10 according to the present embodiment can generate a feature point list according to the environmental situation.
- the unit area may be an area in an indoor space.
- the information processing apparatus 20 may be various apparatuses that perform self-position estimation based on the feature point list acquired from the information processing server 10.
- The information processing apparatus 20a may be an automatic driving AI mounted on the moving body 30 described later.
- the information processing apparatus 20b may be a glasses-type wearable apparatus.
- the information processing apparatus 20 according to the present embodiment is not limited to the example shown in FIG.
- the information processing apparatus 20 according to the present embodiment may be, for example, a navigation apparatus, an HMD, or various robots.
- the information processing apparatus 20 according to the present embodiment can be defined as various apparatuses that perform self-position estimation.
- the moving body 30 may be a moving body such as a vehicle on which the information processing apparatus 20 is mounted.
- the moving body 30 may be, for example, an automatic driving vehicle controlled by the information processing apparatus 20 having a function as the automatic driving AI.
- the moving body 30 may be an automatic driving vehicle that travels outdoors, or may be a special vehicle that travels indoors at an airport, for example.
- the moving body 30 according to the present embodiment is not limited to a vehicle, and may be, for example, an unmanned aerial vehicle (UAV) including a drone.
- the moving body 30 has a function of delivering observation information observed in the real world to the information processing apparatus 20.
- The observation information may include information acquired by, for example, an RGB-D camera, a laser range finder, GPS, Wi-Fi (registered trademark), a geomagnetic sensor, an atmospheric pressure sensor, an acceleration sensor, a gyro sensor, or a vibration sensor.
- the network 40 has a function of connecting the information processing server 10 and the information processing apparatus 20.
- the network 40 may include a public line network such as the Internet, a telephone line network, a satellite communication network, various types of LANs (Local Area Network) including Ethernet (registered trademark), WAN (Wide Area Network), and the like.
- the network 40 may also include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
- the information processing server 10 may have a function of receiving observation information collected around the unit area. Further, the information processing server 10 can generate a feature point list in which the three-dimensional coordinates of the feature points detected from the observation information are associated with the local feature amounts of the feature points.
- the feature point list may be a feature point list associated with the environment information.
- FIG. 6 is a functional block diagram of the information processing server 10 according to the present embodiment.
- the information processing server 10 includes a data selection unit 110, a feature point extraction unit 120, a list generation unit 130, and a device communication unit 140.
- the data selection unit 110 has a function of selecting original observation information when generating the feature point list according to the present embodiment. Specifically, the data selection unit 110 may select observation information used for generating the feature point list based on the unit area and the environment information.
- the data selection unit 110 may select a unit area that is a target for generating a feature point list based on map information including a road map, and a traffic history and a walk history in the unit area. Details of data selection by the data selection unit 110 will be described later.
- the feature point extraction unit 120 has a function of generating a feature point map related to the unit area based on the observation information selected by the data selection unit 110. Specifically, the feature point extraction unit 120 has a function of detecting feature points from a plurality of pieces of observation information and matching the feature points in order to generate the feature point map.
- the feature point extraction unit 120 may calculate camera parameters based on the feature point matching result. Further, the feature point extraction unit 120 can perform a projection error minimization process based on the calculated camera parameters. Details of the feature point map generation by the feature point extraction unit 120 will be described later.
- the list generation unit 130 has a function of generating a feature point list based on the feature point map generated by the feature point extraction unit 120. More specifically, the list generation unit 130 can generate a recommended feature point list in which feature points are ranked for each unit area.
- the list generation unit 130 may perform the ranking based on a projection error of feature points or a position error related to observation points of observation information. That is, the list generation unit 130 can extract feature points with smaller errors as feature points with high reliability.
- the list generation unit 130 may perform the above ranking based on the number of observation information related to feature points. That is, the list generation unit 130 can extract feature points extracted from more observation information as highly reliable feature points.
- The list generation unit 130 may also perform the ranking based on the number of pieces of observation information relating to feature points observed in other unit areas located in the vicinity of the unit area for which the feature point list is generated. That is, the list generation unit 130 can extract feature points that can be observed from nearby unit areas as feature points with high reliability. Details of recommended feature point list generation by the list generation unit 130 will be described later.
- the device communication unit 140 has a function of realizing communication with the information processing device 20. Specifically, the device communication unit 140 according to the present embodiment has a function of receiving observation information collected around the unit area from the information processing device 20. Further, the device communication unit 140 has a function of transmitting a recommended feature point list to the information processing device 20 based on a request from the information processing device 20.
- the information processing server 10 according to the present embodiment can generate a recommended feature point list according to the unit area and the environment information based on the acquired observation information.
- The recommended feature point list includes the local feature amounts related to the feature points. According to the above functions of the information processing server 10 of the present embodiment, highly reliable feature point information can be provided, and the accuracy of self-position estimation by the information processing apparatus 20 can be effectively increased.
- the information processing apparatus 20 may be various apparatuses that perform self-position estimation. For this reason, the information processing apparatus 20 according to the present embodiment has a function of extracting feature points and local feature amounts related to the feature points from the acquired image information. The information processing apparatus 20 has a function of acquiring a feature point list based on collected observation information.
- the information processing apparatus 20 has a function of performing self-position estimation based on the extracted local feature amount and the feature point list.
- the feature point list may be a recommended feature point list generated by the information processing server 10.
- the feature point list may be a list including the three-dimensional coordinate position of the feature point associated with the unit area including the observation point of the observation information and the local feature amount related to the feature point.
- FIG. 7 is a functional block diagram of the information processing apparatus 20 according to the present embodiment.
- the information processing apparatus 20 according to the present embodiment includes a calculation unit 210, a function control unit 220, and a server communication unit 230.
- the calculation unit 210 has a function of extracting feature points and local feature amounts related to the feature points from the acquired image information.
- the calculation unit 210 has a function of calculating camera parameters of an imaging unit (not shown) that has acquired the image information.
- the camera parameters described above may include a three-dimensional coordinate position, posture information, speed, angular velocity, triaxial rotational posture, triaxial rotational speed, triaxial rotational acceleration, and the like.
- the calculation unit 210 can greatly improve the efficiency of processing by referring to the recommended feature point list acquired from the information processing server 10 in the above camera parameter calculation.
- the function control unit 220 has a function of controlling the operation of the information processing apparatus 20 based on the camera parameters calculated by the calculation unit 210. That is, the function control unit 220 may control various operations according to the characteristics of the information processing apparatus 20 based on the camera parameters.
- the function control unit 220 may have a function as an operation control unit that controls the operation of the moving body 30. That is, in this case, the function control unit 220 can perform operation control of the moving body 30 based on the result of self-position estimation.
- The function control unit 220 may have a function as a display control unit that performs display control related to virtual reality or augmented reality based on the camera parameters. That is, in this case, the function control unit 220 can perform display control of a virtual object or the like based on the result of self-position estimation.
- The function control unit 220 may also have a function as a navigation unit that performs route navigation for the moving body 30 based on the camera parameters. In this case, the function control unit 220 can realize highly accurate navigation based on the result of self-position estimation.
- the server communication unit 230 has a function of acquiring a recommended feature point list from the information processing server 10 based on the collected observation information.
- the observation information may be observation information acquired from a sensor provided in the moving body 30.
- the information processing apparatus 20 is an HMD or a wearable apparatus, the above observation information may be observation information acquired by various sensors included in the information processing apparatus 20.
- the server communication unit 230 may acquire the recommended feature point list based on the collected environment information.
- the recommended feature point list to be acquired may be a feature point list associated with the environment information.
- the environmental information may be information acquired from various sensors included in the information processing apparatus 20 or the moving body 30, or may be information acquired by the server communication unit 230 via a network.
- the server communication unit 230 can request a recommended feature point list from the information processing server 10 based on weather information acquired from the Internet.
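- A hypothetical client request for such a list might look as follows; the URL, field names, and coordinate values are placeholders, since the disclosure does not specify a wire format:

```python
import json
import urllib.request

# Hypothetical request for a recommended feature point list matching the
# current unit area and environment. All names and values below are
# placeholders for illustration.
payload = json.dumps({
    "unit_area": {"x": 35.6595, "y": 139.7005, "z": 0.0},
    "environment": {"weather": "rain", "time": "22:10"},
}).encode("utf-8")

request = urllib.request.Request(
    "https://example.com/api/recommended-feature-points",  # placeholder URL
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    feature_point_list = json.load(response)
```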
- The information processing apparatus 20 according to the present embodiment has been described in detail above. As described, the information processing apparatus 20 can receive a recommended feature point list from the information processing server 10 based on the acquired observation information and environment information, and can achieve highly accurate self-position estimation using that list. The information processing apparatus 20 of the present embodiment thus makes it possible to greatly improve control based on self-position estimation.
- the information processing server 10 can generate the recommended feature point list related to the unit area based on the collected observation information.
- the information processing server 10 according to the present embodiment may generate a recommended feature point list associated with the environment information.
- the environmental information may include information related to weather, lighting environment, atmospheric condition, time, date, or the like.
- the weather may include, for example, rain, snow, fog, cloud conditions, and the like.
- For the same unit area, the information processing server 10 can generate a plurality of lists, such as a recommended feature point list associated with a rainy environment and a recommended feature point list associated with a night environment.
- For this reason, the information processing apparatus 20 can refer to a list of feature points observable in rainy weather or a list of feature points observable at night, and the accuracy of self-position estimation can thereby be improved.
- FIG. 8 is a conceptual diagram showing input / output data relating to generation of a recommended feature point list by the information processing server 10 of the present embodiment.
- the information processing server 10 can output a recommended feature point list associated with a unit area and environment information based on various types of input information.
- the information input to the information processing server 10 may include observation information, control information, map information, environment information, and the like.
- The above observation information may include, for example, RGB-D images and laser range finder information.
- the observation information may include information obtained from GPS, Wi-Fi, geomagnetic sensor, barometric pressure sensor, temperature sensor, acceleration sensor, gyro sensor, vibration sensor, and the like.
- the above observation information may be acquired from a sensor provided in the moving body 30 or the information processing apparatus 20.
- control information may include, for example, information related to the control of the moving body 30.
- control information may include speed information and steering information.
- the information processing server 10 can use the control information for position estimation related to the observation point of the observation information.
- the map information may include information such as a three-dimensional map and a road map.
- the three-dimensional map may be a three-dimensional feature point map or a polygonal three-dimensional model map.
- The three-dimensional map according to the present embodiment is not limited to a map represented by a feature point group related to still objects; it may be a map to which various types of information are added, such as color information for each feature point, or attribute and physical property information based on object recognition results.
- the environmental information may include weather information including a weather forecast and time information.
- the environment information may include information such as the lighting environment.
- the information processing server 10 can output a recommended feature point list associated with the unit area and the environment information based on the various types of information.
- The information processing server 10 may output the recommended feature point list by sequentially processing: use-data selection by the data selection unit 110, feature point map generation by the feature point extraction unit 120, and recommended feature point extraction by the list generation unit 130.
- FIG. 9 is a data configuration example of a recommended feature point list output by the information processing server 10 through the above processing.
- the recommended feature point list may include information such as unit area coordinates, IDs, feature point coordinates, and feature quantity vectors.
- the unit area coordinates and the feature point coordinates are represented by three-dimensional coordinates based on the X axis, the Y axis, and the Z axis.
- The size of the space related to the unit area may be designed to include a predetermined distance from one coordinate point, as shown in the figure.
- the size of the space related to the unit area may be defined by a plurality of coordinate points.
- the unit area coordinates in the recommended feature point list may be defined by a plurality of coordinate points.
- the recommended feature point list includes a local feature vector related to the feature point.
- the local feature quantity vector may be a data type corresponding to a local descriptor or the like used when extracting the local feature quantity.
- For example, when SIFT (Scale Invariant Feature Transform) is used for extraction, the local feature vectors may be represented as 128-dimensional feature vectors.
- When a feature extractor using a neural network is used, the local feature may be represented by a vector corresponding to the network's output.
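- A minimal sketch of one possible in-memory layout for the recommended feature point list of FIG. 9; the class and field names are assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Coord3D = Tuple[float, float, float]

@dataclass
class RecommendedFeaturePoint:
    # Fields mirror FIG. 9: an ID, 3D feature point coordinates, and a
    # local feature vector (e.g. 128-dimensional when SIFT is used).
    feature_id: int
    coordinates: Coord3D
    descriptor: List[float]

@dataclass
class RecommendedFeaturePointList:
    # A unit area may be one coordinate point plus a distance, or (per the
    # text) several coordinate points; a single point + radius is assumed.
    unit_area_coordinates: Coord3D
    radius_m: float
    environment: str  # e.g. "clear", "rain", "night"
    points: List[RecommendedFeaturePoint] = field(default_factory=list)
```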
- the information processing server 10 can improve the accuracy of self-position estimation by the information processing apparatus 20 by generating the recommended feature point list as described above.
- FIG. 10 is a conceptual diagram illustrating input / output related to the data selection unit 110 according to the present embodiment.
- The data selection unit 110 can select the use data used to generate the recommended feature point list based on input information. That is, the data selection unit 110 according to the present embodiment has a function of selecting observation information suited to the target unit area and environmental state. Note that the various information input to the data selection unit 110 is the same as the input information described with reference to FIG. 8. In FIG. 10, the use data selection function of the data selection unit 110 is shown as function B1.
- FIG. 11 is a flowchart showing a flow related to usage data determination according to the present embodiment.
- the data selection unit 110 first sets a target unit area and a target environment (S1101). At this time, the data selection unit 110 may set a target unit area based on map information including road information, traffic information, or walking history. That is, the information processing server 10 according to the present embodiment can generate a recommended feature point list with higher value by intensively setting an area with a lot of traffic and pedestrians as a target unit area. In this case, the above information may be information included in the input road map or three-dimensional map.
- the data selection unit 110 may set the target unit area and the target environment based on the input by the user.
- the user can set a target by arbitrarily inputting information such as a coordinate position related to the unit area, weather, and time.
- the data selection unit 110 performs environmental suitability determination on the acquired observation information (S1102).
- the data selection unit 110 may perform the environmental suitability determination based on, for example, the observation time associated with the observation information.
- the data selection unit 110 can perform environmental suitability determination based on weather information including the input weather forecast.
- the data selection unit 110 may recognize the weather from various sensor information included in the observation information and perform environmental suitability determination. For example, the data selection unit 110 may determine the weather from the acquired image information.
- When the observation information does not conform to the target environment, the data selection unit 110 may deselect the observation information (S1106) and proceed to the use determination for the next piece of observation information.
- the data selection unit 110 estimates the observation position (S1103). That is, the data selection unit 110 may estimate the position where the observation information is acquired.
- the data selection unit 110 can estimate the approximate observation position and orientation using the GPS information.
- the data selection unit 110 can estimate a rough observation position and orientation based on the immediately preceding GPS information, map information, and control information.
- the control information may be, for example, speed information or steering information acquired from the moving body 30.
- the data selection unit 110 can detect that the moving body 30 has traveled 300 meters in the tunnel and can estimate the observation position based on the immediately preceding GPS information.
- the data selection unit 110 can also estimate a rough observation position based on Wi-Fi information. According to the function of the data selection unit 110, it is possible to estimate a rough observation position related to observation information acquired in a tunnel, indoors, underground, multipath environment, and the like.
- When the estimation of the observation position is completed, the data selection unit 110 subsequently calculates the distance between the estimated observation position and the target unit area, and performs a conformity determination regarding the observation position (S1104). At this time, if the distance between the observation position and the target unit area is equal to or greater than a predetermined threshold ε (S1104: No), the data selection unit 110 determines that the observation information is not related to the target unit area and may decide not to select the data (S1106).
- Otherwise (S1104: Yes), the data selection unit 110 determines that the observation information is related to the target unit area and may select the observation information (S1105).
- the data selection unit 110 can select the observation information used for generating the recommended feature point list based on the input information including the acquired observation information.
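- A hedged sketch of this selection flow (S1101 to S1106); the record layout and the threshold value are assumptions, not values from the patent:

```python
import math

# Keep observation records whose environment matches the target and whose
# estimated observation position lies within a threshold distance epsilon
# of the target unit area.
EPSILON = 50.0  # metres (illustrative)

def select_observations(records, target_area_xyz, target_environment):
    selected = []
    for rec in records:
        # Environmental suitability determination (S1102 -> S1106 if unfit).
        if rec["environment"] != target_environment:
            continue
        # Conformity determination on the estimated position (S1104).
        if math.dist(rec["position"], target_area_xyz) < EPSILON:
            selected.append(rec)  # data selection (S1105)
    return selected
```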
- the feature point extraction unit 120 can generate a feature point map based on the observation information selected by the data selection unit 110.
- FIG. 12 is a conceptual diagram of input / output related to the feature point extraction unit 120. In FIG. 12, the functions of the feature point extraction unit 120 are shown as functions B2 to B5.
- the feature point extraction unit 120 has a function of detecting and describing feature points from a plurality of observation information selected by the data selection unit 110 (function B2). At this time, the feature point extraction unit 120 may detect feature points using local descriptors such as SIFT (Scale Invariant Feature Transform) and SURF (Speeded Up Robust Features). Further, for example, the feature point extraction unit 120 may use a Harris corner detection method or the like.
- the feature point extraction unit 120 has a function of matching each feature point based on the description of the feature points related to the plurality of observation information output from the function B2 (function B3). At this time, the feature point extraction unit 120 matches feature points having a correspondence relationship among a plurality of pieces of observation information. At this time, the feature point extraction unit 120 may perform matching corresponding to the method used for feature point detection. For example, when SIFT or SURF is used for feature point detection, the feature point extraction unit 120 may perform the above matching using a technique widely used in each local descriptor.
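- A minimal sketch of the detection and matching steps (functions B2 and B3) using SIFT with a ratio test; the patent names SIFT and SURF but does not prescribe this exact matcher configuration, so the details below are assumptions:

```python
import cv2

img1 = cv2.imread("obs_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("obs_b.png", cv2.IMREAD_GRAYSCALE)

# Detection and description (function B2) with SIFT.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Matching (function B3) with Lowe's ratio test: keep matches clearly
# better than their second-best candidate to reduce confusion between
# similar-looking feature points.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} matched feature points")
```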
- At the time of feature point matching, the feature point extraction unit 120 may use sensor information included in the observation information and acquired by GPS, a geomagnetic sensor, Wi-Fi, an atmospheric pressure sensor, an acceleration sensor, a gyro sensor, and a vibration sensor.
- The feature point extraction unit 120 can improve feature point matching by using rough position information calculated from the above sensor information.
- The feature point extraction unit 120 has a function (function B4) of calculating the three-dimensional coordinates of the feature points based on the matching information output from function B3, and of calculating the camera parameters corresponding to each piece of observation information from those three-dimensional coordinates.
- the camera parameters described above may include a vector of degrees of freedom of the camera and various internal parameters.
- For example, the camera parameters according to the present embodiment may be the camera position coordinates (X, Y, Z) and the rotation angles (θx, θy, θz) about the respective coordinate axes.
- the camera parameters according to the present embodiment may include internal parameters such as a focal length, an F value, and a shear coefficient, for example.
- The feature point extraction unit 120 may continuously calculate relative values for the positions of the feature points between consecutive frames (RGB-D images), the position vector between the cameras, the three-axis rotation vector of the camera, and the vectors connecting the camera positions and the feature points.
- the feature point extraction unit 120 can perform the above calculation by solving an epipolar equation based on epipolar geometry.
- the feature point extraction unit 120 can improve the calculation of the camera parameters by using the rough position information calculated from the sensor information described above when calculating the camera parameters.
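- A hedged sketch of the epipolar step in function B4: recovering the relative camera rotation and translation from matched points via the essential matrix. The intrinsic matrix values are illustrative assumptions:

```python
import cv2
import numpy as np

# Illustrative camera intrinsics (focal lengths and principal point).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(pts1: np.ndarray, pts2: np.ndarray):
    """pts1, pts2: Nx2 arrays of matched pixel coordinates in two frames."""
    # Essential matrix E satisfying the epipolar constraint x2^T E x1 = 0,
    # estimated robustly with RANSAC.
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    # Decompose E into rotation R and translation t (up to scale).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```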
- the feature point extraction unit 120 has a function of minimizing a projection error based on the camera parameter output from the function B4 (function B5). Specifically, the feature point extraction unit 120 performs statistical processing that minimizes the position distribution of each camera parameter and each feature point.
- the feature point extraction unit 120 can minimize a projection error by detecting a feature point having a large error and deleting the feature point.
- the feature point extraction unit 120 may estimate an optimum solution of the least square method by, for example, the Levenberg-Marquardt method. Thereby, the feature point extraction unit 120 can obtain the camera position, the camera rotation matrix, and the three-dimensional coordinates of the feature points where the error is converged.
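- A minimal sketch of projection error minimization (function B5) as a Levenberg-Marquardt least-squares problem; the parameter packing and the project helper are assumptions made for illustration, not the patent's prescribed implementation:

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def project(point3d, cam_params, K):
    # cam_params packs 3 Rodrigues rotation + 3 translation parameters
    # (an assumed packing for this sketch).
    rvec, tvec = cam_params[:3], cam_params[3:6]
    proj, _ = cv2.projectPoints(point3d.reshape(1, 3), rvec, tvec, K, None)
    return proj.ravel()

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_2d, K):
    # Unpack all camera poses and all 3D feature points from one vector,
    # then compare each reprojection with its 2D observation.
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = [project(pts[j], cams[i], K) - obs
           for i, j, obs in zip(cam_idx, pt_idx, observed_2d)]
    return np.concatenate(res)

# method="lm" selects the Levenberg-Marquardt algorithm named in the text;
# x0 stacks initial camera parameters and feature point coordinates.
# result = least_squares(residuals, x0, method="lm",
#                        args=(n_cams, n_pts, cam_idx, pt_idx, obs_2d, K))
```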
- the feature point extraction unit 120 can generate a feature point map based on the observation information selected by the data selection unit 110. That is, the feature point extraction unit 120 according to the present embodiment can generate a feature point map related to the target unit area and the target environment.
- the above-described feature point map may include a three-dimensional coordinate position and its error relating to the feature point, and a camera position and its error.
- FIG. 13 is a conceptual diagram of input / output related to the list generation unit 130 of the present embodiment.
- the list generation unit 130 outputs a recommended feature point list associated with the unit area and the environment information based on the input feature point map.
- the feature point map may be a feature point map generated by the feature point extraction unit 120.
- the recommended feature point list generation function of the list generation unit 130 is shown as function B6.
- FIG. 14 is a flowchart showing a flow related to the recommended feature point list generation of the present embodiment.
- the list generation unit 130 first ranks feature points described in the feature point map (S1201). At this time, the list generation unit 130 may perform the ranking based on the projection error and camera position error of the feature points included in the feature point map. That is, the list generation unit 130 according to the present embodiment can determine a feature point with a smaller projection error or camera position error as a feature point with high reliability.
- the list generation unit 130 may rank feature points according to the properties of the unit areas. For example, the list generation unit 130 can also select feature points with higher reliability based on the result of object recognition based on the feature point map. For example, in the unit space related to the room, the list generation unit 130 can also determine a feature point related to a static object independent of a building as a feature point with high reliability.
- the list generation unit 130 can perform the above ranking based on the number of observation information in which feature points are observed. That is, the list generation unit 130 according to the present embodiment can determine feature points that can be observed from more observation points in the unit area as feature points with high reliability.
- the list generation unit 130 can perform the above ranking based on the number of observation information observed in another unit area located near the target unit area.
- FIG. 15 is a diagram for explaining feature points that can be observed from a nearby unit area.
- FIG. 15 shows a plurality of unit areas A2 to A5, and a feature point F6 exists in the unit area A2. Further, observation points L5 to L7 are shown in the unit areas A3 to A5, respectively.
- The unit area A2 may be the target unit area for generating the recommended feature point list.
- the feature point F6 existing in the unit area A2 which is the target unit area can be observed from the observation points L5 to L7 in the unit areas A3 to A5 corresponding to the nearby unit areas.
- the list generation unit 130 may rank feature points based on observation information from a unit area located near the target unit area, as shown in FIG. That is, the list generation unit 130 can determine feature points that can be observed in a larger range as feature points with high reliability.
- By performing the above ranking based on observation information acquired in nearby unit areas, the list generation unit 130 can, for example, include in the recommended feature point list feature points that can be observed by a moving body 30 entering the target unit area. Thereby, the information processing apparatus 20 mounted on the moving body 30 can perform control based on feature point information existing in the moving direction of the moving body 30 (see the sketch below).
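- A hedged sketch of the ranking (S1201) and registration (S1202) steps; each entry is assumed to carry the error and observation statistics the text describes, and the scoring weights are illustrative assumptions, not values from the patent:

```python
# Score each feature point from the feature point map: smaller projection
# and camera position errors, and more observations (including those from
# nearby unit areas), give higher reliability.
def reliability_score(fp):
    return (2.0 * fp["n_observations"]                # more observation points
            + 1.0 * fp["n_nearby_area_observations"]  # seen from nearby areas
            - 5.0 * fp["projection_error"]            # smaller errors rank higher
            - 5.0 * fp["camera_position_error"])

def rank_feature_points(feature_map, n_top):
    ranked = sorted(feature_map, key=reliability_score, reverse=True)
    return ranked[:n_top]  # register the top N feature points (S1202)
```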
- the list generation unit 130 registers N feature points in the recommended feature point list based on the ranking (S1202).
- the number of registered feature points may be a predetermined number.
- the predetermined number may be dynamically set according to the size or property of the target unit area.
- the predetermined number may be dynamically set according to the nature of the target environment.
- the list generation unit 130 performs a determination related to the position error (S1203).
- the list generation unit 130 may compare the position information estimated in step S1103 shown in FIG. 11 with the position information calculated at the time of feature point extraction.
- The list generation unit 130 may further register m feature points in the recommended feature point list (S1204). A unit area with a large position error is assumed to be one where GPS information is difficult to acquire and position estimation is therefore difficult, so the list generation unit 130 may register additional feature points for such a unit area as described above.
- For example, the list generation unit 130 can continuously perform self-position identification using a technique such as SLAM, starting from a point X at which high-accuracy position information can be acquired by GPS, and can back-calculate the reliability of each feature point from the resulting error.
- the list generation unit 130 may change the combination of feature points employed in the same route and perform the above processing a plurality of times. In this case, the list generation unit 130 can reversely calculate the reliability of the feature points by using the fact that the error result changes depending on the combination of feature points to be adopted.
- the list generation unit 130 describes the local feature amount (S1205).
- the local feature amount according to the present embodiment may be a vector that describes how a feature point looks on an image. That is, the local feature amount according to the present embodiment represents a feature of a local region in an image by a vector.
- the list generation unit 130 may describe the local feature amount by a feature amount extractor using a neural network.
- The feature quantity extractor may be a learner that has acquired the capability to describe local feature quantities through deep learning or the like.
- the above feature quantity extractor can acquire the description ability related to the local feature quantity, for example, by learning to distinguish different feature quantities from feature point data obtained from all over the world.
- the feature quantity extractor can absorb differences in appearance due to environmental changes including lighting and the like by the above learning, and can realize a highly accurate feature quantity description.
- the feature quantity extractor according to the present embodiment is not limited to this example.
- More broadly, the feature quantity extractor according to the present embodiment may be any learning device that acquires regularities from the relationship between inputs and outputs.
- the list generation unit 130 can also describe local feature amounts using local descriptors without depending on the feature amount extractor.
- the list generation unit 130 may describe the local feature amount using the above-described local descriptor such as SIFT or SURF.
- the list generation unit 130 ranks feature points described in the feature point map, and can register feature points with higher reliability in the recommended feature point list.
- the information processing server 10 can select observation information to be used based on the target unit area and the target environment. Further, the information processing server 10 can generate a feature point map using observation information corresponding to the target unit area and the target environment. In addition, the information processing server 10 can rank the feature points described in the feature point map and include the feature points with high reliability in the recommended feature point list.
- According to the information processing server 10, it is possible to reduce the processing steps related to self-position estimation by the information processing apparatus 20 and to realize high-precision self-position estimation.
- According to the recommended feature point list, it is possible to realize stable self-position estimation based on feature amount information.
- FIG. 16 is a flowchart showing a flow of self-position estimation by the information processing apparatus 20 according to the present embodiment.
- the server communication unit 230 of the information processing apparatus 20 transmits the acquired observation information, control information, and environment information to the information processing server 10 (S1301).
- the observation information may be information acquired from various sensors provided in the information processing apparatus 20 or the moving body 30, and the control information includes speed information and steering of the moving body 30. Information may be included.
- the server communication unit 230 receives a recommended feature point list from the information processing server (S1302).
- the recommended feature point list may be transmitted based on the observation information transmitted in step S1301. That is, the information processing apparatus 20 can acquire a recommended feature point list according to a unit area or an environmental state.
- The calculation unit 210 performs self-position estimation using the recommended feature point list received in step S1302 (S1303).
- the calculation unit 210 can calculate camera parameters including three-dimensional position, posture information, speed, acceleration, three-axis rotation posture, three-axis rotation speed, three-axis rotation acceleration, and the like.
- The function control unit 220 then executes various controls based on the camera parameters calculated in step S1303.
- For example, the information processing apparatus 20 can control the operation of the moving body 30 based on the camera parameters (S1304).
- The information processing apparatus 20 can also perform display control related to virtual reality or augmented reality based on the camera parameters (S1305).
- The information processing apparatus 20 can further perform navigation-related control based on the camera parameters (S1306).
- Steps S1304 to S1306 described above may be executed simultaneously, or the information processing apparatus 20 may execute only some of the processes of steps S1304 to S1306.
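The overall client-side flow of steps S1301 to S1306 can be summarized in the following sketch; the server interface and the estimator object are illustrative assumptions, not APIs defined in the disclosure.

```python
# Hypothetical sketch of one client cycle mirroring steps S1301-S1306.
# send_observation / fetch_recommended_list / estimate are assumed names.
def self_localization_cycle(server, sensors, estimator, controllers):
    observation = sensors.read()                    # gather sensor data
    server.send_observation(observation)            # S1301: send to server
    feature_list = server.fetch_recommended_list()  # S1302: receive list
    camera_params = estimator.estimate(             # S1303: self-position
        observation.image, feature_list)
    for control in controllers:                     # S1304-S1306: any or all
        control.apply(camera_params)                # movement / AR / navigation
```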
- The calculation unit 210 outputs camera parameters based on various types of input information.
- In doing so, the calculation unit 210 can perform matching of feature points using the recommended feature point list acquired from the information processing server 10.
- The above camera parameters may include the various types of information described above with reference to FIG.
- The functions of the calculation unit 210 are shown as functions B7 to B10. That is, the calculation unit 210 may have a feature point description function (B7), a feature point matching function (B8), a camera parameter calculation function (B9), and a projection error minimization function (B10).
- The above functions B7 to B10 may be the same as the functions B2 to B5 shown in FIG.
- As described above, the information processing apparatus 20 according to the present embodiment can acquire a recommended feature point list corresponding to its unit area and environmental state, and can calculate camera parameters using the acquired list. According to these functions of the information processing apparatus 20, it is possible to reduce the processing load of self-position estimation and to realize high-precision self-position estimation.
- FIG. 18 is a block diagram illustrating a hardware configuration example of the information processing server 10 and the information processing apparatus 20 according to the present disclosure.
- The information processing server 10 and the information processing apparatus 20 include, for example, a CPU 871, a ROM 872, a RAM 873, a host bus 874, a bridge 875, an external bus 876, an interface 877, an input device 878, an output device 879, a storage 880, a drive 881, a connection port 882, and a communication device 883.
- The hardware configuration shown here is an example, and some of the components may be omitted; components other than those shown here may also be included.
- The CPU 871 functions, for example, as an arithmetic processing device or a control device, and controls all or part of the operation of each component based on various programs recorded in the ROM 872, the RAM 873, the storage 880, or a removable recording medium 901.
- The ROM 872 is a means for storing programs read by the CPU 871, data used for calculations, and the like.
- The RAM 873 temporarily or permanently stores, for example, programs read by the CPU 871 and various parameters that change as appropriate when those programs are executed.
- The CPU 871, the ROM 872, and the RAM 873 are connected to one another via, for example, a host bus 874 capable of high-speed data transmission.
- The host bus 874 is connected, for example via a bridge 875, to an external bus 876 whose data transmission speed is comparatively low.
- The external bus 876 is connected to various components via an interface 877.
- For the input device 878, for example, a mouse, a keyboard, a touch panel, buttons, switches, and levers are used. Furthermore, a remote controller capable of transmitting control signals using infrared rays or other radio waves may be used as the input device 878.
- The output device 879 is a device that can visually or audibly notify the user of acquired information, for example a display device such as a CRT (Cathode Ray Tube), LCD, or organic EL display, an audio output device such as a speaker or headphones, a printer, a mobile phone, or a facsimile.
- The storage 880 is a device for storing various types of data.
- For the storage 880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, or a magneto-optical storage device is used.
- The drive 881 is a device that reads information recorded on a removable recording medium 901, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information to the removable recording medium 901.
- The removable recording medium 901 is, for example, a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, or one of various semiconductor storage media.
- The removable recording medium 901 may also be, for example, an IC card equipped with a non-contact IC chip, an electronic device, or the like.
- The connection port 882 is a port for connecting an external connection device 902, such as a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface), an RS-232C port, or an optical audio terminal.
- The external connection device 902 is, for example, a printer, a portable music player, a digital camera, a digital video camera, or an IC recorder.
- The communication device 883 is a communication device for connecting to a network, for example a communication card for wired or wireless LAN, Bluetooth (registered trademark), or WUSB (Wireless USB), a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various types of communication.
- As described above, the information processing server 10 can select the observation information to be used based on the target unit area and the target environment, generate a feature point map using observation information that corresponds to both, and rank the feature points described in the feature point map so as to include the feature points with high reliability in the recommended feature point list. The information processing apparatus 20 according to the present disclosure, in turn, can acquire a recommended feature point list according to its unit area and environmental state and can calculate camera parameters using the acquired list. According to such a configuration, it is possible to provide position information with higher accuracy according to the state of the real world.
Description
1. Overview of the present disclosure
1.1. Self-position estimation using feature points
1.2. Autonomous driving by feature point tracking
1.3. Reliability of feature points
2. Embodiment
2.1. System configuration example according to the present embodiment
2.2. Information processing server 10
2.3. Information processing apparatus 20
2.4. Overview of recommended feature point list generation
2.5. Details of selection of data to be used
2.6. Details of three-dimensional map generation
2.7. Details of recommended feature point list generation
2.8. Effects of the present embodiment
2.9. Self-position estimation by the information processing apparatus 20
3. Hardware configuration example
4. Conclusion
<<1.1. Self-position estimation using feature points>>
In recent years, various devices that use map information have been developed. A representative example of such a device is a navigation device. In general, a navigation device can execute functions such as route navigation based on coordinate information acquired by GPS and on the map information it holds.
Here, a device that performs self-position estimation using feature points will be described using a specific example. One such device is, for example, an autonomous vehicle. An autonomous vehicle can recognize the surrounding environment from information acquired by various sensors and can drive autonomously according to the recognized environment. Highly accurate self-position estimation is then required to realize appropriate driving control.
Self-position estimation using feature points and specific examples of devices that perform it have been described above. The information processing method and the information processing apparatus according to the present disclosure were conceived with a focus on the reliability of feature points as described above, and can generate a recommended feature point list based on the reliability of feature points. One characteristic of the recommended feature point list according to the present disclosure is that it also holds the local feature amounts of the feature points.
<<2.1. System configuration example according to the present embodiment>>
First, a system configuration example according to the present embodiment will be described. Referring to FIG. 5, the system according to the present embodiment includes an information processing server 10, information processing apparatuses 20a and 20b, and a moving body 30. The information processing server 10 and the information processing apparatuses 20a and 20b are connected via a network 40 so that they can communicate with one another.
Next, the information processing server 10 according to the present embodiment will be described in detail. The information processing server 10 according to the present embodiment may have a function of receiving observation information collected around a unit area. The information processing server 10 can also generate a feature point list in which the three-dimensional coordinates of feature points detected from the observation information are associated with the local feature amounts of those feature points. Here, this feature point list may be a feature point list associated with environmental information.
The data selection unit 110 has a function of selecting the source observation information when generating the feature point list according to the present embodiment. Specifically, the data selection unit 110 may select the observation information to be used for generating the feature point list based on the unit area and the environmental information.
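A minimal sketch of this selection step follows; the record layout (unit_area_id and environment fields) is a hypothetical assumption made for illustration.

```python
# Sketch of data selection: keep only observations whose unit area and
# environment tags match the generation target. Record fields are assumed.
def select_observations(observations, target_area, target_env):
    selected = []
    for obs in observations:
        if obs["unit_area_id"] != target_area:
            continue
        if all(obs["environment"].get(k) == v for k, v in target_env.items()):
            selected.append(obs)
    return selected

obs_pool = [
    {"unit_area_id": "A-12", "environment": {"weather": "sunny"}},
    {"unit_area_id": "B-03", "environment": {"weather": "rain"}},
]
subset = select_observations(obs_pool, "A-12", {"weather": "sunny"})
```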
The feature point extraction unit 120 has a function of generating a feature point map for the unit area based on the observation information selected by the data selection unit 110. Specifically, in order to generate the feature point map, the feature point extraction unit 120 has functions of detecting feature points from the plurality of pieces of observation information and of matching those feature points.
The list generation unit 130 has a function of generating a feature point list based on the feature point map generated by the feature point extraction unit 120. More specifically, the list generation unit 130 can generate a recommended feature point list in which the feature points are ranked for each unit area.
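A minimal sketch of this per-unit-area ranking follows, scoring each feature point from criteria the present disclosure mentions elsewhere (projection error, position error of the observation point, and number of supporting observations); the particular weights are illustrative assumptions.

```python
# Sketch: rank feature points by a reliability score combining projection
# error, observation-point position error, and observation count.
# The weights (10.0, 5.0) are illustrative assumptions.
def rank_feature_points(points, top_k=100):
    def reliability(p):
        return (p["num_observations"]
                - 10.0 * p["projection_error"]
                - 5.0 * p["position_error"])
    return sorted(points, key=reliability, reverse=True)[:top_k]
```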
The apparatus communication unit 140 has a function of realizing communication with the information processing apparatus 20. Specifically, the apparatus communication unit 140 according to the present embodiment has a function of receiving, from the information processing apparatus 20, observation information collected around a unit area. The apparatus communication unit 140 also has a function of transmitting the recommended feature point list to the information processing apparatus 20 based on a request from the information processing apparatus 20.
Next, the information processing apparatus 20 according to the present embodiment will be described in detail. The information processing apparatus 20 according to the present embodiment may be any of various apparatuses that perform self-position estimation. The information processing apparatus 20 according to the present embodiment therefore has a function of extracting feature points and the local feature amounts of those feature points from acquired image information, as well as a function of acquiring a feature point list based on collected observation information.
The calculation unit 210 has a function of extracting, from the acquired image information, feature points and the local feature amounts of those feature points. The calculation unit 210 also has a function of calculating camera parameters of the imaging unit (not shown) that acquired the image information. Here, the camera parameters may include a three-dimensional coordinate position, posture information, speed, angular velocity, three-axis rotation posture, three-axis rotation speed, three-axis rotation acceleration, and the like.
The function control unit 220 has a function of controlling the operation of the information processing apparatus 20 based on the camera parameters calculated by the calculation unit 210. That is, the function control unit 220 may control various operations suited to the characteristics of the information processing apparatus 20 based on the camera parameters.
The server communication unit 230 has a function of acquiring the recommended feature point list from the information processing server 10 based on the collected observation information. Here, when the information processing apparatus 20 is an autonomous driving AI or a navigation device, the observation information may be acquired from sensors provided in the moving body 30. When the information processing apparatus 20 is an HMD or a wearable device, the observation information may be acquired by various sensors provided in the information processing apparatus 20 itself.
Next, an overview of the generation of the recommended feature point list according to the present embodiment will be given. As described above, the information processing server 10 according to the present embodiment can generate a recommended feature point list for a unit area based on the collected observation information. At this time, the information processing server 10 according to the present embodiment may generate a recommended feature point list associated with environmental information.
Next, the selection of the data to be used by the data selection unit 110 according to the present embodiment will be described in detail. As described above, the data selection unit 110 according to the present embodiment can select the data to be used based on the target unit area for which the recommended feature point list is generated and on the environmental state. FIG. 10 is a conceptual diagram explaining the inputs and outputs of the data selection unit 110 according to the present embodiment.
Next, the details of three-dimensional map generation according to the present embodiment will be described. The feature point extraction unit 120 according to the present embodiment can generate a feature point map based on the observation information selected by the data selection unit 110. FIG. 12 is a conceptual diagram of the inputs and outputs of the feature point extraction unit 120. In FIG. 12, the functions of the feature point extraction unit 120 are shown as functions B2 to B5.
Referring to FIG. 12, the feature point extraction unit 120 has a function of detecting and describing feature points from the plurality of pieces of observation information selected by the data selection unit 110 (function B2). At this time, the feature point extraction unit 120 may detect feature points using local descriptors such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features). The feature point extraction unit 120 can also use, for example, the Harris corner detection method.
The feature point extraction unit 120 also has a function of matching the feature points across observations based on the feature point descriptions output from function B2 (function B3). At this time, the feature point extraction unit 120 matches feature points that correspond to one another among the plurality of pieces of observation information, using a matching method suited to the technique used for feature point detection. For example, when SIFT or SURF is used for feature point detection, the feature point extraction unit 120 may perform the matching using methods widely used with those local descriptors.
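One such widely used method is brute-force matching combined with Lowe's ratio test; the following minimal sketch assumes desc_a and desc_b are SIFT descriptor arrays obtained from two observations.

```python
# Sketch: match SIFT descriptors from two observations with a brute-force
# matcher and Lowe's ratio test. desc_a/desc_b come from detectAndCompute.
import cv2

def match_descriptors(desc_a, desc_b, ratio=0.75):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc_a, desc_b, k=2)
    return [m for m, n in candidates if m.distance < ratio * n.distance]
```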
The feature point extraction unit 120 also has a function of calculating the three-dimensional coordinates of the feature points based on the matching information output from function B3, and of calculating camera parameters corresponding to each piece of observation information from those three-dimensional coordinates (function B4). Here, the camera parameters may include the degrees-of-freedom vector of the camera and various internal parameters. For example, the camera parameters according to the present embodiment may be the position coordinates (X, Y, Z) of the camera and the rotation angles (Φx, Φy, Φz) about the respective coordinate axes. The camera parameters according to the present embodiment may also include internal parameters such as the focal length, F-number, and shear coefficient.
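The three-dimensional coordinate calculation of function B4 can be sketched as triangulation from two matched observations, assuming the 3x4 projection matrices (intrinsic parameters combined with camera position and rotation) are known; the values below are dummies.

```python
# Sketch: triangulate 3D feature point coordinates from matched 2D points
# in two observations. P1/P2 are 3x4 projection matrices (K @ [R | t]).
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # first camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # second

pts1 = np.array([[320.0, 240.0], [400.0, 200.0]]).T  # 2xN points, image 1
pts2 = np.array([[300.0, 240.0], [380.0, 198.0]]).T  # 2xN points, image 2

hom = cv2.triangulatePoints(P1, P2, pts1, pts2)      # 4xN homogeneous
xyz = (hom[:3] / hom[3]).T                           # Nx3 3D coordinates
print(xyz)
```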
The feature point extraction unit 120 also has a function of minimizing the projection error based on the camera parameters output from function B4 (function B5). Specifically, the feature point extraction unit 120 performs statistical processing that jointly refines the camera parameters and the feature point positions so as to minimize the projection error.
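A minimal sketch of such projection-error minimization follows, using scipy.optimize.least_squares; a full bundle adjustment would refine the camera parameters jointly with the points, while this reduced example keeps the cameras fixed for brevity.

```python
# Sketch of projection-error minimization: refine 3D feature point
# positions so their projections match the observed 2D points.
import numpy as np
from scipy.optimize import least_squares

def residuals(flat_points, projections, observations_2d):
    pts = flat_points.reshape(-1, 3)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    res = []
    for P, obs in zip(projections, observations_2d):  # one camera at a time
        proj = (P @ homog.T).T                        # Nx3 homogeneous
        res.append((proj[:, :2] / proj[:, 2:3] - obs).ravel())
    return np.concatenate(res)

def minimize_projection_error(points_3d, projections, observations_2d):
    result = least_squares(residuals, points_3d.ravel(),
                           args=(projections, observations_2d))
    return result.x.reshape(-1, 3)
```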
Next, the generation of the recommended feature point list according to the present embodiment will be described in detail. FIG. 13 is a conceptual diagram of the inputs and outputs of the list generation unit 130 of the present embodiment. Referring to FIG. 13, the list generation unit 130 according to the present embodiment outputs a recommended feature point list associated with a unit area and environmental information based on the input feature point map. Here, the feature point map may be the one generated by the feature point extraction unit 120. In FIG. 13, the recommended feature point list generation function of the list generation unit 130 is shown as function B6.
The functions of the information processing server 10 according to the present embodiment have been described above in detail. As described above, the information processing server 10 according to the present embodiment can select the observation information to be used based on the target unit area and the target environment. The information processing server 10 can also generate a feature point map using the observation information that corresponds to both the target unit area and the target environment. Furthermore, the information processing server 10 can rank the feature points described in the feature point map and include feature points with high reliability in the recommended feature point list.
Next, the self-position estimation performed by the information processing apparatus 20 according to the present embodiment will be described. As described above, the information processing apparatus 20 according to the present embodiment can perform self-position estimation using the recommended feature point list received from the information processing server 10.
Next, a hardware configuration example common to the information processing server 10 and the information processing apparatus 20 according to the present disclosure will be described. FIG. 18 is a block diagram showing a hardware configuration example of the information processing server 10 and the information processing apparatus 20 according to the present disclosure. Referring to FIG. 18, the information processing server 10 and the information processing apparatus 20 include, for example, a CPU 871, a ROM 872, a RAM 873, a host bus 874, a bridge 875, an external bus 876, an interface 877, an input device 878, an output device 879, a storage 880, a drive 881, a connection port 882, and a communication device 883. The hardware configuration shown here is an example, and some of the components may be omitted; components other than those shown here may also be included.
The CPU 871 functions, for example, as an arithmetic processing device or a control device, and controls all or part of the operation of each component based on various programs recorded in the ROM 872, the RAM 873, the storage 880, or a removable recording medium 901.
The ROM 872 is a means for storing programs read by the CPU 871, data used for calculations, and the like. The RAM 873 temporarily or permanently stores, for example, programs read by the CPU 871 and various parameters that change as appropriate when those programs are executed.
The CPU 871, the ROM 872, and the RAM 873 are connected to one another via, for example, a host bus 874 capable of high-speed data transmission. The host bus 874 is in turn connected, for example via a bridge 875, to an external bus 876 whose data transmission speed is comparatively low. The external bus 876 is connected to various components via an interface 877.
For the input device 878, for example, a mouse, a keyboard, a touch panel, buttons, switches, and levers are used. A remote controller capable of transmitting control signals using infrared rays or other radio waves may also be used as the input device 878.
The output device 879 is a device that can visually or audibly notify the user of acquired information, for example a display device such as a CRT (Cathode Ray Tube), LCD, or organic EL display, an audio output device such as a speaker or headphones, a printer, a mobile phone, or a facsimile.
The storage 880 is a device for storing various types of data. For the storage 880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, or a magneto-optical storage device is used.
The drive 881 is a device that reads information recorded on a removable recording medium 901, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information to the removable recording medium 901.
The removable recording medium 901 is, for example, a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, or one of various semiconductor storage media. Of course, the removable recording medium 901 may also be, for example, an IC card equipped with a non-contact IC chip, an electronic device, or the like.
The connection port 882 is a port for connecting an external connection device 902, such as a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface), an RS-232C port, or an optical audio terminal.
The external connection device 902 is, for example, a printer, a portable music player, a digital camera, a digital video camera, or an IC recorder.
The communication device 883 is a communication device for connecting to a network, for example a communication card for wired or wireless LAN, Bluetooth (registered trademark), or WUSB (Wireless USB), a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various types of communication.
As described above, the information processing server 10 according to the present disclosure can select the observation information to be used based on the target unit area and the target environment. The information processing server 10 can also generate a feature point map using the observation information that corresponds to both the target unit area and the target environment. Furthermore, the information processing server 10 can rank the feature points described in the feature point map and include feature points with high reliability in the recommended feature point list. In addition, the information processing apparatus 20 according to the present disclosure can acquire a recommended feature point list according to a unit area and an environmental state, and can calculate camera parameters using the acquired recommended feature point list. With such a configuration, it becomes possible to provide position information with higher accuracy according to the state of the real world.
(1)
An information processing method including:
generating, by a processor, a feature point list in which three-dimensional coordinates of feature points detected from observation information collected around a unit area are associated with local feature amounts of the feature points.
(2)
The information processing method according to (1), in which generating the feature point list further includes generating the feature point list associated with environmental information.
(3)
The information processing method according to (2), in which the environmental information includes information on at least one of weather, lighting environment, atmospheric condition, time, or date.
(4)
The information processing method according to any one of (1) to (3), in which generating the feature point list further includes generating a feature point list in which the feature points are ranked for each unit area.
(5)
The information processing method according to (4), in which generating the feature point list further includes performing the ranking based on at least one of a projection error of the feature points or a position error of an observation point of the observation information.
(6)
The information processing method according to (4) or (5), in which generating the feature point list further includes performing the ranking based on the number of pieces of observation information related to the feature points.
(7)
The information processing method according to any one of (4) to (6), in which generating the feature point list further includes performing the ranking based on the number of pieces of observation information, related to the feature points, observed in another unit area located near the unit area for which the feature point list is generated.
(8)
The information processing method according to any one of (1) to (7), in which the unit area is an area in an indoor space.
(9)
The information processing method according to any one of (1) to (7), in which generating the feature point list further includes generating the feature point list for a unit area selected based on at least one of map information, traffic history, or walking history.
(10)
The information processing method according to any one of (1) to (9), further including transmitting the feature point list to an information processing apparatus based on a request from the information processing apparatus.
(11)
An information processing apparatus including:
a calculation unit that extracts feature points and local feature amounts of the feature points from acquired image information; and
a communication unit that acquires a feature point list based on collected observation information,
in which the calculation unit performs self-position estimation based on the local feature amounts and the feature point list, and
the feature point list includes three-dimensional coordinate positions of the feature points associated with a unit area that includes an observation point of the observation information, and the local feature amounts of the feature points.
(12)
The information processing apparatus according to (11), in which the communication unit acquires the feature point list further based on collected environmental information, and the feature point list is a feature point list associated with the environmental information.
(13)
The information processing apparatus according to (11) or (12), in which the calculation unit calculates camera parameters of an imaging unit that acquired the image information, and the camera parameters include at least one of a three-dimensional coordinate position, posture information, speed, acceleration, three-axis rotation posture, three-axis rotation speed, or three-axis rotation acceleration.
(14)
The information processing apparatus according to (13), further including an operation control unit that controls operation of a moving body based on the camera parameters.
(15)
The information processing apparatus according to (13) or (14), further including a display control unit that performs display control related to at least one of virtual reality or augmented reality based on the camera parameters.
(16)
The information processing apparatus according to any one of (13) to (15), further including a navigation unit that performs route navigation for a moving body based on the camera parameters.
(17)
An information processing apparatus including:
a communication unit that receives observation information collected around a unit area; and
a list generation unit that generates a feature point list in which three-dimensional coordinates of feature points detected from the observation information are associated with local feature amounts of the feature points.
110 Data selection unit
120 Feature point extraction unit
130 List generation unit
140 Apparatus communication unit
20 Information processing apparatus
210 Calculation unit
220 Function control unit
230 Server communication unit
30 Moving body
40 Network
Claims (17)
1. An information processing method comprising:
generating, by a processor, a feature point list in which three-dimensional coordinates of feature points detected from observation information collected around a unit area are associated with local feature amounts of the feature points.
2. The information processing method according to claim 1, wherein generating the feature point list further includes generating the feature point list associated with environmental information.
3. The information processing method according to claim 2, wherein the environmental information includes information on at least one of weather, lighting environment, atmospheric condition, time, or date.
4. The information processing method according to claim 1, wherein generating the feature point list further includes generating a feature point list in which the feature points are ranked for each unit area.
5. The information processing method according to claim 4, wherein generating the feature point list further includes performing the ranking based on at least one of a projection error of the feature points or a position error of an observation point of the observation information.
6. The information processing method according to claim 4, wherein generating the feature point list further includes performing the ranking based on the number of pieces of observation information related to the feature points.
7. The information processing method according to claim 4, wherein generating the feature point list further includes performing the ranking based on the number of pieces of observation information, related to the feature points, observed in another unit area located near the unit area for which the feature point list is generated.
8. The information processing method according to claim 1, wherein the unit area includes an area in an indoor space.
9. The information processing method according to claim 1, wherein generating the feature point list further includes generating the feature point list for a unit area selected based on at least one of map information, traffic history, or walking history.
10. The information processing method according to claim 1, further comprising transmitting the feature point list to an information processing apparatus based on a request from the information processing apparatus.
11. An information processing apparatus comprising:
a calculation unit that extracts feature points and local feature amounts of the feature points from acquired image information; and
a communication unit that acquires a feature point list based on collected observation information,
wherein the calculation unit performs self-position estimation based on the local feature amounts and the feature point list, and
the feature point list includes three-dimensional coordinate positions of the feature points associated with a unit area that includes an observation point of the observation information, and the local feature amounts of the feature points.
12. The information processing apparatus according to claim 11, wherein the communication unit acquires the feature point list further based on collected environmental information, and the feature point list is a feature point list associated with the environmental information.
13. The information processing apparatus according to claim 11, wherein the calculation unit calculates camera parameters of an imaging unit that acquired the image information, and the camera parameters include at least one of a three-dimensional coordinate position, posture information, speed, acceleration, three-axis rotation posture, three-axis rotation speed, or three-axis rotation acceleration.
14. The information processing apparatus according to claim 13, further comprising an operation control unit that controls operation of a moving body based on the camera parameters.
15. The information processing apparatus according to claim 13, further comprising a display control unit that performs display control related to at least one of virtual reality or augmented reality based on the camera parameters.
16. The information processing apparatus according to claim 13, further comprising a navigation unit that performs route navigation for a moving body based on the camera parameters.
17. An information processing apparatus comprising:
a communication unit that receives observation information collected around a unit area; and
a list generation unit that generates a feature point list in which three-dimensional coordinates of feature points detected from the observation information are associated with local feature amounts of the feature points.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/068,794 US10949712B2 (en) | 2016-03-30 | 2016-12-27 | Information processing method and information processing device |
EP16897114.1A EP3438925A4 (en) | 2016-03-30 | 2016-12-27 | INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING DEVICE |
JP2018508398A JP6897668B2 (ja) | 2016-03-30 | 2016-12-27 | 情報処理方法および情報処理装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-068929 | 2016-03-30 | ||
JP2016068929 | 2016-03-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017168899A1 true WO2017168899A1 (ja) | 2017-10-05 |
Family
ID=59963882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/088890 WO2017168899A1 (ja) | 2016-03-30 | 2016-12-27 | 情報処理方法および情報処理装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10949712B2 (ja) |
EP (1) | EP3438925A4 (ja) |
JP (1) | JP6897668B2 (ja) |
WO (1) | WO2017168899A1 (ja) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019074532A (ja) * | 2017-10-17 | 2019-05-16 | 有限会社ネットライズ | Slamデータに実寸法を付与する方法とそれを用いた位置測定 |
JP2019097065A (ja) * | 2017-11-24 | 2019-06-20 | Kddi株式会社 | パラメータ特定装置及びパラメータ特定方法 |
JP2019182412A (ja) * | 2018-04-13 | 2019-10-24 | バイドゥ ユーエスエイ エルエルシーBaidu USA LLC | 自動運転車に用いられる自動データラベリング |
WO2019216005A1 (ja) * | 2018-05-09 | 2019-11-14 | 株式会社日立製作所 | 自己位置推定システム、自律移動システム及び自己位置推定方法 |
WO2019220765A1 (ja) * | 2018-05-17 | 2019-11-21 | 株式会社Soken | 自己位置推定装置 |
CN111344716A (zh) * | 2017-11-14 | 2020-06-26 | 奇跃公司 | 经由单应性变换适应的全卷积兴趣点检测和描述 |
EP3761629A4 (en) * | 2018-02-26 | 2021-04-21 | Sony Corporation | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS AND PROGRAM |
WO2021117552A1 (ja) * | 2019-12-10 | 2021-06-17 | ソニーグループ株式会社 | 情報処理システム、情報処理方法、及びプログラム |
JP2022508103A (ja) * | 2018-11-15 | 2022-01-19 | マジック リープ, インコーポレイテッド | 自己改良ビジュアルオドメトリを実施するためのシステムおよび方法 |
WO2023090213A1 (ja) | 2021-11-18 | 2023-05-25 | ソニーグループ株式会社 | 情報処理装置、情報処理方法及びプログラム |
WO2023188927A1 (ja) | 2022-03-31 | 2023-10-05 | 株式会社アイシン | 自己位置誤差推定装置及び自己位置誤差推定方法 |
US11797603B2 (en) | 2020-05-01 | 2023-10-24 | Magic Leap, Inc. | Image descriptor network with imposed hierarchical normalization |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6882664B2 (ja) * | 2017-02-07 | 2021-06-02 | 富士通株式会社 | 移動***置推定システム、移動***置推定端末装置、情報格納装置、及び移動***置推定方法 |
JP6894725B2 (ja) * | 2017-03-09 | 2021-06-30 | キヤノン株式会社 | 画像処理装置及びその制御方法、プログラム、記憶媒体 |
US10424079B2 (en) * | 2017-04-05 | 2019-09-24 | Here Global B.V. | Unsupervised approach to environment mapping at night using monocular vision |
US11694303B2 (en) * | 2019-03-19 | 2023-07-04 | Electronics And Telecommunications Research Institute | Method and apparatus for providing 360 stitching workflow and parameter |
JP7227072B2 (ja) * | 2019-05-22 | 2023-02-21 | 日立Astemo株式会社 | 車両制御装置 |
CN110348463B (zh) * | 2019-07-16 | 2021-08-24 | 北京百度网讯科技有限公司 | 用于识别车辆的方法和装置 |
AU2020366385A1 (en) * | 2019-10-15 | 2022-05-12 | Alarm.Com Incorporated | Navigation using selected visual landmarks |
DE102021000652A1 (de) | 2021-02-09 | 2022-08-11 | Mercedes-Benz Group AG | Verfahren zur Prädikation verkehrsführungsrelevanter Parameter |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005038402A1 (ja) * | 2003-10-21 | 2005-04-28 | Waro Iwane | ナビゲーション装置 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007069724A1 (ja) * | 2005-12-16 | 2007-06-21 | Ihi Corporation | 三次元形状データの位置合わせ方法と装置 |
US7925049B2 (en) * | 2006-08-15 | 2011-04-12 | Sri International | Stereo-based visual odometry method and system |
DE102006062061B4 (de) * | 2006-12-29 | 2010-06-10 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung, Verfahren und Computerprogramm zum Bestimmen einer Position basierend auf einem Kamerabild von einer Kamera |
US8189925B2 (en) * | 2009-06-04 | 2012-05-29 | Microsoft Corporation | Geocoding by image matching |
JP2011043419A (ja) * | 2009-08-21 | 2011-03-03 | Sony Corp | 情報処理装置、および情報処理方法、並びにプログラム |
JP5505723B2 (ja) * | 2010-03-31 | 2014-05-28 | アイシン・エィ・ダブリュ株式会社 | 画像処理システム及び位置測位システム |
KR101686171B1 (ko) * | 2010-06-08 | 2016-12-13 | 삼성전자주식회사 | 영상 및 거리 데이터를 이용한 위치 인식 장치 및 방법 |
KR20120071203A (ko) * | 2010-12-22 | 2012-07-02 | 한국전자통신연구원 | 운전자의 시야 확보 장치 및, 운전자의 시야 확보 방법 |
US8401225B2 (en) * | 2011-01-31 | 2013-03-19 | Microsoft Corporation | Moving object segmentation using depth images |
US8195394B1 (en) * | 2011-07-13 | 2012-06-05 | Google Inc. | Object detection and classification for autonomous vehicles |
US20140323148A1 (en) * | 2013-04-30 | 2014-10-30 | Qualcomm Incorporated | Wide area localization from slam maps |
US9037396B2 (en) * | 2013-05-23 | 2015-05-19 | Irobot Corporation | Simultaneous localization and mapping for a mobile robot |
WO2015049717A1 (ja) * | 2013-10-01 | 2015-04-09 | 株式会社日立製作所 | 移動***置推定装置および移動***置推定方法 |
EP3100206B1 (en) * | 2014-01-30 | 2020-09-09 | Mobileye Vision Technologies Ltd. | Systems and methods for lane end recognition |
JP6394005B2 (ja) * | 2014-03-10 | 2018-09-26 | 株式会社リコー | 投影画像補正装置、投影する原画像を補正する方法およびプログラム |
US9958864B2 (en) * | 2015-11-04 | 2018-05-01 | Zoox, Inc. | Coordination of dispatching and maintaining fleet of autonomous vehicles |
US9754490B2 (en) * | 2015-11-04 | 2017-09-05 | Zoox, Inc. | Software application to request and control an autonomous vehicle service |
US9734455B2 (en) * | 2015-11-04 | 2017-08-15 | Zoox, Inc. | Automated extraction of semantic information to enhance incremental mapping modifications for robotic vehicles |
US9630619B1 (en) * | 2015-11-04 | 2017-04-25 | Zoox, Inc. | Robotic vehicle active safety systems and methods |
CN107554430B (zh) * | 2017-09-20 | 2020-01-17 | 京东方科技集团股份有限公司 | 车辆盲区可视化方法、装置、终端、***及车辆 |
US10970553B2 (en) * | 2017-11-15 | 2021-04-06 | Uatc, Llc | Semantic segmentation of three-dimensional data |
2016
- 2016-12-27 JP JP2018508398A patent/JP6897668B2/ja active Active
- 2016-12-27 US US16/068,794 patent/US10949712B2/en active Active
- 2016-12-27 EP EP16897114.1A patent/EP3438925A4/en not_active Ceased
- 2016-12-27 WO PCT/JP2016/088890 patent/WO2017168899A1/ja active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005038402A1 (ja) * | 2003-10-21 | 2005-04-28 | Waro Iwane | ナビゲーション装置 |
Non-Patent Citations (2)
Title |
---|
AKIRA KUDO ET AL.: "Kotonaru Kogen Kankyo ni Okeru Gazo Tokucho no Gankensei no Chosa", IPSJ SIG NOTES COMPUTER VISION TO IMAGE MEDIA (CVIM, 15 January 2015 (2015-01-15), XP009509226 * |
See also references of EP3438925A4 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019074532A (ja) * | 2017-10-17 | 2019-05-16 | 有限会社ネットライズ | Slamデータに実寸法を付与する方法とそれを用いた位置測定 |
JP7270623B2 (ja) | 2017-11-14 | 2023-05-10 | マジック リープ, インコーポレイテッド | ホモグラフィ適合を介した完全畳み込み着目点検出および記述 |
JP7403700B2 (ja) | 2017-11-14 | 2023-12-22 | マジック リープ, インコーポレイテッド | ホモグラフィ適合を介した完全畳み込み着目点検出および記述 |
CN111344716A (zh) * | 2017-11-14 | 2020-06-26 | 奇跃公司 | 经由单应性变换适应的全卷积兴趣点检测和描述 |
JP2021503131A (ja) * | 2017-11-14 | 2021-02-04 | マジック リープ, インコーポレイテッドMagic Leap,Inc. | ホモグラフィ適合を介した完全畳み込み着目点検出および記述 |
JP2019097065A (ja) * | 2017-11-24 | 2019-06-20 | Kddi株式会社 | パラメータ特定装置及びパラメータ特定方法 |
EP3761629A4 (en) * | 2018-02-26 | 2021-04-21 | Sony Corporation | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS AND PROGRAM |
JP2019182412A (ja) * | 2018-04-13 | 2019-10-24 | バイドゥ ユーエスエイ エルエルシーBaidu USA LLC | 自動運転車に用いられる自動データラベリング |
US10816984B2 (en) | 2018-04-13 | 2020-10-27 | Baidu Usa Llc | Automatic data labelling for autonomous driving vehicles |
WO2019216005A1 (ja) * | 2018-05-09 | 2019-11-14 | 株式会社日立製作所 | 自己位置推定システム、自律移動システム及び自己位置推定方法 |
WO2019220765A1 (ja) * | 2018-05-17 | 2019-11-21 | 株式会社Soken | 自己位置推定装置 |
JP2019200160A (ja) * | 2018-05-17 | 2019-11-21 | 株式会社Soken | 自己位置推定装置 |
US11921291B2 (en) | 2018-11-15 | 2024-03-05 | Magic Leap, Inc. | Systems and methods for performing self-improving visual odometry |
JP7357676B2 (ja) | 2018-11-15 | 2023-10-06 | マジック リープ, インコーポレイテッド | 自己改良ビジュアルオドメトリを実施するためのシステムおよび方法 |
JP2022508103A (ja) * | 2018-11-15 | 2022-01-19 | マジック リープ, インコーポレイテッド | 自己改良ビジュアルオドメトリを実施するためのシステムおよび方法 |
WO2021117552A1 (ja) * | 2019-12-10 | 2021-06-17 | ソニーグループ株式会社 | 情報処理システム、情報処理方法、及びプログラム |
US11797603B2 (en) | 2020-05-01 | 2023-10-24 | Magic Leap, Inc. | Image descriptor network with imposed hierarchical normalization |
WO2023090213A1 (ja) | 2021-11-18 | 2023-05-25 | ソニーグループ株式会社 | 情報処理装置、情報処理方法及びプログラム |
WO2023188927A1 (ja) | 2022-03-31 | 2023-10-05 | 株式会社アイシン | 自己位置誤差推定装置及び自己位置誤差推定方法 |
Also Published As
Publication number | Publication date |
---|---|
US20190019062A1 (en) | 2019-01-17 |
EP3438925A4 (en) | 2019-04-17 |
JPWO2017168899A1 (ja) | 2019-02-07 |
US10949712B2 (en) | 2021-03-16 |
JP6897668B2 (ja) | 2021-07-07 |
EP3438925A1 (en) | 2019-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017168899A1 (ja) | 情報処理方法および情報処理装置 | |
US11900536B2 (en) | Visual-inertial positional awareness for autonomous and non-autonomous tracking | |
KR102434580B1 (ko) | 가상 경로를 디스플레이하는 방법 및 장치 | |
US11749124B2 (en) | User interaction with an autonomous unmanned aerial vehicle | |
US10410328B1 (en) | Visual-inertial positional awareness for autonomous and non-autonomous device | |
US10366508B1 (en) | Visual-inertial positional awareness for autonomous and non-autonomous device | |
US11415986B2 (en) | Geocoding data for an automated vehicle | |
WO2017045251A1 (en) | Systems and methods for uav interactive instructions and control | |
KR20180050823A (ko) | 3차원의 도로 모델을 생성하는 방법 및 장치 | |
KR102239562B1 (ko) | 항공 관측 데이터와 지상 관측 데이터 간의 융합 시스템 | |
CN111288989A (zh) | 一种小型无人机视觉定位方法 | |
WO2022062480A1 (zh) | 移动设备的定位方法和定位装置 | |
Qian et al. | Wearable-assisted localization and inspection guidance system using egocentric stereo cameras | |
US20220412741A1 (en) | Information processing apparatus, information processing method, and program | |
Chai et al. | Multi-sensor fusion-based indoor single-track semantic map construction and localization | |
US11947354B2 (en) | Geocoding data for an automated vehicle | |
Abdulov et al. | Visual odometry approaches to autonomous navigation for multicopter model in virtual indoor environment | |
Partanen et al. | Implementation and Accuracy Evaluation of Fixed Camera-Based Object Positioning System Employing CNN-Detector | |
Hernández et al. | Visual SLAM with oriented landmarks and partial odometry | |
Zeng et al. | Robotic Relocalization Algorithm Assisted by Industrial Internet of Things and Artificial Intelligence | |
Wang et al. | Pedestrian positioning in urban city with the aid of Google maps street view | |
JP2021047744A (ja) | 情報処理装置、情報処理方法及び情報処理プログラム | |
CN113932814B (zh) | 一种基于多模态地图的协同定位方法 | |
Mengling et al. | A crowdsourcing based indoor topology construction algorithm using the forward and backward track fusion of user closed trajectory | |
US20240037759A1 (en) | Target tracking method, device, movable platform and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2018508398 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2016897114 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2016897114 Country of ref document: EP Effective date: 20181030 |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16897114 Country of ref document: EP Kind code of ref document: A1 |