WO2024085286A1 - Method for establishing context map based on collaboration of drone and robot - Google Patents

Method for establishing context map based on collaboration of drone and robot

Info

Publication number
WO2024085286A1
Authority
WO
WIPO (PCT)
Prior art keywords
context
map
area
grid
location
Prior art date
Application number
PCT/KR2022/016139
Other languages
French (fr)
Korean (ko)
Inventor
이석준
최충재
성낙명
Original Assignee
한국전자기술연구원 (Korea Electronics Technology Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자기술연구원 (Korea Electronics Technology Institute)
Publication of WO2024085286A1 publication Critical patent/WO2024085286A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image

Definitions

  • the present invention relates to map creation technology, and more specifically, to a method of constructing a context map of an outdoor environment through collaboration between various moving objects such as drones and robots.
  • Maps are constructed using data obtained through the various sensors of moving vehicles. SLAM (Simultaneous Localization and Mapping) is a representative map construction technique.
  • A map constructed in this manner carries no information beyond the geographic data itself, such as context (situational-awareness information). Although there are maps with rough environmental information added through semantic techniques, they are far too limited for path planning in autonomous driving or for the control of dangerous situations.
  • the present invention was devised to solve the above problems, and the purpose of the present invention is to provide a method of constructing a context map of an outdoor environment through collaboration between various moving objects such as drones and robots.
  • A context map construction method according to an embodiment of the present invention includes: generating a 2D grid map in which a target area is partitioned into a grid of multiple zones; obtaining context for a specific zone of the target area; determining the location of the specific zone; and updating the acquired context by matching it to the corresponding zone of the 2D grid map based on the identified location.
  • The generation step includes: generating a 3D map of the target area; projecting the generated 3D map onto a 2D map; and partitioning the generated 2D map into a grid of multiple zones.
  • The identification step may include, when a ground vehicle has acquired the context in the acquisition step, treating the position the ground vehicle has measured for itself as the location of the specific zone.
  • The identification step may include, when an aerial vehicle has acquired the context in the acquisition step, calculating the location of the specific zone by the aerial vehicle.
  • The calculation step includes: acquiring an image showing ground vehicles and the specific zone; a first calculation step of calculating the in-image positions of the ground vehicles and the specific zone; obtaining the positions of the ground vehicles on the 2D grid map; determining, from the in-image positions of the ground vehicles and their positions on the 2D grid map, a transformation matrix for converting the coordinate system of the acquired image into that of the 2D grid map; and a second calculation step of calculating the location of the specific zone using the determined matrix.
  • The determination step may include: a first transformation step of transforming the in-image positions of the ground vehicles by applying the transformation matrix; replacing the position transformed in the first transformation step for a specific one of the ground vehicles with its position on the 2D grid map; a second transformation step of transforming again, based on the replaced position of the specific ground vehicle, the positions transformed in the first transformation step for the remaining ground vehicles; a third calculation step of calculating the errors between the positions of the remaining ground vehicles transformed in the second transformation step and their positions on the 2D grid map; and repeating the first transformation step through the third calculation step while modifying the transformation matrix until the total sum of the calculated errors falls below a predetermined value.
  • The second calculation step may include: a third transformation step of transforming the in-image position of the specific zone by applying the transformation matrix obtained when the total sum of the calculated errors falls below the predetermined value; a fourth transformation step of transforming again, based on the replaced position of the specific ground vehicle, the position transformed in the third transformation step; and outputting the position transformed in the fourth transformation step as the location of the specific zone.
  • Contexts may include information about the place of the zone, information about objects in the zone, and information about the state of the zone.
  • The information about objects may include the types and number of objects in the zone, and the information about the state may include the congestion and risk factors of the zone.
  • A context map construction system includes: a communication unit that acquires context for a specific zone of a target area; and a processor that generates a 2D grid map partitioning the target area into a grid of multiple zones, determines the location of the specific zone, and updates the context acquired through the communication unit by matching it to the corresponding zone of the 2D grid map.
  • A context map construction method according to another embodiment includes: acquiring context for a specific zone of a target area; determining the location of the specific zone; updating the acquired context, based on the identified location, by matching it to the corresponding zone of a 2D grid map that partitions the target area into a grid of multiple zones; and providing the updated 2D grid map.
  • A context map construction system according to another embodiment includes: a communication unit that acquires context for a specific zone of a target area; and a processor that determines the location of the specific zone, updates the context acquired through the communication unit by matching it to the corresponding zone of a 2D grid map that partitions the target area into a grid of multiple zones, and provides the updated 2D grid map to external entities through the communication unit.
  • the context occurrence point can be accurately identified by utilizing the positions of ground mobile devices such as robots for the context acquired by an aerial mobile device such as a drone, thereby improving the accuracy of the context map.
  • FIG. 1 is a flowchart provided to explain a context map construction method according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating a situation in which a drone and mobile robots are acquiring contexts in the target area.
  • FIG. 3 is a detailed flowchart of step S150 of FIG. 1.
  • FIG. 4 is a diagram illustrating a situation in which one of the ground vehicles detects a context.
  • FIG. 5 is a diagram illustrating the context detected by a ground vehicle.
  • FIG. 6 is a diagram showing the result of matching the detected context to the corresponding zone of the context map based on the identified location.
  • FIGS. 7 and 8 are detailed flowcharts of step S200 of FIG. 3.
  • FIG. 9 is a diagram illustrating a situation in which an aerial vehicle acquires an image.
  • FIG. 10 is a diagram illustrating the context detected by an aerial vehicle.
  • FIG. 11 is a diagram showing the result of matching the detected context to the corresponding zone of the context map based on the calculated location.
  • FIG. 12 is a block diagram of a context map construction system according to another embodiment of the present invention.
  • An embodiment of the present invention presents a method of building and updating a context map by acquiring contexts in an outdoor environment through collaboration between ground and aerial vehicles.
  • The context fused into the context map is specific situational-awareness information rather than rough environmental information, and it is matched to and stored in each of the zones into which the context map is partitioned in grid form.
  • FIG. 1 is a flowchart provided to explain a context map construction method according to an embodiment of the present invention.
  • To build the context map, a 3D map of the target area is first generated (S110), and the generated 3D map is projected onto a 2D plane to obtain a 2D map (S120).
  • The 2D map generated in step S120 is then partitioned into a grid, producing a 2D grid map in which the target area is divided into multiple zones (S130).
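  • As a rough illustration, the sketch below builds such a 2D grid map from a 3D point cloud in Python; the projection method, cell size, and data layout are assumptions, since the patent does not fix them.

```python
import numpy as np

def build_2d_grid_map(points_3d: np.ndarray, cell_size: float):
    """Steps S110-S130: project a 3D map (here a point cloud) onto a 2D
    plane and partition its bounding box into grid cells (zones)."""
    # S120: orthographic projection onto the ground plane (drop the height axis)
    points_2d = points_3d[:, :2]

    # S130: partition the bounding box of the target area into cells
    origin = points_2d.min(axis=0)
    extent = points_2d.max(axis=0) - origin
    n_cols, n_rows = np.ceil(extent / cell_size).astype(int)

    # one (initially empty) context record per zone
    grid = [[{} for _ in range(n_cols)] for _ in range(n_rows)]
    return grid, origin
```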
  • Context is acquired/collected from corresponding areas of the target area (S140).
  • Context can be provided from a moving object moving in the target area. It is also possible to receive contexts from multiple moving objects rather than one. Even if there are multiple moving objects, only one context map is constructed.
  • a mobile object can create context by recognizing various situations based on the sensors it possesses (image sensor, lidar, radar, environmental sensor, etc.).
  • Mobile vehicles can be divided into aerial vehicles such as drones and ground mobile vehicles such as mobile robots and autonomous vehicles.
  • Figure 2 illustrates a situation where drones and mobile robots are acquiring contexts in the target area.
  • A context includes, for the zone concerned, 1) place information, 2) object information, and 3) state information.
  • 1) Place information describes what kind of place the zone is (road, building, park, sidewalk, etc.).
  • 2) Object information describes the objects (people, vehicles, etc.) in the zone, including their types and counts.
  • 3) State information describes the condition of the zone, including congestion (road congestion, attainable movement speed, crowd density, etc.) and risk factors (obstacles, obstructions, fire, flooding, crime, etc.).
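  • For concreteness, one possible in-memory form of such a context record is sketched below; the field names and types are illustrative assumptions, as the patent only enumerates the three categories of information.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """One context record for a zone of the 2D grid map."""
    place: str = ""                                # 1) place info: road, building, park, sidewalk, ...
    objects: dict = field(default_factory=dict)    # 2) object info: e.g. {"person": 3, "vehicle": 1}
    congestion: float = 0.0                        # 3) state info: congestion level of the zone
    hazards: list = field(default_factory=list)    # 3) state info: obstacles, fire, flooding, crime, ...
```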
  • Once the contexts are acquired, the locations of the points where they occurred must be identified (S150), since location information is needed to match each context to the corresponding zone of the context map. The specific method of determining the locations is described in detail later with reference to FIG. 3.
  • the context map is updated by matching the contexts obtained in step S140 to corresponding areas of the 2D grid map based on the location information identified in step S150 (S160).
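  • The matching in step S160 amounts to converting the identified occurrence position into grid indices and storing the context there, roughly as sketched below (a hypothetical helper that assumes the dict-based grid layout from the earlier sketch).

```python
import numpy as np

def update_context_map(grid, origin, cell_size, position, context: dict):
    """S160: match an acquired context to the zone of the 2D grid map
    that contains the identified occurrence position."""
    # convert a map position to (column, row) cell indices
    col, row = ((np.asarray(position) - origin) // cell_size).astype(int)
    # merge the new context fields into the zone's stored record
    grid[row][col].update(context)
```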
  • the context map updated through step S160 can be used to create a route for autonomous driving of a moving object and for local control (S170).
  • The vehicle that uses the context map in step S170 may be one that acquires and provides context in step S140, i.e., one that contributes to updating the context map, or one that does not.
  • FIG. 3 is a detailed flowchart of step S150 of Figure 1.
  • the location information of the context occurrence point is identified in different ways depending on the type of moving object that senses the context.
  • When the vehicle that sensed the context is a ground vehicle such as a mobile robot (S151-Y), the position the ground vehicle has measured for itself is treated as the context occurrence point (S152), since the ground vehicle is located at that point.
  • FIG. 4 illustrates a situation in which one of the ground vehicles detects a context, FIG. 5 illustrates the context detected by the ground vehicle, and FIG. 6 shows the result of matching the detected context to the corresponding zone of the context map based on the location identified in step S152.
  • When the vehicle that acquired the context is an aerial vehicle such as a drone (S153-Y), the aerial vehicle calculates the location of the context occurrence point (S200), since the aerial vehicle may not be at the occurrence point but in a zone far away from it.
  • For this calculation, the aerial vehicle relies on the positions of the ground vehicles around that point.
  • the process by which an airborne vehicle calculates the location of a context occurrence point will be described in detail below with reference to FIGS. 7 and 8.
  • Figures 7 and 8 are detailed flowcharts of step S200 of Figure 3.
  • To calculate the location of the context occurrence point, the aerial vehicle first acquires an image showing the ground vehicles and the context occurrence point (S205). The situation in which the aerial vehicle acquires this image in step S205 is illustrated in FIG. 9.
  • Next, the aerial vehicle calculates the in-image positions of the ground vehicles and of the context occurrence point (S210).
  • An in-image position calculated in step S210 means pixel coordinates in the image.
  • The aerial vehicle then obtains the positions on the 2D grid map of the ground vehicles shown in the image (S215).
  • The 2D-grid-map positions of the ground vehicles are their actual positions.
  • The aerial vehicle then transforms the in-image positions of the ground vehicles calculated in step S210 by applying an image-map coordinate transformation matrix, which converts the image's coordinate system into that of the 2D grid map (S220).
  • Next, for a specific (any one) ground vehicle, the aerial vehicle replaces the position transformed in step S220 with that vehicle's 2D-grid-map position obtained in step S215 (S225).
  • Then, based on the replaced position of the specific ground vehicle from step S225, the positions of the remaining ground vehicles transformed by the image-map coordinate transformation matrix in step S220 are transformed again (S230).
  • The transformation in step S230 translates the positions of the remaining ground vehicles in parallel, by the same displacement applied to the specific ground vehicle in step S225.
  • The aerial vehicle calculates the error (difference) between each remaining ground vehicle's position transformed in step S230 and its 2D-grid-map position obtained in step S215 (S235).
  • the aerial vehicle repeats steps S220 to S235 while modifying the image-map coordinate conversion matrix until the total sum of errors calculated in step S235 becomes less than a predetermined value (S240, S245). This corresponds to the process of determining the optimal transformation matrix.
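  • A minimal sketch of this optimization loop is given below. The patent does not specify how the matrix is modified between rounds, so random perturbation of the best matrix found so far is used here purely as an illustrative stand-in; the matrix form (a 2x2 linear part plus the anchor translation) is likewise an assumption.

```python
import numpy as np

def fit_image_to_map(px: np.ndarray, gt: np.ndarray,
                     tol: float = 0.5, iters: int = 500, scale: float = 1e-2):
    """Steps S220-S245: determine the image-map coordinate transformation
    from the robots' pixel positions px (N x 2) and their true 2D-grid-map
    positions gt (N x 2). Robot 0 plays the role of the 'specific' vehicle."""
    A = np.eye(2)                       # current candidate matrix
    best_err, best = np.inf, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        mapped = px @ A.T                           # S220: transform all robots
        shift = gt[0] - mapped[0]                   # S225: anchor robot 0 on its true position
        mapped = mapped + shift                     # S230: same parallel shift for the rest
        err = np.abs(mapped[1:] - gt[1:]).sum()     # S235: errors of the remaining robots
        if err < best_err:
            best_err, best = err, (A.copy(), shift)
        if best_err < tol:                          # S240: below the predetermined value
            break
        A = best[0] + rng.normal(scale=scale, size=(2, 2))   # S245: modify the matrix
    return best                                     # (matrix, anchor translation)
```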
  • When the total sum of the errors calculated in step S235 falls below the predetermined value (S240-Y), the aerial vehicle applies the finally determined image-map coordinate transformation matrix to the in-image position of the context occurrence point calculated in step S210 of FIG. 7, transforming that position (S250).
  • Next, based on the replaced position of the specific ground vehicle in step S225, the aerial vehicle transforms again the position of the context occurrence point produced in step S250 (S255).
  • As in step S225, the transformation in step S255 is a parallel translation of the context occurrence point's position.
  • Finally, the aerial vehicle outputs the position transformed in step S255 as the location of the context occurrence point (S260).
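  • Continuing the sketch above, steps S250-S260 apply the optimized matrix and the same anchor translation to the context occurrence point:

```python
import numpy as np

def locate_context_point(ctx_px, matrix, shift):
    """S250-S260: map the context point's pixel coordinates into the
    2D grid map with the finally determined matrix (S250), then apply
    the parallel translation used for the anchor robot (S255)."""
    return np.asarray(ctx_px) @ matrix.T + shift

# usage with the previous sketch (illustrative):
#   matrix, shift = fit_image_to_map(px, gt)
#   point = locate_context_point(ctx_px, matrix, shift)   # S260 output
```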
  • FIG. 10 illustrates a context detected by the aerial vehicle, and FIG. 11 shows the result of matching the detected context to the corresponding zone of the context map based on the location determined through the steps shown in FIGS. 7 and 8.
  • Figure 12 is a block diagram of a context map construction system according to another embodiment of the present invention.
  • As shown, the context map construction system according to an embodiment of the present invention comprises a communication unit 310, a processor 320, and a storage unit 330.
  • the communication unit 310 is a means for mutual communication with a mobile object moving in the target area.
  • the processor 320 creates a 2D grid map for the target area and matches contexts obtained from moving objects through the communication unit 310 to the 2D grid map to build and update the context map.
  • the processor 320 provides a context map through the communication unit 310 to a moving object that requires a context map.
  • the storage unit 330 provides the storage space needed by the processor 320 to build and update the context map.
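  • Put together, the system of FIG. 12 could be skeletonized as below; the communication unit is abstracted away as a plain method call, and update_context_map is the hypothetical helper from the earlier sketch.

```python
class ContextMapSystem:
    """Sketch of FIG. 12: processor (320) logic over a grid held in
    storage (330); communication (310) is reduced to method calls."""

    def __init__(self, grid, origin, cell_size):
        self.grid = grid                 # storage unit 330
        self.origin = origin
        self.cell_size = cell_size

    def on_context(self, position, context):
        # processor 320: match an incoming context to its zone
        update_context_map(self.grid, self.origin, self.cell_size,
                           position, context)

    def provide_map(self):
        # serve the context map to any vehicle that requests it
        return self.grid
```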
  • the location of the context occurrence can be accurately identified by utilizing the positions of a ground vehicle such as a robot, thereby improving the accuracy of the context map.
  • FIGS. 7 and 8 assume that the steps are performed by the aerial vehicle. If the aerial vehicle's resources are limited, however, it is possible to have the aerial vehicle perform only step S205 of FIG. 7 and let the context map construction system shown in FIG. 12 perform the remaining steps.
  • When contexts with differing contents are acquired for the same zone at the same time by multiple vehicles, they can be matched to the zone with priorities assigned by vehicle type. Assuming contexts acquired by a mobile robot and a drone, priority can be assigned as follows (see the sketch after this list).
  • 1) For place information (road, building, park, sidewalk, etc.), the context acquired by the drone, an aerial vehicle, is trusted, so the zone's place information is matched to the drone's information.
  • 2) For object information (people, vehicles, etc.), the context acquired by the mobile robot, a ground vehicle, is trusted, so the zone's object information is matched to the robot's information.
  • 3) For congestion among the state information (road congestion, attainable movement speed, crowd density, etc.), the drone's context is trusted, so the zone's congestion information is matched to the drone's information.
  • 4) For risk factors among the state information (obstacles, obstructions, fire, flooding, accidents, crime, etc.), the mobile robot's context is trusted, so the zone's risk-factor information is matched to the robot's information.
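  • A sketch of such priority-based merging is given below; the field names and source tags are assumptions carried over from the earlier context sketch.

```python
# the drone is trusted for place and congestion, the robot for objects and hazards
PRIORITY = {"place": "drone", "congestion": "drone",
            "objects": "robot", "hazards": "robot"}

def merge_contexts(by_source: dict) -> dict:
    """Merge simultaneously acquired contexts for one zone, e.g.
    by_source = {"drone": {...}, "robot": {...}}; the preferred source
    wins per field, with a fallback to whoever reported the field."""
    merged = {}
    for field_name, preferred in PRIORITY.items():
        for src in (preferred, *by_source):
            if field_name in by_source.get(src, {}):
                merged[field_name] = by_source[src][field_name]
                break
    return merged
```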
  • The technical idea of the present invention may also be applied to a computer-readable recording medium containing a computer program that performs the functions of the apparatus and method according to these embodiments.
  • The technical idea according to various embodiments of the present invention may be implemented in the form of computer-readable code recorded on a computer-readable recording medium.
  • a computer-readable recording medium can be any data storage device that can be read by a computer and store data.
  • computer-readable recording media can be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, hard disk drive, etc.
  • computer-readable codes or programs stored on a computer-readable recording medium may be transmitted through a network connected between computers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided is a method for establishing a context map based on a collaboration of a drone and a robot. The method for establishing a context map according to an embodiment of the present invention includes the steps of: generating a 2D grid map in which a target area is divided to have a grid shape so as to distinguish a plurality of zones and obtaining context with respect to a specific zone of the target area; and identifying a position of a specific zone, and matching and updating the obtained context to a corresponding zone of the 2D grid map on the basis of the identified position. As such, it is possible to provide a high-quality service using various pieces of context by matching multiple pieces of specific context obtained and collected by collaboration of various moving vehicles to the context map in which the target area is divided in a grid shape and establishing and updating same.

Description

Method for establishing a context map based on collaboration between drones and robots
The present invention relates to map construction technology and, more particularly, to a method of building a context map of an outdoor environment through collaboration among various moving vehicles such as drones and robots.
A map is built from data acquired through a vehicle's various sensors. SLAM (Simultaneous Localization and Mapping) is a representative map construction technique.
A map built in this way carries no information beyond the geographic data itself, such as context (situational-awareness information). Although maps exist with rough environmental information added through semantic techniques, they are far too limited for path planning in autonomous driving or for the control of dangerous situations.
Meanwhile, when context is to be reflected in a map, a way to incorporate the many contexts obtained from various vehicles must also be considered.
The present invention was devised to solve the above problems, and its purpose is to provide a method of building a context map of an outdoor environment through collaboration among various moving vehicles such as drones and robots.
To achieve this object, a context map construction method according to an embodiment of the present invention includes: generating a 2D grid map in which a target area is partitioned into a grid of multiple zones; obtaining context for a specific zone of the target area; determining the location of the specific zone; and updating the obtained context by matching it to the corresponding zone of the 2D grid map based on the identified location.
The generating step may include: generating a 3D map of the target area; projecting the generated 3D map onto a 2D map; and partitioning the generated 2D map into a grid of multiple zones.
The determining step may include, when a ground vehicle has obtained the context in the obtaining step, treating the position the ground vehicle has measured for itself as the location of the specific zone.
The determining step may include, when an aerial vehicle has obtained the context in the obtaining step, calculating the location of the specific zone by the aerial vehicle.
The calculating step may include: acquiring an image showing ground vehicles and the specific zone; a first calculation step of calculating the in-image positions of the ground vehicles and the specific zone; obtaining the positions of the ground vehicles on the 2D grid map; determining, from the in-image positions of the ground vehicles and their positions on the 2D grid map, a transformation matrix for converting the coordinate system of the acquired image into that of the 2D grid map; and a second calculation step of calculating the location of the specific zone using the determined matrix.
The determination of the matrix may include: a first transformation step of transforming the in-image positions of the ground vehicles by applying the transformation matrix; replacing the position transformed in the first transformation step for a specific one of the ground vehicles with its position on the 2D grid map; a second transformation step of transforming again, based on the replaced position of the specific ground vehicle, the positions transformed in the first transformation step for the remaining ground vehicles; a third calculation step of calculating the errors between the positions of the remaining ground vehicles transformed in the second transformation step and their positions on the 2D grid map; and repeating these steps while modifying the transformation matrix until the total sum of the calculated errors falls below a predetermined value.
The second calculation step may include: a third transformation step of transforming the in-image position of the specific zone by applying the transformation matrix obtained when the total sum of the calculated errors falls below the predetermined value; a fourth transformation step of transforming again, based on the replaced position of the specific ground vehicle, the position transformed in the third transformation step; and outputting the position transformed in the fourth transformation step as the location of the specific zone.
The contexts may include information about the place of the zone, information about the objects in the zone, and information about the state of the zone.
The information about objects may include the types and number of objects in the zone, and the information about the state may include the congestion and risk factors of the zone.
A context map construction system according to another embodiment of the present invention includes: a communication unit that obtains context for a specific zone of a target area; and a processor that generates a 2D grid map in which the target area is partitioned into a grid of multiple zones, determines the location of the specific zone, and updates the context obtained through the communication unit by matching it to the corresponding zone of the 2D grid map.
A context map construction method according to yet another embodiment includes: obtaining context for a specific zone of a target area; determining the location of the specific zone; updating the obtained context, based on the identified location, by matching it to the corresponding zone of a 2D grid map in which the target area is partitioned into a grid of multiple zones; and providing the updated 2D grid map.
A context map construction system according to yet another embodiment includes: a communication unit that obtains context for a specific zone of a target area; and a processor that determines the location of the specific zone, updates the context obtained through the communication unit by matching it to the corresponding zone of a 2D grid map in which the target area is partitioned into a grid of multiple zones, and provides the updated 2D grid map to external entities through the communication unit.
As described above, according to embodiments of the present invention, the specific contexts acquired and collected through collaboration among various vehicles such as drones, robots, and self-driving cars are matched to a context map in which the target area is partitioned into a grid, and the map is built and updated accordingly, making it possible to provide high-quality services that use the various contexts.
Furthermore, according to embodiments of the present invention, even for a context acquired by an aerial vehicle such as a drone, the context occurrence point can be identified accurately using the positions of ground vehicles such as robots, which improves the accuracy of the context map.
FIG. 1 is a flowchart provided to explain a context map construction method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a situation in which a drone and mobile robots are acquiring contexts in the target area;
FIG. 3 is a detailed flowchart of step S150 of FIG. 1;
FIG. 4 is a diagram illustrating a situation in which one of the ground vehicles detects a context;
FIG. 5 is a diagram illustrating the context detected by a ground vehicle;
FIG. 6 is a diagram showing the result of matching the detected context to the corresponding zone of the context map based on the identified location;
FIGS. 7 and 8 are detailed flowcharts of step S200 of FIG. 3;
FIG. 9 is a diagram illustrating a situation in which an aerial vehicle acquires an image;
FIG. 10 is a diagram illustrating the context detected by an aerial vehicle;
FIG. 11 is a diagram showing the result of matching the detected context to the corresponding zone of the context map based on the calculated location; and
FIG. 12 is a block diagram of a context map construction system according to another embodiment of the present invention.
Hereinafter, the present invention is described in more detail with reference to the drawings.
An embodiment of the present invention presents a method of building and updating a context map by acquiring contexts in an outdoor environment through collaboration between ground and aerial vehicles.
The context fused into the context map is specific situational-awareness information rather than rough environmental information, and it is matched to and stored in each of the zones into which the context map is partitioned in grid form.
FIG. 1 is a flowchart provided to explain a context map construction method according to an embodiment of the present invention.
To build the context map, a 3D map of the target area is first generated (S110), and the generated 3D map is projected onto a 2D plane to obtain a 2D map (S120). The 2D map generated in step S120 is then partitioned into a grid, producing a 2D grid map in which the target area is divided into multiple zones (S130).
Steps S110 to S130 secure the 2D grid map needed for context map construction. The following describes the process of acquiring contexts and fusing them into the 2D grid map to build the context map.
To this end, contexts are acquired and collected in the corresponding zones of the target area (S140). A context can be provided by a vehicle moving through the target area, and contexts may be received from several vehicles rather than just one; even with multiple vehicles, only one context map is built.
A vehicle can generate context by recognizing various situations with the sensors it carries (image sensor, lidar, radar, environmental sensors, etc.). Vehicles divide into aerial vehicles such as drones and ground vehicles such as mobile robots and self-driving cars. FIG. 2 illustrates a situation in which a drone and mobile robots are acquiring contexts in the target area.
A context includes, for the zone concerned, 1) place information, 2) object information, and 3) state information. Place information describes what kind of place the zone is (road, building, park, sidewalk, etc.). Object information describes the objects (people, vehicles, etc.) in the zone, including their types and counts. State information describes the condition of the zone, including congestion (road congestion, attainable movement speed, crowd density, etc.) and risk factors (obstacles, obstructions, fire, flooding, crime, etc.).
Once the contexts are acquired, the locations of the points where they occurred must be identified (S150), since location information is needed to match each context to the corresponding zone of the context map. The specific method of determining the locations is described in detail later with reference to FIG. 3.
Next, the context map is updated by matching the contexts obtained in step S140 to the corresponding zones of the 2D grid map, based on the location information identified in step S150 (S160).
The context map updated through step S160 can be used for route generation for autonomous driving of a vehicle and for area control (S170). The vehicle that uses the context map in step S170 may be one that acquires and provides context in step S140, i.e., one that contributes to updating the context map, or one that does not.
The following describes in detail, with reference to FIG. 3, how a context occurrence point is identified. FIG. 3 is a detailed flowchart of step S150 of FIG. 1. The location of a context occurrence point is identified in different ways depending on the type of vehicle that sensed the context.
Specifically, as shown in FIG. 3, when the vehicle that sensed the context is a ground vehicle such as a mobile robot (S151-Y), the position the ground vehicle has measured for itself is treated as the context occurrence point (S152), since the ground vehicle is located at that point.
FIG. 4 illustrates a situation in which one of the ground vehicles detects a context, FIG. 5 illustrates the context detected by the ground vehicle, and FIG. 6 shows the result of matching the detected context to the corresponding zone of the context map based on the location identified in step S152.
Referring again to FIG. 3, when the vehicle that acquired the context is an aerial vehicle such as a drone (S153-Y), the aerial vehicle calculates the location of the context occurrence point (S200), since the aerial vehicle may not be at the occurrence point but in a zone far away from it.
To calculate the location of the context occurrence point, the aerial vehicle uses the positions of the ground vehicles around that point. This process is described in detail below with reference to FIGS. 7 and 8, which are detailed flowcharts of step S200 of FIG. 3.
To calculate the location of the context occurrence point, the aerial vehicle first acquires an image showing the ground vehicles and the context occurrence point (S205). The situation in which the aerial vehicle acquires this image is illustrated in FIG. 9.
Next, the aerial vehicle calculates the in-image positions of the ground vehicles and of the context occurrence point (S210). An in-image position means pixel coordinates in the image.
The aerial vehicle then obtains the positions on the 2D grid map of the ground vehicles shown in the image (S215); these are the ground vehicles' actual positions.
The aerial vehicle then transforms the in-image positions of the ground vehicles calculated in step S210 by applying an image-map coordinate transformation matrix, which converts the image's coordinate system into that of the 2D grid map (S220).
Next, for a specific (any one) ground vehicle, the aerial vehicle replaces the position transformed in step S220 with that vehicle's 2D-grid-map position obtained in step S215 (S225).
Then, based on the replaced position of the specific ground vehicle in step S225, the positions of the remaining ground vehicles transformed in step S220 are transformed again (S230). The transformation in step S230 translates the positions of the remaining ground vehicles in parallel, by the same displacement applied to the specific ground vehicle in step S225.
As shown in FIG. 8, the aerial vehicle then calculates the error (difference) between each remaining ground vehicle's position transformed in step S230 and its 2D-grid-map position obtained in step S215 (S235).
The aerial vehicle repeats steps S220 to S235 while modifying the image-map coordinate transformation matrix until the total sum of the errors calculated in step S235 falls below a predetermined value (S240, S245). This corresponds to determining the optimal transformation matrix.
When the total sum of the errors calculated in step S235 falls below the predetermined value (S240-Y), the aerial vehicle applies the finally determined image-map coordinate transformation matrix to the in-image position of the context occurrence point calculated in step S210 of FIG. 7, transforming that position (S250).
Next, based on the replaced position of the specific ground vehicle in step S225, the aerial vehicle again transforms the position of the context occurrence point produced in step S250 (S255). As in step S225, the transformation in step S255 is a parallel translation of the context occurrence point's position.
Finally, the aerial vehicle outputs the position transformed in step S255 as the location of the context occurrence point (S260).
FIG. 10 illustrates a context detected by the aerial vehicle, and FIG. 11 shows the result of matching the detected context to the corresponding zone of the context map based on the location determined through the steps shown in FIGS. 7 and 8.
FIG. 12 is a block diagram of a context map construction system according to another embodiment of the present invention. As shown, the system comprises a communication unit 310, a processor 320, and a storage unit 330.
The communication unit 310 is the means for communicating with the vehicles moving in the target area.
The processor 320 generates the 2D grid map of the target area and builds and updates the context map by matching the contexts obtained from the vehicles through the communication unit 310 to the 2D grid map.
The processor 320 also provides the context map through the communication unit 310 to any vehicle that needs it.
The storage unit 330 provides the storage space the processor 320 needs to build and update the context map.
The method of building a context map based on drone-robot collaboration has been described above through preferred embodiments.
In these embodiments, the specific contexts acquired and collected through collaboration among various vehicles such as drones, mobile robots, and self-driving cars are matched to a context map in which the target area is partitioned into a grid, enabling high-quality services that use the various contexts.
In particular, even for a context acquired by an aerial vehicle such as a drone, the occurrence point can be identified accurately using the positions of ground vehicles such as robots, raising the accuracy of the context map.
The above description assumed that the steps of FIGS. 7 and 8 are performed by the aerial vehicle. If the aerial vehicle's resources are limited, however, it is possible to have the aerial vehicle perform only step S205 of FIG. 7 and let the context map construction system of FIG. 12 perform the remaining steps.
Meanwhile, contexts may be obtained for the same zone at the same time from multiple vehicles. In that case the obtained contexts must be matched to the zone, which is a problem when their contents differ.
The contexts can then be matched to the zone with priorities assigned by vehicle type. Assuming contexts acquired by a mobile robot and a drone, priority can be assigned as follows.
1) For place information (road, building, park, sidewalk, etc.), the context acquired by the drone, an aerial vehicle, is trusted, so the zone's place information is matched to the drone's information.
2) For object information (people, vehicles, etc.), the context acquired by the mobile robot, a ground vehicle, is trusted, so the zone's object information is matched to the robot's information.
3) For congestion among the state information (road congestion, attainable movement speed, crowd density, etc.), the drone's context is trusted, so the zone's congestion information is matched to the drone's information.
4) For risk factors among the state information (obstacles, obstructions, fire, flooding, accidents, crime, etc.), the mobile robot's context is trusted, so the zone's risk-factor information is matched to the robot's information.
The technical idea of the present invention may also be applied to a computer-readable recording medium containing a computer program that performs the functions of the apparatus and method according to these embodiments. The technical idea according to various embodiments of the present invention may be implemented in the form of computer-readable code recorded on a computer-readable recording medium. The computer-readable recording medium can be any data storage device that can be read by a computer and can store data, for example ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical disk, or hard disk drive. Computer-readable code or programs stored on a computer-readable recording medium may also be transmitted over a network connecting computers.
While preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific embodiments described; various modifications may be made by those of ordinary skill in the art without departing from the gist of the invention as claimed, and such modifications should not be understood separately from the technical idea or prospect of the present invention.

Claims (12)

  1. 대상 지역을 그리드 형태로 구획하여 다수의 구역들을 구분한 2D 그리드 맵을 생성하는 단계;Creating a 2D grid map dividing the target area into a grid and dividing it into multiple zones;
    대상 지역의 특정 구역에 대한 컨텍스트를 획득하는 단계;Obtaining context for a specific area of the target area;
    특정 구역의 위치를 파악하는 단계;Determining the location of a specific area;
    획득한 컨텍스트를 파악된 위치를 기초로 2D 그리드 맵의 해당 구역에 매칭하여 업데이트 하는 단계;를 포함하는 것을 특징으로 하는 컨텍스트 맵 구축 방법.A context map construction method comprising: updating the acquired context by matching it to the corresponding area of the 2D grid map based on the identified location.
  2. 청구항 1에 있어서,In claim 1,
    생성단계는,The creation stage is,
    대상 지역의 3D 맵을 생성하는 단계;generating a 3D map of the target area;
    생성된 3D 맵을 2D 맵으로 프로젝션하는 단계;Projecting the generated 3D map into a 2D map;
    생성된 2D 맵을 그리드 형태로 구획하여 다수의 구역들을 구분하는 단계;를 포함하는 것을 특징으로 하는 컨텍스트 맵 구축 방법.A context map construction method comprising dividing the generated 2D map into a grid and dividing it into a plurality of zones.
  3. 청구항 1에 있어서,In claim 1,
    파악 단계는,The understanding step is,
    획득단계에서 지상 이동체가 컨텍스트를 획득하였으면, 지상 이동체가 측위한 자신의 위치를 특정 구역의 위치로 취급하는 단계;를 포함하는 것을 특징으로 하는 컨텍스트 맵 구축 방법.If the ground mobile unit acquires the context in the acquisition step, treating its location as determined by the ground mobile unit as the location of a specific area. A context map construction method comprising:
  4. 청구항 1에 있어서,In claim 1,
    파악 단계는,The understanding step is,
    획득단계에서 공중 이동체가 컨텍스트를 획득하였으면, 공중 이동체가 특정 구역의 위치를 산출하는 단계;를 포함하는 것을 특징으로 하는 컨텍스트 맵 구축 방법.If the aerial vehicle acquires the context in the acquisition step, a context map construction method comprising: calculating the location of a specific area by the aerial vehicle.
  5. 청구항 4에 있어서,In claim 4,
    산출 단계는,The output step is,
    지상 이동체들과 특정 구역이 나타난 영상을 획득하는 단계;Obtaining images showing ground moving objects and specific areas;
    지상 이동체들과 특정 구역에 대한 영상에서의 위치들을 계산하는 제1 계산단계;A first calculation step of calculating positions in the image for ground moving objects and a specific area;
    지상 이동체들의 2D 그리드 맵에서의 위치들을 획득하는 단계;Obtaining positions in a 2D grid map of ground moving objects;
    지상 이동체들에 대한 영상에서의 위치들과 지상 이동체들의 2D 그리드 맵에서의 위치들을 이용하여, 획득단계에서 획득된 영상의 좌표계를 2D 그리드 맵의 좌표계로 변환하기 위한 변환 행렬을 결정하는 단계; 및Using the positions of the ground moving objects in the image and the positions of the ground moving objects in the 2D grid map, determining a transformation matrix for converting the coordinate system of the image acquired in the acquisition step into the coordinate system of the 2D grid map; and
    결정된 변환 행렬을 이용하여 특정 구역의 위치를 계산하는 제2 계산단계;를 포함하는 것을 특징으로 하는 컨텍스트 맵 구축 방법.A context map construction method comprising a second calculation step of calculating the location of a specific area using the determined transformation matrix.
  6. The method of claim 5, wherein the determining of the transformation matrix comprises:
     a first transformation step of transforming the positions of the ground vehicles in the image by applying the transformation matrix;
     replacing the position transformed in the first transformation step for a specific ground vehicle among the ground vehicles with that vehicle's position in the 2D grid map;
     a second transformation step of re-transforming, relative to the replaced position of the specific ground vehicle, the positions transformed in the first transformation step for the remaining ground vehicles;
     a third calculation step of calculating the errors between the positions of the remaining ground vehicles transformed in the second transformation step and their positions in the 2D grid map; and
     repeating the first transformation step through the third calculation step while modifying the transformation matrix until the total sum of the calculated errors falls below a predetermined value.
  7. The method of claim 6, wherein the second calculation step comprises:
     a third transformation step of transforming the position of the specific area in the image by applying the transformation matrix obtained when the total sum of the calculated errors falls below the predetermined value;
     a fourth transformation step of re-transforming, relative to the replaced position of the specific ground vehicle, the position transformed in the third transformation step; and
     outputting the position transformed in the fourth transformation step as the location of the specific area.
  8. The method of claim 1, wherein the contexts include information about the place of the corresponding area, information about objects in the area, and information about the state of the area.
  9. The method of claim 5, wherein the information about the objects includes information about the types and number of objects in the corresponding area, and the information about the state includes information about the congestion level of the area and about risk factors in the area.
  10. A context map construction system comprising:
     a communication unit that obtains a context for a specific area of a target area; and
     a processor that creates a 2D grid map in which the target area is divided into a grid to delimit a plurality of areas, determines the location of the specific area, and updates the 2D grid map by matching the context obtained through the communication unit to the corresponding area.
  11. A context map construction method comprising:
     obtaining a context for a specific area of a target area;
     determining a location of the specific area;
     updating, based on the determined location, the obtained context by matching it to the corresponding area of a 2D grid map in which the target area is divided into a grid to delimit a plurality of areas; and
     providing the updated 2D grid map.
  12. A context map construction system comprising:
     a communication unit that obtains a context for a specific area of a target area; and
     a processor that determines the location of the specific area, updates a 2D grid map, in which the target area is divided into a grid to delimit a plurality of areas, by matching the context obtained through the communication unit to the corresponding area, and provides the updated 2D grid map to external entities through the communication unit.
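
To make the update step of claim 1 concrete, below is a minimal sketch in Python; the grid origin, cell size, context payload format, and all names are assumptions for illustration, not part of the claimed method.

```python
# Sketch of claim 1's update step: a context, tagged with a world-frame
# location, is matched to the cell of a sparse 2D grid map that covers it.
# CELL_SIZE_M, ORIGIN, and the payload format are assumed values.

CELL_SIZE_M = 5.0           # assumed edge length of one grid cell, in meters
ORIGIN = (0.0, 0.0)         # assumed world coordinate of cell (0, 0)

grid_map: dict[tuple[int, int], dict] = {}   # sparse 2D grid map

def to_cell(x: float, y: float) -> tuple[int, int]:
    """Convert a world-frame position to a (row, col) grid index."""
    return (int((y - ORIGIN[1]) // CELL_SIZE_M),
            int((x - ORIGIN[0]) // CELL_SIZE_M))

def update_context(x: float, y: float, context: dict) -> None:
    """Merge a newly obtained context into the cell covering (x, y)."""
    grid_map.setdefault(to_cell(x, y), {}).update(context)

# e.g. a ground vehicle reports what it sees at its own localized position
update_context(12.3, 48.7, {"congestion": "high", "objects": {"person": 14}})
```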
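
Claim 2's generation steps can be pictured as follows, assuming the 3D map is an N x 3 point cloud and the 2D projection is an orthographic drop of the height axis; both assumptions go beyond what the claim specifies.

```python
import numpy as np

def build_grid_map(points_3d: np.ndarray, cell_size: float = 5.0) -> np.ndarray:
    """Project a 3D point cloud onto the ground plane and grid the result."""
    xy = points_3d[:, :2]                     # projection: drop the height axis
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell_size).astype(int)  # cell index per point
    rows, cols = idx.max(axis=0) + 1
    grid = np.zeros((rows, cols), dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True         # mark cells containing any point
    return grid
```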
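
For claims 4 and 5, one way to realize the transformation matrix is a least-squares affine fit between the ground vehicles' pixel positions in the aerial image and their self-reported grid-map positions; the claims leave the matrix form and the solver open, so the affine model below is an assumption.

```python
import numpy as np

def fit_transform(img_pts: np.ndarray, map_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine matrix mapping image coords to map coords."""
    a = np.hstack([img_pts, np.ones((len(img_pts), 1))])  # homogeneous coords
    m, *_ = np.linalg.lstsq(a, map_pts, rcond=None)
    return m.T                                            # shape (2, 3)

def apply_transform(m: np.ndarray, pt: np.ndarray) -> np.ndarray:
    """Map one image-frame point into the 2D grid map frame."""
    return m @ np.append(pt, 1.0)

img_pts = np.array([[320., 240.], [410., 180.], [150., 300.]])  # vehicles in image
map_pts = np.array([[12., 48.], [17., 52.], [4., 45.]])         # vehicles on map
m = fit_transform(img_pts, map_pts)
area_on_map = apply_transform(m, np.array([260., 220.]))        # the specific area
```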
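
The anchored refinement of claims 6 and 7 could look like the sketch below: one vehicle's transformed position is replaced by its true map position, the same shift is re-applied to the others, and the matrix is modified until the summed error of the remaining vehicles drops below a threshold. Random perturbation is only one possible way to modify the matrix; the claims do not fix the optimizer.

```python
import numpy as np

def refine(m, img_pts, map_pts, anchor=0, tol=1.0, iters=1000, seed=0):
    """Perturb the 2x3 matrix until the anchored total error drops below tol."""
    rng = np.random.default_rng(seed)
    h = np.hstack([img_pts, np.ones((len(img_pts), 1))])  # homogeneous image pts
    others = [i for i in range(len(img_pts)) if i != anchor]

    def anchored_error(mat):
        t = (mat @ h.T).T                        # first transformation step
        t = t + (map_pts[anchor] - t[anchor])    # replace anchor, shift the rest
        return sum(np.linalg.norm(t[i] - map_pts[i]) for i in others)

    err = anchored_error(m)
    for _ in range(iters):
        if err < tol:                            # total error below the set value
            break
        cand = m + rng.normal(0.0, 0.01, m.shape)   # modify the matrix
        cand_err = anchored_error(cand)
        if cand_err < err:
            m, err = cand, cand_err
    return m

def locate_area(m, img_pts, map_pts, area_img_pt, anchor=0):
    """Claim 7: transform the area's image position, then apply the anchor shift."""
    shift = map_pts[anchor] - m @ np.append(img_pts[anchor], 1.0)
    return m @ np.append(area_img_pt, 1.0) + shift
```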
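
A context payload satisfying claims 8 and 9 might be structured as below; the field names are assumptions, since the claims only require place, object, and state information.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    place: str                                   # what the area is, e.g. "plaza"
    objects: dict[str, int] = field(default_factory=dict)  # object type -> count
    congestion: str = "unknown"                  # state: congestion level
    hazards: list[str] = field(default_factory=list)       # state: risk factors

ctx = Context(place="plaza",
              objects={"person": 14, "bicycle": 2},
              congestion="high",
              hazards=["wet surface"])
```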
PCT/KR2022/016139 2022-10-21 2022-10-21 Method for establishing context map based on collaboration of drone and robot WO2024085286A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2022-0136214 2022-10-21
KR1020220136214A KR102530521B1 (en) 2022-10-21 2022-10-21 Context map build method based on collaboration between drones and robots

Publications (1)

Publication Number Publication Date
WO2024085286A1 (en)

Family

ID=86408042

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/016139 WO2024085286A1 (en) 2022-10-21 2022-10-21 Method for establishing context map based on collaboration of drone and robot

Country Status (2)

Country Link
KR (1) KR102530521B1 (en)
WO (1) WO2024085286A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180109455A (en) * 2017-03-28 2018-10-08 엘지전자 주식회사 Method of managing map by context-awareness and device implementing thereof
JP7047306B2 (en) * 2017-09-27 2022-04-05 沖電気工業株式会社 Information processing equipment, information processing methods, and programs
KR20200002217A (en) * 2018-06-29 2020-01-08 현대엠엔소프트 주식회사 Apparatus and method for generating and updating precision map
KR20200094384A (en) * 2019-01-30 2020-08-07 현대자동차주식회사 Apparatus for clustering of point cloud and method thereof
KR20220129218A (en) * 2021-03-16 2022-09-23 한국전자통신연구원 Speed control method of unmanned vehicle to awareness the flight situation about an obstacle, and, unmanned vehicle the performed the method

Also Published As

Publication number Publication date
KR102530521B1 (en) 2023-05-09

Similar Documents

Publication Publication Date Title
Furgale et al. Toward automated driving in cities using close-to-market sensors: An overview of the v-charge project
WO2018101526A1 (en) Method for detecting road region and lane by using lidar data, and system therefor
WO2021006441A1 (en) Road sign information collection method using mobile mapping system
Fleck et al. Towards large scale urban traffic reference data: Smart infrastructure in the test area autonomous driving baden-württemberg
WO2019139243A1 (en) Apparatus and method for updating high definition map for autonomous driving
WO2011034308A2 (en) Method and system for matching panoramic images using a graph structure, and computer-readable recording medium
CN113286081B (en) Target identification method, device, equipment and medium for airport panoramic video
WO2023277371A1 (en) Lane coordinates extraction method using projection transformation of three-dimensional point cloud map
WO2020235734A1 (en) Method for estimating distance to and location of autonomous vehicle by using mono camera
WO2022146000A1 (en) Near-future object position prediction system
WO2011034305A2 (en) Method and system for hierarchically matching images of buildings, and computer-readable recording medium
CN114200481A (en) Positioning method, positioning system and vehicle
López et al. Interoperability in a heterogeneous team of search and rescue robots
WO2024085286A1 (en) Method for establishing context map based on collaboration of drone and robot
CN113804182B (en) Grid map creation method based on information fusion
Bastani et al. Inferring and improving street maps with data-driven automation
CN116774603B (en) Multi-AGV cooperative scheduling simulation platform and simulation method
WO2024085287A1 (en) Method for constructing context map for autonomous driving and control
WO2024085285A1 (en) Robot-based optimal indoor delivery path planning method using context map
CN114442627B (en) Dynamic desktop path finding system and method for intelligent home mobile equipment
CN111754388A (en) Picture construction method and vehicle-mounted terminal
CN109427202A (en) The device and method that running section for predicting to be determined by construction site changes
CN115499467A (en) Intelligent networking test platform based on digital twin and construction method and system thereof
CN111985715A (en) Corridor path automatic navigation method and device based on multi-target lines
WO2023096037A1 (en) Device for generating real-time lidar data in virtual environment and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22962848

Country of ref document: EP

Kind code of ref document: A1