WO2017033254A1 - Mobile robot operation system, mobile robot, and object take-out method - Google Patents

Mobile robot operation system, mobile robot, and object take-out method

Info

Publication number
WO2017033254A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
area
point
shooting distance
detailed
Prior art date
Application number
PCT/JP2015/073694
Other languages
French (fr)
Japanese (ja)
Inventor
潔人 伊藤
宣隆 木村
敬介 藤本
Original Assignee
株式会社日立製作所 (Hitachi, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 (Hitachi, Ltd.)
Priority to JP2017501748A priority Critical patent/JP6293968B2/en
Priority to PCT/JP2015/073694 priority patent/WO2017033254A1/en
Publication of WO2017033254A1 publication Critical patent/WO2017033254A1/en

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices

Definitions

  • the present invention relates to a mobile robot operation system including a mobile robot that takes out a target article from a shelf in which the article is stored.
  • JP-A-2005-88175 (Patent Document 1) is background art in this technical field.
  • this publication states: "The robot device operates to bring the hand (control attention point) to be controlled into contact with the surface of the object. The robot device has a plurality of motions, such as (2A): approach by walking, (2B): approach by lower-body motion, and (2C): approach by upper-body motion such as an arm; in the order (2A) > (2B) > (2C), the movable range in which the hand can be moved becomes narrower but the control accuracy becomes higher. Based on the movable range distribution (L(x), H(x)) of the control attention point obtained by each of these motions and the existence probability distribution O(x) of the object obtained from the distance to the object, each motion is switched so that the control attention point can be brought into contact with the object at the position within the movable range L of the upper body where the control attention point is most easily moved" (see, for example, the abstract).
  • the technique described in Patent Document 1 is based on the assumption that only the object exists in the space. For this reason, the technique described in Patent Document 1 cannot identify the object from among a plurality of articles, and cannot be operated in a space where a plurality of articles exist (for example, a warehouse or a factory).
  • in order to solve the above problem, a mobile robot operation system comprises: a mobile robot that moves to a shelf in which articles are stored and takes out an object stored in the shelf; and a mobile robot control device that controls the mobile robot. The mobile robot has an imaging unit that captures an area including an article as an image. The mobile robot operation system further comprises: an object area extraction unit that extracts, from an image captured under a first condition under which a first object area including the object can be captured, a second object area in which the object may exist and which is more limited than the first object area; a second condition setting unit that sets a second condition under which the second object area extracted by the object area extraction unit can be captured without including the entire first object area; and an object position specifying unit that identifies the object from an image captured under the second condition and specifies the position of the object. The mobile robot captures the first object area under the first condition, captures the second object area under the second condition set by the second condition setting unit, and takes the object out of the shelf based on the position of the object specified by the object position specifying unit.
  • according to the present invention, it is possible to provide a mobile robot operation system that can specify an object and take out the specified object with the mobile robot even when a plurality of articles exist.
  • FIG. 1 is a configuration diagram of the mobile robot operation system according to the first embodiment.
  • FIG. 2 is an explanatory diagram of a state in which the mobile robot of the first embodiment has reached the first point.
  • FIG. 3 is an explanatory diagram of a state in which the mobile robot of the first embodiment has reached the second point.
  • FIG. 4 is a hardware configuration diagram of the mobile robot according to the first embodiment.
  • FIG. 5 is a hardware configuration diagram of the mobile robot control apparatus according to the first embodiment.
  • FIG. 6 is an explanatory diagram of the feature amount table of Example 1.
  • FIG. 7 is a flowchart of the object gripping process according to the first embodiment.
  • FIG. 8 is a flowchart of the second point calculation process according to the first embodiment.
  • FIG. 9 is an explanatory diagram of the object shooting distance calculation process of Example 1.
  • FIG. 10 is an explanatory diagram of the process of calculating a plurality of second points in Example 1.
  • FIG. 11 is a flowchart of the second point calculation process according to the second embodiment.
  • FIG. 12 is an explanatory diagram of the feature amount table of Example 3.
  • FIG. 13 is a flowchart of the second point calculation process according to the third embodiment.
  • FIG. 14 is a sequence diagram of the gripping operation process according to the fourth embodiment.
  • FIG. 1 is a configuration diagram of the mobile robot operation system according to the first embodiment.
  • the mobile robot operation system includes a mobile robot control device 10 and at least one mobile robot 20.
  • the mobile robot controller 10 and the mobile robot 20 are connected wirelessly.
  • the mobile robot control device 10 manages the mobile robot 20.
  • the mobile robot control device 10 transmits a transport instruction including article information of the target object, target object position information, and the like to the mobile robot 20 and manages the positions of the mobile robot 20 and the shelf 30 (see FIG. 2).
  • the mobile robot 20 takes out the object from the shelf 30 in which the object is stored.
  • the taken-out object may be placed, for example, in a predetermined box or in another shelf 30, but the destination is not limited to these.
  • the mobile robot 20 includes an imaging unit 250 (see FIG. 2) that captures an area including an article as an image.
  • the mobile robot 20 moves to a first point, which is a point from which the imaging unit 250 can capture the first object area 9999 including the object, and captures the entire first object area 9999 with the imaging unit 250.
  • the mobile robot 20 then extracts, from the image taken at the first point, a second object area, which is an area where the object is highly likely to exist, and moves to a second point from which the imaging unit 250 can capture the entire second object area without including the whole of the first object area 9999.
  • the mobile robot 20 specifies the position of the object based on the image taken at the second point, and takes out the object from the shelf 30.
  • the mobile robot 20 holds information on the objects included in the first object area 9999 as prior information.
  • the first object area 9999 may be, for example, the entire shelf 30, one stage within the shelf 30, or a plurality of shelves 30. Hereinafter, in the first to fourth embodiments, the first object area 9999 is described taking the entire shelf 30 as an example; the first object area 9999 may therefore be referred to as the entire shelf 30.
  • the photographing unit 250 is implemented as a camera, a distance image sensor, or a combination thereof.
  • the photographing result is, correspondingly, an image having color information, a distance image having distance information, or a combination thereof. Hereinafter, in the first to fourth embodiments, the photographing unit 250 is described taking a camera as an example, and may therefore be referred to as the camera 250; the photographing result is accordingly referred to simply as an image.
  • FIG. 2 is an explanatory diagram of a state in which the mobile robot 20 of the first embodiment has reached the first point.
  • the shelf 30 is provided with a plurality of openings 32A and 32B partitioned by partition plates 31A to 31C.
  • a box A33A and a box B33B are stored in the upper opening 32A of the shelf 30.
  • the box A33A and the box B33B have the same color and are different in color from the other boxes stored in the shelf 30. For example, it is assumed that the box A33A is an object.
  • the mobile robot 20 has two arms 26A and 26B, and a camera 250 is attached to one arm 26A.
  • the tips of the two arms 26A and 26B are grippers capable of gripping articles and the like.
  • when the mobile robot 20 reaches the first point, it takes an image of the entire shelf 30 with the camera 250.
  • FIG. 3 is an explanatory diagram of a state where the mobile robot 20 of the first embodiment has reached the second point.
  • an area including the box A33A as the object and the box B33B having the same color as the box A33A is extracted as the second object area.
  • the mobile robot 20 moves to the second point where the entire second object region can be imaged, and images the entire second object region with the camera 250.
  • from the image photographed at the second point, the mobile robot 20 can read more detailed information (for example, a barcode) than from the image photographed at the first point, and can specify the position of the object box A33A within the second object region.
  • the mobile robot 20 operates the arm 26B, holds the box A33A with a gripper provided at the tip of the arm 26B, and takes out the box A33A from the shelf 30.
  • FIG. 4 is a hardware configuration diagram of the mobile robot 20 according to the first embodiment.
  • the mobile robot 20 includes a controller 200, a power supply unit 205, a communication interface (IF) 240, a camera 250, an arm control motor 260, a gripper control motor 270, a travel motor 280, and a distance sensor 290. These are connected via a bus 295.
  • Controller 200 controls operations of camera 250, arm control motor 260, gripper control motor 270, and travel motor 280.
  • the communication IF 240 is an interface for wirelessly communicating with the mobile robot controller 10 or another mobile robot 20.
  • the camera 250 is a device that captures an image, and is attached to the arm 26A, for example, but is not limited thereto.
  • the arm control motor 260 is a motor for independently operating the arm 26A and the arm 26B.
  • the gripper control motor 270 is a motor for independently operating the grippers at the tips of the arms 26A and 26B.
  • the traveling motor 280 is a motor for moving the mobile robot 20.
  • the distance sensor 290 is a sensor for measuring the distance to the obstacle, and is an infrared sensor, for example.
  • the power supply unit 205 is mounted as a battery and supplies power to the controller 200, the communication IF 240, the camera 250, the arm control motor 260, the gripper control motor 270, the travel motor 280, and the distance sensor 290.
  • the controller 200 matches the measurement results of the distance sensor 290 against the map data 222 to grasp its own position, controls the traveling motor 280 to move to the first point and the second point, controls the camera 250 to take images at the first point and the second point, and controls the arm control motor 260 and the gripper control motor 270 to take the object out of the shelf 30.
  • the controller 200 includes a processor 210, a memory 220, and a secondary storage device 230.
  • the processor 210 executes various arithmetic processes.
  • the secondary storage device 230 is a non-volatile non-transitory storage medium, and stores various programs and various data.
  • the memory 220 is a volatile temporary storage medium.
  • the memory 220 is loaded with the various programs and various data stored in the secondary storage device 230; the processor 210 executes the various programs loaded in the memory 220 and reads and writes the various data loaded in the memory 220.
  • the processor 210 includes a movement point calculation unit 211 and an operation control unit 216.
  • the movement point calculation unit 211 calculates the first point and the second point.
  • the movement point calculation unit 211 includes a first point calculation unit 212, an object region extraction unit 213, a second point calculation unit 214, and an object position specifying unit 215.
  • the operation control unit 216 controls operations of the camera 250, the arm control motor 260, the gripper control motor 270, and the traveling motor 280.
  • the first point calculation unit 212 calculates a first point where the camera 250 can photograph the entire shelf 30.
  • the object area extraction unit 213 extracts a second object area in which the object may exist from the image taken at the first point.
  • the second point calculation unit 214 calculates a second point from which the entire second object region can be imaged and which is closer to the object than the first point.
  • the object position specifying unit 215 specifies the position of the object from the image taken at the second point.
  • the memory 220 stores a feature amount table 221, map data 222, and a parameter table 223.
  • in the feature amount table 221, the feature amount and the detailed feature amount of each article are registered.
  • the feature amount is referred to when the second object region is extracted, and the detailed feature amount is referred to when the position of the object is specified. Details of the feature amount table 221 will be described with reference to FIG. 6.
  • in the map data 222, a map of the entire space in which the mobile robot 20 travels is registered.
  • the map data 222 is distributed by the mobile robot controller 10.
  • in the parameter table 223, the angle of view of the camera 250 and the like are registered.
  • the memory 220 stores a program corresponding to the movement point calculation unit 211 and a program corresponding to the operation control unit 216.
  • the programs corresponding to the movement point calculation unit 211 include programs corresponding to the first point calculation unit 212, the object region extraction unit 213, the second point calculation unit 214, and the object position specifying unit 215, respectively.
  • the processor 210 implements the moving point calculation unit 211 by executing a program corresponding to the moving point calculation unit 211 stored in the memory 220.
  • the processor 210 executes a program corresponding to each of the first point calculation unit 212, the object region extraction unit 213, the second point calculation unit 214, and the object position specifying unit 215, whereby the first point calculation unit 212, an object region extracting unit 213, a second point calculating unit 214, and an object position specifying unit 215 are implemented.
  • the processor 210 implements the operation control unit 216 by executing a program corresponding to the operation control unit 216 stored in the memory 220.
  • FIG. 5 is a hardware configuration diagram of the mobile robot control apparatus 10 according to the first embodiment.
  • the mobile robot controller 10 includes a processor 510, a memory 520, a secondary storage device 530, and a wireless interface (IF) 540.
  • the processor 510, the memory 520, the secondary storage device 530, and the wireless interface (IF) 540 are connected to each other via a bus 550.
  • the hardware operations of the processor 510, the memory 520, and the secondary storage device 530 are the same as those of the processor 210, the memory 220, and the secondary storage device 230 shown in FIG.
  • the wireless IF 540 is an interface for communicating with the mobile robot 20 wirelessly.
  • the mobile robot controller 10 may have an input device and an output device (not shown).
  • the input device is, for example, a keyboard and a mouse
  • the output device is, for example, a display.
  • the mobile robot control device 10 determines the object to be taken out by the mobile robot 20 and transmits a conveyance instruction for the object to the mobile robot 20.
  • the conveyance instruction includes, for example, article information of the target object, storage frontage information indicating the position of the frontage 32 where the target object is stored, and a route to the shelf 30 where the target object is stored.
  • the article information of the object includes at least an identifier of the object.
  • the storage frontage information includes the position of the shelf 30 in which the object is stored and the position of the frontage 32 of that shelf 30 in which the object is stored. Further, when the mobile robot 20 can calculate a route to the shelf 30 in which the object is stored based on the storage frontage information included in the conveyance instruction, the conveyance instruction need not include the route to that shelf 30.
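  • as a non-authoritative illustration of the conveyance instruction described above, the following Python sketch models its payload; the field names (article_id, frontage, route) are assumptions for illustration and do not appear in the specification.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class StorageFrontageInfo:
    shelf_position: Tuple[float, float]     # position of the shelf 30 storing the object
    frontage_position: Tuple[float, float]  # position of the frontage 32 within that shelf

@dataclass
class ConveyanceInstruction:
    article_id: str                # identifier of the object (article information)
    frontage: StorageFrontageInfo  # storage frontage information
    route: Optional[List[Tuple[float, float]]] = None  # route to the shelf; may be
    # omitted when the robot can compute it from the frontage information
```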
  • FIG. 6 is an explanatory diagram of the feature amount table 221 according to the first embodiment.
  • the feature amount table 221 includes an article ID 601, a feature amount 602, a detailed feature amount 603, and a detailed shooting distance 604.
  • the feature amount 602 includes, for example, at least one of a feature amount based on the color of the article, a feature amount based on the texture of the article, and a feature amount based on the edges of the article.
  • the detailed feature amount 603 includes, for example, at least one of a feature amount based on character information (for example, an article number) included in the label of the article and a feature quantity based on a barcode attached to the article.
  • the feature amount is a feature amount of the entire appearance of the article, whereas the detailed feature amount is a feature amount of information attached to a part of the article and is a feature amount that can identify the article itself.
  • in the detailed shooting distance 604, information that can specify the detailed shooting distance, which is the distance from the object to the camera 250 at which the detailed feature amount can be acquired, is registered.
  • the information registered in the detailed shooting distance 604 may be a distance from the object to the camera 250, or may be the number of pixels included in a predetermined area of the image shot by the camera 250. In other words, the detailed shooting distance 604 need only register information that can specify the detailed shooting distance.
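  • the following is a minimal Python sketch of one row of the feature amount table 221 shown in FIG. 6; the column names follow reference numerals 601 to 604, while the concrete value types (a color histogram, a barcode string) are illustrative assumptions.

```python
# One row of the feature amount table 221 (FIG. 6), keyed by article ID 601.
feature_table = {
    "A33A": {
        "feature": {"color_hist": [...]},  # feature amount 602 (whole-appearance feature)
        "detailed_feature": {"barcode": "4901234567894"},  # detailed feature amount 603
        "detailed_distance_m": 0.5,  # detailed shooting distance 604 (alternatively,
                                     # a pixel count within a predetermined image area)
    },
}
```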
  • FIG. 7 is a flowchart of the object gripping process according to the first embodiment.
  • the object gripping process is a process from when the mobile robot 20 receives a transport instruction from the mobile robot control device 10 until it grips the target object.
  • the mobile robot 20 receives a transfer instruction from the mobile robot controller 10 (701).
  • the conveyance instruction includes at least the article information of the object and the storage frontage information.
  • the first point calculation unit 212 calculates a first point from which the entire shelf 30 in which the object is stored can be photographed (702). Specifically, the first point calculation unit 212 calculates, based on its own position, the size (vertical, horizontal, and height) of the shelf 30 in which the object is stored, and the angle of view of the camera 250, the points from which the entire shelf 30 can be photographed, and sets any one of the calculated points as the first point.
  • the size of the shelf 30 may be registered in advance in the parameter table 223 or may be included in the transport instruction.
  • note that when the first point can be specified based on the storage frontage information included in the conveyance instruction, the movement point calculation unit 211 does not necessarily need to have the first point calculation unit 212.
  • the operation control unit 216 controls the traveling motor 280 while recognizing its own position, and moves to the first point calculated in the processing of Step 702 (703).
  • the motion control unit 216 controls the camera 250 to photograph the shelf 30 in which the object is stored (704).
  • the object region extraction unit 213 calculates the feature amounts of all the pixels included in the image captured in the process of step 704, and extracts, as the second object region, the pixels whose calculated feature amounts are within a first predetermined range of the feature amount of the record in the feature amount table 221 corresponding to the article ID of the object (705). Note that the second object region may include articles other than the object, as described with reference to FIG. 3.
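  • the following Python sketch illustrates one possible reading of step 705, under the simplifying assumption that the feature amount is a per-pixel color; the function name and the threshold parameter are illustrative, not from the specification.

```python
import numpy as np

def extract_second_object_region(image, object_feature, first_range):
    """Step 705 sketch: treat the per-pixel feature as a simple RGB color, mark
    pixels within the first predetermined range of the object's registered
    feature, and return their bounding box as the second object region."""
    diff = np.linalg.norm(image.astype(float) - np.asarray(object_feature, float), axis=2)
    ys, xs = np.nonzero(diff <= first_range)
    if xs.size == 0:
        return None                                    # no candidate pixels found
    return xs.min(), ys.min(), xs.max(), ys.max()      # (left, top, right, bottom)
```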
  • the second point calculation unit 214 calculates a second point that can capture the entire second object region extracted in the process of step 705 and is closer to the object than the first point (706). Details of the second point calculation process will be described with reference to FIG.
  • the operation control unit 216 controls the traveling motor 280 while recognizing its own position, and moves to the second point calculated in the process of Step 706 (707).
  • the motion control unit 216 controls the camera 250 to photograph the entire second object area (708).
  • the operation control unit 216 calculates the detailed feature amounts of all the pixels included in the image captured in the process of step 708, extracts, as the object, the pixels whose calculated detailed feature amounts are within a predetermined range of the detailed feature amount of the record in the feature amount table 221 corresponding to the article ID of the object, and specifies the position of the object (709).
  • the operation control unit 216 controls the arm control motor 260 and the gripper control motor 270 to grip the object at the position specified in the process of step 709 and take the object out of the shelf 30 (710), and the object gripping process ends.
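  • the following Python sketch summarizes the flow of FIG. 7 (steps 702 to 710); every method on the robot object is a placeholder standing in for the unit named in the text, not an API defined by the specification.

```python
def object_gripping_process(robot, instruction):
    """Sketch of FIG. 7; all robot methods are illustrative placeholders."""
    p1 = robot.calc_first_point(instruction)        # 702: first point calculation unit 212
    robot.move_to(p1)                               # 703: travel motor control
    shelf_image = robot.shoot()                     # 704: photograph the whole shelf
    region = robot.extract_second_object_region(    # 705: object region extraction unit 213
        shelf_image, instruction.article_id)
    second_points = robot.calc_second_points(       # 706: second point calculation unit 214
        region, instruction.article_id)
    for p2 in second_points:                        # one or more second points (FIG. 8)
        robot.move_to(p2)                           # 707
        detail_image = robot.shoot()                # 708: photograph the second object area
        pos = robot.locate_object(detail_image,     # 709: object position specifying unit 215
                                  instruction.article_id)
        if pos is not None:
            robot.grip_and_take_out(pos)            # 710: arm and gripper motor control
            return True
    return False
```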
  • FIG. 8 is a flowchart of the second point calculation process according to the first embodiment.
  • the second point calculation unit 214 acquires, as the detailed shooting distance (D1), the distance registered in the detailed shooting distance 604 of the record whose article ID 601 in the feature amount table 221 matches the article ID of the object included in the received conveyance instruction (801).
  • the second point calculation unit 214 calculates, as the object area shooting distance (D2), the closest distance to the object from which the entire second object region extracted in the process of step 705 can be captured (802). The calculation of the object area shooting distance (D2) will be described with reference to FIG. 9.
  • FIG. 9 is an explanatory diagram of an object shooting distance calculation process according to the first embodiment.
  • in FIG. 9, the length of the side of the second object area closest to the camera 250 is W, the angle of view of the camera 250 is A, and the object area shooting distance to be calculated is D2. Since the half-width W/2 of that side must fit within the half angle of view A/2, the object area shooting distance is given by Equation 1:

  D2 = W / (2 tan(A/2))  (Equation 1)

  • note that W may be a value obtained by adding a predetermined margin to the length of the nearest side, and the object area shooting distance may be a value obtained by adding a predetermined margin to the value calculated by Equation 1.
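  • as a worked example with assumed numbers (not from the specification): if W = 0.8 m and A = 60 degrees, Equation 1 gives D2 = 0.4 / tan(30°) ≈ 0.69 m, so the camera must stand at least about 0.69 m back to fit the nearest side of the second object area in the frame.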
  • the second point calculation unit 214 determines whether or not the object area shooting distance (D2) calculated in the process of step 802 is equal to or greater than the detailed shooting distance (D1) acquired in the process of step 801 (803).
  • if the object area shooting distance (D2) is less than the detailed shooting distance (D1) (803: NO), the second point calculation unit 214 calculates, as the second point, a point whose distance from the midpoint of the side of the second object area closest to the camera 250 is equal to or greater than the object area shooting distance (D2) and equal to or less than the detailed shooting distance (D1) (804), and ends the second point calculation process.
  • if the object area shooting distance (D2) is equal to or greater than the detailed shooting distance (D1) (803: YES), the second point calculation unit 214 calculates, based on the angle of view of the camera 250 and the detailed shooting distance (D1), the range (W1) that can be imaged from the detailed shooting distance, and calculates the number of shots necessary for photographing the entire second object region by dividing the length (W) of the side of the second object area closest to the camera 250 by (W1) (805). Specifically, the second point calculation unit 214 sets the number of shots to the quotient of W divided by W1 plus 1 if there is a remainder, and to the quotient if there is no remainder.
  • then, the second point calculation unit 214 calculates the position of the camera 250 for each shot as a second point (806), and ends the second point calculation process.
  • FIG. 10 is an explanatory diagram of processing for calculating a plurality of second points according to the first embodiment.
  • the range (W1) that can be imaged from the detailed shooting distance (D1) is calculated by Equation 2:

  W1 = 2 D1 tan(A/2)  (Equation 2)
  • the first second point is a point at a distance equal to or less than the detailed shooting distance (D1) from a position W1/2 away, along the side of the second object region, from the end of the second object region closer to the mobile robot 20.
  • the second second point is a position W1 away from the first second point in a direction parallel to the side of the second object area. In this way, each subsequent second point is obtained by adding W1 to the position of the previous second point, until the entire second object area (W) has been imaged.
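  • the following Python sketch puts Equations 1 and 2 and the branch of steps 803 to 806 together; the returned (offset along the side, shooting distance) pairs are an assumed representation of the second points.

```python
import math

def second_point_calculation(W, A, D1):
    """Sketch of FIG. 8 and FIG. 10. W: length of the side of the second object
    area nearest the camera (m); A: camera angle of view (rad); D1: detailed
    shooting distance (m)."""
    D2 = W / (2 * math.tan(A / 2))              # Equation 1: object area shooting distance
    if D2 < D1:                                 # 803: NO -> 804: one second point,
        return [(W / 2, D2)]                    # in [D2, D1] from the side's midpoint
    W1 = 2 * D1 * math.tan(A / 2)               # Equation 2: range imageable from D1
    shots = int(W // W1) + (1 if W % W1 else 0) # 805: quotient, plus 1 if remainder
    # 806: one second point per shot, stepping W1 along the side at distance D1
    return [(W1 / 2 + i * W1, D1) for i in range(shots)]
```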
  • as described above, the mobile robot 20 photographs the entire shelf 30 at the first point, extracts from the image captured at the first point the second object region in which the object may exist, moves to a second point from which the entire second object region can be photographed, photographs the second object region at the second point, and takes the object out based on the position of the object identified from the image photographed at the second point.
  • when the object area shooting distance is equal to or greater than the detailed shooting distance, a plurality of points satisfying the detailed shooting distance are calculated as second points in order to capture the entire second object area; otherwise, a point satisfying the object area shooting distance is calculated as the second point.
  • the second object region can be photographed at a distance where the detailed feature amount can be calculated, and the object can be accurately identified.
  • Example 2 will be described with reference to FIG. 11. In Example 2, even when the object area shooting distance is equal to or greater than the detailed shooting distance, if an area where only the object exists can be specified within the second object area, a point from which the entire area where only the object exists can be shot is calculated as the second point. According to this, since articles other than the object are not photographed, the processing load of the mobile robot 20 can be reduced.
  • FIG. 11 is a flowchart of the second point calculation process according to the second embodiment. In FIG. 11, the same processes as those in the second point calculation process shown in FIG. 8 are given the same reference numerals, and description thereof is omitted.
  • if it is determined in step 803 that the object area shooting distance (D2) is equal to or greater than the detailed shooting distance (D1) (803: YES), the second point calculation unit 214 extracts, as an area where only the object exists, the pixels included in the second object area whose feature amounts are within a second predetermined range of the feature amount of the record in the feature amount table 221 corresponding to the article ID of the object, and determines whether such an area has been extracted (1101).
  • the second predetermined range is a range narrower than the first predetermined range used in the process of step 705.
  • if it is determined in step 1101 that an area where only the object exists has not been extracted (1101: NO), the second point calculation unit 214 executes the processing of steps 805 and 806 as in the first embodiment.
  • if it is determined in step 1101 that an area where only the object exists has been extracted (1101: YES), the second point calculation unit 214 calculates the object shooting distance (D3), which is the closest distance from which the entire object in the extracted area can be shot (1102).
  • the second point calculation unit 214 determines whether or not the object shooting distance (D3) calculated in the process of step 1102 is equal to or greater than the detailed shooting distance (D1) acquired in the process of step 801 (1103). ).
  • if the object shooting distance (D3) is less than the detailed shooting distance (D1) (1103: NO), the second point calculation unit 214 calculates, as the second point, a point whose distance from the midpoint of the side of the area where only the object exists closest to the camera 250 is equal to or greater than the object shooting distance (D3) and equal to or less than the detailed shooting distance (D1) (1104), and ends the second point calculation process.
  • if it is determined in step 1103 that the object shooting distance (D3) is equal to or greater than the detailed shooting distance (D1) (1103: YES), the second point calculation unit 214 executes steps 805 and 806. In this case, the number of shots and the second points are calculated so as to capture the entire area where only the object exists, rather than the entire second object area.
  • the processing load of the mobile robot 20 can be reduced. Further, since the number of times of photographing can be reduced, the object can be taken out of the shelf 30 earlier.
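  • the following Python sketch extends the sketch after FIG. 10 with the branch of steps 1101 to 1104, under the simplifying assumption that the extracted area where only the object exists can be summarized by the length of its near side; it reuses second_point_calculation from the earlier sketch.

```python
import math

def second_point_calculation_ex2(W, A, D1, object_only_W=None):
    """Sketch of FIG. 11. object_only_W: near-side length of the area where only
    the object exists, or None when step 1101 could not extract such an area."""
    D2 = W / (2 * math.tan(A / 2))                     # Equation 1, as in step 802
    if D2 < D1:
        return [(W / 2, D2)]                           # 803: NO -> 804
    if object_only_W is None:
        return second_point_calculation(W, A, D1)      # 1101: NO -> 805, 806
    D3 = object_only_W / (2 * math.tan(A / 2))         # 1102: object shooting distance
    if D3 < D1:
        return [(object_only_W / 2, D3)]               # 1103: NO -> 1104
    return second_point_calculation(object_only_W, A, D1)  # 1103: YES -> 805, 806
```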
  • Example 3 will be described with reference to FIGS. 12 and 13.
  • in Example 3, even when the detailed shooting distance is smaller than the object area shooting distance, if an area where only the object exists can be specified within the second object area and the reading position of the detailed feature amount included in that area can be specified, a point away from the reading position by at most the detailed shooting distance is calculated as the second point. According to this, since areas not related to the calculation of the detailed feature amount are not photographed, the processing load of the mobile robot 20 can be reduced.
  • FIG. 12 is an explanatory diagram of the feature amount table 221 according to the third embodiment.
  • the feature amount table 221 of the third embodiment includes a detailed feature amount reading position 1201 in addition to the article ID 601, the feature amount 602, the detailed feature amount 603, and the detailed shooting distance 604.
  • in the detailed feature amount reading position 1201, information on the position, within the article, of the region (reading region) used for calculating the detailed feature amount is registered.
  • specifically, the detailed feature amount reading position 1201 registers the area of the surface that includes the reading region and the position of the reading region on that surface.
  • FIG. 13 is a flowchart of the second point calculation process of the third embodiment.
  • in FIG. 13, the same processes as those in the second point calculation process shown in FIG. 8 of the first embodiment and the second point calculation process shown in FIG. 11 of the second embodiment are given the same reference numerals, and description thereof is omitted.
  • the second point calculation unit 214 acquires the information registered in the detailed feature amount reading position 1201 of the record whose article ID 601 in the feature amount table 221 matches the identifier of the object (1301).
  • the second point calculation unit 214 identifies the area corresponding to the reading region within the area where only the object exists, based on the information acquired in the process of step 1301 (1302). Specifically, the second point calculation unit 214 calculates the area of the region where only the object exists in the image taken at the first point, and converts the calculated area into an actual area based on the distance between the first point and the region. Then, when the converted area is within a predetermined range of the area of the surface including the reading region included in the information acquired in step 1301, the second point calculation unit 214 determines that the area where only the object exists shows the surface that includes the reading region. The second point calculation unit 214 then identifies, within the area where only the object exists, the area corresponding to the reading region registered in the detailed feature amount reading position 1201.
  • the second point calculation unit 214 calculates, as the second point, a point from which the entire area corresponding to the reading region specified in the process of step 1302 can be photographed (1303), and ends the second point calculation process.
  • the reading region is, for example, a label or the like on which a barcode or the like is printed.
  • normally, the minimum distance from which the entire area corresponding to the reading region can be photographed is smaller than the detailed shooting distance, so the second point calculation unit 214 may calculate, as the second point, a point that is equal to or greater than the minimum distance from which the entire area corresponding to the reading region can be photographed and equal to or less than the detailed shooting distance.
  • further, the second point calculation unit 214 may calculate, as second points, a plurality of points that are equal to or less than the detailed shooting distance, as in the processing of steps 805 and 806.
  • the processing load of the mobile robot 20 can be reduced. Further, since the number of times of photographing can be reduced, the object can be taken out of the shelf 30 earlier.
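  • the following Python sketch illustrates the reading-region variant of steps 1301 to 1303, reusing second_point_calculation from the earlier sketch; the parameters describing the reading region are assumptions for illustration.

```python
import math

def second_point_for_reading_region(reading_W, reading_offset, A, D1):
    """Sketch of FIG. 13. reading_W: width of the reading region (e.g., a barcode
    label) registered in the detailed feature amount reading position 1201;
    reading_offset: its position along the shelf side (both names assumed)."""
    d_min = reading_W / (2 * math.tan(A / 2))  # minimum distance framing the whole label
    if d_min <= D1:
        # 1303: one second point in [d_min, D1], directly facing the reading region
        return [(reading_offset, d_min)]
    # otherwise several points at <= D1 may be needed, as in steps 805 and 806
    return second_point_calculation(reading_W, A, D1)
```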
  • Example 4 will be described with reference to FIG. 14.
  • in Example 4, the mobile robot control device 10 calculates the first point and the second point. In this case, the mobile robot 20 does not have the movement point calculation unit 211 and the feature amount table 221; instead, the processor 510 of the mobile robot control device 10 has the movement point calculation unit 211, and the memory 520 stores the feature amount table 221 and the parameter table 223 of each mobile robot 20.
  • FIG. 14 is a sequence diagram of the gripping operation process according to the fourth embodiment. In FIG. 14, the same processes as those shown in FIG. 7 are given the same reference numerals, and description thereof is omitted.
  • in the process of step 702, the first point calculation unit 212 calculates the first point, and a first point movement instruction including the calculated first point is transmitted to the mobile robot 20 (1401).
  • upon receiving the first point movement instruction, the operation control unit 216 controls the traveling motor 280 to move to the first point in the process of step 703, and controls the camera 250 in the process of step 704 to photograph the entire shelf 30 in which the object is stored. Then, the mobile robot 20 transmits the shelf photographed image, which is the image photographed at the first point, to the mobile robot control device 10 (1402).
  • upon receiving the shelf photographed image, the object area extraction unit 213 extracts the second object area in the process of step 705, the second point calculation unit 214 calculates the second point in the process of step 706, and a second point movement instruction including the calculated second point is transmitted to the mobile robot 20 (1403).
  • upon receiving the second point movement instruction, the operation control unit 216 controls the travel motor 280 to move to the second point in the process of step 707, and controls the camera 250 in the process of step 708 to photograph the entire second object area. Then, the mobile robot 20 transmits the detailed captured image, which is the image captured at the second point, to the mobile robot control apparatus 10 (1404).
  • upon receiving the detailed captured image, the object position specifying unit 215 specifies the position of the object based on the received detailed captured image in the process of step 709, and an object gripping instruction including the specified position is transmitted to the mobile robot 20 (1405).
  • upon receiving the object gripping instruction, the operation control unit 216 controls the arm control motor 260 and the gripper control motor 270 in the process of step 710 to grip the object at the position included in the received object gripping instruction, and the object is taken out from the shelf 30.
  • since the mobile robot control device 10 calculates the first point and the second point and specifies the position of the object, the processing load on the mobile robot 20 can be reduced.
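  • the following Python sketch summarizes the division of labor in FIG. 14, with the controller computing the points and the robot only moving, shooting, and gripping; all method names are illustrative assumptions.

```python
def gripping_operation_sequence(controller, robot):
    """Sketch of FIG. 14: the mobile robot control device 10 computes the points
    and identifies the object; the mobile robot 20 moves, shoots, and grips."""
    p1 = controller.calc_first_point()                              # 702
    robot.move_to(p1)                                               # 1401 -> 703
    shelf_image = robot.shoot()                                     # 704
    region = controller.extract_second_object_region(shelf_image)   # 1402 -> 705
    p2 = controller.calc_second_point(region)                       # 706
    robot.move_to(p2)                                               # 1403 -> 707
    detail_image = robot.shoot()                                    # 708
    pos = controller.locate_object(detail_image)                    # 1404 -> 709
    robot.grip_and_take_out(pos)                                    # 1405 -> 710
```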
  • in the embodiments described above, the mobile robot 20 photographs the entire shelf 30 at the first point and moves to the second point to photograph the second object area, but the present invention is not limited to this.
  • for example, the mobile robot 20 may photograph the entire shelf 30 at a first focal length at which the camera 250 can image the entire shelf 30, and may photograph the second object area at a second focal length at which the second object area can be imaged without including the entire shelf 30.
  • the second focal length is longer than the first focal length.
  • since the focal length is short when the angle of view is wide and long when the angle of view is narrow, the first focal length and the second focal length may instead be set by adjusting the angle of view.
  • that is, the mobile robot 20 may photograph the entire shelf 30 with a first angle of view at which the camera 250 can photograph the entire shelf 30, and may photograph the second object area with a second angle of view at which the second object area can be photographed without including the entire shelf 30.
  • in this case, the second angle of view is narrower than the first angle of view. Therefore, in the claims, the concept including the first point, the first focal length, the first angle of view, and the like is described as the first condition, and the concept including the second point, the second focal length, the second angle of view, and the like is described as the second condition.
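  • as a supplementary note, the dependence of the angle of view on the focal length follows the standard pinhole-camera relation (standard optics, not from the specification); the following Python sketch, with an assumed sensor width, shows how a first and second focal length could realize the first and second angles of view.

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Pinhole-camera relation: a shorter focal length gives a wider angle of
    view, a longer one a narrower angle."""
    return 2 * math.atan(sensor_width_mm / (2 * focal_length_mm))

# With an assumed 6.4 mm sensor: f = 4 mm gives about 77 degrees (first condition,
# whole shelf); f = 12 mm gives about 30 degrees (second condition, second object area).
```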
  • the first point calculation unit 212 corresponds to a first condition setting unit
  • the second point calculation unit 214 corresponds to a second condition setting unit.
  • the present invention is not limited to the above-described examples and includes various modifications.
  • the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.
  • a part of the configuration of a certain embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of a certain embodiment.
  • each of the above-described configurations, functions, and the like may be realized by software, with the processor interpreting and executing a program that realizes each function.
  • information such as programs, tables, and files that realize each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC (Integrated Circuit) card, an SD card, or a DVD (Digital Versatile Disc).
  • control lines and information lines indicate what is considered necessary for the explanation, and not all the control lines and information lines on the product are necessarily shown. Actually, it may be considered that almost all the components are connected to each other.
  • the first object area 9999 including the object, which is held as prior information, may instead be generated during the operation of the mobile robot 20.
  • for example, the mobile robot 20 may include a prior object area extraction unit 9998, and may first extract the first object area 9999, using the prior object area extraction unit 9998, from an image obtained by the imaging unit 250 capturing a wider region including the first object area 9999.
  • alternatively, the shelf 30 may be extracted from the image based on a model of the shelf 30 that the mobile robot 20 holds as prior information, and the extracted shelf 30 may be used as the first object region 9999.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a mobile robot operation system comprising: a mobile robot that moves to a shelf in which articles are stored and takes out an object stored in the shelf; and a mobile robot control device that controls the mobile robot, wherein even when a plurality of articles exist, the object can be specified and the mobile robot can take out the specified object. The mobile robot photographs a first object region including the object under a first condition under which the first object region can be photographed, photographs a second object region under a second condition under which the second object region can be photographed without including the entire first object region, and takes the object out of the shelf based on the position of the object specified by an object position specifying unit.

Description

Mobile robot operation system, mobile robot, and object take-out method

The present invention relates to a mobile robot operation system including a mobile robot that takes out a target article from a shelf in which articles are stored.

There is JP-A-2005-88175 (Patent Document 1) as background art in this technical field. This publication states: "The robot device operates to bring the hand (control attention point) to be controlled into contact with the surface of the object. The robot device has a plurality of motions, such as (2A): approach by walking, (2B): approach by lower-body motion, and (2C): approach by upper-body motion such as an arm; in the order (2A) > (2B) > (2C), the movable range in which the hand can be moved becomes narrower but the control accuracy becomes higher. Based on the movable range distribution (L(x), H(x)) of the control attention point obtained by each of these motions and the existence probability distribution O(x) of the object obtained from the distance to the object, each motion is switched so that the control attention point can be brought into contact with the object at the position within the movable range L of the upper body where the control attention point is most easily moved." (see, for example, the abstract).

JP 2005-88175 A
The technique described in Patent Document 1 is based on the assumption that only the object exists in the space. For this reason, the technique described in Patent Document 1 cannot identify the object from among a plurality of articles, and cannot be operated in a space where a plurality of articles exist (for example, a warehouse or a factory).

It is an object of the present invention to provide a mobile robot operation system that can specify an object and allow the mobile robot to take out the specified object even when a plurality of articles exist.

In order to solve the above problem, a mobile robot operation system comprises: a mobile robot that moves to a shelf in which articles are stored and takes out an object stored in the shelf; and a mobile robot control device that controls the mobile robot. The mobile robot has an imaging unit that captures an area including an article as an image. The mobile robot operation system further comprises: an object area extraction unit that extracts, from an image captured under a first condition under which a first object area including the object can be captured, a second object area in which the object may exist and which is more limited than the first object area; a second condition setting unit that sets a second condition under which the second object area extracted by the object area extraction unit can be captured without including the entire first object area; and an object position specifying unit that identifies the object from an image captured under the second condition and specifies the position of the object. The mobile robot captures the first object area under the first condition, captures the second object area under the second condition set by the second condition setting unit, and takes the object out of the shelf based on the position of the object specified by the object position specifying unit.

According to the present invention, it is possible to provide a mobile robot operation system that can specify an object and take out the specified object with the mobile robot even when a plurality of articles exist.

Problems, configurations, and effects other than those described above will be clarified by the following description of the embodiments.
FIG. 1 is a configuration diagram of the mobile robot operation system of Example 1.
FIG. 2 is an explanatory diagram of a state in which the mobile robot of Example 1 has reached the first point.
FIG. 3 is an explanatory diagram of a state in which the mobile robot of Example 1 has reached the second point.
FIG. 4 is a hardware configuration diagram of the mobile robot of Example 1.
FIG. 5 is a hardware configuration diagram of the mobile robot control device of Example 1.
FIG. 6 is an explanatory diagram of the feature amount table of Example 1.
FIG. 7 is a flowchart of the object gripping process of Example 1.
FIG. 8 is a flowchart of the second point calculation process of Example 1.
FIG. 9 is an explanatory diagram of the object shooting distance calculation process of Example 1.
FIG. 10 is an explanatory diagram of the process of calculating a plurality of second points in Example 1.
FIG. 11 is a flowchart of the second point calculation process of Example 2.
FIG. 12 is an explanatory diagram of the feature amount table of Example 3.
FIG. 13 is a flowchart of the second point calculation process of Example 3.
FIG. 14 is a sequence diagram of the gripping operation process of Example 4.
FIG. 1 is a configuration diagram of the mobile robot operation system of Example 1.

The mobile robot operation system includes a mobile robot control device 10 and at least one mobile robot 20. The mobile robot control device 10 and the mobile robot 20 are connected wirelessly.

The mobile robot control device 10 manages the mobile robot 20. For example, the mobile robot control device 10 transmits a conveyance instruction including article information of the object, object position information, and the like to the mobile robot 20, and manages the positions of the mobile robot 20 and the shelves 30 (see FIG. 2).

The mobile robot 20 takes the object out of the shelf 30 in which the object is stored. The taken-out object may be placed, for example, in a predetermined box or in another shelf 30, but the destination is not limited to these.

The mobile robot 20 includes an imaging unit 250 (see FIG. 2) that captures an area including an article as an image. The mobile robot 20 moves to a first point, which is a point from which the imaging unit 250 can capture the first object area 9999 including the object, and captures the entire first object area 9999 with the imaging unit 250. Then, the mobile robot 20 extracts, from the image taken at the first point, a second object area, which is an area where the object is highly likely to exist, and moves to a second point from which the imaging unit 250 can capture the entire second object area without including the whole of the first object area 9999. Then, the mobile robot 20 specifies the position of the object based on the image taken at the second point, and takes the object out of the shelf 30. Note that the mobile robot 20 holds information on the objects included in the first object area 9999 as prior information. Here, the first object area 9999 may be, for example, the entire shelf 30, one stage within the shelf 30, or a plurality of shelves 30. Hereinafter, in Examples 1 to 4, the first object area 9999 is described taking the entire shelf 30 as an example, and may therefore be referred to as the entire shelf 30. The imaging unit 250 is implemented as a camera, a distance image sensor, or a combination thereof; the imaging result is, correspondingly, an image having color information, a distance image having distance information, or a combination thereof. Hereinafter, in Examples 1 to 4, the imaging unit 250 is described taking a camera as an example, and may therefore be referred to as the camera 250; the imaging result is accordingly referred to simply as an image.
FIG. 2 is an explanatory diagram of a state in which the mobile robot 20 of Example 1 has reached the first point.

The shelf 30 is provided with a plurality of openings 32A and 32B partitioned by partition plates 31A to 31C. A box A33A and a box B33B are stored in the upper opening 32A of the shelf 30. The box A33A and the box B33B have the same color, and their color differs from that of the other boxes stored in the shelf 30. For example, it is assumed that the box A33A is the object.

The mobile robot 20 has two arms 26A and 26B, and a camera 250 is attached to one arm 26A. The tips of the two arms 26A and 26B are grippers capable of gripping articles and the like. When the mobile robot 20 reaches the first point, it takes an image of the entire shelf 30 with the camera 250.

FIG. 3 is an explanatory diagram of a state in which the mobile robot 20 of Example 1 has reached the second point.

From the image photographed at the first point, an area including the box A33A, which is the object, and the box B33B, which has the same color as the box A33A, is extracted as the second object area. The mobile robot 20 moves to the second point, from which the entire second object region can be imaged, and images the entire second object region with the camera 250.

From the image photographed at the second point, the mobile robot 20 can read more detailed information (for example, a barcode) than from the image photographed at the first point, and can specify the position of the object box A33A within the second object region. The mobile robot 20 operates the arm 26B, grips the box A33A with the gripper provided at the tip of the arm 26B, and takes the box A33A out of the shelf 30.
FIG. 4 is a hardware configuration diagram of the mobile robot 20 of Example 1.

The mobile robot 20 includes a controller 200, a power supply unit 205, a communication interface (IF) 240, a camera 250, an arm control motor 260, a gripper control motor 270, a travel motor 280, and a distance sensor 290. These are connected via a bus 295.

The controller 200 controls the operations of the camera 250, the arm control motor 260, the gripper control motor 270, and the travel motor 280.

The communication IF 240 is an interface for wirelessly communicating with the mobile robot control device 10 or another mobile robot 20.

The camera 250 is a device that captures images, and is attached to the arm 26A, for example, but is not limited thereto. The arm control motor 260 is a motor for operating the arm 26A and the arm 26B independently. The gripper control motor 270 is a motor for independently operating the grippers at the tips of the arms 26A and 26B. The travel motor 280 is a motor for moving the mobile robot 20. The distance sensor 290 is a sensor for measuring the distance to an obstacle, for example an infrared sensor. The power supply unit 205 is implemented as a battery and supplies power to the controller 200, the communication IF 240, the camera 250, the arm control motor 260, the gripper control motor 270, the travel motor 280, and the distance sensor 290.

The controller 200 matches the measurement results of the distance sensor 290 against the map data 222 to grasp its own position, controls the travel motor 280 to move to the first point and the second point, controls the camera 250 to take images at the first point and the second point, and controls the arm control motor 260 and the gripper control motor 270 to take the object out of the shelf 30.

The controller 200 includes a processor 210, a memory 220, and a secondary storage device 230. The processor 210 executes various arithmetic processes. The secondary storage device 230 is a non-volatile, non-transitory storage medium and stores various programs and various data. The memory 220 is a volatile, temporary storage medium; the various programs and various data stored in the secondary storage device 230 are loaded into the memory 220, and the processor 210 executes the programs loaded in the memory 220 and reads and writes the data loaded in the memory 220.
 The processor 210 includes a movement point calculation unit 211 and an operation control unit 216. The movement point calculation unit 211 calculates the first point and the second point, and includes a first point calculation unit 212, an object area extraction unit 213, a second point calculation unit 214, and an object position specifying unit 215. The operation control unit 216 controls the operations of the camera 250, the arm control motor 260, the gripper control motor 270, and the traveling motor 280.
 The first point calculation unit 212 calculates a first point from which the camera 250 can photograph the entire shelf 30. The object area extraction unit 213 extracts, from the image captured at the first point, a second object area in which the object may exist. The second point calculation unit 214 calculates a second point from which the entire second object area can be photographed and which is closer to the object than the first point. The object position specifying unit 215 specifies the position of the object from the image captured at the second point.
 The memory 220 stores a feature amount table 221, map data 222, and a parameter table 223. In the feature amount table 221, the feature amount and the detailed feature amount of each article are registered. The feature amount is referred to when extracting the second object area, and the detailed feature amount is referred to when specifying the position of the object. Details of the feature amount table 221 are described with reference to FIG. 6. In the map data 222, a map of the entire space in which the mobile robot 20 travels is registered; the map data 222 is distributed by the mobile robot control device 10. In the parameter table 223, the angle of view of the camera 250 and the like are registered.
 The memory 220 also stores a program corresponding to the movement point calculation unit 211 and a program corresponding to the operation control unit 216. The program corresponding to the movement point calculation unit 211 includes programs corresponding to the first point calculation unit 212, the object area extraction unit 213, the second point calculation unit 214, and the object position specifying unit 215. The processor 210 implements each of these units by executing the corresponding program stored in the memory 220.
 FIG. 5 is a hardware configuration diagram of the mobile robot control device 10 of the first embodiment. The mobile robot control device 10 includes a processor 510, a memory 520, a secondary storage device 530, and a wireless interface (IF) 540, which are connected to one another via a bus 550.
 The hardware operations of the processor 510, the memory 520, and the secondary storage device 530 are the same as those of the processor 210, the memory 220, and the secondary storage device 230 shown in FIG. 4, so their description is omitted. The wireless IF 540 is an interface for wireless communication with the mobile robot 20.
 The mobile robot control device 10 may also have an input device and an output device (not shown). The input device is, for example, a keyboard and a mouse, and the output device is, for example, a display.
 The mobile robot control device 10 determines the object, that is, the article to be taken out by the mobile robot 20, and transmits a conveyance instruction for the object to the mobile robot 20. The conveyance instruction includes, for example, article information on the object, storage frontage information indicating the position of the frontage 32 in which the object is stored, and a route to the shelf 30 in which the object is stored. The article information includes at least the identifier of the object; when the mobile robot 20 does not hold the feature amount table 221, it may also include the feature amount and the detailed feature amount of the object. The storage frontage information includes the position of the shelf 30 in which the object is stored and the position of the frontage 32 of that shelf in which the object is stored. When the mobile robot 20 can calculate the route to the shelf 30 from the storage frontage information included in the conveyance instruction, the route need not be included in the conveyance instruction.
 FIG. 6 is an explanatory diagram of the feature amount table 221 of the first embodiment.
 The feature amount table 221 includes an article ID 601, a feature amount 602, a detailed feature amount 603, and a detailed shooting distance 604.
 In the article ID 601, the identifier of an article is registered. In the feature amount 602, the feature amount of the article is registered. The feature amount includes, for example, at least one of a feature amount based on the color of the article, a feature amount based on the texture of the article, and a feature amount based on the edges of the article.
 In the detailed feature amount 603, the detailed feature amount of the article is registered. The detailed feature amount includes, for example, at least one of a feature amount based on character information included in the label of the article (for example, an article number) and a feature amount based on a barcode attached to the article. Whereas the feature amount characterizes the overall appearance of the article, the detailed feature amount characterizes information attached to a part of the article and can identify the article itself.
 In the detailed shooting distance 604, information that can specify the detailed shooting distance, that is, the distance from the object to the camera 250 required to acquire the detailed feature amount, is registered. The information registered in the detailed shooting distance 604 may be the distance itself from the object to the camera 250, or may be the number of pixels contained in a predetermined area of an image captured by the camera 250; any information from which the detailed shooting distance can be specified suffices.
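 As an illustration of how a record of the feature amount table 221 might be held in memory, the following is a minimal Python sketch; the field names, the dataclass, and the numeric values are assumptions for illustration only and are not part of the embodiment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeatureRecord:
    """One row of the feature amount table 221 (field names are illustrative)."""
    article_id: str                 # article ID 601
    feature: List[float]            # feature amount 602 (color/texture/edge based)
    detailed_feature: List[float]   # detailed feature amount 603 (label/barcode based)
    detailed_distance_m: float      # detailed shooting distance 604, here as meters

# Hypothetical table keyed by article ID; the values are made up for illustration.
feature_table = {
    "A33A": FeatureRecord("A33A", [0.12, 0.80, 0.33], [0.91, 0.04, 0.57], 0.5),
}
```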
 FIG. 7 is a flowchart of the object gripping process of the first embodiment.
 The object gripping process is the process from when the mobile robot 20 receives a conveyance instruction from the mobile robot control device 10 until it grips the object.
 First, the mobile robot 20 receives a conveyance instruction from the mobile robot control device 10 (701). As described with reference to FIG. 5, the conveyance instruction includes at least the article information of the object and the storage frontage information.
 Next, based on the angle of view of the camera 250 registered in the parameter table 223, the first point calculation unit 212 calculates a first point from which the entire shelf 30 in which the object is stored can be photographed (702). Specifically, the first point calculation unit 212 calculates, from its own position, the size (width, depth, and height) of the shelf 30 in which the object is stored, and the angle of view of the camera 250, the points from which the entire shelf 30 can be photographed, and selects any one of the calculated points as the first point. The size of the shelf 30 may be registered in advance in the parameter table 223 or may be included in the conveyance instruction. When a first point corresponding to the shelf 30 is registered in advance, the mobile robot 20 can identify the first point from the storage frontage information included in the conveyance instruction, so the movement point calculation unit 211 need not necessarily include the first point calculation unit 212.
 Next, the operation control unit 216 controls the traveling motor 280 while tracking its own position, and moves the robot to the first point calculated in step 702 (703).
 When the mobile robot 20 reaches the first point, the operation control unit 216 controls the camera 250 to photograph the shelf 30 in which the object is stored (704).
 Next, the object area extraction unit 213 calculates the feature amounts of all the pixels included in the image captured in step 704, and extracts, as the second object area, the pixels whose calculated feature amounts are within a first predetermined range of the feature amount registered in the record of the feature amount table 221 corresponding to the article ID of the object (705). As described with reference to FIG. 3, the second object area may include articles other than the object.
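 A minimal sketch of the extraction in step 705, assuming a per-pixel feature vector and a Euclidean distance threshold standing in for the "first predetermined range" (the embodiment fixes neither the feature type nor the metric, so both are assumptions):

```python
import numpy as np

def extract_second_object_area(features, target_feature, first_range):
    """Step 705 sketch: per-pixel feature match against the target article.

    features: (H, W, C) array of per-pixel feature vectors
    target_feature: (C,) feature registered for the object in table 221
    first_range: threshold playing the role of the 'first predetermined range'
    Returns a boolean (H, W) mask marking the second object area.
    """
    diff = np.linalg.norm(features - np.asarray(target_feature), axis=-1)
    return diff <= first_range
```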
 Next, the second point calculation unit 214 calculates a second point from which the entire second object area extracted in step 705 can be photographed and which is closer to the object than the first point (706). The second point calculation process is described in detail with reference to FIG. 8.
 Next, the operation control unit 216 controls the traveling motor 280 while tracking its own position, and moves the robot to the second point calculated in step 706 (707).
 When the mobile robot 20 reaches the second point, the operation control unit 216 controls the camera 250 to photograph the entire second object area (708).
 Next, the operation control unit 216 calculates the detailed feature amounts of all the pixels included in the image captured in step 708, extracts as the object the pixels whose calculated detailed feature amounts are within a predetermined range of the detailed feature amount registered in the record of the feature amount table 221 corresponding to the article ID of the object, and specifies the position of the object (709).
 Next, the operation control unit 216 controls the arm control motor 260 and the gripper control motor 270 to grip the object at the position specified in step 709 and take it out of the shelf 30 (710), and the process ends.
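 The control flow of FIG. 7 can be summarized in the following sketch; `robot` and its methods are hypothetical stand-ins for the units of FIG. 4 (first point calculation unit 212, object area extraction unit 213, second point calculation unit 214, object position specifying unit 215, operation control unit 216), not an interface defined by the embodiment.

```python
def grip_object(robot, instruction):
    """Steps 701-710 of FIG. 7 as a control-flow sketch (assumed interface)."""
    first_point = robot.calc_first_point(instruction)      # step 702
    robot.move_to(first_point)                             # step 703
    shelf_image = robot.shoot()                            # step 704
    area = robot.extract_second_object_area(shelf_image)   # step 705
    second_points = robot.calc_second_points(area)         # step 706
    for point in second_points:                            # one or more shots
        robot.move_to(point)                               # step 707
        detail_image = robot.shoot()                       # step 708
        position = robot.locate_object(detail_image)       # step 709
        if position is not None:
            robot.grip_and_take_out(position)              # step 710
            return True
    return False
```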
 FIG. 8 is a flowchart of the second point calculation process of the first embodiment.
 First, the second point calculation unit 214 obtains, as the detailed shooting distance (D1), the distance registered in the detailed shooting distance 604 of the record of the feature amount table 221 whose article ID 601 matches the article ID of the object included in the received conveyance instruction (801).
 Next, the second point calculation unit 214 calculates, as the object area shooting distance (D2), the closest distance to the object from which the entire second object area extracted in step 705 can be photographed (802). The calculation of the object area shooting distance (D2) is explained with reference to FIG. 9. FIG. 9 is an explanatory diagram of the object area shooting distance calculation process of the first embodiment.
 In FIG. 9, the length of the side of the second object area closest to the camera 250 is W, the angle of view of the camera 250 is A, and the object area shooting distance to be calculated is D2. The object area shooting distance is obtained by calculating Equation 1. Note that W may be a value obtained by adding a predetermined value to the length of the closest side.
 $D_2 = \dfrac{W}{2\tan(A/2)}$   … (Equation 1)
 Note that the object area shooting distance may be a value obtained by adding a predetermined value to the value calculated by Equation 1.
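 Equation 1 follows from the camera geometry by simple trigonometry; a minimal sketch, with units and example values assumed:

```python
import math

def object_area_shooting_distance(w, angle_of_view_rad):
    """Equation 1: the closest distance D2 at which a side of length W
    just fits within the horizontal angle of view A."""
    return w / (2.0 * math.tan(angle_of_view_rad / 2.0))

# Example (values assumed): a 0.9 m wide area with a 60-degree angle of view
d2 = object_area_shooting_distance(0.9, math.radians(60))   # roughly 0.78 m
```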
 Next, the second point calculation unit 214 determines whether the object area shooting distance (D2) calculated in step 802 is greater than or equal to the detailed shooting distance (D1) obtained in step 801 (803).
 When it is determined in step 803 that the object area shooting distance (D2) is smaller than the detailed shooting distance (D1) (803: NO), the second point calculation unit 214 calculates, as the second point, a point whose distance from the midpoint of the side of the second object area closest to the camera 250 is greater than or equal to the object area shooting distance (D2) and less than or equal to the detailed shooting distance (D1) (804), and the second point calculation process ends.
 On the other hand, when it is determined in step 803 that the object area shooting distance (D2) is greater than or equal to the detailed shooting distance (D1) (803: YES), the second point calculation unit 214 calculates, based on the angle of view of the camera 250, the range (W1) that can be photographed from the detailed shooting distance (D1), and divides the length (W) of the side of the second object area closest to the camera 250 by the calculated range (W1) to obtain the number of shots required to photograph the entire second object area (805). Specifically, the second point calculation unit 214 sets the number of shots to the quotient of W divided by W1 plus one if a remainder exists, and to the quotient itself if no remainder exists.
 Next, the second point calculation unit 214 calculates the position of the camera 250 for each shot as a second point (806), and the second point calculation process ends.
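 Steps 805 and 806 amount to a ceiling division of W by the coverable width W1; a sketch under that reading (the detailed shooting distance D1 is used as the shooting distance, as discussed above):

```python
import math

def shots_required(w, d1, angle_of_view_rad):
    """Step 805 sketch: shots needed to cover width W when each shot taken
    at the detailed shooting distance D1 covers W1 = 2 * D1 * tan(A / 2)
    (quotient of W / W1, plus one when a remainder exists)."""
    w1 = 2.0 * d1 * math.tan(angle_of_view_rad / 2.0)   # Equation 2
    quotient, remainder = divmod(w, w1)
    return int(quotient) + (1 if remainder > 0 else 0)
```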
 FIG. 10 is an explanatory diagram of the process of calculating a plurality of second points in the first embodiment.
 First, the range (W1) that can be photographed from the detailed shooting distance (D1) is obtained by calculating Equation 2.
 $W_1 = 2 D_1 \tan(A/2)$   … (Equation 2)
 The second point for the first shot is a point whose distance from the position W1/2 away from the end of the second object area closer to the mobile robot 20, measured along the side of the second object area, is less than or equal to the detailed shooting distance (D1).
 The second point for the second shot is the position offset from the first second point by W1 in the direction parallel to the side of the second object area. In this way, the position obtained by adding W1 to the position of the previous second point is taken as the next second point, and second points are generated until the entire width (W) of the second object area has been photographed.
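 Under the same assumptions, the per-shot camera positions of FIG. 10 along the near side of the second object area can be sketched as one-dimensional offsets from the near end (the W1/2 starting offset follows the description above):

```python
import math

def second_point_offsets(w, d1, angle_of_view_rad):
    """FIG. 10 sketch: shot centers along the near side of the second
    object area, starting W1/2 from the near end and spaced W1 apart."""
    w1 = 2.0 * d1 * math.tan(angle_of_view_rad / 2.0)   # Equation 2
    offsets, x = [], w1 / 2.0
    while x - w1 / 2.0 < w:          # stop once the whole width W is covered
        offsets.append(x)
        x += w1
    return offsets
```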
 According to this embodiment, the mobile robot 20 photographs the entire shelf 30 at the first point, moves to a second point from which it can photograph an area containing the second object area, extracted from the image captured at the first point, in which the object may exist, photographs the second object area at the second point, and takes the object out of the shelf based on the position of the object specified from the image captured at the second point. As a result, even when a plurality of articles are present, the object can be identified among them and taken out by the mobile robot.
 Also according to this embodiment, when the object area shooting distance is greater than or equal to the detailed shooting distance, a plurality of points satisfying the detailed shooting distance are calculated as second points in order to photograph the entire second object area; when the object area shooting distance is smaller than the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a point satisfying the object area shooting distance is calculated as the second point. The second object area can thus be photographed at a distance at which the detailed feature amount can be calculated, and the object can be identified accurately.
 Embodiment 2 is described with reference to FIG. 11. In embodiment 2, even when the object area shooting distance is greater than or equal to the detailed shooting distance, if an area in which only the object exists can be identified within the second object area, a point from which that entire object-only area can be photographed is calculated as the second point. Since articles that are not the object are not photographed, the processing load of the mobile robot 20 can be reduced.
 FIG. 11 is a flowchart of the second point calculation process of embodiment 2. In FIG. 11, the steps that are the same as in the second point calculation process of embodiment 1 shown in FIG. 8 are given the same reference numerals, and their description is omitted.
 When it is determined in step 803 that the object area shooting distance (D2) is greater than or equal to the detailed shooting distance (D1) (803: YES), the second point calculation unit 214 extracts, as the area in which only the object exists, the pixels of the second object area whose feature amounts are within a second predetermined range of the feature amount registered in the record of the feature amount table 221 corresponding to the article ID of the object, and determines whether such an area was extracted (1101). The second predetermined range is narrower than the first predetermined range used in step 705.
 When it is determined in step 1101 that no area in which only the object exists was extracted (1101: NO), the second point calculation unit 214 executes steps 805 and 806 as in embodiment 1.
 On the other hand, when it is determined in step 1101 that an area in which only the object exists was extracted (1101: YES), the second point calculation unit 214 calculates the object shooting distance (D3), that is, the distance from which the entire object in the extracted area can be photographed (1102).
 Next, the second point calculation unit 214 determines whether the object shooting distance (D3) calculated in step 1102 is greater than or equal to the detailed shooting distance (D1) obtained in step 801 (1103).
 When it is determined in step 1103 that the object shooting distance (D3) is smaller than the detailed shooting distance (D1) (1103: NO), the second point calculation unit 214 calculates, as the second point, a point whose distance from the midpoint of the side of the object-only area closest to the camera 250 is greater than or equal to the object shooting distance (D3) and less than or equal to the detailed shooting distance (D1) (1104), and the second point calculation process ends.
 On the other hand, when it is determined in step 1103 that the object shooting distance (D3) is greater than or equal to the detailed shooting distance (D1) (1103: YES), the second point calculation unit 214 executes steps 805 and 806. In this case, the number of shots and the second points are calculated so as to photograph not the entire second object area but the entire area in which only the object exists.
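 The decision structure of FIG. 11 can be summarized as follows; the distances D1, D2, and D3 are as in the text, while the function and its return encoding are illustrative assumptions:

```python
def plan_second_points(d1, d2, d3=None):
    """FIG. 11 sketch (embodiment 2): how the second point(s) are chosen.

    d1: detailed shooting distance; d2: object area shooting distance;
    d3: object shooting distance for the object-only area, or None when no
    such area was extracted in step 1101.
    """
    if d2 < d1:
        return ("one_shot_of_area", d2, d1)     # step 804: distance in [D2, D1]
    if d3 is None:
        return ("multi_shot_of_area", d1)       # steps 805-806
    if d3 < d1:
        return ("one_shot_of_object", d3, d1)   # step 1104: distance in [D3, D1]
    return ("multi_shot_of_object", d1)         # steps 805-806 on the object only
```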
 As described above, since articles that are not the object are not photographed, the processing load of the mobile robot 20 can be reduced. Moreover, since the number of shots can also be reduced, the object can be taken out of the shelf 30 sooner.
 Embodiment 3 is described with reference to FIGS. 12 and 13. In embodiment 3, even when the detailed shooting distance is smaller than the object area shooting distance, if an area in which only the object exists can be identified within the second object area, a point separated by the detailed shooting distance from the position that must be read in order to calculate the detailed feature amount contained in that area is calculated as the second point. Since areas irrelevant to the calculation of the detailed feature amount are not photographed, the processing load of the mobile robot 20 can be reduced.
 FIG. 12 is an explanatory diagram of the feature amount table 221 of embodiment 3.
 The feature amount table 221 of embodiment 3 includes, in addition to the article ID 601, the feature amount 602, the detailed feature amount 603, and the detailed shooting distance 604, a detailed feature amount reading position 1201.
 In the detailed feature amount reading position 1201, information on the position, within the article, of the area (reading area) from which the detailed feature amount is calculated is registered. For example, the area of the face containing the reading area and the position of the reading area on that face are registered.
 FIG. 13 is a flowchart of the second point calculation process of embodiment 3. In FIG. 13, the steps that are the same as in the second point calculation process of embodiment 1 shown in FIG. 8 and the second point calculation process of embodiment 2 shown in FIG. 11 are given the same reference numerals, and their description is omitted.
 When it is determined in step 1101 that an area in which only the object exists was extracted (1101: YES), the second point calculation unit 214 obtains the information registered in the detailed feature amount reading position 1201 of the record of the feature amount table 221 whose article ID 601 matches the identifier of the object (1301).
 Next, based on the information obtained in step 1301, the second point calculation unit 214 identifies the area corresponding to the reading area within the object-only area (1302). Specifically, the second point calculation unit 214 calculates the area of the object-only region in the image captured at the first point and converts the calculated area into an actual area based on the distance between the first point and that region. When the converted area is within a predetermined range of the area of the face containing the reading area included in the information obtained in step 1301, the second point calculation unit 214 judges that the object-only region is the face containing the reading area, and identifies within it the area corresponding to the reading area registered in the detailed feature amount reading position 1201.
 Next, the second point calculation unit 214 calculates, as the second point, a point from which the entire area corresponding to the reading area identified in step 1302 can be photographed (1303), and the second point calculation process ends. The reading area is often, for example, a label on which a barcode or the like is printed, and the minimum distance from which the entire area corresponding to the reading area can be photographed is normally smaller than the detailed shooting distance, so the second point calculation unit 214 may calculate, as the second point, a point whose distance is greater than or equal to that minimum distance and less than or equal to the detailed shooting distance.
 When the minimum distance from which the entire area corresponding to the reading area can be photographed is greater than or equal to the detailed shooting distance, the second point calculation unit 214 may calculate, as in steps 805 and 806, a plurality of points whose distances are less than or equal to the detailed shooting distance as second points.
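 For embodiment 3, the admissible shooting distance thus reduces to the closest distance at which the reading region fills the field of view, bounded above by the detailed shooting distance; a sketch, with the reading-region width as an assumed parameter:

```python
import math

def reading_area_distance(reading_width, d1, angle_of_view_rad):
    """Step 1303 sketch: admissible shooting distances for the reading area
    (e.g., a barcode label), whose width is an assumed parameter.
    Returns (d_min, d1) when one shot suffices, or None to fall back to
    the multi-shot handling of steps 805 and 806."""
    d_min = reading_width / (2.0 * math.tan(angle_of_view_rad / 2.0))
    return (d_min, d1) if d_min <= d1 else None
```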
 As described above, since areas irrelevant to the calculation of the detailed feature amount are not photographed, the processing load of the mobile robot 20 can be reduced. Moreover, since the number of shots can also be reduced, the object can be taken out of the shelf 30 sooner.
 Embodiment 4 is described with reference to FIG. 14. In this embodiment, the mobile robot control device 10 calculates the first point and the second point. The mobile robot 20 does not have the movement point calculation unit 211 or the feature amount table 221; instead, the processor 510 of the mobile robot control device 10 has the movement point calculation unit 211, and the memory 520 of the mobile robot control device 10 stores the feature amount table 221 and the parameter table 223 of each mobile robot 20.
 FIG. 14 is a sequence diagram of the gripping operation process of embodiment 4. In FIG. 14, the steps that are the same as those shown in FIG. 7 are given the same reference numerals, and their description is omitted.
 First, when the mobile robot control device 10 causes the mobile robot 20 to take an object out of the shelf 30, the first point calculation unit 212 calculates the first point in step 702 and transmits a first point movement instruction including the calculated first point to the mobile robot 20 (1401).
 When the mobile robot 20 receives the first point movement instruction, the operation control unit 216 controls the traveling motor 280 to move to the first point in step 703 and controls the camera 250 in step 704 to photograph the entire shelf 30 in which the object is stored. The mobile robot 20 then transmits the shelf image captured at the first point to the mobile robot control device 10 (1402).
 When the mobile robot control device 10 receives the shelf image, the object area extraction unit 213 extracts the second object area in step 705, the second point calculation unit 214 calculates the second point in step 706, and the mobile robot control device 10 transmits a second point movement instruction including the calculated second point to the mobile robot 20 (1403).
 When the mobile robot 20 receives the second point movement instruction, the operation control unit 216 controls the traveling motor 280 to move to the second point in step 707 and controls the camera 250 in step 708 to photograph the entire second object area. The mobile robot 20 then transmits the detailed image captured at the second point to the mobile robot control device 10 (1404).
 When the mobile robot control device 10 receives the detailed image, the object position specifying unit 215 specifies the position of the object from the received detailed image in step 709 and transmits an object gripping instruction including the specified position to the mobile robot 20 (1405).
 When the mobile robot 20 receives the object gripping instruction, the operation control unit 216 controls the arm control motor 260 and the gripper control motor 270 in step 710 to grip the object at the position included in the received instruction and take it out of the shelf 30.
 As described above, since the mobile robot control device 10 calculates the first point and the second point and specifies the position of the object, the processing load of the mobile robot 20 can be reduced.
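 The message sequence of FIG. 14, seen from the control device side, can be sketched as follows; `link` (a message channel to the robot) and `ctrl` (standing in for the units 212-215 on the control device) are hypothetical interfaces used only for illustration:

```python
def control_device_sequence(link, ctrl):
    """FIG. 14 sketch (embodiment 4): the control device does the computing,
    the robot only moves, shoots, and grips."""
    link.send(("move_to_first_point", ctrl.calc_first_point()))        # 1401
    shelf_image = link.recv()                                          # 1402
    area = ctrl.extract_second_object_area(shelf_image)                # step 705
    link.send(("move_to_second_point", ctrl.calc_second_point(area)))  # 1403
    detail_image = link.recv()                                         # 1404
    link.send(("grip_object", ctrl.locate_object(detail_image)))       # 1405
```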
 In embodiments 1 to 4, the mobile robot 20 photographs the entire shelf 30 at the first point and moves to the second point to photograph the second object area, but the invention is not limited to this. For example, the mobile robot 20 may photograph the entire shelf 30 at a first focal length at which the camera 250 can capture the entire shelf 30, and photograph the second object area at a second focal length at which the second object area can be captured without including the entire shelf 30; the second focal length is longer than the first focal length. Since a wide angle of view corresponds to a short focal length and a narrow angle of view to a long focal length, the first and second focal lengths may also be set by adjusting the angle of view: the mobile robot 20 may photograph the entire shelf 30 at a first angle of view at which the camera 250 can capture the entire shelf 30, and photograph the second object area at a second angle of view at which the second object area can be captured without including the entire shelf 30, the second angle of view being narrower than the first. For this reason, in the claims, the concept including the first point, the first focal length, the first angle of view, and the like is described as the first condition, and the concept including the second point, the second focal length, the second angle of view, and the like is described as the second condition. The first point calculation unit 212 corresponds to the first condition setting unit, and the second point calculation unit 214 corresponds to the second condition setting unit.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments have been described in detail in order to explain the invention clearly, and the invention is not necessarily limited to configurations having all of the described elements. Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of a given embodiment. For part of the configuration of each embodiment, other configurations can be added, deleted, or substituted.
 Each of the above configurations, functions, processing units, processing means, and the like may be realized in hardware, for example by designing part or all of them as an integrated circuit.
 Each of the above configurations, functions, and the like may also be realized in software, by a processor interpreting and executing a program that implements the corresponding function.
 Information such as the programs, tables, and files that implement each function can be stored in a memory, in a recording device such as a hard disk or an SSD (Solid State Drive), or on a recording medium such as an IC (Integrated Circuit) card, an SD card, or a DVD (Digital Versatile Disc).
 The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines of an actual product are necessarily shown. In practice, almost all components may be regarded as interconnected.
 The first object area 9999 containing the object, which is held as prior information, may also be generated during operation of the mobile robot 20. For example, in the same way that the object area extraction unit 213 extracts the second object area, the mobile robot 20 may include a prior object area extraction unit 9998 that extracts the first object area 9999 from an image in which the imaging unit 250 has captured a wider area containing the first object area 9999; the mobile robot 20 may first use the prior object area extraction unit 9998 to extract the first object area 9999 from the image captured by the imaging unit 250. For example, the shelf 30 may be extracted from the image based on a model of the shelf 30 that the mobile robot 20 holds as prior information, and used as the first object area 9999.

Claims (14)

  1.  A mobile robot operation system comprising a mobile robot that moves to a shelf in which articles are stored and takes out an object stored in the shelf, and a mobile robot control device that controls the mobile robot, wherein
     the mobile robot has an imaging unit that captures an area including articles as an image,
     the mobile robot operation system further comprises:
     an object area extraction unit that extracts, from an image captured under a first condition under which a first object area including the object can be photographed, a second object area in which the object may exist;
     a second condition setting unit that sets a second condition under which the second object area extracted by the object area extraction unit can be photographed without including the entire first object area; and
     an object position specifying unit that identifies the object from an image captured under the second condition and specifies the position of the object, and
     the mobile robot photographs the first object area under the first condition, photographs the second object area under the second condition set by the second condition setting unit, and takes the object out of the shelf based on the position of the object specified by the object position specifying unit.
  2.  The mobile robot operation system according to claim 1, wherein
     the first condition is photographing from a first point from which the first object area can be photographed,
     the second condition is photographing from a second point from which the second object area can be photographed and which is closer to the object than the first point, and
     the mobile robot moves to the first point included in the first condition and photographs the first object area, and moves from the first point to the second point included in the second condition and photographs the second object area.
  3.  The mobile robot operation system according to claim 2, wherein
     the system holds feature amount information in which a feature amount of the object, a detailed feature amount, and detailed shooting distance specifying information capable of specifying a detailed shooting distance are registered,
     the detailed shooting distance is the distance from the imaging unit to the object required to calculate the detailed feature amount,
     the object area extraction unit extracts, as pixels included in the second object area, pixels of the image captured at the first point whose feature amounts are within a first predetermined range of the feature amount of the object registered in the feature amount information,
     the second condition setting unit calculates an object area shooting distance indicating the closest distance from which the entire second object area can be photographed, calculates, when the object area shooting distance is greater than or equal to the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a plurality of points whose distances to the second object area are less than or equal to the detailed shooting distance as second points in order to photograph the entire second object area, and calculates, when the object area shooting distance is smaller than the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a point whose distance to the second object area is greater than or equal to the object area shooting distance and less than or equal to the detailed shooting distance as the second point, and
     the object position specifying unit identifies, as the object, pixels of the image captured at the second point whose detailed feature amounts are within a predetermined range of the detailed feature amount of the object registered in the feature amount information, and specifies the position of the identified object.
  4.  The mobile robot operation system according to claim 3, wherein
     the second condition setting unit calculates, when an area in which only the object exists can be identified within the second object area, an object shooting distance indicating the closest distance from which that entire object-only area can be photographed, calculates, when the object shooting distance is greater than or equal to the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a plurality of points whose distances are less than or equal to the detailed shooting distance as second points in order to photograph the area in which only the object exists, and calculates, when the object shooting distance is smaller than the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a point whose distance is greater than or equal to the object shooting distance and less than or equal to the detailed shooting distance as the second point.
  5.  The mobile robot operation system according to claim 4, wherein
     the second condition setting unit extracts, as pixels included in the area in which only the object exists, pixels of the second object area whose feature amounts are within a second predetermined range of the feature amount of the object registered in the feature amount information, and
     the second predetermined range is narrower than the first predetermined range.
  6.  The mobile robot operation system according to claim 3, wherein
     in the feature amount information, information on the position, within the object, of a reading area, that is, the area of the object from which the detailed feature amount is calculated, is registered, and
     the second condition setting unit, when an area in which only the object exists can be identified within the second object area, refers to the feature amount information, identifies the area corresponding to the reading area within the object-only area, and calculates a point from which the identified area can be photographed as the second point.
  7.  The mobile robot operation system according to claim 1, wherein
     each of the object area extraction unit, the second condition setting unit, and the object position specifying unit is arranged in the mobile robot or in the mobile robot control device.
  8.  A mobile robot that moves to a shelf in which articles are stored and takes out an object stored in the shelf, wherein
     the mobile robot includes an imaging unit, and
     the mobile robot photographs a first object area in which the object is stored under a first condition under which the first object area can be photographed, photographs an area including a second object area, in which an object extracted from the image captured under the first condition may exist, under a second condition under which the second object area can be photographed without including the entire first object area, and takes the object out of the shelf based on the position of the object specified from the image captured under the second condition.
  9.  The mobile robot according to claim 8, wherein
     the first condition is photographing from a first point from which the first object area can be photographed,
     the second condition is photographing from a second point from which the second object area can be photographed and which is closer to the object than the first point, and
     the mobile robot moves to the first point included in the first condition and photographs the first object area, and moves from the first point to the second point included in the second condition and photographs the second object area.
  10.  The mobile robot according to claim 9, wherein
     the mobile robot holds feature amount information in which a feature amount of the object, a detailed feature amount, and detailed shooting distance specifying information capable of specifying a detailed shooting distance are registered,
     the detailed shooting distance is the distance from the imaging unit to the object required to calculate the detailed feature amount, and
     the mobile robot calculates a feature amount for each pixel included in the image captured at the first point and extracts, as pixels included in the second object area, pixels whose calculated feature amounts are within a first predetermined range of the feature amount of the object registered in the feature amount information, calculates an object area shooting distance indicating the closest distance from which the entire second object area can be photographed, calculates, when the object area shooting distance is greater than or equal to the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a plurality of points whose distances to the second object area are less than or equal to the detailed shooting distance as second points in order to photograph the entire second object area, calculates, when the object area shooting distance is smaller than the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a point whose distance to the second object area is greater than or equal to the object area shooting distance and less than or equal to the detailed shooting distance as the second point, identifies, as the object, pixels of the image captured at the second point whose detailed feature amounts are within a predetermined range of the detailed feature amount of the object registered in the feature amount information, and specifies the position of the identified object.
  11.  The mobile robot according to claim 10, wherein
     the mobile robot calculates, when an area in which only the object exists can be identified within the second object area, an object shooting distance indicating the closest distance from which that entire object-only area can be photographed, calculates, when the object shooting distance is greater than or equal to the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a plurality of points whose distances are less than or equal to the detailed shooting distance as second points in order to photograph the area in which only the object exists, and calculates, when the object shooting distance is smaller than the detailed shooting distance specified by the detailed shooting distance specifying information of the object, a point whose distance is greater than or equal to the object shooting distance and less than or equal to the detailed shooting distance as the second point.
  12. The mobile robot according to claim 10, wherein
     information on the position, within the object, of a reading area, which is the area of the object from which the detailed feature amount is calculated, is registered in the feature amount information, and
     the mobile robot, when an area in which only the object exists can be specified from the second object area, refers to the feature amount information, specifies the area corresponding to the reading area within the area in which only the object exists, and calculates, as the second point, a point from which the specified area can be captured.
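The reading-area lookup of claim 12 amounts to mapping a region stored relative to the object onto the detected object-only region. A minimal sketch, with the rectangle representation and the fractional encoding of the reading area assumed here for illustration:

```python
# Hypothetical sketch: place the registered reading area (e.g. a label)
# inside the detected object-only region. Rectangles are
# (x, y, width, height); the reading area is assumed stored as
# fractions of the object's extent in the feature amount information.

def reading_area_in_image(object_region, reading_area_fractions):
    ox, oy, ow, oh = object_region           # detected object-only region
    fx, fy, fw, fh = reading_area_fractions  # each in [0, 1]
    return (ox + fx * ow, oy + fy * oh, fw * ow, fh * oh)

# e.g. a label covering the lower-right quarter of the object:
label_region = reading_area_in_image((100, 50, 40, 80),
                                     (0.5, 0.5, 0.5, 0.5))
```

The second point is then chosen so that this region, rather than the whole object, fills the image.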
  13. An object take-out method for taking out an object stored in a shelf by a mobile robot that moves to the shelf in which articles are stored, wherein
     the mobile robot has an imaging unit, and
     the object take-out method comprises:
     the mobile robot controlling the imaging unit to capture a first object area, in which the object is stored, under a first condition capable of capturing the first object area;
     the mobile robot controlling the imaging unit to capture an area including a second object area, which is extracted from the image captured under the first condition and in which the object may exist, under a second condition capable of capturing the second object area without including the first object area; and
     the mobile robot taking out, from the shelf, the object at the position of the object specified from the image captured under the second condition.
  14. The object take-out method according to claim 13, wherein
     the first condition is to capture the first object area from a first point from which the first object area can be captured,
     the second condition is to capture the second object area from a second point from which the second object area can be captured and which is closer to the object than the first point, and
     the object take-out method comprises:
     the mobile robot moving to the first point included in the first condition and controlling the imaging unit to capture the first object area; and
     the mobile robot moving from the first point to the second point included in the second condition and controlling the imaging unit to capture the second object area.
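Read together, claims 13 and 14 describe a coarse-to-fine procedure: one wide shot from the first point, one close shot from the second point, then the pick. The sketch below is illustrative only; every method on the `robot` object is a hypothetical stand-in introduced here, not an API from the patent.

```python
# Hypothetical end-to-end flow of claims 13 and 14.

def take_out_object(robot, shelf, target):
    # Claim 14: move to a first point from which the entire first
    # object area (the shelf region storing the target) is visible.
    first_point = robot.plan_first_point(shelf, target)
    robot.move_to(first_point)
    wide_image = robot.capture()

    # Claim 13: extract the second object area, in which the target
    # may exist, from the wide image.
    candidate_area = robot.extract_candidate_area(wide_image, target)

    # Claim 14: move closer, to a second point from which the
    # candidate area is captured without the first object area.
    robot.move_to(robot.plan_second_point(candidate_area, first_point))
    close_image = robot.capture()

    # Claim 13: identify the target in the close image and pick it.
    position = robot.identify(close_image, target)
    robot.pick_from_shelf(position)
```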
PCT/JP2015/073694 2015-08-24 2015-08-24 Mobile robot operation system, mobile robot, and object take-out method WO2017033254A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2017501748A JP6293968B2 (en) 2015-08-24 2015-08-24 Mobile robot operation system, mobile robot, and object extraction method
PCT/JP2015/073694 WO2017033254A1 (en) 2015-08-24 2015-08-24 Mobile robot operation system, mobile robot, and object take-out method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/073694 WO2017033254A1 (en) 2015-08-24 2015-08-24 Mobile robot operation system, mobile robot, and object take-out method

Publications (1)

Publication Number Publication Date
WO2017033254A1

Family

ID=58099693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/073694 WO2017033254A1 (en) 2015-08-24 2015-08-24 Mobile robot operation system, mobile robot, and object take-out method

Country Status (2)

Country Link
JP (1) JP6293968B2 (en)
WO (1) WO2017033254A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60123974A (en) * 1983-12-09 1985-07-02 Hitachi Ltd Multistage visual field recognizing system
JP2004338889A (en) * 2003-05-16 2004-12-02 Hitachi Ltd Image recognition device
JP2006102881A (en) * 2004-10-06 2006-04-20 Nagasaki Prefecture Gripping robot device
JP2009217456A (en) * 2008-03-10 2009-09-24 Toyota Motor Corp Landmark device and control system for mobile robot

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019018250A (en) * 2017-07-11 2019-02-07 ファナック株式会社 Programming device for generating operation program, and program generating method
US10434650B2 (en) 2017-07-11 2019-10-08 Fanuc Corporation Programming device which generates operation program and method for generating program
CN107363834A (en) * 2017-07-20 2017-11-21 电子科技大学 A kind of mechanical arm grasping means based on cognitive map
CN107363834B (en) * 2017-07-20 2020-09-29 电子科技大学 Mechanical arm grabbing method based on cognitive map
DE112021003349T5 (en) 2020-06-24 2023-04-06 Fanuc Corporation robotic system

Also Published As

Publication number Publication date
JPWO2017033254A1 (en) 2017-08-31
JP6293968B2 (en) 2018-03-14

Similar Documents

Publication Publication Date Title
WO2017163714A1 (en) Projection instruction device, parcel sorting system, and projection instruction method
WO2018077011A1 (en) Visual identification system and method thereof, and classifying and sorting system and method thereof
JP5911299B2 (en) Information processing apparatus, information processing apparatus control method, and program
WO2020034872A1 (en) Target acquisition method and device, and computer readable storage medium
WO2017163710A1 (en) Instruction projecting device, package sorting system and instruction projecting method
CN108108655B (en) Article identification device, control method and terminal equipment
JP2018169403A5 (en)
CN110520259B (en) Control device, pickup system, logistics system, storage medium, and control method
WO2017163709A1 (en) Instruction projecting device, package sorting system and instruction projecting method
CN110494259B (en) Control device, pickup system, logistics system, program, control method, and production method
JP6293968B2 (en) Mobile robot operation system, mobile robot, and object extraction method
CN109955244B (en) Grabbing control method and device based on visual servo and robot
WO2017163711A1 (en) Projection indicator, cargo assortment system, and projection indicating method
CN109454004B (en) Robot scanning and sorting system and method
CN114820781A (en) Intelligent carrying method, device and system based on machine vision and storage medium
CN109213202A (en) Cargo arrangement method, device, equipment and storage medium based on optical servo
CN110621451A (en) Information processing apparatus, pickup system, logistics system, program, and information processing method
JP6888349B2 (en) Accumulation management device, integration management method, program, integration management system
Pan et al. Manipulator package sorting and placing system based on computer vision
JP2011203936A (en) Characteristic point extraction device, operation teaching device using the same and operation processor
WO2017163713A1 (en) Projection instruction device, parcel sorting system, and projection instruction method
JP7021620B2 (en) Manipulators and mobile robots
JP2020114778A (en) Projection instruction device, goods assort system and projection instruction method
JP2021016910A (en) Object recognition device for picking or devanning, object recognition method for picking or devanning, and program
WO2017163712A1 (en) Projection instruction device, parcel sorting system, and projection instruction method

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2017501748

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15902225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15902225

Country of ref document: EP

Kind code of ref document: A1