CN115884855A - Robot system - Google Patents

Robot system

Info

Publication number
CN115884855A
CN115884855A (Application No. CN202180044393.4A)
Authority
CN
China
Prior art keywords
search range
robot
feature amount
image
pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180044393.4A
Other languages
Chinese (zh)
Inventor
井航太
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fanuc Corp
Original Assignee
Fanuc Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fanuc Corp filed Critical Fanuc Corp
Publication of CN115884855A
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1612Programme controls characterised by the hand, wrist, grip control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1687Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40006Placing, palletize, un palletize, paper roll placing, box stacking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Manipulator (AREA)

Abstract

The present invention provides a robot system capable of appropriately taking out an object that can deform depending on its placement state. The disclosed robot system includes: an imaging device that images an object; a robot that takes out the object; an image processing device that determines the position of the object based on the image captured by the imaging device; and a robot control device that causes the robot to take out the object whose position has been determined by the image processing device. The image processing device includes: a teaching unit that sets a search range of a feature amount within which the pattern of the object in the captured image of the imaging device can be acquired; a recognition unit that extracts, from the captured image of the imaging device, a pattern having a feature amount within the search range; and a correction unit that corrects the search range based on the feature amount of the pattern extracted by the recognition unit.

Description

Robot system
Technical Field
The present invention relates to a robot system.
Background
Robot systems in which a robot takes out a plurality of objects one by one are in practical use. For example, as described in Patent Document 1, the following technique is known: even when the arrangement of the objects is not fixed, the region where the objects are placed is imaged, and a pattern matching the features of the object is extracted from the captured image to determine the object's position, so that the object can be taken out appropriately.
Prior Art Documents
Patent Documents
Patent Document 1: International Publication No. 2017/033254
Disclosure of Invention
Problems to be solved by the invention
When the object is, for example, a bag package, it can deform depending on its placement state. If the deformation becomes large, the pattern of the object in the captured image (the shape recognized by image processing) no longer matches the taught pattern, and the presence of the object may not be detected. For example, when a plurality of bag packages are stacked on a tray, the bag packages placed on the lower layers are pressed from above and deform so that their planar size increases. Therefore, if the robot system is taught the shape of a bag package placed in an uncompressed state, the lower bag packages may not be taken out properly. A robot system that can appropriately take out an object that can deform depending on its placement state is therefore desired.
Means for solving the problems
The robot system of the present disclosure includes: an imaging device that images an object; a robot that takes out the object; an image processing device that determines the position of the object based on the image captured by the imaging device; and a robot control device that causes the robot to take out the object whose position has been determined by the image processing device. The image processing device includes: a teaching unit that sets a search range of a feature amount within which the pattern of the object in the captured image of the imaging device can be acquired; a recognition unit that extracts, from the captured image of the imaging device, the pattern having a feature amount within the search range; and a correction unit that corrects the search range based on the feature amount of the pattern extracted by the recognition unit.
Effects of the invention
According to the robot system of the present disclosure, an object that can deform depending on its placement state can be taken out appropriately.
Drawings
Fig. 1 is a schematic diagram showing a configuration of a robot system according to an embodiment of the present disclosure.
Fig. 2 is a diagram illustrating a captured image of the robot system of fig. 1.
Fig. 3 is a flowchart showing a procedure of teaching processing of the robot system of fig. 1.
Fig. 4 is a flowchart showing a procedure of an object extracting process of the robot system of fig. 1.
Fig. 5 is a diagram illustrating feature amounts of the objects of the Nth layer taken out by the robot system of Fig. 1.
Fig. 6 is a diagram illustrating feature amounts of the objects of the (N-1)th layer taken out by the robot system of Fig. 1.
Fig. 7 is a diagram illustrating feature amounts of the objects of the (N-2)th layer taken out by the robot system of Fig. 1.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. Fig. 1 is a schematic diagram showing a configuration of a robot system 1 according to an embodiment of the present disclosure.
The robot system 1 of the present embodiment is a depalletizing system that takes out, one by one, a plurality of objects T stacked on a tray P. The object T taken out by the robot system 1 of the present embodiment is assumed to be, for example, a bag package in which powder or granular material is bagged.
The robot system 1 includes an imaging device 10 that images an object T, a robot 20 that takes out the object T, an image processing device 30 that specifies the position of the object T based on an image taken by the imaging device 10, and a robot control device 40 that causes the robot 20 to take out the object T whose position is specified by the image processing device 30.
The imaging device 10 images, from above, the range in which the tray P is present, and acquires a captured image in which at least the outlines of the objects T on the uppermost layer can be confirmed. The imaging device 10 may be fixed above the place where the tray P is arranged by a structure such as a gantry, but if it is held by the robot 20, the structure for fixing the imaging device 10 can be omitted.
The captured image acquired by the imaging device 10 may be a visible-light image, but as illustrated in Fig. 2, a distance image indicating, for each pixel, the distance from the imaging device 10 to the subject is preferably used. If the imaging device 10 acquires a distance image of the objects T, a separate sensor for detecting the height position of the objects T need not be provided. As the imaging device 10 for acquiring a distance image, a three-dimensional vision sensor can be used that includes a projector projecting pattern light onto the entire imaging range and two cameras capturing, from different positions, the pattern light projected onto the subject, and that calculates the distance to the subject for each planar position based on the positional shift of the pattern light caused by the parallax between the images captured by the two cameras. The component that calculates the distance to the subject from the images captured by the two cameras may be provided separately from the projector and the cameras, or may be integrated with the image processing device 30 described later.
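The text does not spell out how the positional shift of the pattern light is converted into a per-pixel distance. The following is a minimal sketch, assuming a rectified pinhole stereo pair with known focal length and baseline (both hypothetical parameters not given in the patent), of how such a distance image could be computed from a disparity map:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a per-pixel disparity map into a distance image via Z = f * B / d.

    disparity_px    : pixel shift of the projected pattern between the two cameras
    focal_length_px : focal length of the rectified cameras, in pixels (assumed known)
    baseline_m      : distance between the two camera centers, in meters (assumed known)
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(d.shape, np.nan)   # pixels with no valid stereo match stay NaN
    valid = d > 0
    depth[valid] = focal_length_px * baseline_m / d[valid]
    return depth
```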
The robot 20 may typically be a vertical articulated robot, but may also be a parallel-link robot, a Cartesian coordinate robot, or the like. The robot 20 has, at its distal end, a holding head 21 capable of holding the object T. The holding head 21 is designed appropriately for the object T and may, for example, have a vacuum suction pad for sucking the object T. The holding head 21 may also have a contact sensor or the like for detecting contact with the object T; with a contact sensor, the height position of the object T being taken out can be confirmed.
The image processing apparatus 30 includes: a teaching unit 31 that sets a search range in which a feature amount of a pattern of the object T in a captured image of the imaging device 10 can be acquired; a recognition unit 32 that extracts a pattern having a feature amount within a search range from a captured image of the imaging device 10; and a correcting unit 33 that corrects the search range based on the feature amount of the pattern extracted by the identifying unit 32.
The image processing device 30 can be realized by loading an appropriate control program into a computer device having a CPU, a memory, and the like. The teaching unit 31, the recognition unit 32, and the correction unit 33 represent a classification of the functions of the image processing device 30 and need not be clearly separable in terms of functionality or program structure. The image processing device 30 may also be integrated with the robot control device 40 described later.
Before the task of taking out the objects T with the robot 20 is actually performed, the teaching unit 31 sets the initial value of the search range of the feature amount that the recognition unit 32 uses to identify the patterns of the objects T in the captured image of the imaging device 10 (the range of feature amounts for which a pattern is judged to be the pattern of an object T). The initial value of the search range of the feature amount can be set for each tray P, but if the load posture (the number and arrangement of the objects on the tray P) is the same, the setting by the teaching unit 31 can be omitted and the initial value of the search range can be reused. The teaching unit 31 is therefore preferably able to store an initial value of the search range of the feature amount for each load posture of the objects T.
Fig. 3 shows the procedure by which the teaching unit 31 sets the initial value of the search range of the feature amount. The setting of the initial value of the search range includes: a step of arranging the objects T within the field of view of the imaging device 10 (step S11: imaging device arranging step); a step of capturing an image with the imaging device 10 (step S12: image capturing step); a step of accepting an operator's input of the positions of the objects T in the image (step S13: input accepting step); a step of calculating the feature amounts of the patterns at the accepted positions (step S14: feature amount calculating step); and a step of setting the search range of the feature amount based on the feature amounts calculated in the feature amount calculating step (step S15: search range setting step).
In the imaging device arranging step of step S11, the imaging device 10 is positioned, for example by the robot 20, so that the objects T fall within its imaging range. In the imaging device arranging step, the objects T may be placed on a table or the like, but it is preferable to be able to image, from above, objects T in a predetermined load posture, that is, a predetermined number of objects T stacked on the tray P in a predetermined arrangement.
In the image capturing step of step S12, a captured image including the pattern of the object T is acquired by the imaging device 10.
In the input accepting step of step S13, the captured image acquired by the imaging device 10 is displayed on a screen, and the operator's input of the positions of the objects T in the displayed image is accepted. Inputs of the positions of a plurality of objects T may be accepted on a single image.
In the feature amount calculating step of step S14, the captured image is divided, by a known method such as binarization within a predetermined distance range, into pattern regions where objects exist and a background region, and the feature amounts of the pattern at each position designated in the input accepting step, for example the area, the major axis length, and the minor axis length, are calculated by a known method such as blob analysis.
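As a rough illustration of the feature amount calculating step, the sketch below uses OpenCV binarization and contour analysis as stand-ins for the "known methods" mentioned above; the distance band, the noise-area threshold, and the function name are assumptions for illustration, not part of the patent:

```python
import cv2
import numpy as np

def extract_pattern_features(distance_image, z_min, z_max, min_area=100):
    """Binarize a distance image into pattern regions and background, then
    compute per-pattern features (area, major/minor axis length, center, angle)."""
    # Pixels whose distance lies inside the band [z_min, z_max] are treated as
    # belonging to the layer of interest; everything else is background.
    mask = ((distance_image >= z_min) & (distance_image <= z_max)).astype(np.uint8) * 255

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_area:               # drop small noise blobs (assumed threshold)
            continue
        (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
        features.append({
            "area": area,
            "major_axis": max(w, h),
            "minor_axis": min(w, h),
            "center": (cx, cy),
            "angle": angle,
        })
    return features
```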
In the search range setting step of step S15, the initial value of the search range of the feature amount is set based on the one or more feature amounts calculated in the feature amount calculating step. As a specific example, the average of the feature amounts of the plurality of patterns calculated in the feature amount calculating step may be set as the center value of the search range of the feature amount.
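In code, the search range setting step can be read as centering an interval on the mean of the taught feature amounts; the choice of a single scalar feature (the minor axis length) and the half-width of ±20, matching the 80-to-120 example given later, are assumptions:

```python
def initial_search_range(taught_features, key="minor_axis", half_width=20.0):
    """Center the search range on the mean feature value of the patterns the
    operator designated; the half-width is an assumed teaching parameter."""
    center = sum(f[key] for f in taught_features) / len(taught_features)
    return (center - half_width, center + half_width)
```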
In this way, by including the teaching unit 31, which allows the operator to set the initial value of the search range of the feature amount simply by designating the objects T on the screen, the robot system 1 can easily be applied to a variety of tasks of taking out objects T.
The recognition unit 32 extracts, from the captured image acquired by the imaging device 10, pattern regions where objects exist by a known method such as binarization, and calculates a feature amount for each pattern by blob analysis or the like. The recognition unit 32 then extracts the patterns whose calculated feature amounts fall within the search range as regions of objects T. In this way, the positions of all the objects T arranged on the uppermost layer among the plurality of objects T stacked on the tray P are determined.
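The filtering done by the recognition unit 32 then amounts to keeping only the patterns whose feature value lies inside the current search range; a sketch reusing the hypothetical helper above:

```python
def recognize_objects(features, search_range, key="minor_axis"):
    """Treat a pattern as an object T only if its feature value lies within
    the current search range (lower and upper limits inclusive)."""
    low, high = search_range
    return [f for f in features if low <= f[key] <= high]
```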
The correction unit 33 resets the search range of the feature amount based on the feature amounts of the patterns of the objects T whose positions were determined by the recognition unit 32. Preferably, the correction unit 33 corrects the search range of the feature amount to be applied to determining the positions of the objects T located below a group of objects T whose heights are within a constant range, that is, the objects T on the uppermost layer in the captured image, based on the feature amounts of that group. In this way, the feature amounts of the objects T of each layer in their state of being compressed by the objects T stacked above them are reflected in the detection of the objects T of the next layer down, so the objects T can be detected comparatively accurately.
As a method of correcting the search range of the feature amount, changing the center value of the search range without changing its width can be adopted. Specifically, the correction unit 33 may set the average of the feature amounts of the group of objects T, that is, the objects T on the uppermost layer, as the center value of the search range. With this method, the search range can be corrected comparatively easily and effectively. Widening the search range could also accommodate the deformation, but doing so increases false detections, so it is preferable not to change the width of the search range.
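Expressed in code, this correction is a simple re-centering of the interval on the mean feature value of the objects just detected, with the width left untouched (a sketch consistent with the description above, not the patent's own implementation):

```python
def correct_search_range(search_range, detected, key="minor_axis"):
    """Shift the search range so that its center becomes the mean feature value
    of the detected objects, keeping the original width unchanged."""
    low, high = search_range
    half_width = (high - low) / 2.0
    new_center = sum(f[key] for f in detected) / len(detected)
    return (new_center - half_width, new_center + half_width)
```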
The robot control device 40 receives, from the image processing device 30, information on the positions determined by the recognition unit 32, and controls the operation of the robot 20 so that the holding head 21 is positioned at a received position to hold the object T and the holding head 21 holding the object T is then moved to take the object T out of the tray P. The robot control device 40 also instructs the image processing device 30 to determine the position of the next object T in accordance with the operation of the robot 20.
The robot control device 40 can be realized by introducing an appropriate control program into a computer device having a CPU, a memory, and the like, as in the image processing device 30.
Fig. 4 shows the procedure of the process of taking out the objects T from the tray P by the robot system 1. The process of taking out the objects T includes: a step of acquiring the initial value of the search range of the feature amount set by the teaching unit 31 (step S21: initial value acquiring step); a step of arranging the imaging device 10 (step S22: imaging device arranging step); a step of imaging the objects T with the imaging device 10 (step S23: image capturing step); a step of determining the positions of the objects T with the recognition unit 32 (step S24: position determining step); a step of taking out the objects T whose positions were determined with the robot 20 (step S25: object taking-out step); a step of correcting the search range of the feature amount with the correction unit 33 (step S26: search range correcting step); and a step of checking whether the taken-out objects T are those of the lowermost layer (step S27: lowermost layer checking step).
In the initial value acquiring step of step S21, the recognition unit 32 acquires from the teaching unit 31 the initial value of the search range to be applied to the feature amount of the objects T to be taken out. As a specific example, the following configuration can be adopted: when the operator inputs to the robot control device 40 a code or the like specifying the type of the objects T to be handled and their load posture (pallet pattern), the robot control device 40 instructs the teaching unit 31 to provide the recognition unit 32 with the initial value of the search range of the feature amount set for that load posture. If the teaching unit 31 has no search range of the feature amount set for the load posture of the objects T to be handled, the operator may be prompted to set one through the teaching unit 31.
In the imaging device arranging step of step S22, the imaging device 10 is arranged by the robot 20 at a position where the objects T can be imaged. The position of the imaging device 10 may be fixed, or may be changed according to the current height of the uppermost objects T.
In the image capturing step of step S23, the objects T of the current uppermost layer are imaged by the imaging device 10. As described above, the captured image is preferably a distance image. In that case, in the image capturing step, the imaging device 10 may perform processing that analyzes the images of the two cameras to generate the distance image.
In the position determining step of step S24, the recognition unit 32 analyzes the captured image acquired by the imaging device 10 and extracts the patterns having feature amounts within the search range as patterns of the objects T. The recognition unit 32 then calculates the positions and orientations of these objects T and inputs the positions and orientations of all the objects T to the robot control device 40.
In the object taking-out step of step S25, the robot control device 40 operates the robot 20 so that the holding head 21 moves to a position where it can hold an object T whose position was input from the recognition unit 32, holds the object T at that position, and then moves the holding head 21 holding the object T to take the object T out of the tray P. The robot control device 40 controls the robot 20 so that the object T taken out of the tray P is placed on and released onto, for example, a conveyor (not shown). This take-out operation is performed in order, one object at a time, for all the objects T input from the recognition unit 32.
In the search range correcting step of step S26, as described above, the correction unit 33 corrects the search range of the feature amount stored in the recognition unit 32, based on the feature amounts of the patterns extracted as patterns of the objects T by the recognition unit 32 from the latest captured image.
In the lowermost layer checking step of step S27, it is checked whether the latest captured image is an image of the lowermost-layer objects T on the tray P. Whether the objects T are those of the lowermost layer can be determined from the height positions of the objects T whose positions were determined. If the latest captured image is an image of the lowermost-layer objects T on the tray P, all the objects T on the tray P are regarded as having been taken out, and the processing for that tray P ends. If it is not, the process returns to the imaging device arranging step of step S22 to take out the objects T of the next layer.
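Tying steps S21 to S27 together, a loose sketch of the take-out loop follows; camera, robot, and teaching are hypothetical interfaces standing in for the imaging device 10, the robot control device 40, and the teaching unit 31, and the helper functions are the ones sketched earlier:

```python
def take_out_all_objects(camera, robot, teaching, key="minor_axis"):
    """Sketch of the loop in Fig. 4; all interfaces here are assumptions."""
    search_range = teaching.initial_search_range()                        # S21
    while True:
        robot.position_camera()                                           # S22
        distance_image = camera.capture_distance_image()                  # S23
        z_min, z_max = camera.top_layer_band()        # assumed helper: current top layer
        features = extract_pattern_features(distance_image, z_min, z_max)
        detected = recognize_objects(features, search_range, key)         # S24
        if not detected:
            break                                     # no pattern found: stop processing
        for obj in detected:                                              # S25
            robot.pick_and_place(obj["center"], obj["angle"])
        search_range = correct_search_range(search_range, detected, key)  # S26
        if camera.is_lowest_layer():                                      # S27
            break
```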
Fig. 5 illustrates the feature amounts of the patterns in the captured image of the objects T of the Nth layer (the uppermost layer), Fig. 6 those of the objects T of the (N-1)th layer (the second layer from the top), and Fig. 7 those of the objects T of the (N-2)th layer (the third layer from the top). In this example, the minor axis length of the pattern is used as the feature amount.
In the Nth layer, the objects T are not pressed from above, so the minor axis length of each object T is substantially the same as when initially arranged, and substantially the same as the minor axis length of the objects T at the time the feature amount was set in the teaching unit 31. If, for example, the initial search range for the minor axis length has a center value of 100 and lower and upper limits of ±3σ (three times the standard deviation) at 80 and 120, the minor axis lengths of the uppermost objects T naturally fall between the lower and upper limits. In the (N-1)th layer, each object T is compressed vertically by the weight of the uppermost objects T and its minor axis length increases slightly, but in this example the maximum minor axis length still falls between the lower and upper limits set assuming the uppermost layer. In the (N-2)th layer, however, the objects T are compressed vertically by the weight of the objects T of the upper two layers, and the minor axis lengths of some of the objects T in the (N-2)th layer exceed the upper limit set assuming the uppermost layer. Therefore, if the search range of the feature amount is fixed, the objects T of the lower layers may not be detected appropriately.
When the correction unit 33 sets the average of the minor axis lengths of the patterns of the objects T in each layer as the center value of the search range for the next layer, the search range for the Nth layer is 80 to 120 (center value 100), for the (N-1)th layer 81 to 121 (center value 101), and for the (N-2)th layer 87 to 127 (center value 107). In this case, the maximum minor axis length among the patterns of the (N-2)th layer is 125, but since this value is still within the search range, all the patterns of the objects T can be extracted, and all the objects T can be appropriately taken out by the robot 20.
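The layer-by-layer update can be reproduced with a short calculation. The individual minor-axis lengths below are invented for illustration; only the search range width of 40, the averages of 101 and 107, and the maximum of 125 are taken from the text:

```python
# Search range width is fixed at 40 (center ± 20), as in the example above.
layers = {
    "N":   [100, 101, 101, 102],   # average 101
    "N-1": [105, 106, 108, 109],   # average 107
    "N-2": [110, 115, 120, 125],   # maximum 125 still falls inside 87-127
}
center, half_width = 100.0, 20.0
for name, minor_axes in layers.items():
    low, high = center - half_width, center + half_width
    detected = [x for x in minor_axes if low <= x <= high]
    print(f"layer {name}: search range {low:.0f}-{high:.0f}, detected {detected}")
    center = sum(detected) / len(detected)   # becomes the center for the next layer
```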
As described above, in the robot system 1, the feature amounts of the patterns extracted as objects T when taking out the objects of one layer are reflected in the correction of the search range of the feature amount used to identify the patterns of the objects T one layer below. As a result, even though the objects T of lower layers are compressed more strongly and the widths and areas of their patterns in the captured image become larger, the objects T can be recognized appropriately.
Embodiments of the robot system of the present disclosure have been described above, but the scope of the present disclosure is not limited to these embodiments. The effects described in the above embodiments are merely an enumeration of the most preferable effects obtainable from the robot system of the present disclosure, and the effects of the robot system of the present disclosure are not limited to those described above.
As a specific example, in the object take-out process performed by the robot system, the object taking-out step and the search range correcting step may be performed in either order, or may be performed simultaneously. In the robot system of the present disclosure, the processing may also be terminated when no pattern of an object can be found in the captured image in the position determining step.
Description of the reference numerals
1 robot system
10 image pickup device
20 robot
21 holding head
30 image processing device
31 teaching unit
32 recognition unit
33 correction unit
40 robot control device
P tray
T object

Claims (5)

1. A robot system, characterized in that,
the robot system includes:
an imaging device that images an object;
a robot that takes out the object;
an image processing device that determines a position of the object based on a captured image of the imaging device; and
a robot controller that causes the robot to take out the object whose position is specified by the image processing device,
the image processing apparatus includes:
a teaching unit that sets a search range in which a feature amount of a pattern of the object in a captured image of the imaging device can be acquired;
a recognition unit that extracts the pattern having the feature amount in the search range from a captured image of the imaging device; and
and a correction unit that corrects the search range based on the feature amount of the pattern extracted by the recognition unit.
2. The robotic system of claim 1,
the correction unit changes the center value of the search range without changing the width of the search range.
3. The robotic system of claim 1 or 2,
the correction unit corrects the search range applied to the objects located below a group of the objects whose heights are within a predetermined range, based on a feature amount of the group of the objects.
4. The robotic system of claim 3,
the correction unit sets an average value of the feature amounts of the group of the objects as a center value of the search range.
5. The robotic system of any of claims 1-4,
the imaging device acquires a distance image of the object.
CN202180044393.4A 2020-06-24 2021-06-17 Robot system Pending CN115884855A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020108600 2020-06-24
JP2020-108600 2020-06-24
PCT/JP2021/023073 WO2021261378A1 (en) 2020-06-24 2021-06-17 Robot system

Publications (1)

Publication Number Publication Date
CN115884855A 2023-03-31

Family

ID=79281243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180044393.4A Pending CN115884855A (en) 2020-06-24 2021-06-17 Robot system

Country Status (5)

Country Link
US (1) US20230173668A1 (en)
JP (1) JPWO2021261378A1 (en)
CN (1) CN115884855A (en)
DE (1) DE112021003349T5 (en)
WO (1) WO2021261378A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09255158A (en) * 1996-03-22 1997-09-30 Kobe Steel Ltd Article disposition recognizing device
JP2013154457A (en) * 2012-01-31 2013-08-15 Asahi Kosan Kk Workpiece transfer system, workpiece transfer method, and program
JP6293968B2 (en) 2015-08-24 2018-03-14 株式会社日立製作所 Mobile robot operation system, mobile robot, and object extraction method

Also Published As

Publication number Publication date
US20230173668A1 (en) 2023-06-08
DE112021003349T5 (en) 2023-04-06
WO2021261378A1 (en) 2021-12-30
JPWO2021261378A1 (en) 2021-12-30

Similar Documents

Publication Publication Date Title
JP6813229B1 (en) Robot system equipped with automatic object detection mechanism and its operation method
JP5201411B2 (en) Bulk picking device and control method thereof
EP3173194B1 (en) Manipulator system, image capturing system, transfer method of object, and carrier medium
JP4565023B2 (en) Article take-out device
JP2019509559A (en) Box location, separation, and picking using a sensor-guided robot
US20090000115A1 (en) Surface mounting apparatus and method
KR102534983B1 (en) Apparatus and method for detecting attitude of electronic component
CN108290286A (en) Method for instructing industrial robot to pick up part
JP6595691B1 (en) Unloading device, unloading method and program
CN108160530A (en) A kind of material loading platform and workpiece feeding method
JP5360369B2 (en) Picking apparatus and method
JPH0753054A (en) Automatic unloading device
JP6167760B2 (en) Article position recognition device
JP6666764B2 (en) Work recognition method and random picking method
WO2019240273A1 (en) Information processing device, unloading system provided with information processing device, and computer-readable storage medium
CN115884855A (en) Robot system
JP5263501B2 (en) Work position recognition apparatus and method for depalletizing
JP2019199335A (en) Information processing device, information processing program, and sorting system
KR101993262B1 (en) Apparatus for micro ball mounting work
CN115003613A (en) Device and method for separating piece goods
JP7481926B2 (en) Information processing device, sorting system, and program
JP7481867B2 (en) Control device and program
JP2019098431A (en) Information processing apparatus and sorting system
JP6026365B2 (en) Image recognition method
JPH1196378A (en) Load position/attitude recognizing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination