WO2023137871A1 - An automatic sorting method and device based on a height sensor, and readable medium - Google Patents

An automatic sorting method and device based on a height sensor, and readable medium

Info

Publication number
WO2023137871A1
WO2023137871A1 (PCT/CN2022/084339)
Authority
WO
WIPO (PCT)
Prior art keywords
height
grasping
image
pose
sum
Prior art date
Application number
PCT/CN2022/084339
Other languages
English (en)
French (fr)
Inventor
曹礼禧
杨建红
张宝裕
王英俊
毕雪涛
庄汉强
黄文景
黄骁明
陈海生
Original Assignee
福建南方路面机械股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 福建南方路面机械股份有限公司
Publication of WO2023137871A1 publication Critical patent/WO2023137871A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • B07C5/362Separating or distributor mechanisms
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C2501/00Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0063Using robots

Definitions

  • the invention relates to the field of automatic sorting, in particular to an automatic sorting method, device and readable medium based on a height sensor.
  • the traditional method for automatic sorting robots to grab objects is to collect images through color cameras, identify and locate the objects to be grabbed, and then send the object category and plane position information to the lower computer.
  • Since positioning yields only the two-dimensional position of the object, the mechanical claw must be lowered close to the belt before every grab.
  • This grasping method depends on the distribution density and stacking of objects on the conveyor belt: if the distribution density and stacking rate are high, the mechanical claw has little room to grasp. Under such working conditions the automatic sorting robot fails to grasp for lack of grasping space, which leads to low grasping efficiency of the automatic sorting robot.
  • the purpose of the embodiments of the present application is to propose an automatic sorting method, device and readable medium based on a height sensor to solve the technical problems mentioned in the background technology section above.
  • the embodiment of the present application provides an automatic sorting method based on a height sensor, comprising the following steps:
  • S5 search for the grasping pose based on the processed height image, and determine the grasping pose of the mechanical claw, which includes the grasping center point, grasping width and grasping angle.
  • the grasping pose search in step S5 includes a grasping pose search based on the minimum rectangular frame and a grasping pose search based on the shape of the object; the search is carried out on the processed height images under different height thresholds, and the grasping pose of the mechanical claw is adjusted to obtain a non-interfering grasping pose.
  • step S5 specifically includes:
  • the grabbing pose search based on the minimum rectangular frame in step S5 specifically includes:
  • the grasping pose search based on the shape of the object in step S5 specifically includes: judging the shape of the object. If the object is strip-shaped, the grasping angle is fixed while the grasping center point and grasping width are varied to draw the grasping line of the mechanical claw, and it is judged whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is less than a preset threshold; if so, the grasping center point, grasping width and grasping angle at which the difference between Sum1 and Sum2 is smallest are output, otherwise the height threshold is changed and steps S3-S5 are repeated. If the object is not strip-shaped, the grasping center point is fixed while the grasping angle is varied to draw the grasping line of the mechanical claw, and the same judgment on the difference between Sum1 and Sum2 is made; if the difference is less than the preset threshold, the grasping center point, grasping width and grasping angle at which the difference is smallest are output, otherwise the height threshold is changed and steps S3-S5 are repeated.
  • step S3 specifically includes:
  • step S1 specifically includes:
  • the height mosaic image and the color mosaic image are cropped according to the offset d between the first fixed position and the second fixed position to obtain a height image and a color image.
  • embodiments of the present application provide an automatic sorting device based on a height sensor, including:
  • An image acquisition module configured to acquire height images and color images of the same sorting area
  • the object recognition module is configured to identify the mask and the type of the object on the color image through the instance segmentation model, and obtain the minimum rectangular frame surrounding the object based on the mask, and obtain the center point, length, width and deflection angle of the object according to the minimum rectangular frame;
  • the height image processing module is configured to process the height image according to different height thresholds to obtain the processed height image
  • the area comparison module is configured to compare the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determine whether the grasping pose of the mechanical claw can be found based on the area ratio;
  • the grasping pose search module is configured to search for the grasping pose based on the processed height image to determine the grasping pose of the mechanical claw, and the grasping pose includes a grasping center point, a grasping width and a grasping angle.
  • an embodiment of the present application provides an electronic device, including one or more processors; a storage device for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation manner in the first aspect.
  • the embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored.
  • the computer program is executed by a processor, the method described in any implementation manner in the first aspect is implemented.
  • the present invention has the following beneficial effects:
  • the present invention uses the sensor to locate the spatial position of the object and to judge whether a grasp has succeeded, which increases the grasping possibility of the automatic sorting robot under working conditions with high object distribution density and a high stacking rate, effectively improving sorting efficiency.
  • the instance segmentation model adopted in the present invention is relatively mature; the height image is processed under different height thresholds to filter out the height information below each threshold, so that an accurate grasping pose can be obtained by combining the height information of objects at different heights.
  • the automatic sorting method based on the height sensor of the present invention adds three-dimensional spatial positioning of the object to the automatic sorting robot and can simulate the grasping lines of the mechanical claw at different heights; the grasping is accurate, thereby improving the grasping efficiency of the automatic sorting robot.
  • Fig. 1 is an exemplary device architecture diagram to which an embodiment of the present application can be applied;
  • Fig. 2 is the schematic flow chart of the automatic sorting method based on the height sensor of the embodiment of the present invention
  • Fig. 3 is a schematic diagram of modeling the size of the light source system of the automatic sorting method based on the height sensor according to the embodiment of the present invention
  • FIG. 4 is a timing diagram of data acquisition triggering of an automatic sorting method based on a height sensor according to an embodiment of the present invention
  • Fig. 5 is the image mosaic schematic diagram of the automatic sorting method based on the height sensor of the embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a building solid waste image acquisition platform based on an automatic sorting method based on a height sensor according to an embodiment of the present invention
  • Fig. 7 is the color image of the automatic sorting method based on the height sensor of the embodiment of the present invention.
  • Fig. 8 is a result diagram of the instance segmentation model recognition of the color image based on the automatic sorting method of the height sensor according to the embodiment of the present invention.
  • FIG. 9 is a schematic diagram of the minimum rectangular frame and its center point, length, width and deflection angle obtained from the recognition results of the instance segmentation model in the automatic sorting method based on the height sensor according to the embodiment of the present invention.
  • Fig. 10 is the height image of the automatic sorting method based on the height sensor of the embodiment of the present invention.
  • Fig. 11 is the processed height image of the automatic sorting method based on the height sensor according to the embodiment of the present invention.
  • FIG. 12 is a schematic diagram of the area ratio acquisition process of the height sensor-based automatic sorting method according to an embodiment of the present invention.
  • Fig. 13 is a color image of a red brick on a wooden block that needs to be grabbed according to the height sensor-based automatic sorting method of the embodiment of the present invention
  • Fig. 14 is the height image after drawing the grasping line of the mechanical claw in the automatic sorting method based on the height sensor according to the embodiment of the present invention.
  • Fig. 15 is a schematic diagram of obtaining the center point and length or width of the smallest rectangular frame in the automatic sorting method based on the height sensor according to the embodiment of the present invention.
  • Fig. 16 is a schematic diagram of drawing a straight line according to the minimum rectangular frame, length or width and deflection angle of the height sensor-based automatic sorting method according to the embodiment of the present invention.
  • Fig. 17 is a schematic diagram of drawing the grasping line of the mechanical claw according to the straight line in the automatic sorting method based on the height sensor according to the embodiment of the present invention.
  • Fig. 18 is the height image after drawing the grasping line of the mechanical claw under different grasping postures with a height threshold of 10 in the automatic sorting method based on the height sensor according to the embodiment of the present invention
  • Fig. 19 is the height image after drawing the grasping line of the mechanical claw under different grasping postures with a height threshold of 20 in the automatic sorting method based on the height sensor according to the embodiment of the present invention
  • Fig. 20 is the height image after drawing the grasping line of the mechanical claw under different grasping postures with a height threshold of 30 in the automatic sorting method based on the height sensor according to the embodiment of the present invention
  • Fig. 21 is a schematic diagram of an automatic sorting device based on a height sensor according to an embodiment of the present invention.
  • Fig. 22 is a schematic structural diagram of a computer device suitable for realizing the electronic equipment of the embodiment of the present application.
  • Fig. 1 shows an exemplary device architecture 100 to which the height sensor-based automatic sorting method or the height sensor-based automatic sorting device according to the embodiments of the present application can be applied.
  • the device architecture 100 may include terminal devices 101 , 102 , 103 , a network 104 and a server 105 .
  • the network 104 is used as a medium for providing communication links between the terminal devices 101 , 102 , 103 and the server 105 .
  • Network 104 may include various connection types, such as wires, wireless communication links, or fiber optic cables, among others.
  • Users can use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages and the like.
  • Various applications may be installed on the terminal devices 101 , 102 , and 103 , such as data processing applications, file processing applications, and the like.
  • the terminal devices 101, 102, and 103 may be hardware or software.
  • When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers, desktop computers and the like.
  • When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above. They can be implemented as multiple software or software modules (such as software or software modules for providing distributed services), or as a single software or software module. No specific limitation is made here.
  • the server 105 may be a server that provides various services, such as a background data processing server that processes files or data uploaded by the terminal devices 101 , 102 , and 103 .
  • the background data processing server can process the obtained files or data and generate processing results.
  • the height sensor-based automatic sorting method provided in the embodiment of the present application can be executed by the server 105, and can also be executed by the terminal devices 101, 102, 103.
  • the height sensor-based automatic sorting device can be set in the server 105, or can be set in the terminal devices 101, 102, 103.
  • terminal devices, networks and servers in Fig. 1 are only illustrative. According to the implementation needs, there can be any number of terminal devices, networks and servers.
  • the above device architecture may not include a network, but only need a server or a terminal device.
  • Fig. 2 shows a kind of automatic sorting method based on height sensor provided by the embodiment of the present application, comprises the following steps:
  • S1 acquire the height image and color image of the same sorting area.
  • step S1 specifically includes:
  • the height mosaic image and the color mosaic image are cropped according to the offset d between the first fixed position and the second fixed position to obtain a height image and a color image.
  • the brightness, uniformity, and illumination angle of the light source will all affect the final imaging quality of the camera.
  • the OPTLSG1254-W high-brightness linear white LED light source is selected in the embodiment of this application. Since the irradiation direction of the light source is at a certain angle to the shooting direction of the camera, the brightness obtained when shooting the same target at different heights differs. To alleviate this problem as much as possible, the angle between the light source irradiation direction and the camera shooting direction should be as small as possible.
  • the light source system is modeled, and the schematic diagram shown in Figure 3 is obtained.
  • the side shape of the light source is simplified as a rectangle whose length is L, width is W, and the installation height of the light source is H, then the angle should satisfy the following expression:
  • the acquisition of the height single-line image and the color single-line image includes two steps of data acquisition trigger and data matching.
  • the encoder is used to convert the displacement of the conveyor belt into a pulse signal at a fixed ratio and input to the height camera and the color camera at the same time.
  • the height camera and the color camera are set to trigger acquisition on the rising edge. Then when the encoder sends out a pulse, the height single-line image H_i and the color single-line image C_i corresponding to the current pulse can be obtained.
  • the trigger timing diagram is shown in Figure 4.
  • the physical installation positions of the height camera and the color camera have a fixed offset d in the running direction of the conveyor belt, the actual positions corresponding to the data of the height camera and the color camera are different at the same time. Therefore, during data matching, the corresponding data should be intercepted according to the actual offset d of the installation positions of the height camera and the color camera for matching.
  • the specific operation is to remove the data inside the dotted frames of the two images; the remaining data are the matched data.
  • the objects in the height image after the final matching are corresponding to the objects in the color image.
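The data-matching step above, cropping both mosaics by the installation offset d, can be sketched as follows. This is a minimal illustration under the assumption that the height camera leads the color camera by d rows in the belt direction; the `match_mosaics` helper and all array sizes are assumptions, not part of the embodiment.

```python
import numpy as np

def match_mosaics(height_mosaic, color_mosaic, d):
    """Align two line-scan mosaics whose cameras are offset by d rows
    along the belt direction: drop the first d rows of one mosaic and
    the last d rows of the other, so the remaining rows correspond to
    the same physical positions on the conveyor belt."""
    height_img = height_mosaic[d:]                    # height camera leads by d (assumed)
    color_img = color_mosaic[:-d] if d else color_mosaic
    return height_img, color_img

h = np.arange(10 * 4).reshape(10, 4)                  # toy 10-row height mosaic
c = np.arange(10 * 4).reshape(10, 4)                  # toy 10-row color mosaic
hi, ci = match_mosaics(h, c, 3)
print(hi.shape, ci.shape)  # (7, 4) (7, 4)
```

After cropping, row k of the height image and row k of the color image describe the same belt position, which is what makes the later mask-to-height comparison possible.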
  • the final acquisition system structure is shown in Figure 6.
  • the monochrome camera of the height camera in the embodiment of the present application is integrated with the line laser, so the camera can directly obtain the height of the object at the line laser, i.e. one frame for splicing.
  • One encoder pulse signal triggers one acquisition, and the 960 frames collected by 960 pulses are spliced into one height photo.
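The pulse-triggered splicing can be sketched as follows: one single-line frame per encoder pulse, stacked into a mosaic. The `splice_lines` helper and the line width of 1024 pixels are illustrative assumptions; only the count of 960 pulses per photo comes from the embodiment.

```python
import numpy as np

def splice_lines(line_frames):
    """Stack single-line frames (one per encoder pulse) row by row
    into one mosaic image."""
    return np.vstack(line_frames)

# 960 pulses -> 960 single-line frames -> one 960-row height photo
frames = [np.zeros((1, 1024), dtype=np.uint16) for _ in range(960)]
mosaic = splice_lines(frames)
print(mosaic.shape)  # (960, 1024)
```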
  • the color image as shown in FIG. 7 is obtained through step S1 and the contour and type of objects on the color image are obtained through instance segmentation model recognition, as shown in FIG. 8 .
  • the instance segmentation model includes the Mask RCNN neural network, which can not only identify the category and location of the object, but also obtain the mask of the object. Then use the mask to obtain the minimum rectangular frame surrounding the object, and obtain the center point, length, width, deflection angle and other information of the object according to the minimum rectangular frame, as shown in Figure 9.
  • the instance segmentation model can also use other neural network models, as long as the outline and type of the object can be obtained to obtain the minimum rectangular frame of the object, as well as the center point, length, width, and deflection angle of the object.
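Extracting the center point, length, width and deflection angle from a mask can be sketched as below. In practice `cv2.minAreaRect` gives the exact minimum-area rotated rectangle; this self-contained numpy version uses a PCA approximation instead, so the `oriented_box` helper and its exact outputs are illustrative, not the embodiment's implementation.

```python
import numpy as np

def oriented_box(mask):
    """Approximate the enclosing rotated rectangle of a binary mask via
    PCA: returns center, length, width and deflection angle (degrees).
    For an exact minimum-area rectangle, cv2.minAreaRect would be used."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]       # principal (long) axis
    angle = np.degrees(np.arctan2(major[1], major[0]))
    proj = (pts - center) @ eigvecs              # coordinates in the box frame
    extents = proj.max(axis=0) - proj.min(axis=0)
    length, width = sorted(extents, reverse=True)
    return center, length, width, angle

mask = np.zeros((20, 20), dtype=np.uint8)
mask[5:8, 2:18] = 1                              # a horizontal strip-shaped object
center, length, width, angle = oriented_box(mask)
print(round(length), round(width))  # 15 2
```

The length-to-width ratio from this rectangle is also what a later step can use to decide whether the object counts as strip-shaped.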
  • step S3 specifically includes:
  • a pixel with an object is 1, and a pixel without an object is 0.
  • the height information of objects on the conveyor belt can be obtained according to the height image, as shown in FIG. 10 .
  • the height information can be correspondingly obtained from the height image, and the height information below the height threshold can be filtered.
  • the pixel value of each point in the height image is the height of the object at the point. Filtering refers to setting the pixel value of the point below the height threshold to 0, while the pixel value of the part above the height threshold remains unchanged, so as to filter objects below a certain height plane and display the image of the part higher than the height threshold.
  • different height thresholds can be set, and filtered height images corresponding to different height thresholds can be obtained through processing. Further binarize the filtered height image, that is, in the processed height image, the pixels with objects are 1, and the pixels without objects are 0.
  • the processed height image is shown in FIG. 11 .
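The threshold filtering and binarization described above can be sketched in a few lines. The `process_height_image` helper is an assumed name; whether the comparison is strict or inclusive at the threshold is also an assumption.

```python
import numpy as np

def process_height_image(height_img, threshold):
    """Filter the height image: set pixels below the height threshold
    to 0 and keep pixels at or above it unchanged, then binarize
    (object pixel -> 1, empty pixel -> 0)."""
    filtered = np.where(height_img >= threshold, height_img, 0)
    binary = (filtered > 0).astype(np.uint8)
    return filtered, binary

h = np.array([[0, 12, 35],
              [8, 22, 40]])                      # toy height image (mm)
filtered, binary = process_height_image(h, 20)
print(filtered.tolist())  # [[0, 0, 35], [0, 22, 40]]
print(binary.tolist())    # [[0, 0, 1], [0, 1, 1]]
```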
  • the ratio of the area of the object mask in the color image to the area of the object mask in the height image after the capture plane is raised is calculated.
  • If the ratio is less than 0.8, it means the shape of the object has changed and a suitable grasping pose cannot be found even if the grasping plane is raised further, i.e. even if the height threshold is increased and a new processed height image obtained; the grasp of this object is therefore abandoned, as shown in Figure 12, where figure a is the recognition result of the color image; figure b is the mask of one object extracted from the recognition result; figure c shows a processed height image whose object area, compared with the mask extracted from the color image, does not change much, so the grasping pose can be found in this state; figure d shows the processed height image when the height threshold is 25, whose object area changes greatly compared with the mask extracted from the color image, so the grasping pose cannot be found in this state.
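The area-ratio check can be sketched as follows; the `area_ratio` helper is an assumed name, and only the 0.8 cutoff comes from the description.

```python
import numpy as np

def area_ratio(color_mask, processed_height_binary):
    """Ratio of the object's remaining area on the processed (thresholded,
    binarized) height image to its mask area on the color image. Below
    0.8 the visible shape has changed too much and the grab is abandoned."""
    mask_area = int(color_mask.sum())
    height_area = int((processed_height_binary & color_mask).sum())
    return height_area / mask_area

mask = np.ones((4, 4), dtype=np.uint8)           # 16 object pixels on the color image
binary = np.zeros((4, 4), dtype=np.uint8)
binary[:3, :] = 1                                # only 12 survive the height threshold
r = area_ratio(mask, binary)
print(r, r >= 0.8)  # 0.75 False
```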
  • S5 search for the grasping pose based on the processed height image, and determine the grasping pose of the mechanical claw, which includes the grasping center point, grasping width and grasping angle.
  • the grasping pose search in step S5 includes a grasping pose search based on the minimum rectangular frame and a grasping pose search based on the shape of the object, and the grasping pose search is based on the processed height images under different height thresholds to adjust the grasping pose of the mechanical claw to obtain a non-interfering grasping pose.
  • step S5 specifically includes:
  • the capture pose search based on the smallest rectangular frame in step S5 specifically includes:
  • the grasping pose search based on the shape of the object in step S5 specifically includes: judging the shape of the object. If the object is strip-shaped, the grasping angle is fixed while the grasping center point and grasping width are varied to draw the grasping line of the mechanical claw, and it is judged whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is less than a preset threshold; if so, the grasping center point, grasping width and grasping angle at which the difference between Sum1 and Sum2 is smallest are output, otherwise the height threshold is changed and steps S3-S5 are repeated. If the object is not strip-shaped, the grasping center point is fixed while the grasping angle is varied to draw the grasping line of the mechanical claw, and the same judgment on the difference between Sum1 and Sum2 is made; if the difference is less than the preset threshold, the grasping center point, grasping width and grasping angle at which the difference is smallest are output, otherwise the height threshold is changed and steps S3-S5 are repeated.
  • the pixel value of the thick line is 0, as shown in Figure 14.
  • the pixel value of the grasping line of the mechanical claw is 0, which is black.
  • the center point, length, width, and deflection angle of the smallest rectangular frame are obtained from the smallest rectangular frame.
  • a straight line parallel to the width direction or length direction and passing through the center point of the smallest rectangular frame can be calculated according to the center point, length, width, and deflection angle of the smallest rectangular frame.
  • the straight line with the same length of the claw is used as the grasping line, as shown in Figure 17, which can represent the grasping position of the mechanical claw in the actual grasping process.
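Drawing the gripper lines as zero-valued pixels on the binarized height image can be sketched as below. This simplified version fixes the grasp angle at 0 degrees so the two jaw segments are axis-aligned; the `draw_grasp_lines` helper, jaw length and line thickness are all illustrative assumptions.

```python
import numpy as np

def draw_grasp_lines(binary_img, center, width, jaw_len=6, thickness=2):
    """Draw the two gripper-jaw lines (pixel value 0) on a copy of the
    binarized height image. The grasp angle is fixed at 0 degrees for
    simplicity, so the jaws are vertical segments `width` pixels apart
    centred on the grasp center point (row, col)."""
    img = binary_img.copy()
    cy, cx = center
    for x in (cx - width // 2, cx + width // 2):
        img[cy - jaw_len // 2: cy + jaw_len // 2,
            x - thickness // 2: x + thickness // 2 + 1] = 0
    return img

binary = np.ones((20, 20), dtype=np.uint8)       # everything is "object" here
drawn = draw_grasp_lines(binary, center=(10, 10), width=8)
print(int(binary.sum() - drawn.sum()))  # 36 pixels overwritten by the jaw lines
```

In a real height image most pixels under the jaws would be empty belt (already 0), so the pixel-sum drop measures only how much object material the jaws would collide with.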
  • the mechanical claw moves to the center point of the object, rotates to be parallel to the object, opens to a width 20 mm wider than the object, then moves straight down and closes to grab the object.
  • the processed height images of the grasping curves of the mechanical claws are drawn under different grasping postures with height thresholds of 10, 20, and 30.
  • If the difference between SUM1 and SUM2 is 0 or less than a certain value, there is no object interference at this position. If there is object interference, drawing the grasping line of the mechanical claw on the image changes the pixel values at the object position from 1 to 0.
  • As a result, the calculated overall pixel value SUM2 of the image with the grasping line of the mechanical claw drawn will be smaller than the overall pixel value SUM1 of the image without the grasping line drawn.
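The Sum1/Sum2 interference test reduces to comparing the total pixel value before and after drawing the gripper lines. A minimal sketch (the `grasp_is_clear` helper and tolerance parameter are assumed names):

```python
import numpy as np

def grasp_is_clear(binary_img, drawn_img, tol=0):
    """Compare the total pixel value before (Sum1) and after (Sum2)
    drawing the gripper lines as zeros. If the lines crossed object
    pixels (value 1), the total drops, so Sum1 - Sum2 > tol means the
    gripper would collide with an object at this pose."""
    sum1 = int(binary_img.sum())
    sum2 = int(drawn_img.sum())
    return (sum1 - sum2) <= tol

binary = np.zeros((10, 10), dtype=np.uint8)
binary[4:6, 4:6] = 1                       # a small object on the belt
clear = binary.copy()
clear[0:8, 0:2] = 0                        # jaw line over empty belt: sum unchanged
blocked = binary.copy()
blocked[3:7, 3:7] = 0                      # jaw line crossing the object
print(grasp_is_clear(binary, clear), grasp_is_clear(binary, blocked))  # True False
```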
  • Pose search: if the object is strip-shaped, fix the rotation angle, vary the grasping center point and grasping width, draw the grasping line of the mechanical claw, calculate the difference between SUM1 and SUM2, and output the grasping center point and grasping width at which the difference is smallest; if the object is not strip-shaped, fix the grasping center point, vary the grasping angle, draw the grasping line of the mechanical claw, calculate the difference between SUM1 and SUM2, and output the grasping angle at which the difference is smallest. If the resulting grasping pose still interferes, adjust the height threshold and repeat steps S3-S5 for the next search.
  • the height threshold is 0 when entering the loop for the first time and increases by 5 in each subsequent loop, i.e. incremental heights of 0, 5, 10, 15, ...; binarization is used to filter out objects whose height is lower than the height threshold, so as to search for grasping poses at different height planes.
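The outer search loop over height thresholds can be sketched as follows. The `search_height_thresholds` helper, the `try_pose` callback and the stopping criterion in the toy example are all assumptions; only the 0, 5, 10, ... threshold schedule comes from the description.

```python
import numpy as np

def search_height_thresholds(height_img, try_pose, step=5, max_threshold=50):
    """Raise the grasp plane in steps (threshold 0, 5, 10, ...): binarize
    the height image at each level and hand it to a pose-search callback,
    stopping at the first threshold where a non-interfering pose is found.
    `try_pose(binary, threshold)` returns a pose or None (hypothetical)."""
    threshold = 0
    while threshold <= max_threshold:
        binary = (height_img >= max(threshold, 1)).astype(np.uint8)
        pose = try_pose(binary, threshold)
        if pose is not None:
            return threshold, pose
        threshold += step
    return None, None

h = np.array([[0, 5, 30], [0, 12, 30]])    # toy height image
# toy callback: a pose is "found" once fewer than 3 object pixels remain
found = search_height_thresholds(h, lambda b, t: "pose" if b.sum() < 3 else None)
print(found)  # (15, 'pose')
```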
  • the present application provides an embodiment of an automatic sorting device based on a height sensor.
  • This device embodiment corresponds to the method embodiment shown in FIG. 2 , and this device can be specifically applied to various electronic devices.
  • An embodiment of the present application provides an automatic sorting device based on a height sensor, including:
  • the image acquisition module 1 is configured to acquire height images and color images of the same sorting area
  • the object recognition module 2 is configured to identify the mask and the type of the object on the color image through the instance segmentation model, and obtain the minimum rectangular frame surrounding the object based on the mask, and obtain the center point, length, width and deflection angle of the object according to the minimum rectangular frame;
  • the height image processing module 3 is configured to process the height image according to different height thresholds to obtain the processed height image
  • the area comparison module 4 is configured to compare the area of the mask of the color image with the area of the object at the corresponding position on the processed height image to obtain an area ratio, and determine whether to find the grasping pose of the mechanical claw based on the area ratio;
  • the grasping pose search module 5 is configured to search for the grasping pose based on the processed height image to determine the grasping pose of the gripper, which includes the grasping center point, grasping width and grasping angle.
  • FIG. 22 shows a schematic structural diagram of a computer device 2200 suitable for implementing an electronic device (such as the server or terminal device shown in FIG. 1 ) in the embodiment of the present application.
  • the electronic device shown in FIG. 22 is only an example, and should not limit the functions and scope of use of this embodiment of the present application.
  • a computer device 2200 includes a central processing unit (CPU) 2201 and a graphics processing unit (GPU) 2202, which can perform various appropriate actions and processes according to programs stored in a read only memory (ROM) 2203 or loaded from a storage section 2209 into a random access memory (RAM) 2204.
  • In the RAM 2204, various programs and data necessary for the operation of the device 2200 are also stored.
  • the CPU 2201, GPU 2202, ROM 2203, and RAM 2204 are connected to each other through a bus 2205.
  • An input/output (I/O) interface 2206 is also connected to the bus 2205 .
  • the following components are connected to the I/O interface 2206: an input section 2207 including a keyboard, a mouse, etc.; an output section 2208 including a liquid crystal display (LCD), a speaker, etc.; a storage section 2209 including a hard disk, etc.; and a communication section 2210 including a network interface card such as a LAN card, a modem, etc.
  • the communication section 2210 performs communication processing via a network such as the Internet.
  • a drive 2211 can also be connected to the I/O interface 2206 as needed.
  • a removable medium 2212 such as a magnetic disk, optical disk, magneto-optical disk, semiconductor memory, etc., is mounted on the drive 2211 as necessary so that a computer program read therefrom is installed into the storage section 2209 as necessary.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program may be downloaded and installed from a network via communication portion 2210 and/or installed from removable media 2212 .
  • the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • A computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of the present application may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, program segment, or portion of code that includes one or more executable instructions for implementing specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts can be implemented by dedicated hardware-based devices that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments described in the present application may be implemented by means of software or hardware.
  • the described modules may also be provided in a processor.
  • the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist independently without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • the electronic device: obtains a height image and a color image of the same sorting area; recognizes the mask and category of an object in the color image through an instance segmentation model, obtains the minimum rectangle enclosing the object based on the mask, and derives the center point, length, width, and deflection angle of the minimum rectangle; processes the height image according to different height thresholds to obtain a processed height image; compares the area of the mask in the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determines from the area ratio whether a grasping pose of the gripper can be searched for; and performs a grasping-pose search based on the processed height image to determine the gripper's grasping pose.
  • the grasping pose includes the grasping center point, grasping width and grasping angle.
  • the invention provides an automatic sorting method and apparatus based on a height sensor, and a readable medium.
  • the sensor is used to locate objects in space and to judge whether a grasp has succeeded, increasing the chance of grasping under working conditions such as high object distribution density and high stacking rate, and improving sorting efficiency.

Landscapes

  • Image Analysis (AREA)

Abstract

An automatic sorting method and apparatus based on a height sensor, and a readable medium: a height image and a color image of the same sorting area are acquired (S1); the mask and category of an object in the color image are recognized by an instance segmentation model, the minimum rectangle enclosing the object is obtained based on the mask, and the center point, length, width and deflection angle of the minimum rectangle are derived (S2); the height image is processed according to different height thresholds to obtain a processed height image (S3); the area of the mask of the color image is compared with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and whether a grasping pose of the gripper can be searched for is determined from the area ratio (S4); a grasping-pose search is performed based on the processed height image to determine the grasping pose of the gripper, the grasping pose comprising a grasping center point, a grasping width and a grasping angle (S5). The method increases the chance of grasping under working conditions such as high object distribution density and high stacking rate, and improves sorting efficiency.

Description

Automatic sorting method and apparatus based on a height sensor, and readable medium
Technical Field
The present invention relates to the field of automatic sorting, and in particular to an automatic sorting method and apparatus based on a height sensor, and a readable medium.
Background Art
A conventional automatic sorting robot grasps objects by capturing images with a color camera, recognizing and locating the objects to be grasped, and then sending the object category and planar position information to a lower-level controller.
Because localization yields only the two-dimensional position of an object, the gripper must descend close to the belt for every grasp. Whether such a grasp succeeds depends on the distribution density and stacking of objects on the conveyor: when objects are densely distributed and heavily stacked, the chance that the gripper can grasp at all is small. Under dense working conditions the robot therefore skips grasps for lack of grasping space, which leads to low grasping efficiency of the automatic sorting robot.
In practice a high stacking rate is a common scenario, so enough free space is needed around an object for the gripper to reach down, which makes grasping increasingly difficult.
Summary of the Invention
In view of the problems mentioned above, namely that a high stacking rate and high distribution density of objects under real working conditions make grasping difficult, embodiments of the present application aim to provide an automatic sorting method and apparatus based on a height sensor, and a readable medium, to solve the technical problems mentioned in the Background section.
In a first aspect, an embodiment of the present application provides an automatic sorting method based on a height sensor, comprising the following steps:
S1, acquiring a height image and a color image of the same sorting area;
S2, recognizing the mask and category of an object in the color image through an instance segmentation model, obtaining the minimum rectangle enclosing the object based on the mask, and deriving the center point, length, width and deflection angle of the minimum rectangle;
S3, processing the height image according to different height thresholds to obtain a processed height image;
S4, comparing the area of the mask of the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determining, based on the area ratio, whether a grasping pose of the gripper can be searched for;
S5, performing a grasping-pose search based on the processed height image to determine the grasping pose of the gripper, the grasping pose comprising a grasping center point, a grasping width and a grasping angle.
In some embodiments, the grasping-pose search in step S5 comprises a grasping-pose search based on the minimum rectangle and a grasping-pose search based on the shape of the object; using the processed height images obtained under different height thresholds, the search adjusts the gripper's grasping pose to obtain an interference-free grasping pose.
In some embodiments, step S5 specifically comprises:
S51, computing a first pixel-value sum Sum1 of the processed height image;
S52, drawing grasp lines of the gripper on the processed height image for different grasping center points, grasping widths and grasping angles according to the center point, length, width and deflection angle of the minimum rectangle, setting the pixel value of the grasp lines to 0, and computing a second pixel-value sum Sum2 of the processed height image with the grasp lines drawn; the search based on the minimum rectangle is performed first, followed by the search based on the shape of the object, and the grasping pose is determined in response to the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 being smaller than a preset threshold.
In some embodiments, the grasping-pose search based on the minimum rectangle in step S5 specifically comprises:
determining whether, when the gripper draws the grasp lines at a first grasping pose, the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, grasping along the length direction of the minimum rectangle with the center point of the minimum rectangle as the grasping center point; otherwise rotating the grasping angle of the first grasping pose by 90° to obtain a second grasping pose, and determining whether, when the gripper draws the grasp lines at the second grasping pose, the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, grasping along the width direction of the minimum rectangle with the center point of the minimum rectangle as the grasping center point; otherwise performing the grasping-pose search based on the shape of the object.
In some embodiments, the grasping-pose search based on the shape of the object in step S5 specifically comprises: judging the shape of the object; if the object is strip-shaped, fixing the grasping angle, varying the grasping center point and grasping width to draw the grasp lines of the gripper, and determining whether the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, outputting the grasping center point, grasping width and grasping angle at which the difference between Sum1 and Sum2 is smallest, otherwise changing the height threshold and repeating steps S3-S5; if the object is not strip-shaped, fixing the grasping center point, varying the grasping angle to draw the grasp lines of the gripper, and determining whether the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, outputting the grasping center point, grasping width and grasping angle at which the difference is smallest, otherwise changing the height threshold and repeating steps S3-S5.
In some embodiments, step S3 specifically comprises:
obtaining height information based on the height image, and filtering out height information below the height threshold to obtain a filtered height image;
binarizing the filtered height image to obtain the processed height image.
In some embodiments, step S1 specifically comprises:
acquiring multiple single-line height images captured by a monochrome camera arranged at a first fixed position above the conveyor belt, and stitching them in order to obtain a stitched height image;
acquiring multiple single-line color images captured by a color camera arranged at a second fixed position above the conveyor belt, and stitching them in order to obtain a stitched color image;
cropping the stitched height image and the stitched color image according to the offset d between the first fixed position and the second fixed position, to obtain the height image and the color image.
In a second aspect, an embodiment of the present application provides an automatic sorting apparatus based on a height sensor, comprising:
an image acquisition module configured to acquire a height image and a color image of the same sorting area;
an object recognition module configured to recognize the mask and category of an object in the color image through an instance segmentation model, obtain the minimum rectangle enclosing the object based on the mask, and derive the object's center point, length, width and deflection angle from the minimum rectangle;
a height image processing module configured to process the height image according to different height thresholds to obtain a processed height image;
an area comparison module configured to compare the area of the mask of the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determine, based on the area ratio, whether a grasping pose of the gripper can be searched for;
a grasping-pose search module configured to perform a grasping-pose search based on the processed height image and determine the gripper's grasping pose, the grasping pose comprising a grasping center point, a grasping width and a grasping angle.
In a third aspect, an embodiment of the present application provides an electronic device comprising one or more processors and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program implementing the method described in any implementation of the first aspect when executed by a processor.
Compared with the prior art, the present invention has the following beneficial effects:
(1) Compared with conventional automatic sorting robots, the invention uses a sensor to locate objects in space and to judge whether a grasp has succeeded, which increases the robot's chance of grasping under working conditions with high object distribution density and high stacking rate, and effectively improves sorting efficiency.
(2) The instance segmentation model used by the invention is mature; the height image is processed under different height thresholds to filter out height information below each threshold, so accurate grasping poses are obtained by combining the height information of objects at different heights.
(3) The height-sensor-based automatic sorting method adds three-dimensional localization of objects in space and can simulate the gripper's grasp lines at different heights, so grasps are accurate and the robot's grasping efficiency is improved.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is an exemplary system architecture to which an embodiment of the present application can be applied;
Fig. 2 is a schematic flowchart of the height-sensor-based automatic sorting method of an embodiment of the invention;
Fig. 3 is a schematic diagram of the dimensional model of the light source system of the method;
Fig. 4 is the trigger timing diagram of data acquisition in the method;
Fig. 5 is a schematic diagram of image stitching in the method;
Fig. 6 is a schematic structural diagram of the construction-waste image acquisition platform of the method;
Fig. 7 is a color image used in the method;
Fig. 8 shows the recognition result of the instance segmentation model on the color image;
Fig. 9 illustrates how the minimum rectangle and its center point, length, width and deflection angle are obtained from the instance segmentation result in the method;
Fig. 10 is a height image used in the method;
Fig. 11 is a processed height image of the method;
Fig. 12 illustrates how the area ratio is obtained in the method;
Fig. 13 is a color image containing a red brick on a wooden block that needs to be grasped;
Fig. 14 is a processed height image on which the gripper's grasp lines have been drawn;
Fig. 15 illustrates obtaining the center point and the length or width of the minimum rectangle;
Fig. 16 illustrates drawing a straight line from the minimum rectangle, its length or width, and its deflection angle;
Fig. 17 illustrates drawing the gripper's grasp lines from that straight line;
Fig. 18 shows processed height images with grasp lines drawn for different grasping poses at a height threshold of 10;
Fig. 19 shows processed height images with grasp lines drawn for different grasping poses at a height threshold of 20;
Fig. 20 shows processed height images with grasp lines drawn for different grasping poses at a height threshold of 30;
Fig. 21 is a schematic diagram of the height-sensor-based automatic sorting apparatus of an embodiment of the invention;
Fig. 22 is a schematic structural diagram of a computer apparatus suitable for implementing an electronic device of an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the invention.
Fig. 1 shows an exemplary system architecture 100 to which the height-sensor-based automatic sorting method or apparatus of an embodiment of the present application can be applied.
As shown in Fig. 1, the architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 is the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages. Various applications, such as data processing and file processing applications, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. As hardware they may be various electronic devices, including but not limited to smartphones, tablet computers, laptop computers and desktop computers. As software they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module; no specific limitation is made here.
The server 105 may be a server providing various services, for example a back-end data processing server that processes files or data uploaded by the terminal devices 101, 102, 103. The back-end data processing server may process the received files or data and generate a processing result.
It should be noted that the automatic sorting method based on a height sensor provided by the embodiments of the present application may be executed by the server 105 or by the terminal devices 101, 102, 103; correspondingly, the automatic sorting apparatus based on a height sensor may be arranged in the server 105 or in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative; any number of terminal devices, networks and servers may be provided as required. When the processed data does not need to be obtained remotely, the architecture may omit the network and consist only of a server or a terminal device.
Fig. 2 shows an automatic sorting method based on a height sensor provided by an embodiment of the present application, comprising the following steps:
S1, acquiring a height image and a color image of the same sorting area.
In a specific embodiment, step S1 specifically comprises:
acquiring multiple single-line height images captured by a monochrome camera arranged at a first fixed position above the conveyor belt, and stitching them in order to obtain a stitched height image;
acquiring multiple single-line color images captured by a color camera arranged at a second fixed position above the conveyor belt, and stitching them in order to obtain a stitched color image;
cropping the stitched height image and the stitched color image according to the offset d between the first fixed position and the second fixed position, to obtain the height image and the color image.
Specifically, referring to Fig. 3, the brightness, uniformity and illumination angle of the light source all affect the camera's final image quality. To meet requirements such as high light intensity, wide coverage and long service life, this embodiment uses an OPT model OPTLSG1254-W high-brightness linear white LED light source. Because the illumination direction forms an angle with the camera's shooting direction, the same target photographed at different heights has different brightness. To mitigate this as far as possible, the angle between the illumination direction and the shooting direction should be as small as possible; modeling the light source system yields the diagram shown in Fig. 3.
To simplify the model, the side face of the light source is reduced to a rectangle of length L and width W; with the light source mounted at height H, the angle should satisfy the following expression:
Figure PCTCN2022084339-appb-000001
Solving the above expression numerically by programming gives an angle of approximately 4.87°.
Acquisition of the single-line height images and single-line color images involves two steps, acquisition triggering and data matching. During acquisition triggering, an encoder converts the displacement of the conveyor belt into pulse signals at a fixed ratio and feeds them to both the height camera and the color camera. With both cameras set to rising-edge triggering, each encoder pulse yields the single-line height image H_i and single-line color image C_i corresponding to that pulse; the trigger timing is shown in Fig. 4.
As Fig. 4 shows, because the conversion factor between pulse count and belt displacement is fixed, the acquired image data is not distorted when the belt changes speed or its speed fluctuates, and the two image streams are acquired in step. All single-line height images H_{i-1}, H_i, H_{i+1}, ... are stitched into one height image and all single-line color images C_{i-1}, C_i, C_{i+1}, ... into one color image, giving the complete color image and height image shown in Fig. 5. Because the physical mounting positions of the height camera and the color camera have a fixed offset d along the belt's running direction, at the same instant the two cameras' data correspond to different actual positions. During data matching, the data is therefore cropped according to the actual offset d between the two mounting positions: the portions inside the dashed boxes of both data sets are discarded and the remaining data are the matched data, so that objects in the matched height image correspond to objects in the color image. The final acquisition system is shown in Fig. 6.
In the embodiments of the present application, the monochrome camera of the height camera is integrated with a line laser, so the height of the objects at the laser line, i.e. one frame of the stitched image, can be obtained directly. One encoder pulse signal triggers one acquisition, and the 960 frames collected over 960 pulses are stitched into one height image.
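The pulse-triggered stitching and offset matching described above can be sketched as follows. This is an illustrative sketch only, not the patent's code: the row layout, the direction of the offset (which camera leads), and the function names are assumptions.

```python
# Illustrative sketch (assumed): each encoder pulse yields one single-line
# frame from the height camera and one from the color camera. Rows are
# stacked in pulse order into full images, then the fixed mounting offset d
# (in rows) between the two cameras is cropped away so objects line up.

def stitch_rows(rows):
    """Stack single-line frames (one list of pixels per pulse) into an image."""
    return [list(r) for r in rows]

def match_by_offset(height_img, color_img, d):
    """Drop the first d rows of the trailing camera's image and the last d
    rows of the leading camera's image so both cover the same belt section."""
    return height_img[d:], color_img[:len(color_img) - d]

# 6 pulses; the color camera is assumed mounted 2 rows ahead, so at pulse i
# it sees belt position i + 2 while the height camera sees position i.
h = stitch_rows([[i] * 4 for i in range(6)])        # row value = belt position
c = stitch_rows([[i + 2] * 4 for i in range(6)])
h2, c2 = match_by_offset(h, c, 2)
assert len(h2) == len(c2) == 4
assert h2[0][0] == c2[0][0] == 2    # matched rows show the same belt position
assert h2[-1][0] == c2[-1][0] == 5
```

In a real system each row would come from the camera SDK's line-scan callback; here short integer rows stand in for pixel data.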
S2, recognizing the mask and category of the object in the color image through an instance segmentation model, obtaining the minimum rectangle enclosing the object based on the mask, and deriving the center point, length, width and deflection angle of the minimum rectangle.
In a specific embodiment, the color image shown in Fig. 7 is obtained through step S1, and the contours and categories of the objects in the color image are recognized by the instance segmentation model, as shown in Fig. 8. Specifically, the instance segmentation model includes a Mask R-CNN neural network, which can identify not only the category and position of an object but also its mask. The mask is then used to obtain the minimum rectangle enclosing the object, and the object's center point, length, width, deflection angle and other information are derived from the minimum rectangle, as shown in Fig. 9. In other optional embodiments, the instance segmentation model may be built from other neural networks, as long as the object's contour and category can be obtained so as to derive the minimum rectangle and the object's center point, length, width, deflection angle, and so on.
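The quantities extracted in step S2 (center point, length, width and deflection angle of the minimum enclosing rectangle) can be illustrated with a small sketch. In practice this is typically done with OpenCV's `cv2.minAreaRect` on the mask contour; the brute-force pure-Python version below is an assumed stand-in that searches integer angles only, so it is approximate by construction.

```python
# Illustrative stand-in (assumed) for a minimum-area-rectangle routine:
# project the mask pixels onto frames rotated by each candidate angle and
# keep the angle whose axis-aligned box has the smallest area.
import math

def min_rect(mask):
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    best = None
    for deg in range(180):
        a = math.radians(deg)
        ca, sa = math.cos(a), math.sin(a)
        xs = [x * ca + y * sa for x, y in pts]      # coordinates rotated by -deg
        ys = [y * ca - x * sa for x, y in pts]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        if best is None or w * h < best[0]:
            cx, cy = (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2
            center = (cx * ca - cy * sa, cx * sa + cy * ca)   # rotate back
            best = (w * h, center, max(w, h), min(w, h), deg)
    _, center, length, width, angle = best
    return center, length, width, angle

# a 5x2 horizontal bar of object pixels inside an 8x4 mask
mask = [[1 if x < 5 and y < 2 else 0 for x in range(8)] for y in range(4)]
center, length, width, angle = min_rect(mask)
assert angle == 0 and abs(length - 4) < 1e-9 and abs(width - 1) < 1e-9
assert abs(center[0] - 2) < 1e-9 and abs(center[1] - 0.5) < 1e-9
```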
S3, processing the height image according to different height thresholds to obtain a processed height image.
In a specific embodiment, step S3 specifically comprises:
obtaining height information based on the height image, and filtering out height information below the height threshold to obtain a filtered height image;
binarizing the filtered height image to obtain the processed height image, in which pixels where an object is present are 1 and pixels without an object are 0.
Specifically, the height information of the objects on the conveyor belt can be obtained from the height image, as shown in Fig. 10. The pixel value of each point in the height image is the height of the object at that point. Filtering means setting the pixel value of points below the height threshold to 0 while leaving values above the threshold unchanged, thereby removing everything below a given height plane and displaying only the portions of the image above the threshold. In this way, different height thresholds can be set to produce filtered height images corresponding to each threshold. The filtered height image is then binarized, i.e. pixels where an object is present become 1 and pixels without an object become 0; the processed height image is shown in Fig. 11.
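A minimal sketch of the S3 processing, with plain Python lists standing in for the usual NumPy arrays (the `h > 0` guard, which keeps the empty belt at 0 even when the threshold is 0, is an assumption):

```python
# Sketch of step S3 (assumed): pixel values in the height image are object
# heights; points below the threshold are zeroed, then the result is
# binarized so remaining object pixels become 1 and everything else 0.
def process_height_image(height_img, threshold):
    return [[1 if h >= threshold and h > 0 else 0 for h in row]
            for row in height_img]

height_img = [[0, 12, 25],
              [5, 18, 30]]
assert process_height_image(height_img, 10) == [[0, 1, 1], [0, 1, 1]]
assert process_height_image(height_img, 20) == [[0, 0, 1], [0, 0, 1]]
```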
S4, comparing the area of the mask of the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determining, based on the area ratio, whether a grasping pose of the gripper can be searched for.
In a specific embodiment, the ratio of the object's mask area in the color image to its mask area in the height image after the grasp plane has been raised is computed. When the ratio is below 0.8, the object's shape has changed, and raising the grasp plane further, i.e. increasing the height threshold and deriving a new processed height image, will not yield a suitable grasping pose, so grasping of this object is abandoned. This is illustrated in Fig. 12, where panel a shows the recognition result of the color image; panel b shows the mask of one object extracted from that result; panel c shows the processed height image at a height threshold of 5, whose area differs little from the mask extracted from the color image, so a grasping pose can be searched for in this state; and panel d shows the processed height image at a height threshold of 25, whose area differs greatly from the color-image mask, so a grasping pose cannot be searched for in this state.
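The area-ratio check of step S4 can be sketched as follows. This is an illustrative sketch: the helper name and the pixel-counting details are assumptions; only the 0.8 cut-off comes from the text.

```python
# Sketch of the S4 check (assumed): compare the object's mask area from the
# color image with its area in the processed height image at the same
# location; below a ratio of 0.8 the object's visible shape has changed too
# much, and pose search at this grasp plane is abandoned.
def can_search_pose(color_mask, processed_height, ratio_limit=0.8):
    color_area = sum(v for row in color_mask for v in row)
    # count only height pixels that fall inside the color mask
    height_area = sum(h and m
                     for mrow, hrow in zip(color_mask, processed_height)
                     for m, h in zip(mrow, hrow))
    return height_area / color_area >= ratio_limit

mask = [[1, 1, 1, 1]]
assert can_search_pose(mask, [[1, 1, 1, 1]])        # ratio 1.0: keep searching
assert not can_search_pose(mask, [[1, 1, 0, 0]])    # ratio 0.5: give up
```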
S5, performing a grasping-pose search based on the processed height image to determine the grasping pose of the gripper, the grasping pose comprising a grasping center point, a grasping width and a grasping angle.
In a specific embodiment, the grasping-pose search in step S5 comprises a grasping-pose search based on the minimum rectangle and a grasping-pose search based on the shape of the object; using the processed height images obtained under different height thresholds, the search adjusts the gripper's grasping pose to obtain an interference-free grasping pose.
In a specific embodiment, step S5 specifically comprises:
S51, computing a first pixel-value sum Sum1 of the processed height image;
S52, drawing grasp lines of the gripper on the processed height image for different grasping center points, grasping widths and grasping angles according to the center point, length, width and deflection angle of the minimum rectangle, setting the pixel value of the grasp lines to 0, and computing a second pixel-value sum Sum2 of the processed height image with the grasp lines drawn; the search based on the minimum rectangle is performed first, followed by the search based on the shape of the object, and the grasping pose is determined in response to the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 being smaller than a preset threshold.
In a specific embodiment, the grasping-pose search based on the minimum rectangle in step S5 specifically comprises:
determining whether, when the gripper draws the grasp lines at a first grasping pose, the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, grasping along the length direction of the minimum rectangle with the center point of the minimum rectangle as the grasping center point; otherwise rotating the grasping angle of the first grasping pose by 90° to obtain a second grasping pose, and determining whether, when the gripper draws the grasp lines at the second grasping pose, the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, grasping along the width direction of the minimum rectangle with the center point of the minimum rectangle as the grasping center point; otherwise performing the grasping-pose search based on the shape of the object.
In a specific embodiment, the grasping-pose search based on the shape of the object in step S5 specifically comprises: judging the shape of the object; if the object is strip-shaped, fixing the grasping angle and varying the grasping center point and grasping width to draw the grasp lines of the gripper, and determining whether the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, outputting the grasping center point, grasping width and grasping angle at which the difference between Sum1 and Sum2 is smallest, otherwise changing the height threshold and repeating steps S3-S5; if the object is not strip-shaped, fixing the grasping center point and varying the grasping angle to draw the grasp lines of the gripper, and determining whether the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, outputting the grasping center point, grasping width and grasping angle at which the difference is smallest, otherwise changing the height threshold and repeating steps S3-S5.
Specifically, as shown in Fig. 13, the red brick lying on the wooden block in Fig. 13 needs to be grasped. Processed images are obtained at different height thresholds, and the first pixel-value sum Sum1 of each processed height image is computed. Because the processed height image has been height-filtered and binarized, the retained object pixels have value 1 and all other pixels have value 0, so Sum1 can be computed for the processed height image at each height threshold. Using the center point, length, width, deflection angle and other information of the object's minimum rectangle, two thick line segments representing the gripper's grasp lines (the thick grey lines in the figure) are drawn on the processed height image with pixel value 0, as shown in Fig. 14; with pixel value 0 the grasp lines would appear black, and they are shown in grey here only for ease of explanation. Specifically, as shown in Fig. 15, the center point, length, width and deflection angle are obtained from the minimum rectangle; as shown in Fig. 16, a straight line parallel to the width or length direction and passing through the rectangle's center point can then be computed from these values. Because the detection result carries some error, the gripper opening is made 10 mm larger than the object's width or length on each side, so the line is 20 mm longer than the rectangle's width or length. Finally, two line segments as long as the gripper fingers are drawn at the two endpoints of that line as the grasp lines, as shown in Fig. 17; these represent the gripper's position during an actual grasp: the gripper moves to the object's center point, rotates until parallel to the object, opens 20 mm wider than the object, then descends and closes to grasp the object.
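The grasp-line geometry just described (gripper centered on the object, rotated parallel to it, opened 10 mm wider than the object on each side) can be sketched as follows; the function name and coordinate conventions are assumptions.

```python
# Geometry sketch (assumed): from the rectangle's center, the grasped
# dimension, and the deflection angle, place the two gripper-finger points.
# The opening is widened by 10 mm per side to absorb detection error, so the
# segment through the center is 20 mm longer than the object dimension.
import math

def finger_positions(center, obj_dim, angle_deg, margin=10):
    """Return the two points where the gripper fingers come down."""
    half = obj_dim / 2 + margin
    a = math.radians(angle_deg)
    dx, dy = half * math.cos(a), half * math.sin(a)
    cx, cy = center
    return (cx - dx, cy - dy), (cx + dx, cy + dy)

p1, p2 = finger_positions((100, 50), 60, 0)
assert p1 == (60, 50) and p2 == (140, 50)   # 60 mm object -> 80 mm opening
```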
Figs. 18, 19 and 20 show processed height images with the gripper's grasp lines drawn for different grasping poses at height thresholds of 10, 20 and 30, respectively. The pixel-value sum Sum2 of the processed height image with the grasp lines drawn is computed. When the difference between Sum1 and Sum2 is 0 or below a certain value, grasping at this position involves no object interference. If an object did interfere, drawing the grasp lines on the image would change the pixel values at the object's position from 1 to 0, so the total pixel-value sum Sum2 of the image with the grasp lines drawn would be smaller than the total Sum1 of the image without them. If there is no interference, the grasp is made along the rectangle's length direction with the rectangle's center point as the grasping center point, and the grasping pose is output. If there is interference, the grasping angle is rotated by 90°, i.e. when no pose can be found along the length direction the width direction is tried, and interference is checked again; if clear, the grasp is made along the width direction with the rectangle's center point as the grasping center point, and the pose is output. Otherwise the object's shape is judged, and the shape-based grasping-pose search is performed according to whether the object is strip-shaped or not: for a strip-shaped object the rotation angle is fixed while the grasping center point and grasping width are varied, the grasp lines are drawn and the difference between Sum1 and Sum2 is computed, and the grasping center point and width with the smallest difference are output; for a non-strip-shaped object the grasping center point is fixed while the grasping angle is varied, the grasp lines are drawn and the difference between Sum1 and Sum2 is computed, and the grasping angle with the smallest difference is output. Finally an interference check of the grasping pose is made: if there is no interference the pose is output, otherwise the height threshold is adjusted and steps S3-S5 are repeated for the next search. The height threshold is 0 on the first pass through the loop and increases by 5 on each subsequent pass, i.e. heights of 0, 5, 10, 15, ...; binarization filters out objects lower than the threshold, so grasping poses are searched on planes of different heights.
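The Sum1/Sum2 interference test above can be sketched as follows. This is illustrative only: how the grasp lines are rasterized into pixel coordinates is an assumption, and the tolerance is left as a parameter.

```python
# Sketch of the S5 interference test (assumed): Sum1 is the pixel sum of the
# binarized height image; the gripper's grasp lines are drawn with pixel
# value 0 and the sum recomputed as Sum2. If drawing the lines erased object
# pixels (Sum1 - Sum2 above a small tolerance), the gripper would collide
# with something at that pose.
def pose_is_clear(processed, line_pixels, tol=0):
    sum1 = sum(v for row in processed for v in row)
    drawn = [row[:] for row in processed]
    for x, y in line_pixels:            # rasterized grasp-line pixels
        drawn[y][x] = 0
    sum2 = sum(v for row in drawn for v in row)
    return sum1 - sum2 <= tol

img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
# finger lines in the empty columns: no object pixels removed -> clear
assert pose_is_clear(img, [(0, 0), (0, 1), (3, 0), (3, 1)])
# a line through the object erases pixels -> interference
assert not pose_is_clear(img, [(1, 0), (1, 1)])
```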
With further reference to Fig. 21, as an implementation of the methods shown in the figures above, the present application provides an embodiment of an automatic sorting apparatus based on a height sensor; this apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied in various electronic devices.
The embodiment of the present application provides an automatic sorting apparatus based on a height sensor, comprising:
an image acquisition module 1 configured to acquire a height image and a color image of the same sorting area;
an object recognition module 2 configured to recognize the mask and category of an object in the color image through an instance segmentation model, obtain the minimum rectangle enclosing the object based on the mask, and derive the object's center point, length, width and deflection angle from the minimum rectangle;
a height image processing module 3 configured to process the height image according to different height thresholds to obtain a processed height image;
an area comparison module 4 configured to compare the area of the mask of the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determine, based on the area ratio, whether a grasping pose of the gripper can be searched for;
a grasping-pose search module 5 configured to perform a grasping-pose search based on the processed height image and determine the gripper's grasping pose, the grasping pose comprising a grasping center point, a grasping width and a grasping angle.
Referring now to Fig. 22, it shows a schematic structural diagram of a computer apparatus 2200 suitable for implementing an electronic device of an embodiment of the present application (for example, the server or terminal device shown in Fig. 1). The electronic device shown in Fig. 22 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 22, the computer apparatus 2200 includes a central processing unit (CPU) 2201 and a graphics processing unit (GPU) 2202, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 2203 or a program loaded from a storage section 2209 into a random access memory (RAM) 2204. The RAM 2204 also stores the various programs and data required for the operation of the apparatus 2200. The CPU 2201, GPU 2202, ROM 2203 and RAM 2204 are connected to one another through a bus 2205. An input/output (I/O) interface 2206 is also connected to the bus 2205.
The following components are connected to the I/O interface 2206: an input section 2207 including a keyboard, a mouse, and the like; an output section 2208 including a liquid crystal display (LCD), a speaker, and the like; a storage section 2209 including a hard disk and the like; and a communication section 2210 including a network interface card such as a LAN card or a modem. The communication section 2210 performs communication processing via a network such as the Internet. A drive 2211 may also be connected to the I/O interface 2206 as needed. A removable medium 2212, such as a magnetic disk, optical disk, magneto-optical disk or semiconductor memory, is mounted on the drive 2211 as needed, so that a computer program read from it can be installed into the storage section 2209 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 2210 and/or installed from the removable medium 2212. When the computer program is executed by the central processing unit (CPU) 2201 and the graphics processing unit (GPU) 2202, the above functions defined in the method of the present application are performed.
It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium or a computer-readable medium, or any combination of the two. A computer-readable medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable media may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable medium may be any tangible medium that contains or stores a program that can be used by, or in conjunction with, an instruction execution system, apparatus, or device. Also in the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable medium that can send, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wireline, optical-fiber cable, RF, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate possible architectures, functions and operations of apparatuses, methods and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations the functions noted in a block may occur out of the order noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based apparatuses that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules involved in the embodiments described in the present application may be implemented by software or by hardware. The described modules may also be arranged in a processor.
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist independently without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a height image and a color image of the same sorting area; recognize the mask and category of an object in the color image through an instance segmentation model, obtain the minimum rectangle enclosing the object based on the mask, and derive the center point, length, width and deflection angle of the minimum rectangle; process the height image according to different height thresholds to obtain a processed height image; compare the area of the mask of the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determine, based on the area ratio, whether a grasping pose of the gripper can be searched for; and perform a grasping-pose search based on the processed height image to determine the gripper's grasping pose, the grasping pose comprising a grasping center point, a grasping width and a grasping angle.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles employed. A person skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, solutions formed by replacing the above features with technical features of similar functions disclosed in (but not limited to) the present application.
Industrial Applicability
The automatic sorting method, apparatus and readable medium based on a height sensor of the present invention use a sensor to locate objects in space and to judge whether a grasp has succeeded, increasing the chance of grasping under working conditions such as high object distribution density and high stacking rate, and improving sorting efficiency.

Claims (12)

  1. An automatic sorting method based on a height sensor, characterized by comprising the following steps:
    S1, acquiring a height image and a color image of the same sorting area;
    S2, recognizing the mask and category of an object in the color image through an instance segmentation model, obtaining the minimum rectangle enclosing the object based on the mask, and deriving the center point, length, width and deflection angle of the minimum rectangle;
    S3, processing the height image according to different height thresholds to obtain a processed height image;
    S4, comparing the area of the mask of the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determining, based on the area ratio, whether a grasping pose of a gripper can be searched for;
    S5, performing a grasping-pose search based on the processed height image to determine the grasping pose of the gripper, the grasping pose comprising a grasping center point, a grasping width and a grasping angle.
  2. The automatic sorting method based on a height sensor according to claim 1, characterized in that the grasping-pose search in step S5 comprises a grasping-pose search based on the minimum rectangle and a grasping-pose search based on the shape of the object, the grasping-pose search adjusting the grasping pose of the gripper according to the processed height images under different height thresholds so as to obtain an interference-free grasping pose.
  3. The automatic sorting method based on a height sensor according to claim 2, characterized in that step S5 specifically comprises:
    S51, computing a first pixel-value sum Sum1 of the processed height image;
    S52, drawing grasp lines of the gripper on the processed height image for different grasping center points, grasping widths and grasping angles according to the center point, length, width and deflection angle of the minimum rectangle, setting the pixel value of the grasp lines to 0, and computing a second pixel-value sum Sum2 of the processed height image with the grasp lines drawn; performing first the grasping-pose search based on the minimum rectangle and then the grasping-pose search based on the shape of the object, and determining the grasping pose in response to the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 being smaller than a preset threshold.
  4. The automatic sorting method based on a height sensor according to claim 3, characterized in that the grasping-pose search based on the minimum rectangle in step S5 specifically comprises:
    determining whether, when the gripper draws the grasp lines at a first grasping pose, the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, grasping along the length direction of the minimum rectangle with the center point of the minimum rectangle as the grasping center point; otherwise rotating the grasping angle of the first grasping pose by 90° to obtain a second grasping pose, and determining whether, when the gripper draws the grasp lines at the second grasping pose, the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, grasping along the width direction of the minimum rectangle with the center point of the minimum rectangle as the grasping center point; otherwise performing the grasping-pose search based on the shape of the object.
  5. The automatic sorting method based on a height sensor according to claim 3, characterized in that the grasping-pose search based on the shape of the object in step S5 specifically comprises: judging the shape of the object; if the object is strip-shaped, fixing the grasping angle, varying the grasping center point and the grasping width to draw the grasp lines of the gripper, and determining whether the difference between the first pixel-value sum Sum1 and the second pixel-value sum Sum2 is smaller than the preset threshold; if so, outputting the grasping center point, grasping width and grasping angle at which the difference between Sum1 and Sum2 is smallest, otherwise changing the height threshold and repeating steps S3-S5; if the object is not strip-shaped, fixing the grasping center point, varying the grasping angle to draw the grasp lines of the gripper, and determining whether the difference between Sum1 and Sum2 is smaller than the preset threshold; if so, outputting the grasping center point, grasping width and grasping angle at which the difference is smallest, otherwise changing the height threshold and repeating steps S3-S5.
  6. The automatic sorting method based on a height sensor according to claim 2, characterized in that step S3 specifically comprises:
    obtaining height information based on the height image, and filtering out height information below the height threshold according to the height threshold to obtain a filtered height image;
    binarizing the filtered height image to obtain the processed height image.
  7. The automatic sorting method based on a height sensor according to claim 1, characterized in that step S1 specifically comprises:
    acquiring multiple single-line height images captured by a monochrome camera arranged at a first fixed position above the conveyor belt, and stitching them in order to obtain a stitched height image;
    acquiring multiple single-line color images captured by a color camera arranged at a second fixed position above the conveyor belt, and stitching them in order to obtain a stitched color image;
    cropping the stitched height image and the stitched color image according to the offset d between the first fixed position and the second fixed position, to obtain the height image and the color image.
  8. The automatic sorting method based on a height sensor according to claim 7, characterized in that a high-brightness linear white LED light source is selected in step S1; the side face of the light source is simplified to a rectangle of length L and width W, and the mounting height of the light source is H; the angle should then satisfy the following expression:
    Figure PCTCN2022084339-appb-100001
  9. The automatic sorting method based on a height sensor according to claim 7, characterized in that the monochrome camera is integrated with a line laser, and the camera is used to obtain the height of the object at the laser line, i.e. one frame for stitching; one encoder pulse signal triggers one acquisition, and the 960 frames acquired over 960 pulses are stitched into one height image.
  10. [Corrected under Rule 91, 29.01.2023]
    An automatic sorting apparatus based on a height sensor, characterized by comprising:
    an image acquisition module configured to acquire a height image and a color image of the same sorting area;
    an object recognition module configured to recognize the mask and category of an object in the color image through an instance segmentation model, obtain the minimum rectangle enclosing the object based on the mask, and derive the object's center point, length, width and deflection angle from the minimum rectangle;
    a height image processing module configured to process the height image according to different height thresholds to obtain a processed height image;
    an area comparison module configured to compare the area of the mask of the color image with the area of the object at the corresponding position in the processed height image to obtain an area ratio, and determine, based on the area ratio, whether a grasping pose of a gripper can be searched for;
    a grasping-pose search module configured to perform a grasping-pose search based on the processed height image and determine the grasping pose of the gripper, the grasping pose comprising a grasping center point, a grasping width and a grasping angle.
  11. [Corrected under Rule 91, 29.01.2023]
    An electronic device, comprising:
    one or more processors;
    a storage device for storing one or more programs,
    which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
  12. [Corrected under Rule 91, 29.01.2023]
    A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the method according to any one of claims 1-7.
PCT/CN2022/084339 2022-01-19 2022-03-31 Automatic sorting method and apparatus based on height sensor, and readable medium WO2023137871A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210057346.XA CN114570674A (zh) 2022-01-19 2022-01-19 Automatic sorting method and apparatus based on height sensor, and readable medium
CN202210057346.X 2022-01-19

Publications (1)

Publication Number Publication Date
WO2023137871A1 true WO2023137871A1 (zh) 2023-07-27

Family

ID=81770965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/084339 WO2023137871A1 (zh) 2022-01-19 2022-03-31 Automatic sorting method and apparatus based on height sensor, and readable medium

Country Status (2)

Country Link
CN (1) CN114570674A (zh)
WO (1) WO2023137871A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612061A (zh) * 2023-11-09 2024-02-27 中科微至科技股份有限公司 Visual detection method for package stacking state for separation of stacked parcels

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648364A (zh) * 2019-09-17 2020-01-03 华侨大学 Multi-dimensional spatial solid waste visual detection, positioning and recognition method and system
CN111144426A (zh) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, apparatus, device and storage medium
US20210187741A1 (en) * 2019-12-18 2021-06-24 Vicarious Fpc, Inc. System and method for height-map-based grasp execution
CN113379849A (zh) * 2021-06-10 2021-09-10 南开大学 Depth-camera-based robot autonomous recognition and intelligent grasping method and system
CN113420746A (zh) * 2021-08-25 2021-09-21 中国科学院自动化研究所 Robot visual sorting method, apparatus, electronic device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086736A (zh) * 2018-08-17 2018-12-25 深圳蓝胖子机器人有限公司 Target acquisition method, device and computer-readable storage medium
CN110580725A (zh) * 2019-09-12 2019-12-17 浙江大学滨海产业技术研究院 RGB-D-camera-based box sorting method and system
CN111079548B (zh) * 2019-11-22 2023-04-07 华侨大学 Online solid waste recognition method based on target height information and color information
CN111515945A (zh) * 2020-04-10 2020-08-11 广州大学 Control method, system and apparatus for vision-based positioning, sorting and grasping by a robotic arm

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648364A (zh) * 2019-09-17 2020-01-03 华侨大学 Multi-dimensional spatial solid waste visual detection, positioning and recognition method and system
US20210187741A1 (en) * 2019-12-18 2021-06-24 Vicarious Fpc, Inc. System and method for height-map-based grasp execution
CN111144426A (zh) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, apparatus, device and storage medium
CN113379849A (zh) * 2021-06-10 2021-09-10 南开大学 Depth-camera-based robot autonomous recognition and intelligent grasping method and system
CN113420746A (zh) * 2021-08-25 2021-09-21 中国科学院自动化研究所 Robot visual sorting method, apparatus, electronic device and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612061A (zh) * 2023-11-09 2024-02-27 中科微至科技股份有限公司 Visual detection method for package stacking state for separation of stacked parcels

Also Published As

Publication number Publication date
CN114570674A (zh) 2022-06-03

Similar Documents

Publication Publication Date Title
US11958197B2 (en) Visual navigation inspection and obstacle avoidance method for line inspection robot
CN109145759B (zh) Vehicle attribute recognition method, apparatus, server and storage medium
CN108108746B (zh) License plate character recognition method based on the Caffe deep learning framework
WO2021008019A1 (zh) Pose tracking method, apparatus and computer-readable storage medium
WO2015196616A1 (zh) Image edge detection method, and image target recognition method and apparatus
CN107067015B (zh) Vehicle detection method and apparatus based on multi-feature deep learning
JP4309927B2 (ja) Eyelid detection apparatus and program
CN109753878B (zh) Imaging recognition method and system in severe weather
WO2021013049A1 (zh) Foreground image acquisition method, foreground image acquisition apparatus and electronic device
WO2023137871A1 (zh) Automatic sorting method and apparatus based on height sensor, and readable medium
CN112561899A (zh) Power inspection image recognition method
US11631261B2 (en) Method, system, server, and storage medium for logistics management based on QR code
CN110110666A (zh) Target detection method and apparatus
CN114037087B (zh) Model training method and apparatus, depth prediction method and apparatus, device and medium
CN113034526A (zh) Grasping method, grasping apparatus and robot
McAlorum et al. Automated concrete crack inspection with directional lighting platform
CN113033355B (zh) Abnormal target recognition method and apparatus based on dense power transmission corridors
CN115115546A (zh) Image processing method, system, electronic device and readable storage medium
CN111767751B (zh) QR code image recognition method and apparatus
Tupper et al. Pedestrian proximity detection using RGB-D data
CN111583341A (zh) Pan-tilt camera displacement detection method
KR100801665B1 (ko) Align mark recognition machine vision system and align mark recognition method
CN113450291B (zh) Image information processing method and apparatus
CN113780269A (zh) Image recognition method, apparatus, computer system and readable storage medium
WO2023197390A1 (zh) Pose tracking method and apparatus, electronic device and computer-readable medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22921305

Country of ref document: EP

Kind code of ref document: A1