WO2023084116A1 - Method and device for classifying objects which are conveyed lying on a surface - Google Patents

Method and device for classifying objects which are conveyed lying on a surface

Info

Publication number
WO2023084116A1
WO2023084116A1 · PCT/EP2022/082002 · EP2022082002W
Authority
WO
WIPO (PCT)
Prior art keywords
objects
height profile
designed
neural network
segment
Prior art date
Application number
PCT/EP2022/082002
Other languages
German (de)
English (en)
Inventor
Massoud Koshan
Anthony Ioan
Lino Gerlach
Original Assignee
Koiotech UG (haftungsbeschränkt)
Priority date
Filing date
Publication date
Application filed by Koiotech UG (haftungsbeschränkt) filed Critical Koiotech UG (haftungsbeschränkt)
Publication of WO2023084116A1 publication Critical patent/WO2023084116A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30128Food products

Definitions

  • the invention relates to a method for determining the value of at least one shape parameter of objects which are conveyed, lying on a surface in an unknown orientation and grouping, by moving the surface in a conveying direction running along the surface.
  • the invention relates to a method having the features of the preamble of independent claim 1.
  • the invention relates to a device for carrying out such a method, in particular with the features of the preamble of independent claim 12.
  • the objects can be, for example, root crops such as potatoes, which are poured onto a conveyor belt and transported by it.
  • the shape parameter of these objects, the value of which is determined, can be, for example, the square dimensions of the potatoes or the number of sticks of a certain cross-section and a certain length that can be cut from the potatoes in the manufacture of french fries.
  • Measuring objects conveyed individually on a conveyor belt using a laser scanner is known and is used, for example, in the production monitoring of various products.
  • the methods and devices used are unsuitable for irregular objects that are arranged on a conveyor belt in an unknown orientation and grouping, i.e. in particular also lying on top of one another.
  • DE 101 34 714 A1 discloses a method and an arrangement for contactless surface inspection of products with an approximately cylindrical shape, such as potatoes.
  • the products are individually transferred in their longitudinal direction to two closely adjacent, stationary rollers rotating in the same direction at the same speed, which are also arranged in the longitudinal direction.
  • the individual products are simultaneously set in rotation and in a longitudinal movement along the longitudinal direction of the rollers.
  • a line-wise scanning sensor, also aligned in the longitudinal direction, captures the image area between the rollers and scans the entire development of the surface of each product once or several times while the product moves through its image field in a rotating and linear manner.
  • the products are classified into several classes and sorted mechanically. This procedure is only practical for products that occur in comparatively small numbers per unit of time. Otherwise, only a sample can be separated from a main stream of products and classified into classes.
  • it is known from WO 2020/120 702 A1 to determine three-dimensional training image data and associated training weight data of a plurality of training food objects using a 3D imaging device and a scale. A neural network is trained using this training image data and associated training weight data. Then three-dimensional image data of food objects is acquired with a 3D imaging device, and the trained neural network and the acquired 3D images are used to determine a weight for the respective food product.
  • the food objects can be conveyed on a conveyor and thereby scanned with the aid of a laser scanner, the impact point of the laser beam coming from the laser scanner being recorded with a camera.
  • WO 2019/089825 A1 describes a system which includes an imaging device for generating images of an area and of targets in the area.
  • An object characterization processor is connected to the imaging device and has a neural network and stored parameters for characterizing the target objects.
  • the neural network detects and outputs the presence of a plurality of different materials within the images based on a plurality of different features.
  • the neural network is specifically described as a fully convolutional neural network and also referred to as a machine learning-based segmentation algorithm. This neural network generates material-specific images for different materials, one of which, for example, indicates the presence of aluminum cans, their size and orientation.
  • a neural network for sorting objects from a mass is known from EP 3 816 857 A1.
  • the sorting system has at least one radiation source to irradiate the objects and at least one optical sensor to acquire spectral data on the radiation reflected by the objects.
  • a processing circuit analyzes the reflected radiation of the objects by inputting the spectral data into a convolutional neural network (CNN) with at least two layers of convolution, in order to recognize and classify the objects in the spectral data and/or to segment the spectral data semantically.
  • a mechanical sorter is provided to sort the objects using the analysis of the processing circuitry.
  • in US 2018/0042176 A1, a height sensor records a height profile of harvested plants on a conveyor.
  • the height sensor can be a LIDAR sensor.
  • the height profile is determined in Cartesian coordinates.
  • the height profile is analyzed to determine the value of a shape and/or quality parameter of the harvested plants.
  • the invention is based on the object of providing a method and a device for determining the value of at least one shape parameter of objects which are conveyed, lying on a surface in an unknown orientation and grouping, by moving the surface in a conveying direction running along the surface, which are also suitable for objects that occur in large numbers per unit of time.
  • the object of the invention is achieved by a method having the features of independent patent claim 1 and by a device having the features of independent patent claim 12.
  • the dependent patent claims are directed to preferred embodiments of the method according to the invention and the device according to the invention.
  • the directional statement “transverse” is to be understood in such a way that it preferably, but not necessarily, means “perpendicular”. In principle, the directional statement “transverse” is only meant as “essentially perpendicular” and allows an angular deviation from “exactly perpendicular” of up to 40°, or at least up to 20°.
  • with a measuring beam of a LIDAR scanner directed onto the surface, the objects lying on the surface are repeatedly scanned in a plane running transversely to the conveying direction.
  • a speed at which the surface passes through the plane in the conveying direction is recorded, and an original height profile of the objects on the surface is determined point by point from signals from the LIDAR scanner and the speed of the surface.
  • the original height profile is segmented with a first neural network, with segments of the original height profile being identified whose points each belong to a single object.
  • sections of the original height profile, each comprising an identified segment and an area surrounding this one identified segment, are transformed into Cartesian coordinates and then analyzed with a second neural network with regard to the value of the at least one shape parameter of the respective individual object to which the points of the segment belong.
  • a LIDAR scanner records the travel time of its measurement beam to a measurement point hit by the measurement beam and back to the LIDAR scanner. The distance from the measuring point to the LIDAR scanner results from this transit time and the speed of light.
  • a LIDAR scanner also has means for aligning the measuring beam in different directions, typically one or more rotating mirrors. The position of these means is assigned to the respective propagation time of the measuring beam or the distance of the measuring point derived therefrom in order to record the direction in which the measuring point lies from the LIDAR scanner in addition to the distance.
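As a minimal sketch of this time-of-flight principle (the function name and units are illustrative assumptions, not taken from the patent): the one-way distance follows from half the round-trip time of the measuring beam multiplied by the speed of light, and the position of the beam-alignment means supplies the direction.

```python
C = 299_792_458.0  # speed of light in m/s

def measuring_point(transit_time_s, beam_angle_rad):
    """Return (distance, angle) of a measuring point relative to the scanner.

    The beam travels to the measuring point and back, so the one-way
    distance is half the round-trip transit time times the speed of light;
    the mirror angle recorded alongside gives the direction of the point.
    """
    distance = 0.5 * transit_time_s * C
    return distance, beam_angle_rad
```

For a round-trip time of 20 ns this yields a one-way distance of roughly 3 m, which is the order of magnitude expected for a scanner mounted above a conveyor.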
  • the objects lying on the surface are repeatedly scanned in a plane running transversely to the conveying direction with the measuring beam of the LIDAR scanner directed onto the surface.
  • the plane can be slightly inclined to the conveying direction in such a way that the surface is scanned with the measuring beam in lines running almost perpendicularly to the conveying direction.
  • the inclination of the plane to be set for this depends on the speed with which the surface passes through the plane in the conveying direction. If the course of the plane deviates significantly from this ideal course, this must be taken into account in the further steps of the method according to the invention. In any case, when the objects lying on the surface are repeatedly scanned, the measuring points of the objects that follow one another in the conveying direction are not all scanned with the measuring beam at the same time.
  • the areal scanning of the objects lying on the surface in the method according to the invention results from the pivoting of the measuring beam in the plane running transversely to the conveying direction only in a spatial direction running along the surface; in the second spatial direction running along the surface, the scanning takes place by moving the surface on which the objects lie.
  • the position of each measuring point of the LIDAR scanner can be assigned to a point on the surface or, if it lies between this point and the LIDAR scanner, to an object lying on the surface.
  • the original height profile of the objects on the surface is determined point by point from the signals of the LIDAR scanner and the speed of the surface.
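This point-by-point construction of the original height profile from the scanner signals and the recorded belt speed can be sketched as follows, under an assumed geometry that the patent does not fix (pivot axis at a known height above the surface, beam angle 0 pointing straight down); all names are illustrative.

```python
import math

def height_profile_rows(scans, scanner_height_m, belt_speed_m_s, scan_period_s):
    """Assemble an original height profile, row by row, from successive scans.

    `scans` is a list of scan lines; each line is a list of
    (angle_rad, range_m) pairs from the LIDAR.  The row position along the
    conveying direction follows from the belt speed and the time between
    scans; the height of each point follows from the assumed geometry.
    """
    rows = []
    for i, line in enumerate(scans):
        z = i * belt_speed_m_s * scan_period_s        # position in conveying direction
        points = []
        for angle, rng in line:
            x = rng * math.sin(angle)                 # position across the belt
            h = scanner_height_m - rng * math.cos(angle)  # height above the belt
            points.append((z, x, h))
        rows.append(points)
    return rows
```

With a scanner 2 m above the belt, a straight-down return at range 1.7 m corresponds to a point 0.3 m high; the second scan line lands one belt-advance further along the conveying direction.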
  • this original height profile is segmented with the aid of the first neural network.
  • the first neural network identifies segments of the height profile in such a way that all points of each identified segment belong to a single object and that each identified segment includes all points of the original height profile that belong to the respective single object.
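The patent assigns this segmentation to a trained first neural network whose architecture it deliberately leaves open. Purely to illustrate the input/output contract — one label per profile point, with each label covering all points of one object — here is a non-learned stand-in based on connected-component labelling of thresholded heights (threshold and names are assumptions).

```python
import numpy as np

def label_segments(height, threshold=0.05):
    """Label connected regions of a height profile that rise above the belt.

    Stand-in for the first neural network: returns an integer label per
    profile point, 0 for the belt, 1..n for the n found regions.
    """
    mask = height > threshold
    labels = np.zeros(height.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # point already claimed by a region
        current += 1
        stack = [start]
        labels[start] = current
        while stack:                      # flood-fill the connected region
            r, c = stack.pop()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < height.shape[0] and 0 <= nc < height.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    return labels
```

Unlike this stand-in, the trained network can also separate touching or overlapping objects, which is exactly why the patent uses a neural network for this step.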
  • the sections of the original height profile, each of which comprises not only a segment identified in the first evaluation step but also a limited environment of this identified segment, are transformed into Cartesian coordinates and then analyzed with the help of the second neural network.
  • the second neural network determines the value of the at least one shape parameter of interest of the respective individual object. The content of this determination can be that the exact value of the shape parameter is determined or in which of several value ranges of the shape parameter the respective value falls.
  • the present invention does not relate to the precise structure of the first and second neural networks, which may vary. Rather, the invention consists in carrying out the evaluation in successive evaluation steps which are defined in such a way that they can be managed without any problems using two neural networks and can therefore be carried out fully automatically.
  • the original coordinates of the measuring points of the scanned objects on the surface, which result from the signals of the LIDAR scanner and the speed of the surface in the method according to the invention, are cylindrical coordinates in which the z-coordinate runs in the conveying direction and the angle coordinate runs around a pivot axis of the measuring beam of the LIDAR scanner.
  • a transformation of the original coordinates of all measuring points into Cartesian coordinates would require a lot of computing time.
  • this computing time is significantly reduced in the method according to the invention by first determining the height profile in the original cylinder coordinates, or at least in not yet completely transformed coordinates; by also segmenting the height profile directly in these coordinates; and by converting the original height profile into Cartesian coordinates only for the sections that are analyzed in the second evaluation step.
  • the transformation into Cartesian coordinates is not just a matter of converting the points of the original height profile into Cartesian coordinates, but also of determining the height profile in Cartesian coordinates, on the basis of extrapolations between the measuring points of the LIDAR scanner, at points that are distributed at equal intervals over the moving surface. This is a prerequisite for the identified segments being comparable with one another, regardless of where the associated object was located on the surface. This in turn is a prerequisite for the associated sections of the height profile being analyzable without considering further information.
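A sketch of this resampling step for a single scan line, under an assumed geometry (pivot axis at a known height above the surface, beam angle 0 pointing straight down; names are illustrative): measuring points are converted to Cartesian positions across the belt, and the height is then evaluated at equally spaced grid positions by linear interpolation between the measuring points.

```python
import numpy as np

def resample_scan_line(angles_rad, ranges_m, scanner_height_m, x_grid_m):
    """Transform one scan line into Cartesian heights sampled at the
    equally spaced positions `x_grid_m` across the belt.

    Heights between the actual measuring points are filled in by linear
    interpolation, so segments become comparable regardless of where the
    object sat on the surface.
    """
    x = ranges_m * np.sin(angles_rad)                     # position across the belt
    h = scanner_height_m - ranges_m * np.cos(angles_rad)  # height above the belt
    order = np.argsort(x)                                 # np.interp needs sorted x
    return np.interp(x_grid_m, x[order], h[order])
```

Scanning an empty, flat belt this way yields a height of zero at every grid position, which is the zero-line behaviour described above.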
  • a group can be identified specifically of those segments which each belong to an individual object that lies exposed on the surface or on other objects to the extent that its upper side is not partially covered by one or more other objects. If, in the second evaluation step, only sections of the height profile are analyzed that include a segment from this group, the analysis in the second evaluation step is particularly simple. Identifying the segments of this group is also a task that can be mastered with the help of a neural network without any fundamental problems.
  • the values of the at least one shape parameter are only determined for some of the objects.
  • this part of the objects is a random and therefore also fundamentally representative random sample that is not singled out.
  • the identified segments can be indicated to the second neural network for the second evaluation step by a mask for the height profile.
  • the area of the respective mask can then be used to define each section of the height profile concentrically to a base area of the associated segment of the height profile.
  • the concentricity can be defined specifically in relation to a centroid of the mask or the base area.
  • the sections of the height profile preferably have fixed absolute dimensions.
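A minimal sketch of cutting a section of fixed absolute dimensions concentric to a segment, with concentricity defined via the centroid of the segment mask as described above (function name and the default size are illustrative assumptions):

```python
import numpy as np

def section_around_segment(height, mask, size=(32, 32)):
    """Cut a fixed-size section of the height profile, centred on the
    centroid of the segment mask; positions outside the profile are zero.
    """
    rows, cols = np.nonzero(mask)
    cr, cc = int(round(rows.mean())), int(round(cols.mean()))  # mask centroid
    out = np.zeros(size, dtype=height.dtype)
    for i in range(size[0]):
        for j in range(size[1]):
            r = cr - size[0] // 2 + i
            c = cc - size[1] // 2 + j
            if 0 <= r < height.shape[0] and 0 <= c < height.shape[1]:
                out[i, j] = height[r, c]
    return out
```

Because every section has the same absolute dimensions and is centred the same way, the second network receives inputs that are directly comparable across objects.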
  • the first neural network can be trained with height profiles of real and/or simulated objects on the surface, and the second neural network can be trained with sections of height profiles around segments of real and/or simulated objects on the surface and measured values of the at least one shape parameter of the real and/or simulated objects.
  • the simulation of objects allows the scope of training for the neural networks to be increased significantly in a limited absolute time.
  • scanning the surface with the measuring beam of the LIDAR scanner with a spatial resolution of ≤ 3 mm has proven sufficient, ≤ 2 mm advantageous and ≤ 1 mm preferred.
  • the angle of the measuring beam to the surface in the method according to the invention therefore preferably remains at least 60°.
  • a zero line of the original height profile can be determined by scanning the surface without objects lying on it. The measuring points of the objects lying on the surface are above this zero line.
  • a reference height of the original height profile can be determined by detecting a highest point of the objects lying on the surface and scanned during a detection period. The acquisition period may be the period during which the objects of interest, or a subset thereof, have been conveyed through the plane in which the LIDAR scanner is scanning. Then the original height profile can be determined selectively for those measurement points that lie within a height range below this reference height. This range of altitudes may extend up to the zero line or be suitably determined to end slightly above.
  • the height of the measurement points within the height range can be normalized to values between 0 and 1 and outside of the height range can be set to 0 or 1, depending on which side of the height range the measurement points are on.
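This normalisation can be sketched directly (illustrative names; the zero line and reference height are assumed to be already determined as described above):

```python
import numpy as np

def normalize_heights(height, zero_line, reference_height):
    """Normalise heights to [0, 1] within the height range between the zero
    line and the reference height; points below the range are set to 0 and
    points above it to 1.
    """
    scaled = (height - zero_line) / (reference_height - zero_line)
    return np.clip(scaled, 0.0, 1.0)
```

A point halfway between the zero line and the reference height thus maps to 0.5, while outliers on either side are saturated to the corresponding end of the range.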
  • the height range can be determined as a function of a typical maximum dumping height of the objects lying one above the other on the surface and can then be of the order of one to a few dm, i.e. from 1 dm to 5 dm. In the case of potatoes on a conveyor belt, for example, it can be 3 dm.
  • the moving surface can be flat; however, this is by no means mandatory.
  • the surface can be curved or structured in the form of a channel.
  • the LIDAR scanner can record not only the propagation time of its measuring beam but also a degree of reflection of its measuring beam and add it to the original height profile as a further coordinate.
  • This coordinate is preferably taken into account by the first neural network during the segmentation of the original height profile in the first evaluation step.
  • the at least one shape parameter which is determined by the method according to the invention can be selected, for example, from the length of the objects, the square dimensions of the objects, the volume of the objects, the surface of the objects, the curvature of the objects and the number of continuous rods of specific dimensions into which the objects can be divided.
  • values of any shape parameters can be determined with the method according to the invention.
  • the sections of the height profile can be analyzed with the second neural network to determine which of a plurality of object classes the respective object falls into based on the value of the at least one shape parameter.
  • the value of the shape parameter for each individual object does not have to be determined numerically more precisely than is necessary for classifying the object.
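As an illustration of classifying by value range rather than by exact value, the sketch below estimates one possible shape parameter, the volume, by summing a segment's height section over its equally spaced grid and then bins the result. In the patent this determination is performed by the second neural network; the numeric integration, names and class edges here are purely illustrative.

```python
import numpy as np

def classify_by_volume(section_heights, point_spacing_m, class_edges_m3):
    """Estimate an object's volume from its height-profile section and
    assign a class index by value range.

    The section is assumed to be sampled at equally spaced grid points, so
    each point contributes its height times one grid cell's area.
    """
    cell_area = point_spacing_m ** 2
    volume = float(section_heights.sum() * cell_area)
    return volume, int(np.searchsorted(class_edges_m3, volume))
```

This mirrors the point above: the class boundaries fix how precisely the volume needs to be known, so an estimate is sufficient as long as it lands in the correct bin.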
  • the objects can be, for example, root crops, specifically potatoes or sugar beets.
  • a device is suitable for determining the value of at least one shape parameter of objects which are conveyed lying in an unknown orientation and grouping on a surface of a conveyor by moving the surface in a conveying direction running along the surface.
  • it has a LIDAR scanner that emits a measuring beam in different directions in a plane, a positioning device that is designed to position the LIDAR scanner relative to the conveyor in such a way that this plane runs transversely to the conveying direction, a speed sensor that is designed to detect a speed at which the surface passes through the plane in the conveying direction, and a first evaluation device which is designed to determine point by point a height profile of the objects on the surface from signals from the LIDAR scanner and the speed of the surface.
  • a second evaluation device of the device according to the invention has a first neural network and is designed to segment the height profile with the first neural network, with segments of the height profile being identified whose points each belong to an individual object.
  • the device according to the invention also has a third evaluation device, which includes a second neural network and is designed to transform sections of the height profile, each of which includes a segment and a region surrounding this one segment, into Cartesian coordinates and then analyze them with the second neural network with regard to the value of the at least one shape parameter of the respective individual object.
  • the third evaluation device is designed, when transforming the sections of the original height profile into Cartesian coordinates, to determine the height profile in the Cartesian coordinates on the basis of extrapolations at points that are distributed at equal intervals over the moving surface.
  • the second evaluation device is designed in particular to identify a group of such segments that each belong to an individual object that lies exposed on the surface or on other objects and whose upper side is not covered by one or more other objects.
  • the third evaluation device is then designed in particular to analyze only those sections of the height profile which each comprise a segment from this group.
  • FIG. 1 is a schematic view of a device according to the invention looking in the conveying direction.
  • Figure 2 is a perspective view of a detail of the device of Figure 1.
  • FIG. 3 is a flow chart of the method of the present invention.
  • FIG. 4 shows a height profile as determined in the course of the method according to the invention according to FIG. 3.
  • FIG. 5 explains a segmentation of a region of the height profile according to FIG. 4.
  • FIG. 6 illustrates a mask for the elevation profile of FIG. 4 for displaying a single identified segment.
  • the device 1 shown schematically in FIG. 1 is used to determine the value of at least one shape parameter of objects 2 lying on the surface 3 of a conveyor 4 in an unknown orientation and grouping.
  • by moving the conveyor 4 with the surface 3 in a conveying direction running along the surface, which corresponds to the viewing direction of FIG. 1, the objects 2 are conveyed between stationary side walls 5.
  • a LIDAR scanner 6 is arranged above the conveyor 4 and is positioned and held there by a positioning device 7 in the form of a tripod 8 in such a way that a measuring beam 9, which the LIDAR scanner 6 emits and pivots about a pivot axis 10, always runs in a plane which is aligned transversely to the conveying direction and coincides here with the plane of FIG. 1.
  • the LIDAR scanner 6 repeatedly scans the surface 3, with the objects 2 lying thereon, with the measuring beam 9.
  • the LIDAR scanner 6 registers a distance between the respective measuring point 11 on the surface 3 or on an object and a reference point, for example the pivot axis 10, based on the travel time of its measuring beam 9. Due to the movement of the surface 3, with the objects 2 lying on it, in the conveying direction, the measuring points 11 lie in rows offset from one another in the conveying direction. The spacing of these rows in the conveying direction depends on the speed with which the surface 3 is moved in the conveying direction and thus passes through the plane in which the objects 2 are scanned with the measuring beam 9. This speed is recorded with a speed sensor 12.
  • an evaluation unit 13 is provided to calculate an original height profile of the objects 2 on the surface 3 point by point from signals 14 from the LIDAR scanner 6, which can also include a degree of reflection of the measuring beam 9 at the respective measuring point 11, and from the speed from the speed sensor 12, and then to evaluate it in relation to the values of the shape parameters of interest of the objects 2.
  • FIG. 2 is a perspective representation of the surface 3 of the conveyor 4 according to FIG. 1. The measuring beam 9 is pivoted about the pivot axis 10 to scan the surface 3 measuring point 11 by measuring point 11. Some of the measuring points 11 lie on the surface 3 next to the objects 2.
  • a zero line for the height profile to be determined can be determined from these measuring points. This zero line is preferably determined by scanning the surface 3 without objects 2 lying on it.
  • a reference height for the height profile to be determined is determined by detecting a highest point of the objects 2 lying on the surface 3 and scanned during a detection period.
  • the acquisition period can be the period in which the objects 2 of interest are conveyed through the plane 17 in which they are scanned.
  • the recording period can deviate from this period, i.e. it can be earlier, for example. Even then, the detection period can be so long that the same quantity as the quantity of objects 2 of interest on the surface 3 is conveyed through the plane 17. With a conveying speed in the range of 1 m/s, the detection period can be a few seconds long.
  • An embodiment of the method according to the invention begins with this determination 18 of the zero line.
  • the original height profile is first determined 20, measuring point 11 by measuring point 11.
  • Such an original height profile 21 is shown in Figure 4 as a grayscale image.
  • the brightness of the objects 2 on the conveyor belt 3 indicates the height of the objects 2 above the zero line of the conveyor belt 3.
  • the original height profile is determined 20 in a first evaluation device 31 of the evaluation unit 13 on the basis of the signal 14 from the LIDAR scanner 6 and the speed 15 from the speed sensor 12.
  • the original height profile 21 is then forwarded to a second evaluation device 32 of the evaluation unit 13, which has a first neural network.
  • This first neural network is used to segment 22 the original height profile 21.
  • segments of the original height profile 21 are determined which can be completely assigned to individual objects 2 on the surface 3 and which completely cover these individual objects 2.
  • FIG. 5 shows several such segments 23 and 24.
  • the segments 23 belong to a group of objects 2 that lie directly on the surface 3 or on other objects 2 in such a way that their upper sides are completely free, while the other segments 24 belong to objects 2 whose upper sides are partly covered by other objects 2.
  • the second evaluation device 32 of the evaluation unit 13 specifically identifies the segments 23 and forwards them in the form of a mask 25, together with the height profile 21, to a third evaluation device 33 of the evaluation unit 13.
  • FIG. 6 shows the application of such a mask 25 to a section 26 of the height profile 21.
  • the mask 25 only allows one segment 23 of the height profile 21 to pass through.
  • the coordinates of the original height profile 21, which are originally cylinder coordinates but deviate from pure cylinder coordinates due to the consideration of the zero line, are transformed 27 into Cartesian coordinates at points distributed at equal intervals over the surface 3.
  • both the first neural network used in the segmentation 22 and the second neural network used in the analysis 29 must be trained before the method according to FIG. 3 can reliably determine the values 30 of the shape parameters.
  • both real objects 2, for which the values 30 of the shape parameters are determined, for example, with the aid of mechanical measuring instruments, and simulated objects 2 can be used for training.
  • for simulated objects 2, the respective value 30 of the shape parameter can be calculated or determined by simulating its mechanical determination.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

In order to determine the value of at least one shape parameter of objects (2) which are conveyed in an unknown orientation and grouping on a surface (3) in a conveying direction running along the surface, the objects (2) are repeatedly scanned with a measuring beam (9) of a LIDAR scanner (6) in a plane running transversely to the conveying direction. A speed at which the surface (3) passes through the plane in the conveying direction is measured. From signals of the LIDAR scanner (6) and the speed, an original height profile of the objects (2) on the surface (3) is determined point by point. With the aid of a first neural network, the original height profile is segmented; segments of the original height profile whose points each belong to a single object (2) are identified. Sections of the original height profile, each comprising an identified segment and the surroundings of this segment, are transformed into Cartesian coordinates and subsequently analyzed with the aid of a second neural network with regard to the value of said at least one shape parameter of the single object.
PCT/EP2022/082002 2021-11-15 2022-11-15 Method and device for classifying objects which are conveyed lying on a surface WO2023084116A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021129673.0A DE102021129673A1 (de) 2021-11-15 2021-11-15 Method and device for classifying objects which are conveyed lying on a surface
DE102021129673.0 2021-11-15

Publications (1)

Publication Number Publication Date
WO2023084116A1 (fr) 2023-05-19

Family

ID=84387610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/082002 WO2023084116A1 (fr) 2022-11-15 Method and device for classifying objects which are conveyed lying on a surface

Country Status (2)

Country Link
DE (1) DE102021129673A1 (fr)
WO (1) WO2023084116A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10134714A1 (de) 2001-07-22 2003-02-06 Robert Massen Method and arrangement for the optical inspection of approximately cylindrical objects in the production line
US20180029085A1 * 2016-01-15 2018-02-01 Key Technology, Inc. Method and Apparatus for Sorting
US20180042176A1 2016-08-15 2018-02-15 Raptor Maps, Inc. Systems, devices, and methods for monitoring and assessing characteristics of harvested specialty crops
WO2019089825A1 (fr) 2017-11-02 2019-05-09 AMP Robotics Corporation Systems and methods for optical material characterization of waste materials using machine learning
WO2020120702A1 (fr) 2018-12-12 2020-06-18 Marel Salmon A/S Method and device for estimating the weight of food objects
EP3816857A1 (fr) 2019-11-04 2021-05-05 TOMRA Sorting GmbH Neural network for bulk sorting

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AKIN TATOGLU ET AL: "Point cloud segmentation with LIDAR reflection intensity behavior", ROBOTICS AND AUTOMATION (ICRA), 2012 IEEE INTERNATIONAL CONFERENCE ON, IEEE, 14 May 2012 (2012-05-14), pages 786 - 790, XP032450955, ISBN: 978-1-4673-1403-9, DOI: 10.1109/ICRA.2012.6225224 *
LYNDON NEAL SMITH ET AL: "Innovative 3D and 2D machine vision methods for analysis of plants and crops in the field", COMPUTERS IN INDUSTRY, vol. 97, 10 March 2018 (2018-03-10), AMSTERDAM, NL, pages 122 - 131, XP055558662, ISSN: 0166-3615, DOI: 10.1016/j.compind.2018.02.002 *
YIMYAM PANITNAT: "Physical Property Analysis of Sweet Potatoes Using Computer Vision", PROCEEDINGS OF THE 17TH ACM WORKSHOP ON MOBILITY IN THE EVOLVING INTERNET ARCHITECTURE, ACMPUB27, NEW YORK, NY, USA, 27 July 2019 (2019-07-27), pages 18 - 22, XP058908018, ISBN: 978-1-4503-9657-8, DOI: 10.1145/3348445.3348471 *

Also Published As

Publication number Publication date
DE102021129673A1 (de) 2023-05-17

Similar Documents

Publication Publication Date Title
EP3275313B1 (fr) Device for capturing and evaluating product-specific information on products of the food-processing industry, system comprising such a device, and method for processing products of the food-processing industry
EP3454298B1 (fr) Camera system and method for determining a flow of objects
DE69331662T2 (de) Method and device for the automatic evaluation of cereal grains and other granular products
DE19520190A1 (de) Device for monitoring an adhesive application state
DE69921021T2 (de) Method for distinguishing product units and device therefor
DD152870A1 (de) Method and device for classifying piece goods in motion
DE112008001839T5 (de) Inspection device and inspection method using penetrating radiation
EP1682852A1 (fr) Method and device for identifying and measuring vegetation alongside transport routes
WO2012130853A1 (fr) Device and method for the automatic monitoring of a meat-product processing device
EP3715779B1 (fr) Method and device for determining deformations of an object
WO2023084116A1 (fr) Method and device for classifying objects that are conveyed lying on a surface
DE112009005102T5 (de) Splitting of multi-part objects
EP1925921B1 (fr) Method and device for determining the mass of piece goods on a conveyor device
DE4130399A1 (de) Automatic windowing technique for object recognition
WO2019185184A1 (fr) Device and method for the optical position detection of conveyed objects
WO2008077680A1 (fr) Method and device for the optical inspection of objects
DE102022107144A1 (de) Component inspection system and method at production speed
WO2020229575A1 (fr) Method for determining a digital model of a forest area
DE20317095U1 (de) Device for detecting surface defects
DE102008049859B4 (de) Method and inspection system for the optical inspection of a contour of a test object
EP1139285B1 (fr) Method and device for checking or inspecting objects
EP2972071A1 (fr) Device for measuring a slaughter-animal carcass object
DE4238193A1 (de) Method and device for identifying objects
WO2024149545A1 (fr) Method and device for operating a plurality of lidar sensors
DE102023100567A1 (de) Method and device for operating a lidar sensor

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22817728

Country of ref document: EP

Kind code of ref document: A1