EP4315271A1 - System and method for identification and/or sorting of objects - Google Patents
System and method for identification and/or sorting of objects
- Publication number
- EP4315271A1 (application EP22717721.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- objects
- property
- identity
- properties
- learning phase
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/34—Sorting according to other particular properties
- B07C5/342—Sorting according to other particular properties according to optical properties, e.g. colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/58—Extraction of image or video features relating to hyperspectral data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/80—Recognising image objects characterised by unique random patterns
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29B—PREPARATION OR PRETREATMENT OF THE MATERIAL TO BE SHAPED; MAKING GRANULES OR PREFORMS; RECOVERY OF PLASTICS OR OTHER CONSTITUENTS OF WASTE MATERIAL CONTAINING PLASTICS
- B29B17/00—Recovery of plastics or other constituents of waste material containing plastics
- B29B17/02—Separating plastics from other materials
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29B—PREPARATION OR PRETREATMENT OF THE MATERIAL TO BE SHAPED; MAKING GRANULES OR PREFORMS; RECOVERY OF PLASTICS OR OTHER CONSTITUENTS OF WASTE MATERIAL CONTAINING PLASTICS
- B29B17/00—Recovery of plastics or other constituents of waste material containing plastics
- B29B17/02—Separating plastics from other materials
- B29B2017/0203—Separating plastics from plastics
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B29—WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
- B29B—PREPARATION OR PRETREATMENT OF THE MATERIAL TO BE SHAPED; MAKING GRANULES OR PREFORMS; RECOVERY OF PLASTICS OR OTHER CONSTITUENTS OF WASTE MATERIAL CONTAINING PLASTICS
- B29B17/00—Recovery of plastics or other constituents of waste material containing plastics
- B29B17/02—Separating plastics from other materials
- B29B2017/0213—Specific separating techniques
- B29B2017/0279—Optical identification, e.g. cameras or spectroscopy
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Definitions
- the invention relates to a method for identifying and/or sorting objects, in particular for the recycling of materials and based on artificial intelligence technologies.
- the number of distinguishable specifications can be increased by being able to analyze many different object properties.
- each of the analyzed object properties can be subject to variance. This can be production-related (e.g. due to production fluctuations), application-related (e.g. change in object shape due to mechanical stress during collection and transport of the objects, e.g. color change due to dirt or aging) or analysis-related (e.g. different object positioning under a camera system).
- Object properties can also influence each other. For example, black pigments can reduce the emission intensity of luminescent materials.
- variance is intended to describe the variability of an object property within a defined set of objects.
- the term variance not only includes the statistical definition, but also property fluctuations in general.
- the object properties can include material properties.
- An additional challenge is therefore the training of the identification system so that it can determine the identity of objects.
- the training can consist of storing reference properties in a database.
- the identity of the objects can then be determined by comparing the detected object properties with the reference properties stored in a database.
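The database comparison described above can be sketched as follows. This is an illustrative example, not the patent's implementation; the property names, reference values and tolerance are invented.

```python
# Hypothetical reference database: identity -> reference property values.
REFERENCE_DB = {
    "PE_manufacturer_A": {"emission_620nm": 0.80, "emission_540nm": 0.20},
    "PP_manufacturer_B": {"emission_620nm": 0.30, "emission_540nm": 0.70},
}

def identify(measured, tolerance=0.1):
    """Return the identity whose stored reference properties all lie within
    the given tolerance of the measured values, or None if no entry matches."""
    for identity, reference in REFERENCE_DB.items():
        if all(abs(measured.get(prop, float("inf")) - value) <= tolerance
               for prop, value in reference.items()):
            return identity
    return None  # object cannot be identified from the stored references
```

A measured property set close to one of the stored reference records yields that identity; anything else is rejected, which is exactly why this approach becomes complex with a large variety and variance of object properties.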
- the storage of reference properties is complex when there is a large variety and variance of object properties.
- the object properties of a representative set of objects not only have to be analyzed in extensive measurement campaigns, but also assessed and evaluated in order to determine reference properties.
- the assessment and evaluation of the analysis results requires a high level of expertise.
- Artificial intelligence (AI) technologies can be used to solve this challenge.
- Artificial intelligence (AI) technology is to be understood as any technology that enables the autonomous processing of a problem to be solved using a computer system.
- the problem to be solved by the AI technology is to derive the identity of the analyzed object from the analysis information collected.
- the analyzed object properties act as "input” and are processed by the AI technology.
- An identification result for the object is calculated as an "output” or subsequently assigned to it.
- This calculation is carried out using calculation algorithms.
- the algorithms have to be trained.
- in the learning phase, the computer system learns, based on a set of example objects representing a representative selection of the variety and variance of the analyzable object properties, to generalize the recorded information and to establish correlations between the recorded information and the object identity.
- the information recorded in the learning phase serves as training data in which patterns and regularities are recognized.
- the AI algorithms are adapted. However, the system must be given the correct identity of the objects in the learning phase. After the learning phase, the algorithms are adapted in such a way that the system can independently determine the object identity, even if the objects in question were not used as example objects during the learning phase, provided the properties of these objects lie within a permissible variability found by the system during the learning phase. After the learning phase, the system can also classify objects as unrecognizable if their object properties lie outside of this permissible variability.
- the training data made available to the system in the learning phase includes a data record for each sample object consisting of the analyzed object properties and the correct object identity.
- This object identity represents a so-called "label”.
- the present invention therefore also relates to an intelligent system and method for identifying and/or sorting objects, which is based on AI technologies.
- a system and method is provided wherein a detection system having one or more detection modules is used to analyze object properties of the objects to be identified.
- the analyzed object properties are transmitted to a computer system and processed by this computer system in order to calculate an object identity of the objects to be identified.
- the objects can be sorted according to the calculated and associated object identity.
- the identity of an object should be understood as belonging to a fraction.
- Fractional membership in the sense of the invention describes the property of an object or material to be part of an object/material fraction, with all parts of this object/material fraction having common properties. These properties can be, for example, a material type (e.g. PE), an origin (e.g. manufacturer of the material), an application (e.g. food packaging) or any other property (e.g. content of a specific additive).
- the parts of an object/material fraction can also share combined properties (e.g. PE from a specific manufacturer).
- the system and method thus classifies the objects, with the objects being assigned to a specific object/material fraction and thus to a class. The objects can be sorted according to this assignment.
- the method has the following steps: linking at least one first object type and one reference object type property that uniquely identifies the first object type via object identity information;
- the learning phase comprising analyzing at least one object having the reference object type property for the reference object type property and at least one object property deviating from the reference object type property;
- the object properties can be analyzed qualitatively or quantitatively. Qualitative analysis is used to determine whether a property is present or not. Quantitative analysis can be used to determine how pronounced a property is.
- the analysis of the object properties can include detecting a property applied to the object or introduced into the object, such as detecting a fluorescence code and/or an RFA (X-ray fluorescence analysis) code and/or a magnetic code and/or a particle code and/or electronic data and/or a watermark and/or a bar code and/or a QR code and/or a symbol and/or an item number and/or design elements.
- the analysis can also include detecting a native property of the object, such as the chemical material composition of the object. A color and/or a shape and/or a size and/or a surface structure of the object can also be detected.
- a fluorescence code is based on embedded or applied luminescence markers.
- a luminescence marker can have at least one luminescent material, e.g. a fluorescent material and/or a phosphorescent material and/or an upconverter and/or a downconverter and/or a material which re-emits an excitation wavelength after excitation. After excitation, a luminescence marker can emit at least one emission wavelength or a plurality of emission wavelengths. Individual different luminescence markers or mixtures of different luminescence markers can be used. Furthermore, the luminescence markers can be contained in different amounts in the mixtures, for example, so that an analyzable feature is created via the intensity distribution of the emitted wavelengths.
- Luminescence is the emission of electromagnetic radiation after the input of energy. It is preferred that the energy input takes place via photons; the observed luminescence is thus photoluminescence.
- the photoluminescence can occur in the UV and/or VIS and/or IR.
- Upconverters are luminescent substances which, after excitation, emit photons whose wavelength is shorter than the wavelength of the excitation photons.
- Downconverters are luminescent substances which, when excited, emit photons whose wavelength is longer than the wavelength of the excitation photons.
- Analyzing a fluorescent code may include spectroscopically analyzing in which the luminescent label(s) is/are electromagnetically excited to analyze the emitted spectrum.
- the analysis of the fluorescence code can comprise the analysis of the presence and/or absence of specific emission wavelengths and/or the analysis of the emission intensity for one or more emission wavelengths or wavelength ranges and/or the analysis of emission intensity ratios between emission wavelengths or emission wavelength ranges and/or the analysis of an entire emission spectrum, i.e. the intensity of the emission as a function of wavelength or frequency, and/or the analysis of a dynamic emission behavior.
- the dynamic emission behavior means the luminescence emission behavior over time.
- the emission of luminescence can be metrologically recorded in a specified period of time.
- a fixed dead time can be provided between the end of the excitation and the start of the first measurement.
- the luminescence intensity for an emission wavelength or a wavelength range can be determined several times after fixed time intervals.
- Intensity curves over time can be formed from the absolute intensities obtained. This can also be done for multiple emission wavelengths or wavelength ranges.
- intensity ratios can be formed from the absolute intensities obtained for identical or different emission wavelengths/wavelength ranges.
- the decay constant is determined for one or more emission wavelengths or wavelength ranges. The decay constant is understood to mean the period of time in which the emission intensity falls to 1/e of its initial value.
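The decay-constant determination above can be sketched as a log-linear fit: for an exponentially decaying emission I(t) = I0·exp(−t/τ), τ is the time in which the intensity falls to 1/e of its initial value, and it is recovered from timed intensity measurements. The sample data are invented.

```python
import math

def decay_constant(times, intensities):
    """Least-squares fit of ln(I) = ln(I0) - t/tau over the measured points;
    returns tau, the decay constant."""
    n = len(times)
    logs = [math.log(i) for i in intensities]
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -1.0 / slope
```

In the measurement scheme described above, the `times` would start after the fixed dead time following the end of the excitation, and the fit could be repeated per emission wavelength or wavelength range.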
- the luminescent properties of luminescent substances can be varied by varying their chemical composition. This results in a large variety of variants, which can be further increased by combining different luminescence markers.
- a large number of distinguishable fluorescence codes can thus be generated.
- Appropriate fluorescence codes can be selected in relation to the objects to be marked. Because their emission is at higher energy than the excitation, upconverters offer the possibility of optically background-free detection of the marker signals. This results in the potential to achieve a particularly high signal-to-noise ratio. Downconverters can have higher quantum yields.
- RFA code is to be understood as meaning a code which can be detected by means of X-ray fluorescence analysis (RFA).
- the RFA code can, for example, be formed by defined amounts of one or more chemical elements.
- RFA codes are suitable, for example, for objects whose optical properties make it difficult to use photoluminescent markers, e.g. for black objects.
- a magnetic code can be based on magnetic particles with different magnetic properties.
- Magnetic Particle Spectroscopy can be used to analyze a magnetic code.
- magnetic codes are suitable, for example, for objects whose optical properties make it difficult to use photoluminescent markers, e.g. for black objects.
- a particle code can be based on randomly distributed particles.
- the particles can include luminescent particles.
- Camera systems, optionally with lighting and excitation units, can be used to detect the particle pattern.
- the illumination and excitation devices can be designed for the re-emission of luminescent particles.
- the random distribution of the particles creates a unique particle pattern. An object that has this unique particle pattern can be unambiguously identified by detecting this pattern.
- Particle codes are therefore suitable, for example, for objects that are to be uniquely and individually identified.
- Fluorescent codes, RFA codes, magnetic codes and particle codes can be incorporated into an ink.
- the printing ink can be provided, for example, in a partial area of printing that the object already carries.
- the printing ink can be provided for printing on a label, a shrink film or the like of the object.
- the printing ink can also be used for direct printing on packaging, for example.
- these codes can be provided in a label adhesive, in a coating for a label or packaging material, in a base material of a label or shrink film, or in the base material of the object, e.g. in a plastic of a plastic bottle.
- Electronic data can, for example, be stored on RFID transponders.
- RFID transponders can be attached to a wide variety of objects.
- the data can be recorded by an RFID reader.
- Watermarks are to be understood as codes that are inconspicuous to the human eye and are applied to the surface of objects, e.g. packaging.
- the watermarks are recorded with camera systems.
- Watermarks are suitable e.g. for objects with larger accessible surfaces.
- Bar codes, QR codes, symbols, article numbers and design elements such as logos or figurative marks are common product identifiers and are therefore suitable for identifying and sorting objects. They can be detected via optical detection devices.
- the chemical material composition of an object can be analyzed, for example, using near-infrared (NIR) spectroscopy.
- Objects can be assigned to a material class based on the IR reflection spectrum.
- Classic plastics such as polyethylene or polypropylene can be recognized.
- Laser-induced plasma spectroscopy (LIPS) is a method for determining the element-specific composition of a sample.
- the color of objects can be determined, for example, using visual spectroscopy (VIS) or color line cameras.
- the electrical conductivity can be analyzed using electromagnetic sensors. This allows metals to be detected.
- X-ray fluorescence analysis is also suitable for detecting metals.
- the atomic density of materials can be analyzed using X-ray transmission sensors. This makes it possible, for example, to differentiate between aluminum and heavy metals.
- Other object properties such as color, shape, size or surface structure can be detected using color line camera systems, for example.
- a conveyor device can be provided for feeding objects to the detection system.
- a transport device for transporting the objects through the detection system can also be provided. Alternatively, if the detection system is arranged essentially vertically, the objects can be transported through the system driven by gravity.
- the objects can be made available in bulk so that they are initially separated in a first step to optimize the analysis result.
- Any device for separating the objects can be provided for separating the arrangement of objects that is supplied in bulk.
- This can, for example, comprise a plurality of conveyor belts connected in series with increasing conveying speed, baffles, a vibrating device, a robot system, an infeed station with manual loading or the like.
- the separation can pursue the goal of positioning the objects at a sufficient distance from one another, which is necessary for object-specific detection, for example by preventing several objects from touching or overlapping one another, or of positioning the objects in a row in the feed direction of a conveyor.
- the objects can be transported further with the aid of a distribution conveyor with segmented carrying means, but also with the aid of a distribution conveyor with continuous carrying means.
- before being separated, the objects can be automatically fed into the sorting process from a collection point. Alternatively, the objects can also be fed in by manual loading. If the individual objects are fed in one after the other, the objects are separated at the same time.
- a distribution conveyor with segmented carrying elements is a conveyor system in which each transported object is in a defined place, e.g. in a trough-shaped receiving point.
- in a distribution conveyor with continuous carrying means, the objects are not in defined places.
- the isolation offers several advantages. On the one hand, only one object is examined when analyzing the object properties. Therefore, object-specific analysis results can be obtained. Without isolation, multiple objects with different object properties could be present in the detection module at the same time, or could be present in a detection module without sufficient spatial separation, which would lead to mixed analysis results. Furthermore, the isolation enables individual objects to be deposited on a segmented carrying element and thus allows individual objects to be transported in a targeted manner to defined target locations.
- the presence of separated objects can be checked. This can be done, for example, using optical image recognition. If several objects are detected together, indicating incorrect separation, the analysis process can be paused. The group of non-separated objects then passes through the detection without analysis and can then be sorted out as non-analyzable or returned to the separation step.
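The separation check described above amounts to simple dispatch logic: analyze only when exactly one object is detected, otherwise route the group back or mark it non-analyzable. The function and return values below are hypothetical illustrations.

```python
def handle_detection(object_count, analyze, reroute):
    """Dispatch based on how many objects the recognition step reported.
    analyze and reroute are callbacks for the analysis and the re-separation
    (or reject) path, respectively."""
    if object_count == 1:
        return analyze()          # object-specific analysis result
    if object_count > 1:
        reroute()                 # incorrect separation: back to separation step
        return "non-analyzable"
    return None                   # empty detection zone: nothing to do
```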
- the detection technology required to analyze an object property is implemented in a detection module.
- the detection technologies used can include sensors for luminescence analysis, optical sensor technology such as camera systems (e.g. hyperspectral cameras or color line scan cameras), VIS spectrometry, infrared spectrometry such as near-infrared spectrometry (NIR), detectors based on magnetic coils, electromagnetic sensors, RFID readers, X-ray sensors (e.g. RFA or X-ray transmission sensors), laser-induced plasma spectroscopy (LIPS), metal sensors and the like.
- the analysis of the fluorescence code or of the luminescence marker(s) can be done with known methods of spectroscopy, which in the context of this application means all methods and devices that are suitable for analyzing a total emission spectrum, a partial emission spectrum, wavelength ranges, individual emission wavelengths or a dynamic emission behavior.
- various detectors such as black-and-white cameras, color cameras, hyperspectral cameras, photomultipliers, spectrometers, photocells, photodiodes, phototransistors can be used alone or in combination for the luminescence analysis.
- the detectors can be combined with optical filters such as long-pass/short-pass/band-pass filters.
- Broadband and/or narrowband sources such as lasers, laser diodes, light-emitting diodes (LEDs), xenon lamps, halogen lamps can be used individually or in combination to stimulate the luminescence.
- the excitation sources can be activated individually or activated simultaneously or sequentially in different combinations.
- Optical filters such as long-pass/short-pass/band-pass filters can be used in the excitation devices.
- a variation of the opening width of the excitation sources can be provided in order to modulate the size of an excitation zone through which material to be identified is transported.
- the excitation zone can also be modulated by arranging several excitation sources sequentially one behind the other and varying the number of activated excitation sources in this arrangement.
- the computer system can control a sorting device in order to sort the objects according to the calculated object identity.
- the sorting device can include, for example, drop-flap sorters, tilt-tray sorters, or nozzle strips for blowing out objects.
- the sorting of the isolated objects can include addressing a carrying means of the conveyor to which exactly one of the isolated objects is assigned, as a result of which the isolated object is fed to the destination address.
- the addressing can include the control of at least a large number of independently controllable support means of a conveyor.
- the conveyor can be, for example, a cross-belt sorter with a multiplicity of linked conveyor belts that can be controlled independently of one another, or a drop-flap sorter with a multiplicity of drop-flaps that can be controlled independently of one another.
- a carrying means of the conveyor to which exactly one of the isolated objects is assigned can be assigned the destination address associated with the calculated object identity by comparison with the destination addresses stored in a database for various object identities.
- the destination address can consequently be assigned to the analyzed object or a carrying means of a conveyor, for example a transport container of a drop-flap sorter, a segment of a cross-belt sorter or the like.
- the objects separated on the sorter can thus be positioned, for example, on a distribution conveyor with segmented carrying means, which makes it possible to transport the individual objects with the assigned destination address to that destination address, for example to a storage container for specific types of plastic from a specific manufacturer or to another specific addressee.
- when using nozzle bars to blow out objects, nozzles can be activated depending on the calculated object identity and the comparison with the destination addresses stored in a database for object identities. The objects are then blown out according to their assigned destination, e.g. into a collection container provided for this purpose.
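The destination lookup and nozzle activation described above can be sketched as follows. This is a minimal illustration, not part of the disclosure; the destination table and all function names (`route_object`, `nozzles_to_activate`) are hypothetical:

```python
# Hypothetical destination-address database: object identity -> destination.
DESTINATIONS = {
    "PET_bottle": "container_1",
    "HDPE_bottle": "container_2",
}

def route_object(object_identity, destinations=DESTINATIONS):
    """Return the destination address stored for an identity; unknown
    identities go to a reject container."""
    return destinations.get(object_identity, "reject_container")

def nozzles_to_activate(object_identity, nozzle_map):
    """Look up which nozzles of the nozzle bar blow an object toward
    its assigned destination."""
    destination = route_object(object_identity)
    return nozzle_map.get(destination, [])
```

In this sketch an identity without a stored destination simply activates no nozzles, so such objects pass through to the reject path rather than being blown out.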
- detection modules for object recognition can be provided, for example camera systems or non-imaging detectors such as light barriers. If an object is present on the conveyor and/or in the detection system, the detection system and the AI algorithms can be activated. If an object is present in the sorting device, the sorting device can be activated. Furthermore, these detection modules can be provided for detecting the presence of non-isolated objects. Non-isolated objects should be understood to mean objects that are not positioned at the distance from one another required for object-specific detection, for example because they are touching or overlapping one another. If non-isolated objects are detected, the detection system and/or the algorithms can be deactivated in order to prevent mixed object properties from being detected. Furthermore, the sorting device can be caused to transport non-isolated objects into a separate collection container.
- the computer system includes an AI algorithm or several different AI algorithms.
- the algorithm(s) calculate an object identity based on the analyzed object properties.
- the information from a detection module can be processed by an algorithm.
- the analysis of each detection module can be used to calculate an object identity.
- the algorithm can be specifically optimized for the analyzed object property.
- the information from a detection module can also be processed by a number of algorithms. This allows the results of different algorithms to be compared in order to select particularly suitable algorithms. By dividing the algorithms between the individual detection modules or individual detection characteristics of the same detection module, it is particularly easy to carry out updates, make further developments or correct errors.
- the information from a number of detection modules can also be processed by a common algorithm.
- the integration of the functions of the detection modules in a common algorithm can be particularly advantageous if a constant or regular data comparison is made between the analysis data of the different detection modules. In this case, several object properties can be used to calculate an object identity from the outset.
- the calculated individual object identities of all algorithms can be merged into a combined overall object identity and the overall object identity can then be assigned to the respective object.
- a matching algorithm can be provided for this purpose, which calculates the combined object identity. The calculation can determine the most frequently obtained object identity as the combined object identity. Furthermore, individual object identities obtained can be weighted differently. This allows, for example, a weaker weighting of material features that are susceptible to interference.
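Such a matching step can be illustrated as a weighted vote over the per-algorithm identities. This is a sketch only; the pairing of identities with weights is an assumed interface, not taken from the disclosure:

```python
from collections import defaultdict

def combine_identities(votes):
    """Combine the individual object identities of several algorithms
    into one overall object identity.

    votes: list of (identity, weight) pairs, one per algorithm or
    detection module. Interference-prone features can be given a
    lower weight. Returns the identity with the highest summed weight;
    with equal weights this reduces to the most frequent identity.
    """
    totals = defaultdict(float)
    for identity, weight in votes:
        totals[identity] += weight
    return max(totals, key=totals.get)
```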
- AI algorithms in the context of this application are to be understood as all trainable networks and associated computational algorithms that can be trained via concepts of machine learning and that, after the learning phase, are suitable for calculating object identities on the basis of analyzed object properties.
- AI algorithms can be used.
- the algorithms VGG16, VGG19, ResNet50, ResNet101, or ResNet152 can be used.
- Support Vector Machine algorithms can also be used.
- the algorithms are trained using machine learning concepts.
- the algorithms are trained using training data.
- the training data are the analyzed object properties of objects.
- a correlation is established between the detected object properties and the object identity.
- the correct object identity of the objects is given to the system. Accordingly, the correct object identity is also part of the training data.
- Machine learning reduces the amount of work required to teach the identification system.
- the system must be given the object identity of the objects. However, there is no need for people to evaluate the recorded object properties.
- All available object properties can flow into the learning phase as training data, or a selection or just a single object property.
- the selection of the object properties used can be based on the object properties which are suitable for identifying and/or sorting the objects. If the objects have defined properties (e.g. when sorting plastic production waste from a defined production line in the production facility), then the analysis of a selection of suitable object properties (e.g. color or material composition) may be sufficient, while the analysis of unsuitable object properties (e.g. metal content) can be omitted. If a large variety of different objects is used, all object properties can be included in the learning phase.
- objects can be used that have been specially made for this purpose, or regular objects from industrial business operations.
- objects with an identical object identity can be presented.
- Several types of objects with different object identities can also be presented. This has the advantage that different object identities can be trained on the system at the same time.
- Objects can also be presented along with companion objects.
- Companion objects are objects to which the object identity "unknown" is to be assigned both in the learning phase and in regular operation. In this way, the system can be taught to differentiate between objects with a defined identity and objects with an unknown identity.
- the system is given the object identity of the objects. This can be done by having a human operator enter the associated object identity as part of the analysis of the object properties of an object. This process can be repeated several times until the system calculates the correct object identity with a previously defined reliability rate.
- the reliability rate should be understood as the quotient of the number of correctly recognized object identities and the total number of objects analyzed.
- the reliability rate that is to be achieved can correspond to the identification quality and thus the sorting quality that is to be at least achieved in regular operation of the method.
- This can be 0.8, 0.9, preferably 0.95, more preferably 0.97, even more preferably 0.99.
- These qualities can also be specified as quality ranges, e.g. 0.8-1, 0.9-1, preferably 0.95-1, more preferably 0.97-1, or even 0.99-1.
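The reliability rate defined above, and the check against a previously defined target value, can be sketched in a few lines (function names are illustrative):

```python
def reliability_rate(n_correct, n_total):
    """Quotient of the number of correctly recognized object identities
    and the total number of objects analyzed."""
    if n_total == 0:
        raise ValueError("no objects analyzed")
    return n_correct / n_total

def learning_phase_complete(n_correct, n_total, target=0.95):
    """Check whether the previously defined reliability rate is reached,
    e.g. a target of 0.95 as named in the text."""
    return reliability_rate(n_correct, n_total) >= target
```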
- the analysis of the object properties of the objects includes checking whether the reference object type property is present in the objects. If the reference object type property is present, then the system can autonomously assign the correct object identity to the objects. This object identity can be assigned by comparing the detected reference object type properties with reference object type properties stored in a database, the reference properties in the database being assigned the associated object identities. In addition to analyzing the "reference object type property", other object properties can be analyzed.
- a reference object type property e.g. luminescence emission spectrum or object shape
- the algorithms are trained through the other detected object properties (e.g. shape, color, image pattern, infrared reflection spectrum, X-ray fluorescence spectrum, metal content).
- the correlation between the object properties of the objects and the correct object identity established by the reference object type property is implemented in the algorithms.
- the learning phase can thus include analyzing the object properties, checking the presence of a reference object type property, assigning an object identity, learning AI algorithms using the analyzed object properties and establishing a correlation between the analyzed object properties and the object identity.
- the learning phase can be performed with a large number of objects, largely without human intervention, until the system calculates the correct object identity with a previously defined reliability rate.
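The autonomous part of this learning phase can be sketched as follows. All names and the database contents are hypothetical: the detected reference object type property (here a fluorescence code) is compared with a database to obtain the correct object identity, and objects without a known reference property (companion objects) receive the identity "unknown":

```python
# Hypothetical database: reference object type property -> object identity.
REFERENCE_DB = {"code_a": "bottle_A", "code_b": "bottle_B"}

def assign_identity(detected_reference, reference_db=REFERENCE_DB):
    """Autonomously assign the correct object identity via the reference
    object type property; unmatched objects become "unknown"."""
    return reference_db.get(detected_reference, "unknown")

def build_training_data(detections):
    """detections: list of (object_properties, detected_reference) pairs.
    Returns (object_properties, object_identity) pairs that can serve as
    training data for the AI algorithms, without human labeling."""
    return [(props, assign_identity(ref)) for props, ref in detections]
```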
- a reference object type property that can be detected by the system and the associated uniquely assignable object identity
- the detected "reference property” e.g. luminescence emission spectrum
- the recorded reference property can be compared with the reference properties stored in a database and an object identity can be assigned to the analyzed objects.
- the detected reference property can be used to teach AI algorithms in the same way as the other detected object properties.
- the reference property can then also be included in the calculation of the object identity by the AI algorithms.
- a group of several object properties can also act as a "reference object type property" (e.g. emission spectrum and object shape) and establish the correlation between the analyzed object and the object identity.
- a reference object type property e.g. emission spectrum and object shape
- Reference objects are objects that are made available in order to establish a correlation between the analyzed object properties and the object identity during the learning phase of the AI algorithms.
- the system can provide for returning already analyzed objects to the detection modules. This allows objects to be guided in a circle, which enables an uninterrupted learning phase.
- mechanical devices e.g. baffles, can be provided to change the position of the objects, which means that the system can also learn to identify objects that are positioned differently.
- a single object can be guided in a circle in order to generate different measurement data.
- Object properties which are always present in the objects can be used as reference object type properties. These can be, for example, the regular material composition and/or shape and/or color of the objects. However, properties can also be used as reference object type properties which were added to the objects specifically for this application, i.e. to produce a correlation between the object and the object identity in the learning phase. This can be any property of the object, e.g. fluorescence codes in the base material of the objects, in the printing ink of objects or labels, or elsewhere, or other markings such as QR codes or watermarks.
- Companion objects can be assigned any other object identity by an operator during the learning phase. This enables an identity to be subsequently assigned to objects to which no defined reference object type property can be assigned, e.g. because a suitable reference object type property cannot be identified or because a specific addition of a reference object type property is not possible for technical reasons.
- the system can be tested by deactivating the presence check of a reference object type property and thus the autonomous assignment of the correct object identity based on the reference object type property.
- the detection module provided for detecting the reference object type property can be deactivated or the comparison of the detected reference object type properties with the reference object type properties stored in a database and linked to object identities can be omitted. It can then be checked whether the system calculates the correct object identity based on the other detected object properties with the previously defined reliability rate. If the system has been trained to identify objects without a reference object type property, the system is tested by the human operator not entering the correct object identity.
- the system can also calculate the object identity in regular operation on the basis of the other object properties recorded.
- the reference object type property used in the learning phase then no longer needs to be present on the objects.
- the reference object type property can also be included in the calculation of the object identity in normal operation, provided that the objects have the "reference object type property" in normal operation.
- in regular operation one, several or all of the detected object properties can function as "input". The selection of the analyzed object properties can be based on the object properties that were used in the learning phase.
- the detected object properties can differ. This can be due, for example, to production-related quality fluctuations, unequal mechanical stress, different lifespans (different lengths of aging) or different levels of contamination.
- variances can be included in the learning phase by using a representative selection of reference objects that contain the property fluctuations that occur. In regular operation, the system can then also calculate the correct object identity for objects with such property fluctuations.
- the reference object type properties can also be used to autonomously check the reliability of the object identification.
- the object identity is calculated by the algorithm(s).
- the system checks the presence of reference object type properties.
- the calculated object identity and the correct object identity defined via any reference property found are stored in a database for each analyzed object.
- the calculated identities can be matched with the correct object identities. The higher the agreement, the greater the reliability of the calculated object identity.
- the detected object properties can also be used to determine the variance of the detected object properties.
- a measurement data range is calculated from the measurement data of all individual objects.
- the measurement data range obtained can then be used to optimally adapt the sensors of the detection modules to the measurement results to be expected. For example, from the analysis of luminescent marker codes, the variances in emission intensity, emission maxima (wavelengths with maximum emission), and/or full width at half maximum can be obtained. Based on this, the sensors can be optimally adjusted in terms of spectral sensitivity and selectivity, for example by adjusting the excitation intensity or selecting suitable optical filters.
- the detection modules can be adjusted when the modules are inactive. This is the case, for example, if new hardware components must be built into the modules. However, the adjustment can also be made with active modules. This can be used if all the technical components required for the adjustment are already integrated in the module and can be controlled.
- the sensors can then also be adjusted while the system is in operation. The influence of the adaptation on the object recognition can thus also be examined directly.
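Deriving the measurement-data range from many individual objects, as described for the luminescence markers, can be sketched as follows (a minimal illustration; the spectrum representation and function name are assumptions):

```python
from statistics import mean, pstdev

def emission_statistics(spectra):
    """Derive the measurement-data range of a luminescence marker from the
    measurement data of all individual objects: mean and spread of the
    emission maxima (wavelengths with maximum emission).

    spectra: list of (wavelengths, intensities) pairs, one per object.
    The resulting range can guide sensor adaptation, e.g. choice of
    optical filters or excitation intensity.
    """
    maxima = []
    for wavelengths, intensities in spectra:
        peak_index = max(range(len(intensities)), key=intensities.__getitem__)
        maxima.append(wavelengths[peak_index])
    return {"mean_max": mean(maxima), "spread": pstdev(maxima)}
```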
- virtual object properties can also serve as "input" in the learning phase and be processed by AI technology.
- These virtual object properties and thus virtual data sets can be based, for example, on measurement data from reference objects that were obtained and stored using external measurement modules and are only subsequently made available to the device proposed here. This offers the advantage that data from detection modules that have not yet been implemented in the device can also be entered. In this way, the technical feasibility and performance of such detection modules in the overall system can be examined before they are integrated into the device.
- the virtual data sets can be based on the measurement of a set of reference objects, which ensures a representative selection of the variety and variance of the analyzed object property.
- the virtual data sets can also be based on a smaller number of reference objects, which does not cover the entire range of variances.
- the measurement results obtained can be duplicated and varied artificially, e.g. computer-aided, in order to obtain a virtual duplication of the objects and a virtual increase in the variability of the measurement data. This offers the advantage that the identification of reference objects that are not available in a sufficient number for a representative selection can also be tested.
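The computer-aided duplication and variation of measurement results described above can be sketched like this (an assumed scheme using simple multiplicative noise; the real variation model is not specified in the text):

```python
import random

def duplicate_and_vary(measurements, n_copies=10, noise=0.02, seed=0):
    """Virtually duplicate measured reference data and add small random
    variations, producing a virtual data set with increased variability.

    Each measured value is copied n_copies times with up to +/- noise
    relative deviation (an illustrative choice of variation model).
    """
    rng = random.Random(seed)
    virtual = []
    for value in measurements:
        for _ in range(n_copies):
            virtual.append(value * (1.0 + rng.uniform(-noise, noise)))
    return virtual
```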
- the method can be used to identify and/or sort any objects, eg from private households, trade or industry. It can be, for example, production waste from commercial and industrial companies, or used sales packaging from private households.
- the identification and sorting of the objects enables the materials contained to be efficiently recycled. Accordingly, for example, the identification and/or sorting of objects made of plastic enables efficient recycling of different plastic materials.
- the method can be applied to a wide variety of materials such as metal-containing materials. The presented system offers various advantages:
- Various detection technologies or detection modules can be used to analyze various object properties. Due to the possibility of analyzing many different object properties, the number of distinguishable specifications can be increased.
- Both native and applied object properties can be analyzed. Analysis of native object properties enables identification and sorting based on natural object properties. Information can be added to objects by means of integrated/applied object properties, e.g. the application of fluorescence codes or watermarks. The analysis of such properties therefore enables identification and sorting independent of the natural object properties based on any specifications.
- the detection technologies or detection modules used in the learning phase and in regular operation can be selected depending on the properties of the objects to be identified.
- the identification system can contain all available detection modules. These can be activated or deactivated accordingly. Alternatively, the identification system can be specifically equipped with the detection modules to be used.
- the use of AI simplifies both the inclusion of many object properties and the consideration of the variance of object properties to identify an object identity, since the diversity and variance of the object properties can be implemented in the algorithms.
- the use of AI technology reduces the amount of work required to teach the system.
- the AI can be trained with the participation of a human operator. By including a physically measurable reference object type property, which establishes the connection to the correct object identity, the amount of work involved in training the system can be further reduced, so that the training can be carried out largely autonomously.
- the number of objects used in the learning phase can be increased. In this way it can be ensured that the training is carried out on a representative random sample, which represents the variance of the object properties.
- Objects from regular business operations can be used as objects for the learning phase. In this way it can be ensured that production and use-related variances in the object properties are included in the analyses.
- objects made specifically for this purpose can also be used. This enables, for example, the use of objects with selected and known property variances.
- these objects can be specifically subjected to different test conditions (e.g. treatment with defined test substances or mechanical or climatic loads) in order to adapt the system to the property fluctuations induced thereby.
- objects can be equipped with a “reference object type property” for the learning phase.
- the new object identity must be mapped by a human operator to the analysis data of the object properties of the new objects. If a large number of diverse objects pass through the detection modules of the system during operation, this is not very practical. It may then be necessary to interrupt regular operation in order to be able to teach and test new objects in a separate test campaign. In addition to interrupting operation, this also has the disadvantage that any influences caused by the presence of other objects cannot be detected.
- reference object type property can be dispensed with. This enables cost savings if these properties do not have to be implemented in products placed on the market. Furthermore, this enables the use of reference object type properties which, due to technical considerations, should not be used for products placed on the market. This can apply, for example, to reference object type properties that are based on additives that do not have official approval for the area of application of the products, or that could have a negative impact on the product function over a long product lifespan.
- Plastic objects can be treated with luminescent substances by adding these substances during the manufacture of the objects.
- inorganic anti-Stokes crystals, inorganic Stokes crystals or organic phosphors can be used.
- Stokes crystals show a Stokes shift and are downconverters.
- Anti-Stokes crystals are upconverters.
- inorganic anti-Stokes crystals which can be excited with IR radiation and luminesce in the visible spectral range.
- the inorganic anti-Stokes crystals do not affect the color of the objects when there is no infrared excitation. Furthermore, the crystals only have to be used in very small amounts. They therefore have no significant influence on the transparency of objects. Furthermore, there is no influence on the object shape. Consequently, they are suitable, for example, as a reference object type property for teaching the system through the object properties "color", "shape" or image pattern, which can be detected using suitable camera systems. Therefore, by applying different anti-Stokes crystals with different emission spectra to different objects with different colors, shapes or images, the system can be trained to identify these different objects.
- the system can identify the objects based on their color, shape or image even if the Anti-Stokes crystals are no longer included.
- the object shape can serve as a reference object type property for teaching the system through the emission characteristics of anti-Stokes crystals, which can be detected, for example, with suitable spectrometers, cameras or photodiodes.
- three object types made of the same material but with different shapes could each be marked with characteristic Anti-Stokes crystals.
- the system can be trained to recognize objects using the object property of the anti-Stokes emission spectrum.
- the system can identify objects based on the anti-Stokes crystals they contain, even if other object shapes are present.
- inorganic Stokes crystals which have emission wavelengths < 1100 nm. However, they have no effect on the infrared reflectance spectrum above 1100 nm. Consequently, they are suitable as a reference object type property for teaching the system through the IR reflection spectrum as an analyzed object property.
- objects with a closely related but distinguishable IR reflection spectrum could be used, with one of the object types being equipped with Stokes crystals. In this way, the system is taught object recognition by means of the IR reflection spectrum, with the presence of a luminescence emission < 1100 nm serving as a reference object type property.
- the system can identify the object type of the objects marked with Stokes crystals based on the IR reflection spectrum in contrast to the other objects, even if no Stokes crystals are contained.
- organic phosphors are known which can be excited, for example, with UV light and have characteristic emission spectra. Such substances can also be suitable as reference object type properties, provided they have no influence on the properties of the object properties to be trained. For example, object types with different metal contents could be marked with characteristic organic phosphors. In this way, the system can be trained to recognize objects using the “metal content” object property.
- the system can identify the object types based on the metal content, even if there are no phosphors left.
- the reference object type properties can also be used to autonomously check the reliability of the object identification.
- the identification of objects with a reference object type property can be compared with the identification of objects without a reference object type property after the learning phase has ended. Identical results indicate an independence of the detected object properties from the reference object type property. A mixture of objects with and without a reference object type property can also be tested here. If all objects are identified in the same way, the reference object type property has no influence on the identification result.
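The autonomous reliability check described here, i.e. comparing the identification of objects with and without the reference object type property, can be sketched as follows (function name and interface are illustrative):

```python
def identification_agreement(results_with_ref, results_without_ref):
    """Compare the identities calculated for paired objects with and
    without the reference object type property.

    Returns the fraction of matching results; a value of 1.0 indicates
    that the detected object properties are independent of the
    reference object type property.
    """
    if len(results_with_ref) != len(results_without_ref):
        raise ValueError("result lists must pair the same objects")
    matches = sum(a == b for a, b in zip(results_with_ref, results_without_ref))
    return matches / len(results_with_ref)
```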
- the detection of object properties can be used to determine the variance of object properties.
- the variance of the measurement results obtained can be used to adapt the detection modules.
- Two types of plastic bottles A and B should be identified by their bottle shape.
- the shape of the bottle is detected by a detection module with a camera system.
- the bottles can be present in the detection module with random orientation, which makes automatic image recognition more difficult.
- the bottles are identified using AI technology.
- the bottles are marked with two different fluorescence codes a and b for the autonomous training of the system using the object property "bottle shape".
- the fluorescence codes a and b function as a reference object type property.
- the fluorescence codes a and b together with the associated object identities A and B are stored in a database.
- the learning phase includes the analysis of the object property "bottle shape" by taking pictures of the bottles, checking the presence of the reference object type properties code a and code b, assigning the object identity bottle A to bottles with code a, assigning the object identity bottle B to bottles with Code b and the adaptation of the algorithm that processes the images to recognize patterns and regularities in the images and establish a correlation between the captured images and the object identity.
- the system can calculate the identity of the plastic bottles based on the captured images.
- the fluorescence codes used in the learning phase no longer have to be present on the bottles.
- the sorting unit can sort bottles A and B into different containers.
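The bottle example above can be condensed into an end-to-end sketch. The "classifier" here is a deliberately trivial nearest-mean rule over a single scalar shape feature, standing in for the AI algorithm; the code/identity database and all names are hypothetical:

```python
from statistics import mean

# Hypothetical database: fluorescence code -> object identity.
CODE_DB = {"code_a": "bottle_A", "code_b": "bottle_B"}

def train(samples):
    """Learning phase: samples are (shape_feature, detected_code) pairs.
    The code supplies the correct identity autonomously; the model learns
    the mean shape feature per identity (stand-in for the AI algorithm)."""
    by_identity = {}
    for feature, code in samples:
        by_identity.setdefault(CODE_DB[code], []).append(feature)
    return {identity: mean(feats) for identity, feats in by_identity.items()}

def identify(model, shape_feature):
    """Regular operation: identify a bottle from its shape feature alone;
    the fluorescence code no longer has to be present."""
    return min(model, key=lambda ident: abs(model[ident] - shape_feature))
```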
- Two cosmetic bottles A and B have different label designs.
- the bottles should be identified based on the optical design.
- the design is detected by a detection module with a camera system.
- the bottles are identified using AI technology.
- the bottles are marked with two different fluorescence codes a and b for the autonomous training of the system using the object property "optical design".
- the fluorescence codes a and b function as a reference object type property.
- the fluorescence codes a and b together with the associated object identities A and B are stored in a database.
- the learning phase includes the analysis of the object property "optical design" by taking pictures of the bottles, the presence check of the reference object type properties code a and code b, the assignment of the object identity bottle A to bottles with code a, the assignment of the object identity bottle B to bottles with code b and the adaptation of the algorithm that processes the images to recognize patterns and regularities in the designs and establish a correlation between the captured designs and the object identity.
- the system can calculate the identity of the bottles based on the captured images.
- the fluorescence codes used in the learning phase no longer have to be present on the bottles.
- the sorting unit can sort bottles A and B into different containers.
- Two types of packaging A and B, on which labels A and B are located, are to be identified via watermarks integrated into the labels. Furthermore, packaging A and B should be distinguishable from packaging C, with packaging C not containing a watermark on the label.
- the watermarks are detected by a detection module with a camera system.
- the packaging and thus the labels can be present in the detection module with a random orientation. Furthermore, the labels can be dirty and mechanically deformed. These factors complicate the automatic detection of the watermarks.
- Labels A and B are marked with two different fluorescence codes a and b for the autonomous training of the system using the “watermark” object property.
- the fluorescence codes a and b function as a reference object type property.
- the fluorescence codes a and b together with the associated object identities A and B are stored in a database.
- Label C does not receive a fluorescence code, so it does not contain a reference object type property.
- the learning phase includes the analysis of the object property "watermark” by analyzing the labels, the presence check of the reference object type properties code a and code b, the assignment of the object identity label A to labels with code a, the assignment of the object identity label B to labels with code b and adapting the algorithm that processes the watermarks to recognize patterns and regularities in the watermarks and establish a correlation between the detected watermarks and the object identity.
- For labels C too, the object property "watermark" is analyzed by the detection module and the presence of the reference object type property "fluorescence code" is checked. Since the reference object type property is not found, objects with label C receive the object identity "unknown". For labels C, the algorithm therefore learns the correlation between the object identity "unknown" and labels without a watermark.
- Two types of packaging, A and B, are to be identified by labels with fluorescence codes applied there.
- the fluorescence code is detected by a detection module with a spectrometer.
- the packaging is identified using AI technology.
- the different geometries of packaging A and B are used as a reference object type property for the autonomous training of the system using the "fluorescence code" object property.
- the geometries a and b are stored in a database together with the associated object identities A and B.
- the learning phase includes the analysis of the object property "fluorescence code" by spectrometer analysis, the presence check of the reference object type properties geometry a and geometry b, the assignment of the object identity packaging A to packaging with geometry a, the assignment of the Object identity packaging B to packaging with geometry b and the adaptation of the algorithm that processes the fluorescence spectra to recognize patterns and regularities in the spectra and establish a correlation between the recorded spectra and the object identity.
- the system can calculate the identity of the packaging based on the detected fluorescence codes.
- the packaging no longer has to have the packaging geometries used in the learning phase.
- the sorting unit can sort the packaging A and B into different containers.
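After the geometry-supervised learning phase, identification relies on the fluorescence spectra alone. A minimal sketch of such spectrum-based classification is a nearest-mean rule; the learned mean spectra below are illustrative values, not real measurements, and the patent does not prescribe this particular classifier.

```python
def nearest_identity(spectrum, learned_spectra):
    # Assign the identity whose learned mean spectrum is closest
    # (squared Euclidean distance) to the measured fluorescence spectrum.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(learned_spectra,
               key=lambda ident: dist(spectrum, learned_spectra[ident]))

# Mean spectra learned during the geometry-supervised phase
# (hypothetical three-channel values):
learned_spectra = {"A": [0.9, 0.1, 0.0], "B": [0.1, 0.1, 0.8]}
```

Because the decision depends only on the spectrum, packaging presented after training no longer needs to have the geometries used during the learning phase.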
- Packaging contains a luminescent marker in the base material of the packaging.
- the packaging shows different levels of soiling. The influence of contamination on the variance of the emission spectrum of the luminescence marker is to be analyzed.
- emission spectrum is analyzed by spectrometer analysis.
- the variances of the emission intensity, emission maxima (wavelengths with maximum emission), and half-widths are obtained as results.
- the results obtained can now be used to adapt the spectrometer sensors.
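The variance analysis described above can be sketched as follows: extract peak intensity, peak wavelength and half-width from each emission spectrum, then compute the variance of each feature across differently soiled samples. The spectra and helper names are hypothetical; real sensors would use finer sampling and interpolation.

```python
def spectrum_features(wavelengths, intensities):
    # Extract peak intensity, emission maximum (wavelength of maximum
    # emission) and an approximate full width at half maximum (FWHM)
    # from one emission spectrum (nearest-sample estimate).
    peak = max(intensities)
    peak_wl = wavelengths[intensities.index(peak)]
    half = peak / 2
    above = [wl for wl, i in zip(wavelengths, intensities) if i >= half]
    fwhm = above[-1] - above[0]
    return peak, peak_wl, fwhm

def variance(values):
    # Population variance of a feature across samples.
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

wavelengths = [400, 410, 420, 430, 440]  # nm (illustrative grid)
clean = [10, 50, 100, 50, 10]            # unsoiled packaging
soiled = [5, 30, 60, 30, 5]              # soiled packaging
features = [spectrum_features(wavelengths, s) for s in (clean, soiled)]
intensity_var = variance([f[0] for f in features])
peak_wl_var = variance([f[1] for f in features])
fwhm_var = variance([f[2] for f in features])
```

In this toy example soiling attenuates the emission intensity while leaving the emission maximum and half-width unchanged, so only the intensity variance is nonzero; such per-feature variances are what would guide the adaptation of the spectrometer sensors.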
- a variety of detection modules are used in a sorting system. For example, the electrical conductivity, the IR reflection spectrum, watermarks and fluorescence codes are recorded. Two types of objects A and B are to be identified. The objects are identified using AI technology. For the autonomous training of the system, object A is marked with fluorescence code a and object B with fluorescence code b. The fluorescence codes a and b act as a reference property. The fluorescence codes a and b together with the associated object identities A and B are stored in a database. The learning of the AI technology should be carried out during operation. For the learning phase, objects A and B are mixed with other objects transported through the sorting system. The objects are analyzed autonomously by the system.
- the learning phase includes the analysis of the object properties "electrical conductivity", "IR reflection spectrum" and "watermark", the presence check of the reference properties code a and code b, the assignment of the object identities A to objects with fluorescence code a and B to objects with fluorescence code b, the adaptation of the algorithms for the recognition of patterns and regularities in the analyzed object properties and the creation of a correlation between the detected properties and the object identity.
- the system can calculate the identity of the objects based on the detected object properties "electrical conductivity", "IR reflection spectrum" and "watermark" and sort the objects A and B.
- the fluorescence codes used in the learning phase no longer have to be present on the objects.
- the comparison of the recorded reference properties with the reference properties stored in the database and linked to object identities is deactivated. It is then checked whether objects A and B are still correctly identified and sorted. Furthermore, objects A and B without fluorescence codes can be processed by the sorting system and their identification and sorting can be checked.
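The verification step described above, with the reference-code comparison deactivated, can be sketched as follows. The threshold classifier is a purely illustrative stand-in for the correlation learned during training; the object records and property names are hypothetical.

```python
def learned_classifier(features):
    # Stand-in for the correlation learned during the training phase:
    # a simple threshold on one detected object property
    # (illustrative only, not the patent's actual model).
    return "A" if features["conductivity"] > 0.5 else "B"

def check_sorting(objects):
    # Reference-code comparison deactivated: the identity is computed
    # from the learned object properties alone, then compared against
    # the known true identity of each test object.
    predictions = [(obj["true_identity"],
                    learned_classifier(obj["features"]))
                   for obj in objects]
    return sum(t == p for t, p in predictions) / len(predictions)

test_objects = [  # objects A and B without fluorescence codes
    {"true_identity": "A", "features": {"conductivity": 0.9}},
    {"true_identity": "B", "features": {"conductivity": 0.2}},
]
accuracy = check_sorting(test_objects)
```

An accuracy of 1.0 on such code-free test objects would confirm that identification and sorting no longer depend on the fluorescence codes used in the learning phase.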
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Sorting Of Articles (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102021107079 | 2021-03-22 | | |
PCT/DE2022/100220 WO2022199758A1 (en) | 2021-03-22 | 2022-03-22 | System and method for identification and/or sorting of objects |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4315271A1 true EP4315271A1 (en) | 2024-02-07 |
Family
ID=81346266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22717721.9A Pending EP4315271A1 (en) | 2021-03-22 | 2022-03-22 | System and method for identification and/or sorting of objects |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240157402A1 (en) |
EP (1) | EP4315271A1 (en) |
WO (1) | WO2022199758A1 (en) |
- 2022
- 2022-03-22 US US18/282,363 patent/US20240157402A1/en active Pending
- 2022-03-22 EP EP22717721.9A patent/EP4315271A1/en active Pending
- 2022-03-22 WO PCT/DE2022/100220 patent/WO2022199758A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20240157402A1 (en) | 2024-05-16 |
WO2022199758A1 (en) | 2022-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP4034308B1 (en) | Sorting method | |
EP0696236B1 (en) | Process and device for sorting materials | |
Tehrani et al. | A novel integration of hyper-spectral imaging and neural networks to process waste electrical and electronic plastics | |
DE102008028120B4 (en) | Method and device for sorting textiles | |
EP3746992B1 (en) | Method for checking the authenticity and/or integrity of a security document having a printed security feature, security feature and arrangement for verification | |
WO2017148550A1 (en) | Method for making a description of a piece of luggage and luggage description system | |
EP3071340B1 (en) | Method and appartus for sorting objects | |
EP0602464B1 (en) | Method and apparatus for recognizing objects | |
WO2018196921A1 (en) | Method for identifying deposit material | |
EP4315271A1 (en) | System and method for identification and/or sorting of objects | |
EP4139106B1 (en) | Method and device for sorting plastic objects | |
EP1421999A2 (en) | Process for identification, classification and sorting of objects and materials and according recognition system | |
DE10049404C2 (en) | Plastic, glass, textile or paper-containing material provided with an NIR marker and method for identifying this material | |
DE4330815A1 (en) | Marking of packs for the purpose of easy sorting | |
DE102007036621A1 (en) | Bottle e.g. beverage bottle, testing method for use in beverage industry, involves analyzing bottle image by image sensor and image processing system, and using self-luminous image for detecting preselected characteristics of bottle | |
EP4139662A1 (en) | Method and system for producing a plastic material | |
EP1533045A1 (en) | Process and device for improved sorting of waste based on wood or wood fibre products | |
DE102017109496B4 (en) | Product detection apparatus | |
DE102004051938A1 (en) | Method and device for checking the loading of a transport device with objects | |
DE102021119662A1 (en) | Container made of recyclable plastic and method and device for taking back such a container | |
DE102019127894B4 (en) | PRODUCT IDENTIFICATION SYSTEM AND PROCEDURES FOR IDENTIFICATION OF A PRODUCT | |
EP2283937B1 (en) | Method and device for transporting objects to destinations depending on pattern images | |
DE10315739A1 (en) | Automatic identification of exchangeable system components in a measurement system, e.g. biometric systems, by optoelectronic imaging of the labeled components and then analysis of the resultant images using fuzzy logic |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| TPAC | Observations filed by third parties | Free format text: ORIGINAL CODE: EPIDOSNTIPA |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20230911 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |