WO2023143703A1 - Computer-assisted system and method for object identification - Google Patents

Computer-assisted system and method for object identification

Info

Publication number
WO2023143703A1
Authority
WO
WIPO (PCT)
Prior art keywords
retrieved
platform
determined
data
intended
Prior art date
Application number
PCT/EP2022/051678
Other languages
French (fr)
Inventor
Prathosh BAGA
Gourab Paul
Maheshwari RAGHAV
Sidharta Andalam
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Priority to PCT/EP2022/051678 priority Critical patent/WO2023143703A1/en
Publication of WO2023143703A1 publication Critical patent/WO2023143703A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C7/00 Sorting by hand only, e.g. of mail
    • B07C7/005 Computer assisted manual sorting, e.g. for mail
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Aspects concern a computer-assisted system for object identification based on one or more sensors positioned to detect objects on a platform and to identify the type or class associated with each object. Upon identifying that an object is intended to be retrieved from the platform for various purposes, including recycling, the object is tracked and, in some embodiments, illuminated by one or more light emitters to facilitate retrieval.

Description

COMPUTER-ASSISTED SYSTEM AND METHOD FOR OBJECT IDENTIFICATION
TECHNICAL FIELD
[0001] This disclosure relates to a computer-assisted method and a computer-assisted system for identifying objects.
BACKGROUND
[0002] Smart and sustainable ways to manage waste will minimize environmental impact and promote efficient economic growth. Waste sorting of recyclable or reusable used objects, e.g. plastic objects, plays a crucial role in waste management.
[0003] One solution for sorting used objects is based on manual labor in the form of human workers or sorters. The workers are trained to recognize different plastic types. At a sorting area that may include a conveyor belt, each sorter along the conveyor belt is assigned to pick one particular type of plastic waste and place it into a respective container or receptacle for sorting.
[0004] However, there are at least two drawbacks associated with using human workers as sorters. Firstly, for objects such as plastic packaging/bottles, human workers have to be trained to recognize various plastic types on a conveyor belt, which may incur additional training costs for sorting facilities. The attrition rate of trained human workers may also be high for some sorting facilities. Secondly, human workers may not always be able to distinguish different plastic types of similar shape and color with the naked eye, resulting in many unsorted or wrongly classified plastics being dumped into landfills or sent to recycling plants. The unsorted or wrongly classified objects may degrade the quality of the recycled product and/or reduce sorting efficiency and accuracy. In fact, a 5-10% error rate can significantly reduce the value of the recyclable used objects.
[0005] Another solution for sorting used objects is fully automated sorting. In this process, the sorting and picking activities are performed automatically with the assistance of different plastic identification technologies like computer vision or Near Infrared (NIR) and/or robotics.
[0006] Although automatic sorting with plastic sensing technologies and robotics can significantly improve the sorting speed, efficiency and accuracy compared to manual sorters, it is relatively more expensive for mid- or small-size sorting facilities to afford, and the upfront investment to purchase the equipment may be prohibitive. Hence, the initial cost considerations may outweigh the long-run cost savings arising from reduced manual labor cost and improved productivity.
[0007] There exists a need to provide a solution to address at least one of the aforementioned problems.
SUMMARY
[0008] This disclosure was conceptualized to aid a manual sorting process. A technical solution is provided in the form of a computer-assisted system and method for assisting a manual sorter to identify the correct objects for pick-up or retrieval in order to facilitate tasks such as sorting, auditing, or organizing. The solution provides computer-assisted means to classify an object into one or more categories. In some embodiments, the solution determines if an object is correctly retrieved by assigned personnel. Additionally, the solution includes illuminating a part of a platform to inform assigned personnel of the target object’s location. As a result, the efficiency and accuracy of manual tasks such as sorting may be improved. Further, the solution reduces the costs and duration associated with training human personnel, and increases a sorting facility’s flexibility in human resource management, as human personnel can easily be re-deployed to other functions.
[0009] According to the present disclosure, a computer-assisted system as claimed in claim 1 is provided. A computer-assisted method according to the invention is defined in claim 9. A computer program comprising instructions to execute the computer-assisted method is defined in claim 15.
[0010] The dependent claims define some examples associated with the system and method, respectively.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The invention will be better understood with reference to the detailed description when considered in conjunction with the non-limiting examples and the accompanying drawings, in which:
- FIG. 1 shows a setup of a system for object identification according to some embodiments;
- FIG. 2A shows a schematic illustration of a processor for receiving sensor inputs according to some embodiments;
- FIG. 2B shows a process flow diagram between the various modules of the processor depicted in FIG. 2A;
- FIG. 3 is a flow chart of a method for identifying objects according to some embodiments;
- FIG. 4 is a decision flow diagram for using the system shown in FIG. 1 according to some embodiments;
- FIG. 5 shows a correlation table of the input and output of the system according to some embodiments; and
- FIG. 6 shows a schematic illustration of a processor for sorting objects according to some embodiments.
DETAILED DESCRIPTION
[0012] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure. Other embodiments may be utilized, and structural and logical changes may be made, without departing from the scope of the disclosure. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
[0013] Embodiments described in the context of one of the systems or methods are analogously valid for the other systems or methods.
[0014] Features that are described in the context of an embodiment may correspondingly be applicable to the same or similar features in the other embodiments. Features that are described in the context of an embodiment may correspondingly be applicable to the other embodiments, even if not explicitly described in these other embodiments. Furthermore, additions and/or combinations and/or alternatives as described for a feature in the context of an embodiment may correspondingly be applicable to the same or similar feature in the other embodiments.
[0015] In the context of various embodiments, the articles “a”, “an” and “the” as used with regard to a feature or element include a reference to one or more of the features or elements.
[0016] As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0017] As used herein, the term “sensor(s)” broadly refers to any device that facilitates sensing or detection of an object. The term can refer to hardware sensors, software-based sensors, and/or combinations of hardware and software-based sensors. The term can also refer to active or passive sensors. A sensor may operate continuously upon activation until deactivated, or may operate for a predetermined period of time. Examples of sensors include a camera or video recorder, and devices emitting electromagnetic radiation (e.g. X-rays, or electromagnetic radiation in the Terahertz (THz) range) for detection by a linear imaging scanner to generate images.
[0018] As used herein, the term “module” refers to, forms part of, or includes an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module may include memory (shared, dedicated, or group) that stores code executed by the processor.
[0019] As used herein, the term “artificial intelligence module” broadly includes any machine learning or deep learning module, which may be trained using supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, or deep learning methods. In some embodiments, the ML/AI algorithms may include algorithms such as neural networks, fuzzy logic, evolutionary algorithms, etc.
[0020] As used herein, the term “object” includes any object, particularly a recyclable or reusable object, that may be identified according to type, class or category. For example, plastic objects may be identified according to whether they are High Density Polyethylene (HDPE), Polyethylene terephthalate (PET), Polypropylene (PP), Polystyrene (PS), Low-density polyethylene (LDPE), or Polyvinyl chloride (PVC) plastic objects. Such objects may include bottles, jars, containers, plates, bowls, etc. of various shapes, forms (distorted, flattened) and sizes.
[0021] An embodiment of the disclosure is shown in FIG. 1, which illustrates a setup of a system 100 for object identification comprising a first sensor 104 positioned or arranged to detect an object 102 on a platform 106; a processor 108 arranged in signal or data communication with the first sensor 104 to receive object data 110. The first sensor 104 may be used to detect the presence of object 102 and may also be used to track the movement of the object 102 along the platform 106. The platform 106 may form part of a conveyor belt system.
[0022] The processor 108 is configured to receive object data 110. From another perspective, the first sensor 104 sends object data 110 to the processor 108. The processor 108 is configured to classify, based on the object data 110, the object into one or more predetermined object classes/types, for example HDPE or PET plastic types.
[0023] The processor 108 is operable to determine, based on the classification of the object, whether the object is an intended object 102a to be retrieved from the platform 106, for example retrieved by a person 120, who may be personnel assigned to retrieve the object 102 for further tasks such as sorting. If the object 102 is intended to be retrieved from the platform 106, the processor operates to determine an object location of the object 102 on the platform 106. In some embodiments where the first sensor 104 is an image capturing device, the object may be detected and tracked via image processing methods. In other embodiments, the first sensor 104 may include weight sensors positioned at a suitable portion of the platform 106 to detect weight of objects at different locations on the platform 106.
[0024] Upon determining the object location of the object 102 on the platform 106, the processor 108 may operate to activate a light source 112 to illuminate the object 102 on the platform 106. The light source 112 may include a light array, the light array comprising a plurality of light emitters 114, each light emitter 114 positioned or arranged to illuminate a part of the platform 106 corresponding to the object location or position on the platform 106. The processor 108 may generate an illumination signal and transmit the illumination signal to activate at least one light emitter 114 to illuminate the part of the platform 106 that corresponds to the object location of the intended object 102a to be retrieved.
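As an illustration of how an illumination signal could be routed to the correct emitter, the sketch below maps an object's x-coordinate on the platform to an index in an evenly spaced light array. The emitter count, platform length and the select_emitter helper are assumptions made for illustration, not details taken from this disclosure.

```python
# Illustrative sketch only: emitter spacing, platform length and the function name
# are assumed values, not taken from the disclosure.
def select_emitter(object_x_m: float, platform_length_m: float = 2.0,
                   num_emitters: int = 16) -> int:
    """Return the index of the light emitter whose segment contains object_x_m."""
    if not 0.0 <= object_x_m <= platform_length_m:
        raise ValueError("object lies outside the illuminated span of the platform")
    segment = platform_length_m / num_emitters
    # Clamp so an object exactly at the far edge maps to the last emitter.
    return min(int(object_x_m // segment), num_emitters - 1)

# Example: an intended object detected 0.73 m along a 2 m belt with 16 emitters.
print(select_emitter(0.73))  # -> 5
```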
[0025] The processor 108 may monitor the object 102 via detecting one or more movements of the person 120 to determine if the illuminated object 102a is retrieved from the platform 106. If the illuminated object 102a is not picked up by the person 120 after a predetermined time period, a notification may be sent to a device, such as a smart watch 122, to inform the person 120 that the illuminated object 102a has not been retrieved from the platform 106. A similar notification may be sent to the device 122 if the person 120 retrieves a wrong object (e.g. 102b) from the platform 106. It is contemplated that the notification may be in the form of an alert, such as a sound alert or a vibration alert, or may be in the form of a text message, an email message, and/or combinations of the aforementioned.
[0026] In some embodiments, the system 100 may be applied to a sorting system having multiple sorting stations. In such a sorting system, each sorting station may be configured to implement a system 100 pre-programmed or pre-determined to identify one type of object for sorting by a sorter 120. For example, the sorting system may comprise a first sorting station implementing a system 100 pre-configured to identify HDPE plastic objects for sorting and a second sorting station implementing the system 100 pre-configured to identify PET plastic objects for sorting. It is appreciable that the terms ‘first’ and ‘second’ are used for purposes of clarity and do not imply order or precedence.
[0027] FIG. 2A shows an embodiment of the processor 108 comprising an object localization module 202, a light module controller 204 and a gesture monitoring module 206. FIG. 2B shows a data flow between the various modules 202, 204, 206. Object data obtained from the first sensor 104 and/or second sensor (not shown) may be sent to the object localization module 202 for further processing. Depending on the type of sensors, the object data 110 may include one or more of the following: a refractive index of the object to be sorted, an electric field intensity pattern of a portion of the object to be sorted, colour image data of the object, and the location of the object on the platform. In some embodiments, the object location on the platform may be determined based on a pre-defined x-axis and y-axis on the platform 106 by deriving the x- and y-coordinates of the object 102 to be retrieved on the platform 106. It is appreciable that the object localization module 202 may store the object data of multiple objects positioned on the platform 106 in a local database.
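One possible realisation of the x-/y-coordinate derivation mentioned above is sketched below, assuming a top-down camera and a fixed metres-per-pixel scale measured at installation; the disclosure does not prescribe this particular mapping, and the function and parameter names are illustrative.

```python
# A minimal sketch under the stated assumptions; names and values are illustrative.
def pixel_to_platform_xy(bbox_px, metres_per_pixel=0.002, origin_px=(0, 0)):
    """Map a pixel bounding box (x_min, y_min, x_max, y_max) to platform x/y in metres."""
    cx = (bbox_px[0] + bbox_px[2]) / 2.0   # bounding-box centre, pixels
    cy = (bbox_px[1] + bbox_px[3]) / 2.0
    x_m = (cx - origin_px[0]) * metres_per_pixel
    y_m = (cy - origin_px[1]) * metres_per_pixel
    return round(x_m, 3), round(y_m, 3)

print(pixel_to_platform_xy((300, 120, 420, 260)))  # -> (0.72, 0.38)
```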
[0028] The localization module 202 may output object profile data to the light module controller 204 and the gesture monitoring module 206. The object profile data may include data defining the boundaries or shape of each object, the location of the object 102 on the platform 106, and/or one or more identifiers associated with the object 102 to facilitate tracking and detection of the object 102 on the platform 106.
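For illustration, the object profile data described above could be carried in a simple record such as the following; the field names and layout are assumptions and are not defined in this disclosure.

```python
# Hypothetical representation of the object profile passed between modules 202, 204 and 206.
from dataclasses import dataclass

@dataclass
class ObjectProfile:
    object_id: str                 # identifier used to track the object on the platform
    category: str                  # e.g. "PET" or "HDPE"
    boundary_px: tuple             # bounding box or outline describing the object's shape
    platform_xy: tuple             # (x, y) location on the platform, in metres
    intended_for_retrieval: bool   # True if this station's sorter should pick it up

profile = ObjectProfile("obj-0042", "PET", (300, 120, 420, 260), (0.72, 0.38), True)
```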
[0029] The light module controller 204 operates to receive the object profile data as input and generates control and/or illumination signals to activate one or more of the plurality of light emitters 114 of the light source 112. The light module controller 204 may include a calculator (not shown) which takes into account the speed of the object's movement on the platform 106 to determine the timing for sending the illumination signal(s) to the light emitters 114, so that the object 102a to be retrieved is illuminated at the appropriate time as it passes the vicinity of the light emitters 114.
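A sketch of the timing calculation such a calculator could perform is given below: given the belt speed and the object's current position, it estimates when the object will reach a given emitter so the illumination signal can be sent at that moment. The numeric values and the function name are illustrative assumptions, not the calculator actually described.

```python
# Hedged sketch of the timing idea; not the specific calculator of the disclosure.
def seconds_until_illumination(object_x_m: float, emitter_x_m: float,
                               belt_speed_m_per_s: float) -> float:
    """Time until the object reaches the emitter's position (negative if already past it)."""
    if belt_speed_m_per_s <= 0:
        raise ValueError("belt must be moving towards the emitter")
    return (emitter_x_m - object_x_m) / belt_speed_m_per_s

# Object at 0.4 m, emitter at 1.0 m, belt moving at 0.3 m/s: illuminate in about 2 s.
print(round(seconds_until_illumination(0.4, 1.0, 0.3), 3))  # -> 2.0
```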
[0030] The gesture monitoring module 206 operates to receive the object profile data and further receives data from one or more sensors to detect movement(s) of the person 120. Based on the received sensor inputs, if a part (e.g. hand) of the person 120 associated with retrieving the object 102 is determined to be in a grasped state and the object identifier (ID) is absent, the gesture monitoring module 206 may conclude that the intended object 102a has been retrieved by the person 120. If it is determined that the object ID of the object 102a is still present after a predetermined time, or if it is determined that a wrong object 102b has been retrieved, the gesture monitoring module 206 is programmed or configured to send a notification to the device 122.
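The retrieval decision described in this paragraph can be summarised as a small rule table; the sketch below is one such reading, with outcome names and the timeout value chosen purely for illustration rather than taken from the disclosure.

```python
# Simplified reading of the decision rule; enum names and timeout are assumptions.
from enum import Enum

class Outcome(Enum):
    RETRIEVED = "intended object retrieved"
    WRONG_OBJECT = "wrong object retrieved - notify device"
    NOT_RETRIEVED = "object still on platform after timeout - notify device"
    PENDING = "keep monitoring"

def evaluate(hand_grasped: bool, intended_id_present: bool,
             wrong_id_missing: bool, elapsed_s: float, timeout_s: float = 5.0) -> Outcome:
    if hand_grasped and not intended_id_present:
        return Outcome.RETRIEVED
    if hand_grasped and intended_id_present and wrong_id_missing:
        return Outcome.WRONG_OBJECT
    if intended_id_present and elapsed_s > timeout_s:
        return Outcome.NOT_RETRIEVED
    return Outcome.PENDING

print(evaluate(hand_grasped=True, intended_id_present=False,
               wrong_id_missing=False, elapsed_s=2.0))  # -> Outcome.RETRIEVED
```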
[0031] In some embodiments, upon successful retrieval of the object 102a by the person 120, a different notification is sent to the device 122 confirming the correct object 102a has been picked.
[0032] FIG. 3 is a flow chart according to another aspect of the disclosure in the form of a method 300 for identifying objects. The method 300 includes the steps of: receiving object data associated with an object to be identified (step S302); classifying, based on the object data, the object into one or more predetermined categories (step S304); determining, based on the classification of the object, whether the object is an intended object to be retrieved (step S306); and, if the object is intended to be retrieved, determining, based on the object data, a location of the object on the platform (step S308). The intended object to be retrieved may then be illuminated (step S310).
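A compressed sketch of steps S302 to S310 is shown below; classify_fn stands in for whichever trained classifier is used, and the dictionary layout is an assumption rather than something specified by the method.

```python
# Minimal sketch of the method flow; names and data layout are assumptions.
def process_object(object_data: dict, classify_fn, target_category: str = "PET"):
    """Return the platform location of an intended object, or None if it is not intended."""
    category = classify_fn(object_data)        # S304: classify based on the object data
    if category != target_category:            # S306: is it an intended object?
        return None
    return object_data["platform_xy"]          # S308: location carried in the object data

# Example with a trivial stand-in classifier; the returned location would drive step S310.
loc = process_object({"platform_xy": (0.72, 0.38)}, classify_fn=lambda d: "PET")
print(loc)  # -> (0.72, 0.38)
```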
[0033] FIG. 4 shows a decision flow diagram 400 of an embodiment of using the system 100. The process commences with the step of positioning or placing one or more objects on the platform 106 (step S402). One or more sensors are then operable to obtain object data to categorize the objects based on various methods, including, but not limited to, pattern recognition techniques (step S404). Each object 102 is selected in turn (step S406) to determine if it belongs to the class of objects intended to be retrieved by a user (step S408). If the object is determined to be an intended object 102a to be retrieved, the position of the object is tracked (step S410) on the platform 106, and the intended object 102a is illuminated for a pre-determined time frame (step S412). In addition, the object is tracked by sensors to determine if the object is present on the platform 106 (step S414). If the object is deemed to be retrieved by a person 120, the light emitters are deactivated (step S416).
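Steps S412 to S416 can be read as a simple monitor loop, sketched below with callables standing in for the emitter control and the sensor-based presence check; the polling interval and the time limit are illustrative assumptions, not values from FIG. 4.

```python
import time

# Hedged sketch of the monitor loop; not the exact control flow of FIG. 4.
def illuminate_until_retrieved(emitter_on, emitter_off, is_present_fn,
                               max_seconds: float = 10.0, poll_s: float = 0.2) -> bool:
    """Return True if the object left the platform (deemed retrieved) before the time limit."""
    emitter_on()                               # S412: illuminate the intended object
    deadline = time.monotonic() + max_seconds
    try:
        while time.monotonic() < deadline:
            if not is_present_fn():            # S414: object no longer detected on the platform
                return True
            time.sleep(poll_s)
        return False                           # timed out: object still on the platform
    finally:
        emitter_off()                          # S416: deactivate the light emitter

# Example with dummy callables: the object is reported absent immediately.
print(illuminate_until_retrieved(lambda: None, lambda: None, lambda: False))  # -> True
```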
[0034] In some embodiments, at least one of the localization module 202, the light module controller 204 and the gesture monitoring module 206 may include the use of one or more artificial intelligence modules. The one or more artificial intelligence modules may be trained and tested before deployment. In some embodiments, testing and training datasets may be generated to augment the object data 110 received. Such augmentation may include generating additional object data based on image processing functions such as flip and/or rotate. In some embodiments, the additional object data may include supplementing the original object data with additional location data on the platform 106.
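The flip/rotate augmentation mentioned above could, for example, be realised with NumPy array operations as sketched below; the disclosure does not tie the augmentation to any particular library, so this is only one possible implementation.

```python
import numpy as np

# Illustrative augmentation sketch; only flips and 90-degree rotations are shown.
def augment(image: np.ndarray):
    """Yield simple flipped/rotated variants of one training image."""
    yield np.fliplr(image)          # horizontal flip
    yield np.flipud(image)          # vertical flip
    for k in (1, 2, 3):
        yield np.rot90(image, k)    # 90, 180 and 270 degree rotations

sample = np.zeros((64, 64, 3), dtype=np.uint8)
print(len(list(augment(sample))))   # -> 5 additional images from one original
```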
[0035] In some embodiments, the artificial intelligence modules may include one or more neural networks. Such neural networks may be single-layered or multi-layered. The weights associated with each neuron of the layers may be trained and adjusted using an optimization algorithm modelled to minimize errors. In such an arrangement, the output parameter or result predicted by the neural network after each iteration of training is compared with a reference parameter and fed back to the neural network for weight modification/adjustment. Using the localization module 202 as an example, the input training data may comprise object data including image data, and the desired output of the localization module 202 may be to identify a region of interest (ROI) around each object to be classified. The training is performed to ensure that each object for subsequent classification is correctly identified, and that the ROI around each object is correctly formed. The data associated with the ROIs may then form part of an input dataset (object profile) to be input into the light module controller 204 and gesture monitoring module 206.
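The compare-and-feed-back training described above is sketched below as a tiny regression loop in PyTorch; the network architecture, loss function and data shapes are assumptions chosen for brevity, not the configuration used by the localization module 202.

```python
import torch
import torch.nn as nn

# Toy sketch: predicted ROI boxes are compared with reference boxes and the error is
# fed back to adjust the weights. Architecture and shapes are illustrative only.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(8, 1, 64, 64)      # stand-in object images
reference_rois = torch.rand(8, 4)      # reference (x, y, w, h) boxes, normalised

for epoch in range(5):
    optimizer.zero_grad()
    predicted_rois = model(images)                     # output after this training iteration
    loss = loss_fn(predicted_rois, reference_rois)     # compare with the reference parameter
    loss.backward()                                    # feed the error back
    optimizer.step()                                   # adjust the weights
```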
[0036] In some embodiments, the light module controller 204 may generate illumination signals corresponding to activating the light emitters 114 at varying intensities.
[0037] In some embodiments, the gesture monitoring module 206 may include an artificial intelligence module to receive the object profile data 110 as input and further include gesture-related data as another input. The object profile data 110 and the gesture-related data may then be combined to form an input dataset. The output of the gesture monitoring module 206 is an indication of whether the intended object 102a is retrieved, and if not, a notification is sent to the device 122. The input training data used to train the artificial intelligence module of the gesture monitoring module 206 may include the object data as previously discussed, and the gesture associated with a part of the user 210 used to retrieve the object 102. The training is performed to ensure that the gesture monitoring module 206 correctly correlates the object data and gesture monitoring data to a state of the object (retrieved state or not retrieved state).
[0038] FIG. 5 shows a simplified correlation table indicating how the system 100 and processes 300, 400 may be used to illuminate an intended object to be retrieved in the form of a PET plastic bottle. It is contemplated that such a correlation table may be applicable to other types of objects to be retrieved.
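Returning to the gesture monitoring module of paragraph [0037], one possible way to combine the object profile data and the gesture-related data into a single training sample is sketched below; the feature set, encoding and names are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical feature layout for the gesture monitoring module's AI input.
def build_sample(platform_xy, category_index, hand_xy, hand_grasped, retrieved):
    features = np.array([
        platform_xy[0], platform_xy[1],   # object location on the platform
        float(category_index),            # encoded object category
        hand_xy[0], hand_xy[1],           # tracked hand position
        1.0 if hand_grasped else 0.0,     # hand state from the gesture sensor
    ], dtype=np.float32)
    label = 1.0 if retrieved else 0.0     # target: retrieved state or not retrieved state
    return features, label

x, y = build_sample((0.72, 0.38), category_index=1, hand_xy=(0.70, 0.40),
                    hand_grasped=True, retrieved=True)
print(x.shape, y)  # -> (6,) 1.0
```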
[0039] FIG. 6 shows a server computer system 600 according to an embodiment. The server computer system 600 includes a communication interface 602 (e.g. configured to receive input data from the first sensor 104). The server computer 600 further includes a processing unit 604 and a memory 606. The memory 606 may be used by the processing unit 604 to store, for example, data to be processed, such as data associated with the input data and results output from the modules 202, 204, 206. The server computer is configured to perform the method of FIG. 3 and/or FIG. 4. It should be noted that the server computer system 600 can be a distributed system including a plurality of computers.
[0040] While the disclosure has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.

Claims

1. A computer-assisted system for object identification comprising a first sensor positioned or arranged to detect an object on a platform; a processor arranged in signal or data communication with the first sensor to receive object data, the processor configured to
(i.) classify, based on the object data, the object into one or more predetermined categories;
(ii.) determine, based on the classification of the object, whether the object is an intended object to be retrieved from the platform; and
(iii.) if the object is intended to be retrieved from the platform, determine based on the object data, an object location of the object on the platform.
2. The system of claim 1, further comprising a light source comprising a plurality of light emitters, each light emitter positioned or arranged to illuminate a part of the platform; wherein if the object is intended to be retrieved from the platform, the processor is further configured to
(i.) generate, based on the object location and the object intended to be retrieved, an illumination signal; and
(ii.) transmit the illumination signal to activate at least one light emitter positioned or arranged to illuminate the part of the platform that corresponds to the object location of the object intended to be retrieved.
3. The system of claims 1 or 2, wherein the processor is operable to send the object data as input to an artificial intelligence module, the artificial intelligence module being operable to identify a category associated with the object based on the object data.
4. The system of claim 3, wherein the object data comprises at least one of the following: a refractive index of the object, an electric field intensity pattern of at least a part of the object, colour image data of the object.
5. The system of any one of claims 1 to 4, wherein the object location is determined based on a pre-defined x- and y-axis on the platform, and deriving the x- and y-coordinates of the object to be retrieved on the platform.
6. The system of any one of claims 2 to 5, wherein the processor is further configured to (i.) receive a plurality of images associated with the object;
(ii.) compare relevant features of the object determined to be retrieved in a first image with the features of the object to be retrieved in successive images; and
(iii) determine if the object to be retrieved is present or absent.
7. The system of claim 6, wherein if the object determined to be retrieved is determined to be absent, the processor is further configured to deactivate each light emitter positioned or arranged to illuminate the part of the platform that corresponds to the object location of the object determined to be retrieved.
8. The system of any one of claims 1 to 7, wherein the processor is configured to send a notification if the object to be retrieved is determined to be not retrieved after a predetermined period.
9. A computer-assisted method for identifying an object comprising the steps of (i.) receiving object data, the object data associated with the object;
(ii.) classifying based on the object data, the object into one or more predetermined categories;
(iii.) determining based on the classification of the object, whether the object is to be retrieved from a platform; and
(iv.) determining based on the object data, an object location of the object on the platform, if the object is determined to be retrieved.
10. The method of claim 9, further comprising the steps of
(i.) generating based on the object location of the object determined to be retrieved, an illumination signal; and
(ii.) transmitting the illumination signal to activate each light emitter positioned or arranged to illuminate a part of the platform that corresponds to the object location of the object determined to be retrieved.
11. The method of claims 9 or 10, wherein classifying the object into one or more predetermined categories based on the object data comprises
(i.) extracting at least one parameter from the object data; (ii.) determining if each of the at least one parameter matches a corresponding reference parameter associated with the one or more predetermined categories; and
(iii.) classifying based on the determination that each of the at least one parameter matches the corresponding reference parameter, the object into the one or more predetermined categories.
12. The method of any one of claims 9 to 11, wherein determining the object location of the object determined to be retrieved comprises determining, based on the object data, the x- and y-axis coordinates of the object determined to be retrieved.
13. The method of any one of claims 10 to 12, further comprising the steps of
(i.) receiving a plurality of images associated with the object determined to be retrieved;
(ii.) comparing the relevant features of the object determined to be retrieved in a first image with the features of the object determined to be retrieved in successive images; and
(iii) determining if the object determined to be retrieved is present or absent.
14. The method of claim 13, wherein if the object determined to be retrieved is determined to be absent, the method further comprises the step of deactivating each light emitter positioned or arranged to illuminate the part of the platform that corresponds to the object location of the object intended to be retrieved.
15. A computer program comprising instructions which, when executed by a computer, cause the computer to carry out the steps of
(i.) receiving object data, the object data associated with the object;
(ii.) classifying based on the object data, the object into one or more predetermined categories;
(iii.) determining based on the classification of the object, whether the object is intended to be retrieved; and
(iv.) determining based on the image, an object location of the object intended to be retrieved on the platform, if the object is intended to be retrieved.
PCT/EP2022/051678 2022-01-26 2022-01-26 Computer-assisted system and method for object identification WO2023143703A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/051678 WO2023143703A1 (en) 2022-01-26 2022-01-26 Computer-assisted system and method for object identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/051678 WO2023143703A1 (en) 2022-01-26 2022-01-26 Computer-assisted system and method for object identification

Publications (1)

Publication Number Publication Date
WO2023143703A1 true WO2023143703A1 (en) 2023-08-03

Family

ID=80222322

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/051678 WO2023143703A1 (en) 2022-01-26 2022-01-26 Computer-assisted system and method for object identification

Country Status (1)

Country Link
WO (1) WO2023143703A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118133027A (en) * 2024-05-07 2024-06-04 葛洲坝集团生态环保有限公司 Solid waste separation auxiliary method and system based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011118611A1 (en) * 2011-11-16 2013-05-16 Knapp Ag Apparatus and method for a semi-automatic testing station
CA2863566A1 (en) * 2014-09-12 2016-03-12 Denis Hotte Augmented reality method and apparatus for assisting an operator to perform a task on a moving object
WO2022005305A1 (en) * 2020-06-29 2022-01-06 Compac Technologies Limited An article indication system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011118611A1 (en) * 2011-11-16 2013-05-16 Knapp Ag Apparatus and method for a semi-automatic testing station
CA2863566A1 (en) * 2014-09-12 2016-03-12 Denis Hotte Augmented reality method and apparatus for assisting an operator to perform a task on a moving object
WO2022005305A1 (en) * 2020-06-29 2022-01-06 Compac Technologies Limited An article indication system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118133027A (en) * 2024-05-07 2024-06-04 葛洲坝集团生态环保有限公司 Solid waste separation auxiliary method and system based on deep learning

Similar Documents

Publication Publication Date Title
US20220297940A1 (en) Systems and methods for learning to extrapolate optimal object routing and handling parameters
US10625304B2 (en) Recycling coins from scrap
US20210346916A1 (en) Material handling using machine learning system
US11975365B2 (en) Computer program product for classifying materials
US20230192418A1 (en) Object path planning in a sorting facility
WO2023143703A1 (en) Computer-assisted system and method for object identification
EP3784419A1 (en) Recycling coins from scrap
US20240157403A1 (en) Materials detector
WO2021030322A1 (en) System and method of object detection using ai deep learning models
Koganti et al. Deep Learning based Automated Waste Segregation System based on degradability
WO2024051935A1 (en) Computer-assisted system and method for illuminating identified objects
Dering et al. A computer vision approach for automatically mining and classifying end of life products and components
WO2023143706A1 (en) Computer-assisted method and system for sorting objects
Ríos-Zapata et al. Can the attributes of a waste bin improve recycling? A literature review for sensors and actuators to define product design objectives
WO2023217374A1 (en) System and method for monitoring performance of an object classification system
Iyyanar et al. Efficient and Smart Waste Categorization System using Deep Learning
US20230192416A1 (en) Heterogeneous material sorting
US20230191608A1 (en) Using machine learning to recognize variant objects
US20230196187A1 (en) Cloud and facility-based machine learning for sorting facilities
WO2023217375A1 (en) Computer-implemented method and system for labelling objects from thz and rgb images for sorting process
US20240058953A1 (en) Object picking optimization
Kareem et al. A MULTIFACETED ANALYSIS OF PIONEERING STRATEGIES AND LEADING-EDGE TECHNOLOGY FOR WASTE SEGREGATION
WO2023143704A1 (en) System and method for identification of objects and prediction of object class
WO2023003670A1 (en) Material handling system
WO2024009118A1 (en) Sorting container, sorting system, sorting container arrangement and use of sorting container

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22702913

Country of ref document: EP

Kind code of ref document: A1