WO2024075926A1 - Vision inspection system and method using a mobile robot - Google Patents

Vision inspection system and method using a mobile robot

Info

Publication number
WO2024075926A1
WO2024075926A1 (PCT/KR2023/008297)
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
image
mobile robot
vision
algorithm
Prior art date
Application number
PCT/KR2023/008297
Other languages
English (en)
Korean (ko)
Inventor
Yang Chang-mo (양창모)
Original Assignee
Hyundai Motor Company
Kia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hyundai Motor Company and Kia Corporation
Publication of WO2024075926A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N21/93 Detection standards; Calibrating baseline adjustment, drift correction
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/9515 Objects of complex shape, e.g. examined with use of a surface follower device
    • G01N2021/8854 Grading and classifying of flaws
    • G01N2021/8887 Scan or image signal processing based on image processing techniques
    • G01N2201/00 Features of devices classified in G01N21/00
    • G01N2201/10 Scanning
    • G01N2201/103 Scanning by mechanical motion of stage
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 Manipulators mounted on wheels or on carriages
    • B25J5/007 Manipulators mounted on wheels or on carriages mounted on wheels
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J11/00 Manipulators not otherwise provided for
    • B25J13/00 Controls for manipulators
    • B25J13/006 Controls for manipulators by means of a wireless system for controlling one or several manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088 Controls for manipulators by means of sensing devices with position, velocity or acceleration sensors
    • B25J13/089 Determining the position of the robot with reference to its environment
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D65/00 Designing, manufacturing, e.g. assembling, facilitating disassembly, or structurally modifying motor vehicles or trailers, not otherwise provided for
    • B62D65/005 Inspection and final control devices
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q50/00 ICT specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/04 Manufacturing
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C17/02 Arrangements for transmitting signals using a radio link

Definitions

  • The present invention relates to a vision inspection system and method using a mobile robot, and more specifically to a vision inspection system and method using a mobile robot that inspects the quality of manual assembly by workers in real time in the automobile assembly process.
  • Conventional real-time quality inspection involves vision inspection that captures inspection images using a camera mounted on a multi-joint fixed robot.
  • The conventional vision inspection method has limitations in controlling the photographing posture (motion) of the fixed robot and the area that can be photographed, so it requires multiple robot units, and the movements of workers and robots overlap, which can interfere with the work or pose a risk of collision.
  • Deviations in the inspection images captured by such mobile robots degrade the quality-inspection performance of the vision inspection system, so measures are needed to improve inspection performance.
  • An embodiment of the present invention aims to provide a vision inspection system and method that, during vision inspection using a mobile robot in an industrial field, automatically calculate the image deviation reflecting the robot's repetitive position error, build an inspection algorithm reflecting that deviation, and perform a vision inspection for each inspection part.
  • Another purpose of the present invention is to provide a vision inspection system using a mobile robot that maintains optimal inspection quality in real time by automatically specifying an augmentation range reflecting image deviations caused by various environmental changes, creating a new inspection algorithm from it, and automatically replacing the existing inspection algorithm with the new one.
  • According to an embodiment of the present invention, a vision inspection system using a mobile robot includes: a mobile robot that moves to at least one designated inspection position (Position, P) and photographs an inspection area of parts assembled in a product; and an inspection server that acquires the inspection image taken by the mobile robot, calculates the image deviation of the inspection image relative to a reference image, reflecting the robot's repetitive position error for each inspection position, and evaluates the assembly quality in the inspection image for each inspection position through an inspection algorithm trained by performing image augmentation that reflects the range of the image deviation.
  • The mobile robot includes: a vision sensor module that generates the inspection image captured at the inspection position; an autonomous driving sensor module that detects the surroundings through sensors; a movement module that moves freely via drive wheels or quadrupedal walking; a wireless communication module that transmits the captured inspection image to the inspection server through wireless communication; and a control module that recognizes the position of the worker through the vision sensor module and, following the worker, controls the robot to capture the inspection image through the vision sensor module while moving to the inspection position (P) where the work has been completed.
  • The control module may store the assembly positions of the parts to be assembled in the process according to the type and specifications of the product, together with at least one inspection position designated for capturing the inspection image.
  • The control module can distinguish a plurality of zones divided in various directions around the product and restrict the robot's movement so that it does not enter the zone where the worker currently is.
  • The vision sensor module is mounted on the mobile robot through a robot arm and can capture inspection images of both the exterior parts and the interior parts of the product through posture control of the robot arm.
  • The inspection server includes: a communication unit that acquires the inspection image from the mobile robot; an image processing unit that stores a reference image corresponding to the inspection image for each part and generates a converted corrected image by registering the inspection image with the reference image; a deep learning unit that builds an inspection algorithm for each inspection part of the process by learning in advance, through image augmentation of the inspection image, the repetitive position error range for each inspection position of the mobile robot; a database (DB) storing programs and data for operating the inspection server; and a control unit configured to generate a new inspection algorithm by specifying an image augmentation range for deep learning based on a transformation matrix acquired in the image registration process.
  • The image processing unit may calculate the image registration error of the inspection image with respect to the reference image during the image registration process, calculate the image deviation between the two images, and extract the transformation matrix information obtainable when correcting the inspection image.
  • The transformation matrix information quantifies the image deviation and may include translation (image movement), rotation (image rotation), scale (image enlargement/reduction), tilt (image tilt conversion), and shear (image shearing) information.
  • The image processing unit may extract from the corrected image, as learning data, at least one inspection part image corresponding to an inspection part region (ROI) set in the reference image.
  • The deep learning unit may perform image augmentation using the inspection part image extracted by the image processing unit and the transformation matrix range, thereby increasing the number of learning images of the inspection part in which the repetitive position error range is reflected.
  • The deep learning unit can evaluate the inspection part image through the inspection algorithm of the corresponding part and output one of the inspection results: good product (OK), defective product (NG), or inspection error (NA).
  • The control unit may generate a new inspection algorithm that reflects, in real time, the image augmentation range for deep learning based on the transformation matrix information acquired during the image registration process of the inspection image.
  • The control unit may compare the inspection results of deep learning re-training with the new inspection algorithm against the inspection results of the existing inspection algorithm and, if it determines that performance has improved, replace or update the existing inspection algorithm with the new one.
  • According to another embodiment of the present invention, a vision inspection method using a mobile robot to inspect the quality of a worker's manual assembly in real time in the automobile assembly process includes: acquiring an inspection image taken by the mobile robot at a specific inspection position requiring inspection around the product vehicle; processing the inspection image to register it with a preset reference image and extracting an inspection part image from the converted corrected image; and evaluating the inspection part image through a stored inspection algorithm trained through image augmentation reflecting the range of the image deviation of the inspection image relative to the reference image, automatically calculated in advance to reflect the robot's repetitive position error for each inspection position.
  • If the inspection result is determined to be defective (NG) or inspection error (NA), a step of extracting falsely inspected or non-judgeable images through the operator's confirmation of the inspection result and saving them as evaluation data for verifying the performance of the new inspection algorithm may be further included.
  • The step of extracting the inspection part image may further include generating a new inspection algorithm by specifying an image augmentation range for the deep learning based on the transformation matrix acquired in the image registration process.
  • Generating the new inspection algorithm may include: calculating the image registration error of the inspection image with respect to the reference image; calculating the image deviation between the two images and extracting the transformation matrix obtainable when correcting the inspection image; calculating the distribution of the transformation matrix values extracted for the inspection position and storing the resulting range in the DB; specifying a transformation matrix range for image augmentation during deep learning of the corresponding inspection part image, based on the repetitive position error range stored in the DB; and generating a new inspection algorithm for the part by deep learning the plurality of training images augmented to reflect the mobile robot's repetitive position error, then re-evaluating the inspection part image with it.
  • The step of specifying the matrix range may specify, based on the repetitive position error range of the transformation matrix, the range used when generating random numbers for the translation, rotation, scale, tilt, and shear conversion values applied to the inspection part image during augmentation for deep learning.
  • The deep learning can be performed by generating the random numbers according to a Gaussian normal distribution, thereby weighting the image deviations most likely to occur at each position due to the mobile robot's repetitive errors.
  • A step of replacing or updating the existing inspection algorithm with the new inspection algorithm may be further included.
  • According to an embodiment of the present invention, the image deviation reflecting the mobile robot's repetitive position error is automatically calculated and an inspection algorithm reflecting it is generated before performing the vision inspection, which has the effect of improving inspection performance against the mobile robot's repetitive position error.
  • In addition, a new inspection algorithm is created by specifying an augmentation range that reflects image deviations caused by aging of equipment or various environmental changes over time, and replacing/updating the existing inspection algorithm prevents degradation of inspection performance and maintains optimal inspection quality in real time.
  • Furthermore, the mobile robot follows the worker to perform the vision inspection while its movement is restricted so that it does not enter the zone where the worker currently is, thereby preventing collisions without interfering with the worker's work flow.
  • Figure 1 shows a vision inspection system using a mobile robot applied to the product assembly process according to an embodiment of the present invention.
  • Figure 2 is a block diagram schematically showing the configuration of a vision inspection system using a mobile robot according to an embodiment of the present invention.
  • Figure 3 is a diagram for explaining the problem of image deviation occurring during vision inspection using a mobile robot according to an embodiment of the present invention.
  • Figure 4 shows an inspection image processing method for vision inspection according to an embodiment of the present invention.
  • Figure 5 shows a transformation matrix according to an embodiment of the present invention.
  • Figure 6 shows a method of building an inspection algorithm for each inspection part according to an embodiment of the present invention.
  • Figure 7 shows a vision inspection method using an inspection algorithm for each inspected part according to an embodiment of the present invention.
  • Figure 8 is a flowchart schematically showing a vision inspection method using a mobile robot according to an embodiment of the present invention.
  • A control unit may refer to a hardware device that includes memory and a processor.
  • the memory is configured to store program instructions, and the processor is specifically programmed to execute the program instructions to perform one or more processes described in more detail below.
  • the controller may control the operation of units, modules, components, devices, or the like, as described herein. It is also understood that the methods below can be performed by an apparatus that includes a control unit along with one or more other components, as will be recognized by those skilled in the art.
  • The inspection image refers to the original image acquired when the mobile robot photographs the inspection area at a specific inspection position (e.g., P1, P2, ..., Pn) during an actual vision inspection.
  • The reference image is the optimal image of the inspection area, taken after positioning the mobile robot at a pre-designated inspection position (e.g., P1, P2, ..., Pn); during vision inspection it serves as the standard for evaluating the assembly quality of each part appearing in the inspection image.
  • The corrected image refers to an image converted to approximately match the reference image by correcting the deviation of the inspection image with respect to the reference image.
  • The inspection part image refers to an individual part image extracted from the corrected image, corresponding to the inspection part ROI set in the reference image.
  • The learning image refers to a data set for deep learning, augmented by applying image augmentation transformations to the inspection part image.
  • Figure 1 shows a vision inspection system using a mobile robot applied to the product assembly process according to an embodiment of the present invention.
  • Figure 2 is a block diagram schematically showing the configuration of a vision inspection system using a mobile robot according to an embodiment of the present invention.
  • The vision inspection system is for evaluating the quality of a worker's part assembly in a product assembly process at an industrial site. It includes a mobile robot 10 that moves to at least one designated inspection position (Position, P) and photographs the inspection area of the parts assembled in the product, and an inspection server 100 that acquires the inspection image taken by the mobile robot 10, automatically calculates the image deviation of the inspection image relative to a reference image, reflecting the robot's repetitive position error for each inspection position, and evaluates the assembly quality in the inspection image for each inspection position through an inspection algorithm trained by performing image augmentation that reflects the range of the image deviation.
  • the product is transported to a set work location through the smart factory's transport means 20, and at least one part assigned to the worker is assembled.
  • Here, the product will be described assuming a "vehicle," but the vehicle may be a car body in the process of being assembled into a finished car, or some of the components (e.g., doors, dashboard, interior parts) that make up the car body.
  • the transfer means 20 may be a conveyor or a logistics transfer robot.
  • the mobile robot 10 moves to the designated inspection position (P) and transmits the captured inspection image to the inspection server 100.
  • The inspection position (P) is a shooting position/point within one of a plurality of work zones (e.g., first to sixth zones) divided in various directions around the vehicle, and may include, for example, a first inspection position (P1), a second inspection position (P2), ..., and a sixth inspection position (P6), designated for each work zone.
  • However, the number of work zones and inspection positions in the embodiment of the present invention is not limited to this, and a plurality of inspection positions within the same work zone can be designated depending on the assembly positions of the parts.
  • The mobile robot 10 may be configured as a quadrupedal walking robot or an Autonomous Mobile Robot (AMR). Since an AMR is limited to moving on flat surfaces using driving wheels, a preferred embodiment of the present invention is described assuming a quadrupedal walking robot (also called a "robot dog" or "Spot"), which can move on stairs or irregular rough surfaces with a higher degree of freedom.
  • the mobile robot 10 includes a vision sensor module 11, an autonomous driving sensor module 12, a movement module 13, a wireless communication module 14, and a control module 15.
  • the vision sensor module 11 is mounted on the robot and generates an inspection image taken at a designated inspection position (P).
  • The vision sensor module 11 is mounted on the mobile robot 10 through a robot arm 11-1 so that the shooting position can be changed freely.
  • This vision sensor module 11 can photograph not only the exterior parts of the vehicle but also the interior parts of the vehicle through posture (motion) control of the robot arm 11-1.
  • The autonomous driving sensor module 12 includes at least one of a camera, laser, ultrasonic, radar, lidar, or location recognition sensor for autonomous driving, and can detect the surroundings and recognize workers and objects.
  • the movement module 13 includes four legs and can freely move on stairs or irregular road surfaces through quadrupedal walking.
  • the wireless communication module 14 can transmit the inspection image captured by the vision sensor module 11 to the inspection server 100 through wireless communication, and receive a control signal from the inspection server 100 when necessary.
  • the control module 15 controls the overall operation of the mobile robot 10 according to an embodiment of the present invention.
  • the parts include car bodies, parts, and electrical components assembled in a vehicle, or fasteners such as bolts, nuts, and rivets that have designated assembly positions.
  • the control module 15 stores the assembly positions of parts to be assembled in the process according to the type and specifications of the vehicle and at least one inspection position (P1, P2, ..., P6) designated for capturing the inspection image.
  • The control module 15 recognizes the position of the worker through the vision sensor module 11, follows the worker to the inspection position (P1, P2, ..., P6) where the work has been completed, and controls the vision sensor module 11 to capture the inspection image.
  • For example, the mobile robot 10 may move in the order P1, P2, P3, capture the inspection image at each position, and then wait.
  • At this time, the control module 15 restricts the robot's movement and makes it wait so that it does not enter the zone where the worker currently is (e.g., the fourth zone), thereby preventing collisions without interfering with the worker's work flow.
  • In addition, a surveillance camera 30 may be further installed in the assembly process area to monitor the worker's location for worker safety and to transmit to the inspection server 100 an event of the mobile robot 10 entering the worker zone.
  • Upon receiving the entry event, the inspection server 100 immediately transmits a stop signal to the mobile robot 10 to restrict movement into the worker zone, thus ensuring the safety of the worker.
  • When the inspection server 100 acquires an inspection image from the mobile robot 10, it detects feature points, performs image conversion to approximate the reference image, crops the inspection part region (Region of Interest, ROI) to extract at least one inspection part image, and inspects the extracted images through a deep-learning vision inspection program.
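  • As a minimal sketch of this registration-and-crop preprocessing (assuming an OpenCV implementation; the ORB features, function names, and parameters below are illustrative assumptions, not details given in the patent):

```python
import cv2
import numpy as np

def register_and_crop(inspection_img, reference_img, rois):
    """Align an inspection image to the reference image, then crop ROI patches.

    `rois` holds (x, y, w, h) inspection-part regions set on the reference image.
    """
    g1 = cv2.cvtColor(inspection_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(reference_img, cv2.COLOR_BGR2GRAY)

    # Detect and match feature points between the two images.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate the transform mapping the inspection image onto the reference
    # frame, then warp to obtain the corrected image.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference_img.shape[:2]
    corrected = cv2.warpPerspective(inspection_img, H, (w, h))

    # Crop one patch per inspection-part ROI.
    return H, [corrected[y:y + rh, x:x + rw] for (x, y, rw, rh) in rois]
```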
  • However, the inspection image acquired according to the location of the mobile robot 10 and the location of the inspection part may deviate from the reference image due to the position error that occurs during shooting.
  • Figure 3 is a diagram for explaining the problem of image deviation occurring during vision inspection using a mobile robot according to an embodiment of the present invention.
  • the reference image 121 is an optimal image obtained by photographing the inspection area after previously positioning the mobile robot 10 at the first inspection position P1.
  • The reference image 121 serves as the standard for evaluating the assembly quality of the parts appearing in the inspection image during vision inspection, and at least one inspection part ROI, as well as the entire ROI containing them, is set in it.
  • Due to the nature of the mobile robot 10, unlike existing fixed equipment, even if it is set to move to a designated inspection position (P1, P2, ..., P6) and photograph the inspection area of the car body, a slight position error that recurs on every visit (hereinafter termed the "repetitive position error") may cause deviations in the inspection images taken each time, thereby degrading inspection performance.
  • That is, image deviation occurs in the inspection image acquired from the mobile robot 10 due to the robot's repetitive position error, and the translation, rotation, scale, tilt, and shear of the image caused by that error are reflected in the image deviation.
  • When the inspection server 100 performs a vision inspection using the mobile robot 10, the image conversion conditions for correcting the image deviation of the acquired inspection image can change with the robot's repetitive position error, so an inspection algorithm reflecting the robot's repetitive position error needs to be created for each part at each inspection position (P).
  • To this end, the inspection server 100 automatically generates, through deep learning, an inspection algorithm that reflects the robot's image deviation range (repetitive position error) for each part at each inspection position (P) of the mobile robot 10.
  • That is, the inspection server 100 generates the inspection algorithm by performing image augmentation that reflects the repetitive position error range of the mobile robot 10 during deep learning, and inspection performance can be improved by performing the vision inspection through an inspection algorithm that takes the repetitive position error of the mobile robot 10 into account.
  • The inspection server 100 includes a communication unit 110, an image processing unit 120, a deep learning unit 130, a database (DB) 140, and a control unit 150.
  • the communication unit 110 includes wired and wireless communication means and acquires an inspection image from the mobile robot 10.
  • The image processing unit 120 stores a reference image corresponding to the inspection image for each part, processes the acquired inspection image by comparing it with the corresponding reference image, and generates a converted corrected image.
  • Figure 4 shows an inspection image processing method for vision inspection according to an embodiment of the present invention.
  • Figure 5 shows a transformation matrix according to an embodiment of the present invention.
  • The image processing unit 120 calculates the image registration error of the inspection image with respect to the reference image during image registration, calculates the image deviation between the two images, and extracts the transformation matrix information obtainable when correcting the inspection image.
  • The transformation matrix information quantifies the image deviation and includes translation (image movement), rotation (image rotation), scale (image enlargement/reduction), tilt (image tilt conversion), and shear (image shearing) information.
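  • For illustration, these components can be recovered from a 2x3 affine registration matrix roughly as follows (a sketch assuming the registration step returned an affine transform; tilt, being a perspective effect, would come from a homography's bottom row and is omitted here):

```python
import numpy as np

def decompose_affine(A):
    """Split a 2x3 affine matrix into the deviation components named above."""
    a, b, tx = A[0]
    c, d, ty = A[1]

    sx = np.hypot(a, c)                      # horizontal scale
    rotation = np.degrees(np.arctan2(c, a))  # rotation angle in degrees
    shear = (a * b + c * d) / (sx * sx)      # shear factor
    sy = (a * d - b * c) / sx                # vertical scale from the determinant

    return {"translation": (tx, ty), "rotation": rotation,
            "scale": (sx, sy), "shear": shear}
```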
  • The image processing unit 120 extracts from the corrected image, as learning data, at least one inspection part image corresponding to the inspection part region (ROI) set in the reference image.
  • Figure 6 shows a method of building an inspection algorithm for each inspection part according to an embodiment of the present invention.
  • Figure 7 shows a vision inspection method using an inspection algorithm for each inspected part according to an embodiment of the present invention.
  • The deep learning unit 130 builds an inspection algorithm for each inspection part (P1-1, P1-2, P2, ..., P6) of the process by learning in advance, through image augmentation of the inspection image, the repetitive position error range for each inspection position (P1, P2, ..., P6) of the mobile robot 10.
  • The deep learning unit 130 performs image augmentation using the inspection part image extracted by the image processing unit 120 and the transformation matrix range, thereby increasing the number of learning images of the corresponding inspection part that reflect the robot's repetitive position error range.
  • At this time, the range for random number generation of the image translation, rotation, scale, tilt, and shear transformation values is specified according to the transformation matrix range.
  • Here, image augmentation can be performed on the plurality of inspection part images included in one inspection image by applying the same transformation matrix range to all of them.
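  • A minimal sketch of such range-bounded augmentation (assuming OpenCV; the layout of `ranges`, e.g. {"tx": (-5, 5), "ty": (-5, 5), "rot": (-2.0, 2.0), "scale": (0.97, 1.03)}, and all names are illustrative):

```python
import cv2
import numpy as np

def augment_rois(roi_images, ranges, n=20, rng=None):
    """Generate learning images by perturbing ROI crops within the
    transformation-matrix range observed for this inspection position."""
    rng = rng or np.random.default_rng()
    out = []
    for _ in range(n):
        # Draw one random transform per round and apply it to every ROI of
        # the same inspection image, mirroring the equal application of the
        # matrix range described above.
        tx, ty = rng.uniform(*ranges["tx"]), rng.uniform(*ranges["ty"])
        rot = rng.uniform(*ranges["rot"])
        scale = rng.uniform(*ranges["scale"])
        for img in roi_images:
            h, w = img.shape[:2]
            M = cv2.getRotationMatrix2D((w / 2, h / 2), rot, scale)
            M[:, 2] += (tx, ty)  # append the translation component
            out.append(cv2.warpAffine(img, M, (w, h),
                                      borderMode=cv2.BORDER_REPLICATE))
    return out
```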
  • In addition, the deep learning unit 130 may augment images at random scales reflecting the robot's repetitive position error range and add them to the learning images.
  • In this way, the area that the mobile robot can inspect is expanded around each inspection position, so even if a repetitive position error of the mobile robot occurs within the inspectable area, inspection errors can be prevented and inspection performance improved.
  • The deep learning unit 130 evaluates the inspection part image through the inspection algorithm of the corresponding part and outputs one of the inspection results: good product (OK), defective product (NG), or inspection error (NA).
  • For example, the deep learning unit 130 performs a vision inspection through the corresponding P1-1 algorithm when the P1-1 inspection part image is input, and outputs the inspection result.
  • The inspection result can be determined based on the similarity ratio (%) indicating whether the P1-1 inspection part image is close to the good-product standard image (P1-1(OK)), in which the part is normally assembled, or close to the defective standard image (P1-1(NG)), in which the part is not assembled. However, if the P1-1 inspection part image is not similar to either the good-product standard image (P1-1(OK)) or the defective standard image (P1-1(NG)), an inspection error (NA) can be output because judgment is impossible.
  • For example, the deep learning unit 130 can determine whether the first inspection part image (P1-1) is good or defective (OK/NG) depending on whether its similarity to the good-product standard image (P1-1(OK)) is above or below a certain ratio (e.g., 80%).
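  • The decision rule might look like the following sketch (the 80% threshold is the example ratio mentioned above, not a fixed value, and the score names are illustrative):

```python
def classify(ok_score, ng_score, threshold=0.80):
    """Map the model's similarity scores to an inspection verdict."""
    if ok_score >= threshold:
        return "OK"   # close to the good-product standard image
    if ng_score >= threshold:
        return "NG"   # close to the defective standard image
    return "NA"       # similar to neither: judgment impossible
```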
  • the deep learning learning unit 130 may be composed of an artificial neural network-based program.
  • the DB 140 stores various programs and data necessary for the operation of the inspection server 100 using a mobile robot according to an embodiment of the present invention, and stores data generated according to the operation.
  • The DB 140 stores the inspection algorithm for each inspection part (P1-1, P1-2, P2, ..., P6) corresponding to the process, and can replace it with, or additionally update it to, a newly created inspection algorithm as the vision inspection is repeated.
  • the control unit 150 is a central processing unit that controls the overall operation of the inspection server 100 that performs a vision inspection using a mobile robot according to an embodiment of the present invention. That is, the control unit 150 can control each part of the server 100 by executing various programs stored in the DB 140.
  • This control unit 150 may be implemented with one or more processors that operate according to a set program, and the set program may be programmed to perform each step of the vision inspection method using a mobile robot according to an embodiment of the present invention.
  • Figure 8 is a flowchart schematically showing a vision inspection method using a mobile robot according to an embodiment of the present invention.
  • In the following, the description assumes a scenario in which the mobile robot 10, operating for vision inspection of the manual parts-assembly process, captures inspection images of the parts assembled in a vehicle while moving among the designated first inspection position (P1) through sixth inspection position (P6) around the product vehicle.
  • The control unit 150 of the inspection server 100 acquires an inspection image taken by the operating mobile robot 10 at a specific inspection position requiring inspection around the product vehicle (S110). For convenience of explanation, the following assumes that the control unit 150 has acquired from the mobile robot 10 the inspection image taken at the first inspection position (P1).
  • The control unit 150 processes the inspection image acquired from the mobile robot 10 to register it with the inspection part region (ROI) of the preset reference image and generates a converted corrected image. It then extracts from the corrected image at least one inspection part image corresponding to the inspection part region (ROI) of the reference image (S120). For example, the inspection part image has a part number (P1-1) corresponding to the first inspection position (P1), through which the inspection part (P1-1) targeted by the vision inspection can be identified.
  • The control unit 150 automatically calculates, in advance, the image deviation of the inspection image relative to the reference image reflecting the robot's repetitive position error for each inspection position, and performs deep learning through automatic image augmentation that reflects the range of the image deviation.
  • The control unit 150 performs a deep-learning vision inspection using the inspection algorithm for the first inspection part (P1-1), corresponding to the first inspection part image (P1-1) (S130).
  • the control unit 150 may obtain an inspection result of any one of good product (OK), defective product (NG), and inspection error (NA) according to the vision inspection (S140).
  • If the inspection result is good (OK), the control unit 150 repeats the vision inspection for the next part inspection images (P1-2, P2, ..., P6) and, if no next part inspection image exists, terminates the inspection.
  • On the other hand, if the inspection result is determined to be defective (NG) or inspection error (NA), the control unit 150 extracts falsely inspected (e.g., OK/NG misjudged) or non-judgeable images through the operator's confirmation of the inspection result and saves them as evaluation data for verifying the performance of the new inspection algorithm described later (S160).
  • As described above, the control unit 150 learns the repetitive error range for each inspection position of the mobile robot 10 in advance, builds an inspection algorithm for each part (P1-1, P1-2, P2, ..., P6), and performs the vision inspection with it, so the evaluation of the assembly quality of each part can be improved by correcting the image deviation caused by the repetitive position error of the mobile robot 10.
  • However, with the pre-built inspection algorithm (hereinafter the "existing inspection algorithm"), image deviations may occur over time due to various changes in the operating environment, such as aging of process equipment including the mobile robot 10 or changes in the mounting location of the vision sensor module 11, which may degrade inspection performance.
  • To address this, the control unit 150 generates a new inspection algorithm by optimizing, in real time, the image augmentation range for deep learning based on the transformation matrix information acquired in the image registration process of step S120.
  • the control unit 150 calculates the image registration error of the inspection image with respect to the reference image during the image registration process (S121). Then, the image deviation between the two images is calculated to extract a transformation matrix that can be obtained when correcting the inspection image (S122).
  • Here, the transformation matrix quantifies the image deviation and includes translation (image movement), rotation (image rotation), scale (image enlargement/reduction), tilt (image tilt conversion), and shear (image shearing) information.
  • The control unit 150 calculates the distribution of the transformation matrix values extracted for the inspection (photographing) position of the mobile robot 10 and stores the resulting range in the DB 140 (S123).
  • The control unit 150 specifies the transformation matrix range for image augmentation during deep learning of the corresponding inspection part image based on the repetitive position error range stored in the DB 140 (S124). That is, based on the repetitive position error range of the transformation matrix, the control unit 150 can specify the range used when generating random numbers for the translation, rotation, scale, tilt, and shear conversion values applied to the inspection part image during augmentation. In particular, by generating the random numbers according to a Gaussian normal distribution when specifying the transformation matrix range, the control unit 150 can perform deep learning that weights the image deviations most likely to occur at each position due to the mobile robot's repetitive errors.
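  • A sketch of this position-aware sampling, under the assumption that the DB stores per-component (mean, std, low, high) statistics computed at step S123 (all names are illustrative):

```python
import numpy as np

def sample_params(error_db, position, rng=None):
    """Draw augmentation parameters for one inspection position from a
    Gaussian centered on that position's observed repetitive error."""
    rng = rng or np.random.default_rng()
    params = {}
    for comp, (mean, std, low, high) in error_db[position].items():
        # Gaussian sampling weights the deviations that actually tend to
        # occur at this position; clipping keeps the augmentation inside
        # the range the robot has been observed to produce.
        params[comp] = float(np.clip(rng.normal(mean, std), low, high))
    return params
```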
  • The control unit 150 generates a new inspection algorithm for the corresponding part by deep learning the plurality of learning images augmented to reflect the repetitive position error of the mobile robot 10, and re-trains deep learning by applying the inspection part image to the new inspection algorithm (S125).
  • That is, the control unit 150 generates a new inspection algorithm for each part through image augmentation that reflects the per-position error range of the mobile robot 10 during deep learning, and inspection performance can be improved through deep learning re-training with it.
  • The control unit 150 compares the re-training inspection results against the evaluation data of the existing inspection algorithm prepared for evaluating the performance of the new inspection algorithm (S126).
  • If the evaluation performance has improved, the control unit 150 automatically replaces the existing inspection algorithm with the new inspection algorithm (S126; Yes). For example, if an image for which an inspection error (NA) occurred under the existing inspection algorithm can be judged OK/NG by the new inspection algorithm, the evaluation performance can be judged to have improved and the algorithm is automatically replaced.
  • On the other hand, the control unit 150 keeps the existing inspection algorithm if the evaluation performance has not improved over it (S126; No).
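  • The replacement decision might be orchestrated as in this sketch (the `predict` interface and scoring rule are illustrative assumptions; the evaluation images are those saved at step S160):

```python
def maybe_replace(existing_algo, new_algo, eval_set):
    """Keep whichever inspection algorithm judges the saved evaluation set better.

    `eval_set` pairs each misjudged/NA image with the operator-confirmed label.
    """
    def score(algo):
        return sum(1 for img, label in eval_set if algo.predict(img) == label)

    if score(new_algo) > score(existing_algo):
        return new_algo        # performance improved: replace/update (S126; Yes)
    return existing_algo       # no improvement: keep the existing one (S126; No)
```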
  • As described above, according to an embodiment of the present invention, the image deviation reflecting the mobile robot's repetitive position error is automatically calculated and an inspection algorithm reflecting it is generated before performing the vision inspection, so inspection performance against the mobile robot's repetitive position error can be improved.
  • In addition, a new inspection algorithm is created by specifying an augmentation range that reflects image deviations caused by aging of equipment or various environmental changes over time, and replacing/updating the existing inspection algorithm prevents degradation of inspection performance and maintains optimal inspection quality in real time.
  • Furthermore, the mobile robot follows the worker to perform the vision inspection while its movement is restricted so that it does not enter the zone where the worker currently is, thereby preventing collisions without interfering with the worker's work flow.
  • The embodiments of the present invention are not implemented only through the devices and/or methods described above, but can also be implemented through programs realizing functions corresponding to the configuration of the embodiments, recording media on which such programs are recorded, and the like.
  • Such an implementation can easily be achieved by an expert in the technical field to which the present invention belongs, based on the description of the embodiments above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Business, Economics & Management (AREA)
  • Manufacturing & Machinery (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Combustion & Propulsion (AREA)
  • Molecular Biology (AREA)
  • Tourism & Hospitality (AREA)
  • Transportation (AREA)
  • Strategic Management (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Primary Health Care (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)

Abstract

Disclosed are a vision inspection system using a mobile robot and a method therefor. A vision inspection system using a mobile robot according to an embodiment of the present invention comprises: a mobile robot that moves to at least one designated inspection position (P) and photographs an inspection area of a component used to assemble a product; and an inspection server that acquires a photographed inspection image from the mobile robot, automatically calculates an image deviation of the inspection image relative to a reference image reflecting a repetitive position error of the robot for each inspection position, and evaluates the assembly quality in the inspection image for each inspection position through an inspection algorithm trained through image augmentation reflecting a range of the image deviation.
PCT/KR2023/008297 2022-10-04 2023-06-15 Vision inspection system and method using a mobile robot WO2024075926A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220126235A KR20240047507A (ko) 2022-10-04 2022-10-04 Vision inspection system using a mobile robot and method thereof
KR10-2022-0126235 2022-10-04

Publications (1)

Publication Number Publication Date
WO2024075926A1 (fr)

Family

ID=90608539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/008297 WO2024075926A1 (fr) 2022-10-04 2023-06-15 Système et procédé d'inspection de vision à l'aide d'un robot mobile

Country Status (2)

Country Link
KR (1) KR20240047507A (fr)
WO (1) WO2024075926A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160140703A1 * 2014-11-17 2016-05-19 Hyundai Motor Company System for inspecting vehicle body and method thereof
JP2016190316A * 2015-03-30 2016-11-10 The Boeing Company Automated dynamic manufacturing systems and related methods
JP2018122400A * 2017-02-01 2018-08-09 Toyota Motor Corporation Mobile robot, and control method and control program for a mobile robot
KR102272305B1 * 2020-10-14 2021-07-01 Ham Man-ju Mold manufacturing system
KR102393068B1 * 2020-11-03 2022-05-02 Sungwoo Hitech Co., Ltd. Image-based part recognition system and method thereof


Also Published As

Publication number Publication date
KR20240047507A (ko) 2024-04-12

Similar Documents

Publication Publication Date Title
CN107590835B (zh) Visual positioning system and positioning method for quick tool change of a robotic arm in a nuclear environment
CN110770989B (zh) Unmanned operation and maintenance switchgear or control-device system and method for operating it
CN102795011B (zh) Method for aiming at multiple predetermined positions within a structure and corresponding aiming system
CN110703800A (zh) UAV-based intelligent identification method and system for power facilities
WO2013077623A1 (fr) System and method for measuring structural displacement
US8923602B2 (en) Automated guidance and recognition system and method of the same
WO2019164381A1 (fr) Method for inspecting the mounting state of a component, printed circuit board inspection apparatus, and computer-readable recording medium
KR20190044496A (ko) Automation apparatus
US5333242A (en) Method of setting a second robots coordinate system based on a first robots coordinate system
WO2020075954A1 (fr) Positioning system and method using a combination of multimodal-sensor-based location recognition results
KR102393068B1 (ko) Image-based part recognition system and method thereof
WO2024075926A1 (fr) Vision inspection system and method using a mobile robot
JPH11156764A (ja) Mobile robot device
JP2022172053A (ja) ADAS inspection system using an MMP and method thereof
CN116337887A (zh) Method and system for detecting defects on the upper surface of cast cylinder blocks
WO2024122777A1 (fr) Method and system for training a collaborative robot capable of preemptive response
CN109079777B (zh) Robotic-arm hand-eye coordination operation system
Myers Industry begins to use visual pattern recognition
CN111571596B (zh) Method and system for correcting errors of metallurgical plug-in assembly robots using vision
WO2021095907A1 (fr) Driving control method for a variable agricultural robot
CN114187312A (zh) Target object grasping method, apparatus, system, storage medium, and device
CN115493513A (зh) Vision system applied to a space station robotic arm
WO2023054813A1 (fr) Fire detection and early response system using a rechargeable mobile robot
CN114735044A (зh) Intelligent rail vehicle inspection robot
WO2021206209A1 (fr) Method and system for implementing markerless AR for smart factory construction

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23874997

Country of ref document: EP

Kind code of ref document: A1