WO2024075926A1 - Vision inspection system and method using mobile robot - Google Patents

Vision inspection system and method using mobile robot

Info

Publication number
WO2024075926A1
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
image
mobile robot
vision
algorithm
Prior art date
Application number
PCT/KR2023/008297
Other languages
French (fr)
Korean (ko)
Inventor
양창모
Original Assignee
Hyundai Motor Company
Kia Corporation
Priority date
Filing date
Publication date
Application filed by Hyundai Motor Company and Kia Corporation
Publication of WO2024075926A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
      • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
        • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
          • B25J 5/00 Manipulators mounted on wheels or on carriages
            • B25J 5/007 Manipulators mounted on wheels or on carriages mounted on wheels
          • B25J 9/00 Programme-controlled manipulators
            • B25J 9/16 Programme controls
              • B25J 9/1628 Programme controls characterised by the control loop
                • B25J 9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
              • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
                • B25J 9/1697 Vision controlled systems
          • B25J 11/00 Manipulators not otherwise provided for
          • B25J 13/00 Controls for manipulators
            • B25J 13/006 Controls for manipulators by means of a wireless system for controlling one or several manipulators
            • B25J 13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
              • B25J 13/088 Controls for manipulators by means of sensing devices with position, velocity or acceleration sensors
                • B25J 13/089 Determining the position of the robot with reference to its environment
      • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
        • B62D MOTOR VEHICLES; TRAILERS
          • B62D 65/00 Designing, manufacturing, e.g. assembling, facilitating disassembly, or structurally modifying motor vehicles or trailers, not otherwise provided for
            • B62D 65/005 Inspection and final control devices
    • G PHYSICS
      • G01 MEASURING; TESTING
        • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
          • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
            • G01N 21/84 Systems specially adapted for particular applications
              • G01N 21/88 Investigating the presence of flaws or contamination
                • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
                  • G01N 2021/8854 Grading and classifying of flaws
                  • G01N 2021/8887 Scan or image signal processing based on image processing techniques
                • G01N 21/93 Detection standards; Calibrating baseline adjustment, drift correction
                • G01N 21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
                  • G01N 21/9515 Objects of complex shape, e.g. examined with use of a surface follower device
          • G01N 2201/00 Features of devices classified in G01N 21/00
            • G01N 2201/10 Scanning
              • G01N 2201/103 Scanning by mechanical motion of stage
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/08 Learning methods
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
            • G06Q 50/04 Manufacturing
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0004 Industrial image inspection
            • G06T 7/10 Segmentation; Edge detection
              • G06T 7/11 Region-based segmentation
            • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
      • G08 SIGNALLING
        • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
          • G08C 17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
            • G08C 17/02 Arrangements for transmitting signals using a radio link

Definitions

  • The present invention relates to a vision inspection system and method using a mobile robot, and more specifically, to a vision inspection system and method using a mobile robot that inspects the quality of workers' manual assembly in real time in an automobile assembly process.
  • Conventional real-time quality inspection involves vision inspection that captures inspection images using a camera mounted on a multi-joint fixed robot.
  • the conventional vision inspection method has limitations in controlling the photographing posture (motion) of the fixed robot and the area that can be photographed, so it requires the operation of multiple units, and the work movements of workers and robots overlap, which can interfere with work or pose a risk of collision.
  • Deviations in inspection images using such mobile robots cause performance degradation in the quality inspection of vision inspection systems, so improvement measures are needed to improve inspection performance.
  • An embodiment of the present invention aims to provide a vision inspection system and method using a mobile robot that, during vision inspection with a mobile robot in an industrial field, automatically calculates the image deviation reflecting the robot's repetitive position error, builds an inspection algorithm reflecting it, and performs a vision inspection for each inspected part.
  • Another object of the present invention is to provide a vision inspection system using a mobile robot that maintains optimal inspection quality in real time by automatically replacing the existing inspection algorithm with a new inspection algorithm, created by automatically specifying an augmentation range that reflects image deviations due to various environmental changes.
  • According to one aspect of the present invention, a vision inspection system using a mobile robot includes: a mobile robot that moves to at least one designated inspection position (Position, P) and photographs the inspection area of parts assembled in a product; and an inspection server that acquires the inspection image captured by the mobile robot, calculates the image deviation of the inspection image relative to a reference image reflecting the robot's repetitive position error for each inspection position, and evaluates the assembly quality of the inspection image for each inspection position through an inspection algorithm trained by performing image augmentation reflecting the range of the image deviation.
  • The mobile robot includes: a vision sensor module that generates the inspection image captured at the inspection position; an autonomous driving sensor module that detects the surroundings through sensors; a movement module that moves freely via drive wheels or quadrupedal walking; a wireless communication module that transmits the captured inspection image to the inspection server through wireless communication; and a control module that recognizes the position of the worker through the vision sensor module, follows the worker to the inspection position (P) where work has been completed, and controls the vision sensor module to capture the inspection image.
  • The control module may store the assembly positions of parts to be assembled in the process according to the type and specifications of the product, and at least one inspection position designated for capturing the inspection image.
  • The control module may distinguish a plurality of zones divided in various directions around the product and restrict the robot's movement so that it does not enter the zone where the worker currently exists.
  • The vision sensor module is mounted on the mobile robot through a robot arm, and can capture inspection images of the product's exterior parts and interior parts through posture control of the robot arm.
  • The inspection server includes: a communication unit that acquires the inspection image from the mobile robot; an image processing unit that stores a reference image corresponding to each part's inspection image and generates a converted correction image by registering the inspection image against the reference image; a deep learning unit that builds an inspection algorithm for each inspected part of the process by deep learning, in advance, the repetitive position error range for each inspection position of the mobile robot through image augmentation of the inspection image; a database (DB) storing programs and data for operating the inspection server; and a control unit that generates a new inspection algorithm by specifying an image augmentation range for the deep learning based on a transformation matrix acquired in the image registration process.
  • The image processing unit may calculate the image registration error of the inspection image with respect to the reference image during image registration, calculate the image deviation between the two images, and extract the transformation matrix information obtainable when correcting the inspection image.
  • The transformation matrix information quantifies the image deviation and may include Translation for image movement, Rotation for image rotation, Scale for image enlargement/reduction, Tilt for image tilt conversion, and Shear for image shearing.
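As an illustration of this transformation matrix information, the minimal Python sketch below composes a 2D affine matrix from the five deviation components. The parameter names, the composition order, and the treatment of Tilt as a y-axis compression are assumptions for illustration; the patent does not specify a parameterization.

```python
import numpy as np

def compose_affine(tx, ty, rot_deg, scale, tilt_deg, shear):
    """Compose a 3x3 affine matrix from Translation, Rotation,
    Scale, Tilt and Shear components (illustrative ordering)."""
    r = np.deg2rad(rot_deg)
    t = np.deg2rad(tilt_deg)
    T = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)
    R = np.array([[np.cos(r), -np.sin(r), 0],
                  [np.sin(r),  np.cos(r), 0],
                  [0, 0, 1]], float)
    S = np.array([[scale, 0, 0], [0, scale, 0], [0, 0, 1]], float)
    # Tilt approximated as compressing the y axis; Shear as an x-y shear term
    K = np.array([[1, shear, 0], [0, np.cos(t), 0], [0, 0, 1]], float)
    return T @ R @ S @ K

# e.g. 2 px right, 1 px down, 0.5 deg rotation, 1% zoom
M = compose_affine(2.0, 1.0, 0.5, 1.01, 0.0, 0.0)
```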
  • the image processing unit may extract at least one inspection part image corresponding to an inspection part area (ROI) set in the reference image from the correction image as learning data.
  • The deep learning unit may perform image augmentation using the inspection part image extracted by the image processing unit and the transformation matrix range, so as to augment the training images of the inspection part with the repetitive position error range reflected.
  • The deep learning unit may analyze the inspection part image through the inspection algorithm of the corresponding part and output one of the following inspection results: good (OK), defective (NG), or inspection error (NA).
  • The control unit may generate a new inspection algorithm that reflects the image augmentation range for deep learning in real time, based on transformation matrix information acquired during the image registration process of the inspection image.
  • The control unit may compare the deep learning re-learned inspection results using the new inspection algorithm with the inspection results using the existing inspection algorithm, and replace or update the existing inspection algorithm with the new one if it determines that performance has improved.
  • According to another aspect of the present invention, a vision inspection method using a mobile robot to inspect the quality of a worker's manual assembly in real time in an automobile assembly process includes: acquiring an inspection image captured by the mobile robot at a specific inspection position requiring inspection around the product vehicle; processing the inspection image to register it with a preset reference image and extracting an inspection part image from the converted correction image; and acquiring an inspection result by inspecting the inspection part image through an inspection algorithm that was learned in advance through image augmentation reflecting the range of the image deviation, where the image deviation of the inspection image relative to the reference image reflecting the robot's repetitive position error is automatically calculated for each inspection position.
  • The method may further include, when the inspection result is determined to be defective (NG) or inspection error (NA), extracting falsely judged or unjudgeable images through the operator's confirmation of the inspection result and saving them as evaluation data for verifying the performance of a new inspection algorithm.
  • The step of extracting the inspection part image may include generating a new inspection algorithm by specifying an image augmentation range for the deep learning based on the transformation matrix acquired in the image registration process.
  • Generating the new inspection algorithm may include: calculating an image registration error of the inspection image with respect to the reference image; calculating the image deviation between the two images and extracting a transformation matrix obtainable when correcting the inspection image; calculating the distribution of transformation matrix values extracted for the inspection position and storing the resulting range in the DB; designating a transformation matrix range for image augmentation when deep learning the corresponding inspection part image, based on the repetitive position error range stored in the DB; and deep learning the plurality of training images augmented to reflect the mobile robot's repetitive position error to generate a new inspection algorithm for the part, and re-learning the inspection part image with it.
  • In the step of specifying the matrix range, the range used when generating random numbers for the Translation, Rotation, Scale, Tilt, and Shear conversion values applied to the inspection part image during augmentation may be specified based on the repetitive position error range of the transformation matrix.
  • The deep learning may be performed by generating the random numbers according to a Gaussian normal distribution, thereby weighting the image deviations that are most likely to occur due to the mobile robot's repetitive error at each position.
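A minimal sketch of this Gaussian range specification, assuming hypothetical per-position statistics (mean and standard deviation per transform component) derived from past registrations; the component names and the clipping policy are illustrative, not from the patent.

```python
import numpy as np

def sample_augmentation_params(stats, clip_sigma=3.0, rng=None):
    """Sample Translation/Rotation/Scale/Tilt/Shear values from Gaussians
    fitted per inspection position, clipped to the observed error range."""
    rng = rng or np.random.default_rng()
    params = {}
    for name, (mean, std) in stats.items():
        v = rng.normal(mean, std)            # likely deviations sampled more often
        lo, hi = mean - clip_sigma * std, mean + clip_sigma * std
        params[name] = float(np.clip(v, lo, hi))
    return params

# hypothetical per-position statistics (mean, std) learned from past registrations
p1_stats = {"tx": (0.0, 1.5), "ty": (0.0, 1.2), "rot_deg": (0.0, 0.4),
            "scale": (1.0, 0.01), "tilt_deg": (0.0, 0.2), "shear": (0.0, 0.005)}
print(sample_augmentation_params(p1_stats))
```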
  • the step of replacing or updating the existing inspection algorithm with the new inspection algorithm may be further included.
  • According to an embodiment of the present invention, the image deviation reflecting the repetitive position error of the mobile robot is automatically calculated and an inspection algorithm reflecting it is generated to perform the vision inspection, which has the effect of improving inspection performance against the mobile robot's repetitive position error.
  • In addition, a new inspection algorithm is created by specifying an augmentation range reflecting image deviations due to equipment aging or various environmental changes over time, and replacing/updating the existing inspection algorithm prevents inspection performance degradation and maintains optimal inspection quality in real time.
  • the mobile robot follows the worker, performs a vision inspection, and restricts movement so as not to enter the area where the worker currently exists, thereby preventing collisions without interfering with the worker's work flow.
  • Figure 1 shows a vision inspection system using a mobile robot applied to the product assembly process according to an embodiment of the present invention.
  • Figure 2 is a block diagram schematically showing the configuration of a vision inspection system using a mobile robot according to an embodiment of the present invention.
  • Figure 3 is a diagram for explaining the problem of image deviation occurring during vision inspection using a mobile robot according to an embodiment of the present invention.
  • Figure 4 shows an inspection image processing method for vision inspection according to an embodiment of the present invention.
  • Figure 5 shows a transformation matrix according to an embodiment of the present invention.
  • Figure 6 shows a method of building an inspection algorithm for each inspection part according to an embodiment of the present invention.
  • Figure 7 shows a vision inspection method using an inspection algorithm for each inspected part according to an embodiment of the present invention.
  • Figure 8 is a flowchart schematically showing a vision inspection method using a mobile robot according to an embodiment of the present invention.
  • A control unit may refer to a hardware device that includes a memory and a processor.
  • the memory is configured to store program instructions, and the processor is specifically programmed to execute the program instructions to perform one or more processes described in more detail below.
  • the controller may control the operation of units, modules, components, devices, or the like, as described herein. It is also understood that the methods below can be performed by an apparatus that includes a control unit along with one or more other components, as will be recognized by those skilled in the art.
  • The inspection image refers to the original image acquired when the mobile robot photographs the inspection area at a specific inspection position (e.g., P1, P2, ..., Pn) during an actual vision inspection.
  • The reference image is the optimal image of the inspection area, captured after positioning the mobile robot at a pre-designated inspection position (e.g., P1, P2, ..., Pn); it serves as the standard for evaluating the assembly quality of each part belonging to the inspection image during vision inspection.
  • the corrected image refers to an image converted to be close (approximately) to the reference image by correcting the deviation of the test image with respect to the reference image.
  • the inspection part image refers to an individual part image extracted from the correction image corresponding to the inspection part ROI area set in the reference image.
  • The learning image refers to a data set for deep learning, produced by augmenting the inspection part image through image augmentation.
  • Figure 1 shows a vision inspection system using a mobile robot applied to the product assembly process according to an embodiment of the present invention.
  • Figure 2 is a block diagram schematically showing the configuration of a vision inspection system using a mobile robot according to an embodiment of the present invention.
  • Referring to Figures 1 and 2, the vision inspection system evaluates the quality of a worker's part assembly in a product assembly process at an industrial site. It includes a mobile robot 10 that moves to at least one designated inspection position (Position, P) and photographs the inspection area of the parts assembled in the product, and an inspection server 100 that acquires the inspection image taken by the mobile robot 10, automatically calculates the image deviation of the inspection image relative to a reference image reflecting the robot's repetitive position error for each inspection position, and evaluates the assembly quality of the inspection image for each inspection position through an inspection algorithm trained with image augmentation reflecting the range of the image deviation.
  • the product is transported to a set work location through the smart factory's transport means 20, and at least one part assigned to the worker is assembled.
  • In the following, the product is assumed to be a "vehicle"; the vehicle may be a car body in the process of being assembled into a finished car, or a component of the car body (e.g., doors, dashboard, interior parts).
  • the transfer means 20 may be a conveyor or a logistics transfer robot.
  • the mobile robot 10 moves to the designated inspection position (P) and transmits the captured inspection image to the inspection server 100.
  • The inspection position (P) is a photographing position within one of a plurality of work zones (e.g., first to sixth zones) divided in various directions around the vehicle, and may include, for example, a first inspection position (P1), a second inspection position (P2), ..., and a sixth inspection position (P6) designated for each work zone.
  • the number of work areas and inspection positions in the embodiment of the present invention is not limited to this, and a plurality of inspection positions within the same work area can be designated depending on the assembly position of the parts.
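For illustration only, the zone-to-position mapping described above could be held in a simple table such as the following; the zone names and position identifiers are hypothetical.

```python
# Hypothetical mapping of work zones to designated inspection positions.
# A zone may hold several positions, depending on part assembly locations.
INSPECTION_POSITIONS = {
    "zone_1": ["P1-1", "P1-2"],   # two positions in the first zone
    "zone_2": ["P2"],
    "zone_3": ["P3"],
    "zone_4": ["P4"],
    "zone_5": ["P5"],
    "zone_6": ["P6"],
}
```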
  • The mobile robot 10 may be configured as a quadrupedal robot or an Autonomous Mobile Robot (AMR). Since an AMR has the limitation of moving only on flat surfaces using driving wheels, a preferred embodiment of the present invention is described assuming a quadrupedal robot (also called a "robot dog" or "Spot"), which can move with greater freedom on stairs or irregular, rough surfaces.
  • the mobile robot 10 includes a vision sensor module 11, an autonomous driving sensor module 12, a movement module 13, a wireless communication module 14, and a control module 15.
  • the vision sensor module 11 is mounted on the robot and generates an inspection image taken at a designated inspection position (P).
  • the vision sensor module 11 is mounted on the mobile robot 10 through a robot arm 11-1 for freedom in changing the shooting position.
  • This vision sensor module 11 can photograph not only the exterior parts of the vehicle but also the interior parts of the vehicle through posture (motion) control of the robot arm 11-1.
  • the autonomous driving sensor module 12 includes at least one sensor among cameras, lasers, ultrasonic waves, radars, lidar, and location recognition devices for autonomous driving, and can detect the surroundings and recognize workers and objects.
  • the movement module 13 includes four legs and can freely move on stairs or irregular road surfaces through quadrupedal walking.
  • the wireless communication module 14 can transmit the inspection image captured by the vision sensor module 11 to the inspection server 100 through wireless communication, and receive a control signal from the inspection server 100 when necessary.
  • the control module 15 controls the overall operation of the mobile robot 10 according to an embodiment of the present invention.
  • the parts include car bodies, parts, and electrical components assembled in a vehicle, or fasteners such as bolts, nuts, and rivets that have designated assembly positions.
  • the control module 15 stores the assembly positions of parts to be assembled in the process according to the type and specifications of the vehicle and at least one inspection position (P1, P2, ..., P6) designated for capturing the inspection image.
  • The control module 15 recognizes the position of the worker through the vision sensor module 11, follows the worker to the inspection positions (P1, P2, ..., P6) where work has been completed, and controls the vision sensor module 11 to capture inspection images there.
  • For example, the mobile robot 10 may move in the order of P1, P2, and P3 following the worker, capture an inspection image at each position, and then wait.
  • At this time, the control module 15 restricts the movement of the mobile robot 10 and makes it wait so that it does not enter the zone where the worker currently exists (e.g., the fourth zone), thereby preventing collisions without interfering with the worker's work flow.
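A minimal sketch of this zone restriction, assuming hypothetical robot-control and worker-tracking interfaces (move_to, capture_inspection_image, zone_of, and get_worker_zone are illustrative names, not from the patent):

```python
import time

def move_when_zone_clear(robot, target_position, zone_of, get_worker_zone,
                         poll_s=0.5):
    """Wait until the worker has left the target position's zone,
    then move there and capture the inspection image (illustrative)."""
    target_zone = zone_of(target_position)
    while get_worker_zone() == target_zone:
        time.sleep(poll_s)        # hold position instead of entering the zone
    robot.move_to(target_position)
    return robot.capture_inspection_image()
```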
  • a surveillance camera 30 may be further installed in the assembly process area to monitor the worker's location for worker safety and transmit an entry event of the mobile robot 10 into the worker area to the inspection server 100.
  • Upon receiving the entry event, the inspection server 100 immediately transmits a stop signal to the mobile robot 10 to restrict its movement into the worker zone, thereby ensuring the safety of the worker.
  • When the inspection server 100 acquires an inspection image from the mobile robot 10, it detects feature points, transforms the image to closely match the reference image, crops the inspection part region (Region of Interest, ROI) to extract at least one inspection part image, and inspects the extracted images through a deep learning vision inspection program.
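A minimal sketch of this registration-and-crop pipeline using OpenCV, assuming ORB feature matching and a full affine model; the actual feature detector and transform model used by the system are not specified in the patent.

```python
import cv2
import numpy as np

def register_and_crop(inspection, reference, rois):
    """Align the inspection image to the reference image via feature
    matching, then crop the inspection-part ROIs set on the reference."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(inspection, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffine2D(src, dst)      # 2x3 transformation matrix
    h, w = reference.shape[:2]
    corrected = cv2.warpAffine(inspection, M, (w, h))
    # rois: {"P1-1": (x, y, width, height), ...} set on the reference image
    parts = {name: corrected[y:y + ph, x:x + pw]
             for name, (x, y, pw, ph) in rois.items()}
    return corrected, M, parts
```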
  • However, there is a problem in that the inspection image, acquired according to the location of the mobile robot 10 and the location of the inspection part, may have an image deviation from the reference image due to the position error that occurs during shooting.
  • Figure 3 is a diagram for explaining the problem of image deviation occurring during vision inspection using a mobile robot according to an embodiment of the present invention.
  • the reference image 121 is an optimal image obtained by photographing the inspection area after previously positioning the mobile robot 10 at the first inspection position P1.
  • the reference image 121 serves as a standard for evaluating the assembly quality of parts belonging to the inspection image during vision inspection, and at least one inspection part ROI area and the entire ROI area including them are set.
  • Unlike existing fixed equipment, due to the movable nature of the mobile robot 10, even if it is set to move to a designated inspection position (P1, P2, ..., P6) and photograph the inspection area of the car body, a slight position error that repeats on each visit (hereinafter termed "repetitive position error") may cause deviations in the inspection images taken each time, thereby deteriorating inspection performance.
  • That is, image deviation occurs in the inspection image acquired from the mobile robot 10 due to the robot's repetitive position error, and the Translation, Rotation, Scale, Tilt, and Shear of the image caused by that error are reflected in the image deviation.
  • When the inspection server 100 performs a vision inspection using the mobile robot 10, the image conversion conditions for correcting the image deviation of the acquired inspection image can change with the robot's repetitive position error, so an inspection algorithm reflecting the robot's repetitive position error is needed for each part at each inspection position (P).
  • To this end, the inspection server 100 automatically generates, through deep learning, an inspection algorithm that reflects the robot's image deviation range (repetitive position error) for each part at each inspection position (P) of the mobile robot 10.
  • That is, the inspection server 100 generates the inspection algorithm by performing image augmentation reflecting the repetitive position error range of the mobile robot 10 during deep learning, and inspection performance can be improved by performing the vision inspection through an inspection algorithm that accounts for the robot's repetitive position error.
  • the inspection server 100 includes a communication unit 110, an image processing unit 120, a deep learning learning unit 130, a database (DB) 140, and a control unit 150.
  • the communication unit 110 includes wired and wireless communication means and acquires an inspection image from the mobile robot 10.
  • the image processing unit 120 stores a reference image corresponding to the inspection image for each part, processes the acquired inspection image by comparison with the corresponding reference image, and generates a converted correction image.
  • Figure 4 shows an inspection image processing method for vision inspection according to an embodiment of the present invention.
  • Figure 5 shows a transformation matrix according to an embodiment of the present invention.
  • The image processing unit 120 calculates the image registration error of the inspection image with respect to the reference image during image registration, calculates the image deviation between the two images, and extracts the transformation matrix information obtainable when correcting the inspection image.
  • The transformation matrix information quantifies the image deviation and includes Translation for image movement, Rotation for image rotation, Scale for image enlargement/reduction, Tilt for image tilt conversion, and Shear for image shearing.
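To recover such numerical components from an estimated 2x3 affine matrix, a standard decomposition can be used, as sketched below. Note that "Tilt" has no standard affine counterpart, so this sketch yields translation, rotation, per-axis scale, and shear only; it is an approximation of the patent's five components.

```python
import numpy as np

def decompose_affine(M):
    """Decompose a 2x3 affine matrix A = R(theta) @ [[sx, k], [0, sy]]
    into translation, rotation, per-axis scale and shear term k."""
    a, b, tx = M[0]
    c, d, ty = M[1]
    sx = np.hypot(a, c)                     # scale along x
    theta = np.degrees(np.arctan2(c, a))    # rotation angle in degrees
    k = (a * b + c * d) / sx                # shear term
    sy = (a * d - b * c) / sx               # scale along y (determinant / sx)
    return {"tx": float(tx), "ty": float(ty), "rot_deg": float(theta),
            "scale_x": float(sx), "scale_y": float(sy), "shear": float(k)}
```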
  • the image processing unit 120 extracts at least one inspection part image corresponding to an inspection part area (ROI) set in the reference image from the correction image as learning data.
  • Figure 6 shows a method of building an inspection algorithm for each inspection part according to an embodiment of the present invention.
  • Figure 7 shows a vision inspection method using an inspection algorithm for each inspected part according to an embodiment of the present invention.
  • The deep learning unit 130 learns in advance the repetitive position error range for each inspection position (P1, P2, ..., P6) of the mobile robot 10 through image augmentation of the inspection image, thereby building an inspection algorithm for each inspection part (P1-1, P1-2, P2, ..., P6) of the process.
  • Specifically, the deep learning unit 130 performs image augmentation using the inspection part image extracted by the image processing unit 120 and the transformation matrix range, to create training images of the corresponding inspection part that reflect the robot's repetitive position error range.
  • the range of random number generation of image translation, rotation, scale, tilt, and shear transformation values is specified according to the transformation matrix range.
  • image augmentation can be performed on a plurality of inspection part images included in one inspection image by equally applying the transformation matrix range.
  • In addition, the deep learning unit 130 may augment images at random scales reflecting the robot's repetitive position error range and apply them to the training images.
  • the area that the mobile robot can inspect can be expanded around the inspection position. Therefore, even if a repetitive position error of the mobile robot occurs within the inspectable area, inspection errors can be prevented and inspection performance can be improved.
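A minimal sketch of this augmentation step: one random transform is drawn per synthetic sample (Gaussian per component, as described above) and applied identically to every part image cropped from the same inspection image. The statistics dictionary and parameter names are illustrative assumptions.

```python
import cv2
import numpy as np

def augment_part_images(part_images, stats, n_samples=100, rng=None):
    """Generate augmented training images: draw one random transform per
    sample (Gaussian per component) and warp every part image with it."""
    rng = rng or np.random.default_rng()
    augmented = []
    for _ in range(n_samples):
        p = {k: rng.normal(mean, std) for k, (mean, std) in stats.items()}
        r = np.deg2rad(p["rot_deg"])
        s = p["scale"]
        # 2x3 affine: rotation+scale block, shear term, then translation
        M = np.array([[s * np.cos(r), -s * np.sin(r) + p["shear"], p["tx"]],
                      [s * np.sin(r),  s * np.cos(r),              p["ty"]]])
        batch = {}
        for name, img in part_images.items():
            h, w = img.shape[:2]
            batch[name] = cv2.warpAffine(img, M, (w, h),
                                         borderMode=cv2.BORDER_REPLICATE)
        augmented.append(batch)
    return augmented

# hypothetical per-position (mean, std) ranges from the registration history
stats = {"tx": (0, 1.5), "ty": (0, 1.2), "rot_deg": (0, 0.4),
         "scale": (1, 0.01), "shear": (0, 0.005)}
```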
  • The deep learning unit 130 analyzes the inspection part image through the inspection algorithm of the corresponding part and outputs one of the following inspection results: good (OK), defective (NG), or inspection error (NA).
  • For example, the deep learning unit 130 performs a vision inspection through the corresponding P1-1 algorithm upon input of the P1-1 inspection part image and outputs the inspection result.
  • The inspection result can be determined based on the similarity ratio (%) of the P1-1 inspection part image to the good-product standard image (P1-1(OK)), in which the part is normally assembled, or to the defective standard image (P1-1(NG)), in which the part is not assembled. However, if the P1-1 inspection part image is not similar to either the good-product standard image (P1-1(OK)) or the defective standard image (P1-1(NG)), an inspection error (NA) can be output because judgment is impossible.
  • For example, the deep learning unit 130 may determine whether the first inspection part image (P1-1) is good or defective (OK/NG) depending on whether its similarity to the good-product standard image (P1-1(OK)) is above or below a certain ratio (e.g., 80%).
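The three-way OK/NG/NA decision can be sketched as thresholding similarity scores, as below; the 80% threshold follows the example above, while treating the scores as values in [0, 1] (e.g., softmax outputs) is an assumption.

```python
def judge(ok_score, ng_score, threshold=0.80):
    """Map similarity scores to OK / NG / NA (illustrative).
    Scores are assumed to lie in [0, 1], e.g. softmax outputs."""
    if ok_score >= threshold:
        return "OK"          # close enough to the good-product standard image
    if ng_score >= threshold:
        return "NG"          # close to the defective standard image
    return "NA"              # similar to neither: judgment impossible

print(judge(0.91, 0.05))  # -> OK
```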
  • The deep learning unit 130 may be composed of an artificial neural network-based program.
  • the DB 140 stores various programs and data necessary for the operation of the inspection server 100 using a mobile robot according to an embodiment of the present invention, and stores data generated according to the operation.
  • The DB 140 stores the inspection algorithm for each inspection part (P1-1, P1-2, P2, ..., P6) of the process, and can replace it with, or additionally update it to, a newly created inspection algorithm as the vision inspection is repeated.
  • the control unit 150 is a central processing unit that controls the overall operation of the inspection server 100 that performs a vision inspection using a mobile robot according to an embodiment of the present invention. That is, the control unit 150 can control each part of the server 100 by executing various programs stored in the DB 140.
  • This control unit 150 may be implemented with one or more processors operating according to a set program, and the set program may be programmed to perform each step of the vision inspection method using a mobile robot according to an embodiment of the present invention.
  • Figure 8 is a flowchart schematically showing a vision inspection method using a mobile robot according to an embodiment of the present invention.
  • In the following, the description assumes a scenario in which the mobile robot 10, operating for vision inspection of the manual parts assembly process, captures inspection images of parts assembled in the product vehicle while moving among the nearby designated first to sixth inspection positions (P1 to P6) around the vehicle.
  • the control unit 150 of the inspection server 100 uses the mobile robot 10 in operation to acquire an inspection image taken at a specific inspection position that requires inspection centered on the product vehicle (S110). For convenience of explanation, the following description will be made on the assumption that the control unit 150 has acquired the inspection image taken at the first inspection position (P1) from the mobile robot 10.
  • the control unit 150 processes the inspection image acquired from the mobile robot 10 to match the inspection part area (ROI) of the preset reference image and generates a converted correction image. Then, at least one inspection part image corresponding to the inspection part area (ROI) of the reference image is extracted from the corrected image (S120). For example, the inspection part image has a part number (P1-1) corresponding to the first inspection position (P1), and through this, the inspection part (P1-1) that is the target of the vision inspection can be identified.
  • Here, the control unit 150 has automatically calculated in advance, for each inspection position, the image deviation of the inspection image relative to the reference image reflecting the robot's repetitive position error, and has performed deep learning through automatic image augmentation reflecting the range of that deviation.
  • the control unit 150 performs a vision inspection using deep learning learning using an inspection algorithm for the first inspection part (P1-1) corresponding to the first inspection part image (P1-1) (S130).
  • the control unit 150 may obtain an inspection result of any one of good product (OK), defective product (NG), and inspection error (NA) according to the vision inspection (S140).
  • The control unit 150 then returns and repeats the vision inspection for the next part inspection images (P1-2, P2, ..., P6), and terminates the inspection when no next part inspection image exists.
  • On the other hand, if the inspection result is determined to be defective (NG) or inspection error (NA), the control unit 150 extracts falsely judged (e.g., OK/NG misjudged) or unjudgeable images through the operator's confirmation of the inspection result, and saves them as evaluation data for verifying the performance of the new inspection algorithm described later (S160).
  • As described above, the control unit 150 learns in advance the repetitive position error range for each inspection position of the mobile robot 10, builds an inspection algorithm for each part (P1-1, P1-2, P2, ..., P6), and performs the vision inspection with it, so that the image deviation caused by the repetitive position error of the mobile robot 10 is corrected and the evaluation performance of each part's assembly quality is improved.
  • However, with the pre-built inspection algorithm (hereinafter "existing inspection algorithm"), image deviations may occur over time due to various operational environment changes, such as aging of process equipment including the mobile robot 10 or changes in the mounting position of the vision sensor module 11, and these may deteriorate inspection performance.
  • Accordingly, the control unit 150 optimizes the image augmentation range for deep learning in real time based on the transformation matrix information acquired in the image registration process of step S120, and generates a new inspection algorithm with it.
  • the control unit 150 calculates the image registration error of the inspection image with respect to the reference image during the image registration process (S121). Then, the image deviation between the two images is calculated to extract a transformation matrix that can be obtained when correcting the inspection image (S122).
  • The transformation matrix quantifies the image deviation and includes Translation for image movement, Rotation for image rotation, Scale for image enlargement/reduction, Tilt for image tilt conversion, and Shear for image shearing.
  • The control unit 150 calculates the distribution of the transformation matrix values extracted corresponding to the inspection (photographing) position of the mobile robot 10 and stores the resulting range in the DB 140 (S123).
  • The control unit 150 then specifies the transformation matrix range for image augmentation during deep learning of the corresponding inspection part image, based on the repetitive position error range stored in the DB 140 (S124). That is, based on the repetitive position error range of the transformation matrix, the control unit 150 can specify the range used when generating random numbers for the Translation, Rotation, Scale, Tilt, and Shear conversion values applied to the inspection part image during augmentation. In particular, when specifying the transformation matrix range, the control unit 150 generates the random numbers according to a Gaussian normal distribution, so that deep learning can be performed with greater weight on the image deviations most likely to occur due to the mobile robot's repetitive error at each position.
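Steps S123 and S124 can be sketched as fitting per-position statistics over the extracted matrix components and persisting the resulting range; the SQLite schema and component names below are illustrative assumptions.

```python
import sqlite3
import numpy as np

def store_error_range(db, position, samples):
    """S123: fit mean/std per transform component from registration
    history and store the range for one inspection position."""
    db.execute("""CREATE TABLE IF NOT EXISTS error_range
                  (position TEXT, component TEXT, mean REAL, std REAL,
                   PRIMARY KEY (position, component))""")
    for comp, values in samples.items():
        v = np.asarray(values, float)
        db.execute("INSERT OR REPLACE INTO error_range VALUES (?,?,?,?)",
                   (position, comp, float(v.mean()), float(v.std())))
    db.commit()

def load_augmentation_range(db, position):
    """S124: read back the stored range to drive random-number generation."""
    rows = db.execute("SELECT component, mean, std FROM error_range "
                      "WHERE position = ?", (position,)).fetchall()
    return {comp: (mean, std) for comp, mean, std in rows}

db = sqlite3.connect(":memory:")
store_error_range(db, "P1", {"tx": [1.2, -0.8, 0.3], "rot_deg": [0.2, -0.1, 0.4]})
print(load_augmentation_range(db, "P1"))
```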
  • The control unit 150 generates a new inspection algorithm for the part by deep learning the plurality of training images augmented to reflect the repetitive position error of the mobile robot 10, and performs deep learning re-learning of the inspection part image using the new inspection algorithm (S125).
  • That is, the control unit 150 generates a new inspection algorithm for each part through image augmentation reflecting the position-specific error range of the mobile robot 10 during deep learning, and inspection performance can be improved through deep learning re-learning with it.
  • Then, the control unit 150 compares the re-learned inspection results with the evaluation data of the existing inspection algorithm, prepared in order to evaluate the performance of the new inspection algorithm (S126).
  • If it determines that the evaluation performance has improved, the control unit 150 automatically replaces the existing inspection algorithm with the new inspection algorithm (S126; Yes). For example, if an image that caused an inspection error (NA) under the existing inspection algorithm can be judged OK/NG by the new inspection algorithm, the evaluation performance is judged to have improved and the algorithm is automatically replaced.
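The replacement decision of step S126 can be sketched as scoring both algorithms on the saved evaluation images and swapping only on improvement; the accuracy metric and the predict interface are assumptions for illustration.

```python
def should_replace(existing_model, new_model, eval_set):
    """S126: replace the existing algorithm only if the new one judges
    the saved evaluation images better (illustrative accuracy metric)."""
    def score(model):
        correct = 0
        for image, label in eval_set:          # label is "OK" or "NG"
            if model.predict(image) == label:  # "NA" never counts as correct
                correct += 1
        return correct / max(len(eval_set), 1)
    return score(new_model) > score(existing_model)
```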
  • On the other hand, the control unit 150 maintains the existing inspection algorithm if the evaluation performance has not improved over that of the existing inspection algorithm (S126; No).
  • As described above, according to an embodiment of the present invention, the image deviation reflecting the repetitive position error of the mobile robot is automatically calculated and an inspection algorithm reflecting it is generated for the vision inspection, so inspection performance against the mobile robot's repetitive position error can be improved.
  • In addition, a new inspection algorithm is created by specifying an augmentation range reflecting image deviations due to equipment aging or various environmental changes over time, and replacing/updating the existing inspection algorithm prevents inspection performance degradation and maintains optimal inspection quality in real time.
  • the mobile robot follows the worker, performs a vision inspection, and restricts movement so as not to enter the area where the worker currently exists, thereby preventing collisions without interfering with the worker's work flow.
  • the embodiments of the present invention are not implemented only through the devices and/or methods described above, but can be implemented through programs for realizing functions corresponding to the configuration of the embodiments of the present invention, recording media on which the programs are recorded, etc.
  • Such an implementation can be easily realized by an expert in the technical field to which the present invention belongs, based on the description of the embodiments above.


Abstract

Disclosed are a vision inspection system using a mobile robot and a method therefor. A vision inspection system using a mobile robot, according to an embodiment of the present invention, comprises: a mobile robot that moves to at least one designated inspection position (P) and photographs an inspection area of a component used to assemble a product; and an inspection server that acquires a photographed inspection image from the mobile robot, automatically calculates an image deviation of the inspection image compared to a reference image reflecting a repeated position error of the robot for each inspection position, and evaluates the quality of assembly of the inspection image for each inspection position through an inspection algorithm trained through image augmentation reflecting a range of the image deviation.

Description

모바일 로봇을 활용한 비전 검사 시스템 및 그 방법Vision inspection system and method using mobile robot
본 발명은 모바일 로봇을 활용한 비전 검사 시스템 및 그 방법에 관한 것으로써, 보다 상세하게는 자동차 조립 공정에서 작업자의 수동 조립 품질을 실시간으로 검사하는 모바일 로봇을 활용한 비전 검사 시스템 및 그 방법에 관한 것이다.The present invention relates to a vision inspection system and method using a mobile robot, and more specifically, to a vision inspection system and method using a mobile robot that inspects the quality of manual assembly by workers in real time in the automobile assembly process. will be.
일반적으로 자동차 생산 공장에서는 다양한 공정을 거쳐 차체와 부품들을 조립하고 있으며, 작업자에 의한 수동 조립 공정에서는 제품의 품질 확보를 위하여 실시간 품질 검사를 필요로 한다.In general, automobile production plants assemble car bodies and parts through various processes, and the manual assembly process by workers requires real-time quality inspection to ensure product quality.
종래의 실시간 품질검사는 다관절 고정 로봇에 장착된 카메라를 이용하여 검사 이미지를 촬영하는 방식의 비전검사를 수행하고 있다. 하지만, 종래의 비전검사 방식은 고정 로봇의 촬영 자세(모션) 제어 및 촬영 가능한 영역에 한계가 있어 여러 대의 운용이 필요하고, 작업자와 로봇의 작업 동선이 겹쳐 작업을 방해하거나 충돌 위험이 존재한다.Conventional real-time quality inspection involves vision inspection that captures inspection images using a camera mounted on a multi-joint fixed robot. However, the conventional vision inspection method has limitations in controlling the photographing posture (motion) of the fixed robot and the area that can be photographed, so it requires the operation of multiple units, and the work movements of workers and robots overlap, which can interfere with work or pose a risk of collision.
이에, 최근에는 작업자의 작업 동선에 방해를 주지 않게 모바일 로봇에 카메라를 장착하고 검사 이미지를 촬영하는 방안이 모색되고 있다. 그러나, 모바일 로봇을 활용하는 경우 카메라 촬영 시의 위치 오차(촬영 편차)의 발생으로 이미지 취득 시마다 반복되는 이미지 편차가 발생하며, 이는 비전 검사 성능의 저하로 이어지는 문제점이 있다.Accordingly, recently, ways have been explored to mount a camera on a mobile robot and capture inspection images so as not to interfere with the worker's work flow. However, when using a mobile robot, repeated image deviations occur each time an image is acquired due to positional error (capturing deviation) when shooting a camera, which leads to a problem in deteriorating vision inspection performance.
예컨대, 모바일 로봇은 이동가능한 특성상 지정된 검사 포지션에서 차량의 검사 영역을 촬영하도록 설정하더라도 미세하게 반복되는 모바일 로봇의 위치 오차로 인하여 매회 촬영한 검사 이미지간 편차가 발생한다. 그리고, 이러한 로봇의 위치 오차는 조립품질 검사를 위해 설정된 부품의 기준 이미지와 검사 이미지간 편차를 유발한다.For example, even if the mobile robot is set to photograph the inspection area of the vehicle at a designated inspection position due to its movable nature, deviations between inspection images captured each time occur due to slightly repeated positional errors of the mobile robot. In addition, this positional error of the robot causes a deviation between the reference image of the part set for assembly quality inspection and the inspection image.
이러한 모바일 로봇을 활용한 검사 이미지의 편차는 비전 검사 시스템의 품질 검사에 있어서 성능저하의 원인이 되므로 검사 성능 향상을 위한 개선 방안이 필요하다.Deviations in inspection images using such mobile robots cause performance degradation in the quality inspection of vision inspection systems, so improvement measures are needed to improve inspection performance.
이 배경기술 부분에 기재된 사항은 발명의 배경에 대한 이해를 증진하기 위하여 작성된 것으로서, 이 기술이 속하는 분야에서 통상의 지식을 가진 자에게 이미 알려진 종래기술이 아닌 사항을 포함할 수 있다.The matters described in this background art section have been prepared to enhance understanding of the background of the invention, and may include matters that are not prior art already known to those skilled in the art in the field to which this technology belongs.
본 발명의 실시예는 산업현장에서 모바일 로봇을 활용한 비전 검사 시 로봇의 반복 위치 오차를 반영한 이미지 편차를 자동으로 계산하고 이를 반영한 검사 알고리즘을 구축하여 검사부품별 비전 검사를 수행하는 모바일 로봇을 활용한 비전 검사 시스템 및 그 방법을 제공하는 것을 목적으로 한다.An embodiment of the present invention utilizes a mobile robot that automatically calculates the image deviation reflecting the repetitive position error of the robot during vision inspection using a mobile robot in an industrial field, builds an inspection algorithm reflecting this, and performs a vision inspection for each inspection part. The purpose is to provide a vision inspection system and method.
본 발명의 또다른 목적은 다양한 환경변화로 인한 이미지 편차를 반영한 어그멘테이션 범위를 자동으로 지정하여 생성된 신규 검사 알고리즘으로 기존 검사 알고리즘을 자동 교체하여 실시간으로 최적의 검사 품질을 유지하는 모바일 로봇을 활용한 비전 검사 시스템을 제공하는데 있다.Another purpose of the present invention is to develop a mobile robot that maintains optimal inspection quality in real time by automatically replacing the existing inspection algorithm with a new inspection algorithm created by automatically specifying an augmentation range that reflects image deviations due to various environmental changes. The goal is to provide a vision inspection system that utilizes
본 발명의 일 측면에 따르면, 모바일 로봇을 활용한 비전 검사 시스템은, 적어도 하나의 지정된 검사 포지션(Position, P)으로 이동하여 제품에 조립된 부품의 검사 영역을 촬영하는 모바일 로봇; 및 상기 모바일 로봇으로부터 촬영된 검사 이미지를 취득하고 검사 포지션별 로봇의 반복 위치 오차를 반영한 기준 이미지 대비 상기 검사 이미지의 이미지 편차를 계산하여 상기 이미지 편차의 범위를 반영한 이미지 어그멘테이션(Augmentation) 진행으로 학습된 검사 알고리즘을 통해 검사 포지션별로 상기 검사 이미지의 조립 품질을 평가하는 검사 서버;를 포함한다.According to one aspect of the present invention, a vision inspection system utilizing a mobile robot includes: a mobile robot that moves to at least one designated inspection position (Position, P) and photographs an inspection area of parts assembled in a product; And image augmentation is performed by acquiring the inspection image taken from the mobile robot, calculating the image deviation of the inspection image compared to the reference image reflecting the repeated position error of the robot for each inspection position, and reflecting the range of the image deviation. It includes an inspection server that evaluates the assembly quality of the inspection image for each inspection position through a learned inspection algorithm.
또한, 상기 모바일 로봇은, 상기 검사 포지션에서 촬영된 상기 검사 이미지를 생성하는 비전 센서 모듈; 센서류를 통해 주변을 탐지하는 자율주행 센서 모듈; 구동륜 또는 4족 보행을 통해 자유롭게 이동하는 이동 모듈; 촬영된 검사 이미지를 무선통신을 통해 상기 검사 서버로 전송하는 무선통신 모듈; 및 상기 비전 센서 모듈을 통해 작업자의 위치를 인식하고 상기 작업자를 따라 작업이 완료한 검사 포지션(P)으로 이동하면서 상기 비전 센서 모듈을 통해 상기 검사 이미지를 촬영하도록 제어하는 제어 모듈;을 포함한다.In addition, the mobile robot includes a vision sensor module that generates the inspection image captured at the inspection position; Autonomous driving sensor module that detects the surroundings through sensors; A mobile module that moves freely via drive wheels or quadrupedal walking; A wireless communication module that transmits the captured inspection image to the inspection server through wireless communication; And a control module that recognizes the position of the worker through the vision sensor module and controls the worker to capture the inspection image through the vision sensor module while moving to the inspection position (P) where the work has been completed.
The control module may store, according to the type and specifications of the product, the assembly positions of the parts to be assembled in the current process and at least one inspection position designated for capturing their inspection images.

The control module may also partition the area around the product into a plurality of zones in different directions and restrict movement so that the robot does not enter the worker zone where the worker is currently present.

The vision sensor module may be mounted on the mobile robot via a robot arm and, through posture control of the robot arm, may capture inspection images of both the exterior parts and the interior parts of the product.
The inspection server may include: a communication unit that acquires the inspection image from the mobile robot; an image processing unit that stores a reference image corresponding to each part's inspection image and generates a converted, corrected image by registering the inspection image against the reference image; a deep learning unit that builds an inspection algorithm for each inspected part of the process by deep-learning, in advance, the repetitive position error range for each inspection position of the mobile robot through image augmentation of the inspection image; a database (DB) that stores the programs and data for operating the inspection server; and a control unit that generates a new inspection algorithm by specifying the image augmentation range for the deep learning on the basis of the transformation matrix acquired during the image registration process.
During image registration, the image processing unit may calculate the registration error of the inspection image with respect to the reference image, calculate the image deviation between the two images, and extract the transformation matrix information obtainable when correcting the inspection image.

The transformation matrix information quantifies the image deviation and may include Translation for image movement, Rotation for image rotation, Scale for image enlargement/reduction, Tilt for image inclination, and Shear for image shearing.
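As an illustration only (the patent specifies no code or library), the five deviation terms can be composed into a single 3x3 planar transformation matrix. The function below is a minimal NumPy sketch with assumed parameter names; tilt is modeled as the perspective row of a homography.

    import numpy as np

    def deviation_matrix(tx, ty, theta, scale, shear_x, tilt_x=0.0, tilt_y=0.0):
        # Compose Translation, Rotation, Scale, Shear, and Tilt (perspective)
        # into one 3x3 matrix applied to homogeneous pixel coordinates.
        T = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        S = np.array([[scale, 0.0, 0.0], [0.0, scale, 0.0], [0.0, 0.0, 1.0]])
        Sh = np.array([[1.0, shear_x, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [tilt_x, tilt_y, 1.0]])
        return T @ R @ S @ Sh @ P

    # Example: 12 px right, 3.5 px up, 1.2 deg rotation, 2% zoom, slight shear.
    M = deviation_matrix(12.0, -3.5, np.deg2rad(1.2), 1.02, 0.01)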
The image processing unit may extract from the corrected image, as training data, at least one inspection part image corresponding to an inspection part region of interest (ROI) set in the reference image.

The deep learning unit may perform image augmentation using the inspection part image extracted by the image processing unit and the transformation matrix range, thereby multiplying the training images of that part with the repetitive position error range reflected.

The deep learning unit may deep-learn the inspection part image through the inspection algorithm of the corresponding part and output an inspection result of good (OK), defective (NG), or inspection error (NA).
The control unit may generate a new inspection algorithm that reflects, in real time, the image augmentation range for deep learning on the basis of the transformation matrix information acquired during image registration of the inspection image.

The control unit may also compare the inspection results of deep-learning retraining with the new inspection algorithm against the inspection results of the existing inspection algorithm and, when performance is judged to have improved, replace or update the existing inspection algorithm with the new one.
Meanwhile, according to one aspect of the present invention, a vision inspection method using a mobile robot to inspect a worker's manual assembly quality in real time in an automobile assembly process includes: acquiring, using the mobile robot, an inspection image captured at a specific inspection position around the product vehicle where inspection is required; registering the inspection image against a preset reference image and extracting an inspection part image from the converted, corrected image; performing a vision inspection by deep learning with the part-specific inspection algorithm corresponding to the inspection part image, the algorithm having been trained by automatically calculating the image deviation of the inspection image relative to the reference image reflecting the robot's repetitive position error for each inspection position and performing image augmentation reflecting the range of that deviation; and acquiring an inspection result of good (OK), defective (NG), or inspection error (NA) according to the vision inspection.

Acquiring the inspection result may further include, when the result is judged defective (NG) or an inspection error (NA), extracting misjudged or undecidable images through the operator's review of the result and storing them as evaluation data for verifying the performance of a new inspection algorithm.

Extracting the inspection part image may further include generating a new inspection algorithm by specifying the image augmentation range for the deep learning on the basis of the transformation matrix acquired during the image registration process.
Generating the new inspection algorithm may include: calculating the registration error of the inspection image with respect to the reference image; calculating the image deviation between the two images and extracting the transformation matrix obtainable when correcting the inspection image; calculating the distribution of the transformation matrix values extracted for the inspection position and storing their range in a DB; designating, on the basis of the repetitive position error range stored in the DB, the transformation matrix range for image augmentation during deep learning of the corresponding inspection part image; and deep-learning the plurality of training images multiplied by augmentation reflecting the mobile robot's repetitive position error to generate a new inspection algorithm for the part and retrain on the inspection part image.

Designating the matrix range may include specifying, on the basis of the repetitive position error range of the transformation matrix, the ranges for random-number generation of the Translation, Rotation, Scale, Tilt, and Shear transform values applied to the inspection part image during augmentation for deep learning.

Designating the matrix range may also include generating the random numbers according to a Gaussian normal distribution so that deep learning is weighted toward the image deviations most likely to arise from the mobile robot's repetitive error at each position.

After the deep-learning retraining, the method may further include comparing the retraining result with the evaluation data and, when performance is judged to have improved, replacing or updating the existing inspection algorithm with the new inspection algorithm.
According to embodiments of the present invention, when a vision inspection is performed with a mobile robot, the image deviation reflecting the mobile robot's repetitive position error is calculated automatically and an inspection algorithm reflecting it is generated and used for the inspection, improving inspection performance against the robot's repetitive position error.

In addition, during vision inspection, a new inspection algorithm is generated by specifying an augmentation range that reflects image deviations due to equipment aging over time or various environmental changes, and the existing inspection algorithm is replaced or updated, preventing degradation of inspection performance and maintaining optimal inspection quality in real time.

Furthermore, the mobile robot follows the worker while performing the vision inspection and is restricted from entering the zone where the worker is currently present, preventing collisions without obstructing the worker's movements.
FIG. 1 shows a vision inspection system using a mobile robot applied to a product assembly process according to an embodiment of the present invention.
FIG. 2 is a block diagram schematically showing the configuration of a vision inspection system using a mobile robot according to an embodiment of the present invention.
FIG. 3 is a diagram for explaining the image deviation problem arising in vision inspection with a mobile robot according to an embodiment of the present invention.
FIG. 4 shows an inspection image processing method for vision inspection according to an embodiment of the present invention.
FIG. 5 shows a transformation matrix according to an embodiment of the present invention.
FIG. 6 shows a method of building an inspection algorithm for each inspected part according to an embodiment of the present invention.
FIG. 7 shows a vision inspection method using the per-part inspection algorithms according to an embodiment of the present invention.
FIG. 8 is a flowchart schematically showing a vision inspection method using a mobile robot according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that a person of ordinary skill in the art to which the present invention belongs can easily practice them.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, singular forms are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Throughout the specification, terms such as first, second, A, B, (a), and (b) may be used to describe various elements, but the elements are not limited by these terms. Such terms serve only to distinguish one element from another and do not limit the nature, sequence, or order of the elements.

Throughout the specification, when an element is referred to as being 'connected' or 'coupled' to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being 'directly connected' or 'directly coupled' to another element, it should be understood that no intervening elements are present.

Additionally, it is understood that one or more of the methods below, or aspects thereof, may be executed by at least one controller. The term 'controller' may refer to a hardware device that includes a memory and a processor. The memory is configured to store program instructions, and the processor is specifically programmed to execute those instructions to perform one or more processes described in more detail below. The controller may control the operation of units, modules, components, devices, or the like, as described herein. It is also understood that the methods below may be executed by an apparatus comprising the controller together with one or more other components, as would be appreciated by a person of ordinary skill in the art.
In addition, since various terms relating to "images" are used throughout the specification, they are defined as follows.

An inspection image is the original image acquired when, during an actual vision inspection, the mobile robot photographs the inspection area at a specific inspection position (e.g., P1, P2, …, Pn).

A reference image is the optimal image of the inspection area captured after positioning the mobile robot precisely at a pre-designated inspection position (e.g., P1, P2, …, Pn); during vision inspection it serves as the standard for evaluating the assembly quality of each part in the inspection image.

A corrected image is an image converted to closely approximate the reference image by correcting the deviation of the inspection image from the reference image.

An inspection part image is an individual part image extracted from the corrected image in correspondence with an inspection part ROI set in the reference image.

A training image is an element of the data set for deep learning, multiplied by converting the inspection part image through image augmentation.
A vision inspection system using a mobile robot according to an embodiment of the present invention, and a method thereof, will now be described in detail with reference to the drawings.

FIG. 1 shows a vision inspection system using a mobile robot applied to a product assembly process according to an embodiment of the present invention.

FIG. 2 is a block diagram schematically showing the configuration of a vision inspection system using a mobile robot according to an embodiment of the present invention.
Referring to FIGS. 1 and 2, an inspection server 100 according to an embodiment of the present invention evaluates the quality of a worker's part assembly in a product assembly process at an industrial site. The system is characterized by a mobile robot 10 that moves to at least one designated inspection position (P) and photographs the inspection area of a part assembled on the product, and by the inspection server 100 acquiring the inspection image captured by the mobile robot 10, automatically calculating the image deviation of the inspection image relative to a reference image reflecting the robot's repetitive position error for each inspection position, and evaluating the assembly quality in the inspection image for each inspection position through an inspection algorithm trained by performing image augmentation that reflects the range of that deviation.

Here, the product is transported to a set work position by a transport means 20 of the smart factory, and at least one part assigned to a worker is assembled on it. Hereinafter the product is assumed to be a "vehicle," where the vehicle may be a car body in the process of being assembled into a finished car or some component of that body (e.g., a door, dashboard, or interior part). The transport means 20 may be a conveyor, a logistics transfer robot, or the like.

The mobile robot 10 moves to a designated inspection position (P) and transmits the captured inspection image to the inspection server 100. Here, the inspection positions (P) are shooting positions/points located within a plurality of work zones (e.g., first to sixth zones) partitioned in different directions around the vehicle, and may include, for example, a first inspection position P1, a second inspection position P2, …, and a sixth inspection position P6, designated per work zone. However, the numbers of work zones and inspection positions in embodiments of the present invention are not limited thereto, and a plurality of inspection positions may be designated within the same work zone according to the assembly positions of the parts.
The mobile robot 10 may be configured as a quadrupedal walking robot, an autonomous mobile robot (AMR), or the like. Since an AMR moves on drive wheels and is therefore restricted to flat floors, the preferred embodiment of the present invention is described assuming a quadrupedal walking robot (also called a "robot dog" or "Spot"), which can move with greater freedom over stairs or irregular rough terrain.

As shown in FIG. 2, the mobile robot 10 includes a vision sensor module 11, an autonomous-driving sensor module 12, a locomotion module 13, a wireless communication module 14, and a control module 15.
The vision sensor module 11 is mounted on the robot and generates the inspection image captured at the designated inspection position (P).

The vision sensor module 11 is mounted on the mobile robot 10 via a robot arm 11-1 to allow freedom in changing the shooting position. Through posture (motion) control of the robot arm 11-1, the vision sensor module 11 can photograph not only the vehicle's exterior parts but also parts inside the vehicle.
The autonomous-driving sensor module 12 includes at least one of a camera, laser, ultrasonic, radar, lidar, and position recognition device for autonomous driving, and can detect the surroundings and recognize workers and objects.

The locomotion module 13 includes four legs and can move freely over stairs or irregular surfaces by quadrupedal walking.

The wireless communication module 14 transmits the inspection image captured by the vision sensor module 11 to the inspection server 100 over a wireless link and, when necessary, can receive control signals from the inspection server 100.

The control module 15 controls the overall operation of the mobile robot 10 according to an embodiment of the present invention.
When the vehicle is transported to the work position, the worker assembles at least one part assigned to him or her onto the vehicle. The parts include body, trim, and electrical components assembled on the vehicle, or fastening parts such as bolts, nuts, and rivets that have designated assembly positions.

The control module 15 stores, according to the type and specifications of the vehicle, the assembly positions of the parts to be assembled in the current process and at least one inspection position (P1, P2, …, P6) designated for capturing their inspection images.

The control module 15 recognizes the worker's position through the vision sensor module 11, follows the worker to each inspection position (P1, P2, …, P6) where work has been completed, and controls the vision sensor module 11 to capture the inspection image there.
For example, as shown in FIG. 1, when the worker has finished the first, second, and third zones and is working in the fourth zone, the mobile robot 10 can move through P1, P2, and P3 in order, capture the inspection images, and then wait.

At this time, the control module 15 restricts movement and makes the robot wait so that it does not enter the worker zone where the worker is currently present (i.e., the fourth zone), thereby preventing collisions without obstructing the worker's movements.
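A simplified, purely illustrative sketch of this rule follows (function and variable names are assumptions, not taken from the patent): the robot only visits inspection positions in zones the worker has finished and already left.

    def pending_positions(completed_zones, worker_zone, captured):
        # Positions the robot may visit now: one position per finished zone,
        # skipping the zone the worker occupies and any position already shot.
        return [f"P{z}" for z in completed_zones
                if z != worker_zone and f"P{z}" not in captured]

    # Worker finished zones 1-3 and is working in zone 4 (cf. FIG. 1):
    print(pending_positions([1, 2, 3], 4, captured=set()))    # ['P1', 'P2', 'P3']
    print(pending_positions([1, 2, 3, 4], 4, {"P1", "P2"}))   # ['P3']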
In addition, a surveillance camera 30 may further be provided in the assembly process area to monitor the worker's position for safety and to transmit to the inspection server 100 an event indicating that the mobile robot 10 has entered the worker zone.

Accordingly, upon receiving the entry event, the inspection server 100 immediately transmits a stop signal to the mobile robot 10 to restrict its movement into the worker zone, thereby ensuring the worker's safety.
Meanwhile, when the inspection server 100 acquires an inspection image from the mobile robot 10, it detects feature points, transforms the image to approximate the reference image, crops the inspection part regions of interest (ROI), and inspects the extracted inspection part image(s) through a deep learning vision inspection program.

However, when a vision inspection is performed with the mobile robot 10, the inspection image acquired, which depends on the positions of the mobile robot 10 and of the inspected part, deviates from the reference image because of the position error that occurs at the time of shooting.
For example, FIG. 3 is a diagram for explaining the image deviation problem arising in vision inspection with a mobile robot according to an embodiment of the present invention.

Referring to FIG. 3, a reference image 121 is the optimal image of the inspection area captured after positioning the mobile robot 10 precisely at the first inspection position P1 in advance.

The reference image 121 serves as the standard for evaluating the assembly quality of the parts in the corresponding inspection image during vision inspection, and at least one inspection part ROI, together with an overall ROI containing them, is set in it.
However, because the mobile robot 10 is mobile equipment, unlike conventional fixed equipment, even when it is set to move to the designated inspection positions (P1, P2, …, P6) and photograph the inspection area of the car body, a subtly repeated position error (hereinafter, "repetitive position error") causes deviations in the inspection images captured each time, which can degrade inspection performance.

That is, as shown in FIG. 3, when the mobile robot 10 moves to the first inspection position P1 each time and acquires an inspection image, repetitive position errors of varying magnitude (e.g., 10 mm to 500 mm) occur.

Consequently, when the mobile robot 10 shoots from its actual position at each inspection run, the first, second, and third inspection images each deviate differently from the reference image 121. Moreover, when a robot arm 11-1 is attached to the mobile robot 10 for freedom in image capture, the deviation of the captured inspection images can increase further, according to both the repetitive position error and the position error of the robot arm 11-1.

In this way, inspection images acquired from the mobile robot 10 exhibit image deviation due to the robot's repetitive position error; the translation, rotation, scale, tilt, and shear of the image caused by the repetitive position error are what is reflected in this deviation.
Taking these problems together, when the inspection server 100 performs a vision inspection using the mobile robot 10, the image transformation conditions for correcting the deviation of an acquired inspection image change from run to run because of the robot's repetitive position error, so an inspection algorithm reflecting the robot's repetitive position error must be generated for each part at each inspection position (P).

Accordingly, the inspection server 100 according to an embodiment of the present invention aims to automatically generate an inspection algorithm through deep learning that reflects the robot's image deviation range (repetitive position error) for each part at each inspection position (P) of the mobile robot 10.

To this end, the inspection server 100 generates an inspection algorithm by performing image augmentation reflecting the repetitive position error range of the mobile robot 10 during deep learning, and improves inspection performance by performing the vision inspection with an inspection algorithm that accounts for that error.
The inspection server 100 includes a communication unit 110, an image processing unit 120, a deep learning unit 130, a database (DB) 140, and a control unit 150.

The communication unit 110 includes wired/wireless communication means and acquires the inspection image from the mobile robot 10.

The image processing unit 120 stores the reference image corresponding to each part's inspection image, registers the acquired inspection image against that reference image, and generates a converted, corrected image.
FIG. 4 shows an inspection image processing method for vision inspection according to an embodiment of the present invention.

FIG. 5 shows a transformation matrix according to an embodiment of the present invention.

Referring to FIGS. 4 and 5, during image registration the image processing unit 120 calculates the registration error of the inspection image with respect to the reference image, calculates the image deviation between the two images, and extracts the transformation matrix information obtainable when correcting the inspection image. Here, the transformation matrix information quantifies the image deviation and includes Translation for image movement, Rotation for image rotation, Scale for image enlargement/reduction, Tilt for image inclination, and Shear for image shearing.
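As a hedged sketch of how such a registration step could be realized (the patent names no library; OpenCV and the feature-matching/RANSAC approach below are assumptions), the inspection image is aligned to the reference and the estimated matrix is decomposed into the stored deviation terms. estimateAffinePartial2D covers translation, rotation, and scale; a full affine or homography estimate would additionally capture shear and tilt.

    import cv2
    import numpy as np

    def register(inspection, reference):
        # Match ORB features between the inspection and reference images.
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(inspection, None)
        k2, d2 = orb.detectAndCompute(reference, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        # Robustly estimate the matrix mapping inspection -> reference.
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        corrected = cv2.warpAffine(inspection, M, reference.shape[1::-1])
        # Decompose into the per-position deviation terms kept in the DB.
        deviation = {"tx": M[0, 2], "ty": M[1, 2],
                     "rot_deg": np.degrees(np.arctan2(M[1, 0], M[0, 0])),
                     "scale": float(np.hypot(M[0, 0], M[1, 0]))}
        return corrected, deviation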
The image processing unit 120 extracts from the corrected image, as training data, at least one inspection part image corresponding to an inspection part ROI set in the reference image.
Meanwhile, FIG. 6 shows a method of building an inspection algorithm for each inspected part according to an embodiment of the present invention.

FIG. 7 shows a vision inspection method using the per-part inspection algorithms according to an embodiment of the present invention.

Referring to FIGS. 6 and 7, the deep learning unit 130 builds an inspection algorithm for each inspected part (P1-1, P1-2, P2, …, P6) of the process by learning in advance, through image augmentation of the inspection image, the repetitive position error range for each inspection position (P1, P2, …, P6) of the mobile robot 10.
At this time, the deep learning unit 130 performs image augmentation using the inspection part image extracted by the image processing unit 120 and the transformation matrix range, multiplying the training images of that part with the robot's repetitive position error range reflected. For a training image, the ranges for random-number generation of the image's translation, rotation, scale, tilt, and shear transform values are specified according to the transformation matrix range. Here, the same transformation matrix range can be applied to the plurality of inspection part images contained in one inspection image when performing the augmentation, as sketched below.
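A minimal sketch of this bounded augmentation follows (parameter names and range values are illustrative assumptions). For simplicity it draws uniformly inside the measured range; the embodiment described later at step S124 instead weights the draws with a Gaussian distribution.

    import random

    def augment_params(tm_range):
        # One random transform drawn inside the per-position deviation range.
        return {term: random.uniform(lo, hi) for term, (lo, hi) in tm_range.items()}

    p1_range = {"tx": (-8.0, 8.0), "ty": (-5.0, 5.0), "rot": (-1.5, 1.5),
                "scale": (0.97, 1.03), "shear": (-0.02, 0.02)}
    train_params = [augment_params(p1_range) for _ in range(500)]  # 500 variants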
The deep learning unit 130 may also multiply random-scale images reflecting the robot's repetitive position error range and apply them to the training images.

In this way, by generating the inspection algorithm through the multiplication of diverse training images reflecting the robot's repetitive position error range at a designated inspection position, the area the mobile robot can inspect is expanded around that position. Therefore, even if a repetitive position error of the mobile robot occurs within that inspectable area, inspection errors can be prevented and inspection performance improved.
The deep learning unit 130 deep-learns the inspection part image through the inspection algorithm of the corresponding part and outputs an inspection result of good (OK), defective (NG), or inspection error (NA).

For example, on input of a P1-1 inspection part image, the deep learning unit 130 performs a vision inspection through the corresponding P1-1 algorithm and outputs the inspection result.

Here, the inspection result can be judged according to the similarity ratio (%) by which the P1-1 inspection part image is closer to the good reference image of a properly assembled part (P1-1(OK)) or to the defective reference image of an unassembled part (P1-1(NG)). However, if the P1-1 inspection part image is not similar to either the good reference image (P1-1(OK)) or the defective reference image (P1-1(NG)), an inspection error (NA) can be output because no judgment is possible.

Furthermore, the deep learning unit 130 can judge good/defective (OK/NG) on the condition that the first inspection part image (P1-1) is at or above, or below, a certain similarity ratio (e.g., 80%) to the good reference image (P1-1(OK)).
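The three-way judgment can be pictured with the short sketch below; the 80% cut-off and the similarity scores are illustrative values consistent with the example ratio above, not fixed by the patent.

    def judge(sim_ok, sim_ng, threshold=0.80):
        # sim_ok / sim_ng: similarity of the part image to the OK / NG references.
        if max(sim_ok, sim_ng) < threshold:
            return "NA"                      # resembles neither reference
        return "OK" if sim_ok >= sim_ng else "NG"

    print(judge(0.93, 0.41))  # OK
    print(judge(0.35, 0.88))  # NG
    print(judge(0.52, 0.49))  # NA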
The deep learning unit 130 may be configured as an artificial neural network-based program.

The DB 140 stores the various programs and data required to operate the inspection server 100 using a mobile robot according to an embodiment of the present invention, and stores the data generated by its operation.

The DB 140 stores the inspection algorithm of each inspected part (P1-1, P1-2, P2, …, P6) of the process and, as the vision inspection is repeated, can replace or additionally update the algorithms with newly generated ones.
The control unit 150 is a central processing unit that controls the overall operation of the inspection server 100 performing the vision inspection using the mobile robot according to an embodiment of the present invention. That is, the control unit 150 can control each of the above units of the server 100 by executing the various programs stored in the DB 140.

The control unit 150 may be implemented as one or more processors operating according to a set program, and the set program may be programmed to perform each step of the vision inspection method using a mobile robot according to an embodiment of the present invention.

This vision inspection method using the mobile robot is described in more detail with reference to the following drawing.

FIG. 8 is a flowchart schematically showing a vision inspection method using a mobile robot according to an embodiment of the present invention.

Referring to FIG. 8, the vision inspection method using a mobile robot according to an embodiment of the present invention is described assuming the scenario explained above with FIG. 1: the mobile robot 10, operating for vision inspection of a manual part-assembly process, moves among the first to sixth inspection positions (P1 to P6) designated around the product vehicle and captures inspection images of the parts assembled on the vehicle.
The control unit 150 of the inspection server 100 uses the operating mobile robot 10 to acquire an inspection image captured at a specific inspection position around the product vehicle where inspection is required (S110). For convenience, the following description assumes the control unit 150 has acquired from the mobile robot 10 the inspection image captured at the first inspection position P1.

The control unit 150 registers the inspection image acquired from the mobile robot 10 to fit the inspection part ROI of the preset reference image and generates a converted, corrected image. It then extracts from the corrected image at least one inspection part image corresponding to the inspection part ROI of the reference image (S120). For example, the inspection part image carries the part number (P1-1) corresponding to the first inspection position (P1), through which the inspected part (P1-1) subject to vision inspection can be identified.
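The ROI cropping in step S120 could look like the following sketch (the ROI storage format and names are assumptions); the crop coordinates defined on the reference image are valid on the corrected image precisely because registration has already been performed.

    import numpy as np

    def crop_rois(corrected, rois):
        # rois: {"P1-1": (x, y, w, h), ...} defined on the reference image.
        return {pid: corrected[y:y + h, x:x + w]
                for pid, (x, y, w, h) in rois.items()}

    img = np.zeros((1080, 1920), dtype=np.uint8)            # stand-in corrected image
    parts = crop_rois(img, {"P1-1": (400, 300, 256, 256)})
    print(parts["P1-1"].shape)                              # (256, 256)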
The control unit 150 maintains, built in advance, a deep-learned inspection algorithm for each assembled part (P1-1, P1-2, P2, …, P6) of the process: the image deviation of the inspection image relative to the reference image reflecting the robot's repetitive position error for each inspection position is calculated automatically, and automatic image augmentation reflecting the range of that deviation is performed for training.

The control unit 150 performs the vision inspection by deep learning with the first inspected part (P1-1) inspection algorithm corresponding to the first inspection part image (P1-1) (S130).
According to the vision inspection, the control unit 150 can acquire an inspection result of good (OK), defective (NG), or inspection error (NA) (S140).

At this time, if the result is judged good (OK), the control unit 150 ends the current inspection and returns, repeating the vision inspection for the next part inspection images (P1-2, P2, …, P6); if no next part inspection image exists, the inspection ends.

On the other hand, if the result is judged defective (NG) or an inspection error (NA), the control unit 150 extracts and stores misjudged images (e.g., OK/NG misjudgments) or undecidable images through the operator's review of the result and uses them as evaluation data for verifying the performance of the new inspection algorithm described below (S160).

In this way, the control unit 150 learns the repetitive error range of each inspection position of the mobile robot 10 in advance, builds the per-part (P1-1, P1-2, P2, …, P6) inspection algorithms, and performs the vision inspection with them, correcting the image deviation due to the mobile robot 10's repetitive position error and thereby improving the performance of evaluating each part's assembly quality.
Meanwhile, an already-built inspection algorithm (hereinafter the "existing inspection algorithm") may, over time, face image deviations caused by various environmental changes in process operation, such as aging of the process equipment including the mobile robot 10 or changes in the mounting position of the vision sensor module 11, and inspection performance may degrade as a result.

To prevent this degradation, the control unit 150 generates a new inspection algorithm in which the image augmentation range for deep learning is optimized in real time on the basis of the transformation matrix information acquired during the image registration process of step S120.

By replacing or additionally updating the existing inspection algorithm with a new one reflecting these various environmental changes, degradation of the inspection algorithm due not only to the mobile robot's repetitive position error but also to environmental changes such as equipment aging is prevented, and inspection performance can be improved further.

The method of generating the new per-part inspection algorithm according to an embodiment of the present invention is now described in detail.
During the image registration process, the control unit 150 calculates the registration error of the inspection image with respect to the reference image (S121). It then calculates the image deviation between the two images and extracts the transformation matrix obtainable when correcting the inspection image (S122). Here, the transformation matrix quantifies the image deviation and includes Translation for image movement, Rotation for image rotation, Scale for image enlargement/reduction, Tilt for image inclination, and Shear for image shearing.

The control unit 150 calculates the distribution of the transformation matrix values extracted for the mobile robot 10's inspection (shooting) position and stores the range in the DB 140 (S123).

On the basis of the repetitive position error range stored in the DB 140, the control unit 150 designates the transformation matrix range for image augmentation during deep learning of the corresponding inspection part image (S124). That is, based on the repetitive position error range of the transformation matrix, the control unit 150 can specify the ranges for random-number generation of the translation, rotation, scale, tilt, and shear transform values applied to the inspection part image during augmentation. In particular, when designating the transformation matrix range, the control unit 150 generates the random numbers according to a Gaussian normal distribution, so that deep learning is weighted toward the image deviations most likely to arise from the mobile robot's repetitive error at each position.
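The Gaussian weighting of step S124 can be sketched as follows (function and variable names assumed): deviations near the center of the measured distribution are drawn more often, so the training set is biased toward the deviations the robot is most likely to repeat at that position.

    import numpy as np

    def sample_term(observed, n=1000, clip_sigma=3.0):
        # observed: deviations of one matrix term measured at one inspection position.
        mu, sigma = np.mean(observed), np.std(observed)
        draws = np.random.normal(mu, sigma, size=n)
        return np.clip(draws, mu - clip_sigma * sigma, mu + clip_sigma * sigma)

    observed_tx = [3.1, 4.0, 2.7, 3.6, 3.3]           # translation-x at P1, in px
    tx_samples = sample_term(observed_tx, n=500)      # most samples cluster near 3.3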
The control unit 150 deep-learns the plurality of training images multiplied by augmentation reflecting the mobile robot 10's repetitive position error to generate a new inspection algorithm for the part, and retrains on the inspection part image using the new inspection algorithm (S125).

That is, during deep learning the control unit 150 generates a new per-part inspection algorithm through image augmentation reflecting the mobile robot 10's position-specific error range, and can improve inspection performance through deep-learning retraining with it.

The control unit 150 compares the retraining inspection results with the evaluation data of the existing inspection algorithm prepared for evaluating the performance of the new inspection algorithm (S126).
At this time, if the evaluation performance of the new inspection algorithm is better than that of the previously built existing inspection algorithm (e.g., existing P1-1 algorithm < new P1-1 algorithm), the control unit 150 automatically replaces the existing inspection algorithm with the new one (S126; Yes). For instance, if an image that produced an inspection error (NA) under the existing inspection algorithm can be judged OK/NG with the new inspection algorithm, evaluation performance is deemed improved and the replacement is made automatically.

On the other hand, if the evaluation performance has not improved over the existing inspection algorithm, the control unit 150 keeps the existing inspection algorithm (S126; No).
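A minimal sketch of this replace-or-keep decision (the metric and names are assumptions; any accuracy-style score over the stored evaluation images would serve):

    def maybe_replace(existing_algo, new_algo, eval_data):
        # eval_data: (image, expected_label) pairs stored at step S160.
        old_score = sum(existing_algo(img) == label for img, label in eval_data)
        new_score = sum(new_algo(img) == label for img, label in eval_data)
        # e.g. a former NA image now judged OK/NG raises new_score past old_score.
        return (new_algo, "replaced") if new_score > old_score else (existing_algo, "kept")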
As described above, according to embodiments of the present invention, when a vision inspection is performed with a mobile robot, the image deviation reflecting the mobile robot's repetitive position error is calculated automatically and an inspection algorithm reflecting it is generated and used for the inspection, improving inspection performance against the robot's repetitive position error.

In addition, during vision inspection, a new inspection algorithm is generated by specifying an augmentation range that reflects image deviations due to equipment aging over time or various environmental changes, and the existing inspection algorithm is replaced or updated, preventing degradation of inspection performance and maintaining optimal inspection quality.

Furthermore, the mobile robot follows the worker while performing the vision inspection and is restricted from entering the zone where the worker is currently present, preventing collisions without obstructing the worker's movements.
Embodiments of the present invention are not implemented only through the apparatus and/or method described above; they may also be implemented through a program realizing functions corresponding to the configuration of the embodiments, a recording medium on which that program is recorded, and the like, and such implementations can readily be made by an expert in the technical field to which the present invention belongs from the description of the embodiments above.

Although embodiments of the present invention have been described in detail above, the scope of the present invention is not limited thereto, and various modifications and improvements made by those skilled in the art using the basic concept of the present invention defined in the following claims also fall within the scope of the present invention.

Claims (20)

  1. A vision inspection system using a mobile robot, comprising:
    a mobile robot that moves to at least one designated inspection position (P) and photographs an inspection area of a part assembled on a product; and
    an inspection server that acquires the inspection image captured by the mobile robot, calculates the image deviation of the inspection image from a reference image reflecting the robot's repeat position error for each inspection position, and evaluates the assembly quality in the inspection image for each inspection position through an inspection algorithm trained by image augmentation reflecting the range of the image deviation.
  2. The vision inspection system using a mobile robot of claim 1, wherein the mobile robot comprises:
    a vision sensor module that generates the inspection image captured at the inspection position;
    an autonomous-driving sensor module that senses the surroundings through its sensors;
    a movement module that moves freely by means of drive wheels or quadruped walking;
    a wireless communication module that transmits the captured inspection image to the inspection server over wireless communication; and
    a control module that recognizes the position of the worker through the vision sensor module and, following the worker, controls the vision sensor module to capture the inspection image while moving to an inspection position (P) where work has been completed.
  3. The vision inspection system using a mobile robot of claim 2, wherein the control module stores, according to the type and specifications of the product, the assembly positions of the parts to be assembled in the relevant process and at least one inspection position designated for capturing their inspection images.
  4. The vision inspection system using a mobile robot of claim 2, wherein the control module divides the area around the product into a plurality of zones in several directions and restricts the robot's movement so that it does not enter the worker zone where the worker is currently present.
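One way to picture the zone restriction of claim 4 — a minimal sketch under the assumption that the zones are equal angular sectors around the product center, which the claim does not specify:

```python
# Sketch of the zone restriction: split the area around the product into
# angular sectors and reject robot waypoints that fall in the sector
# currently occupied by the worker. Geometry is illustrative only.
import math

def sector(point, product_center, n_sectors=4):
    """Index of the angular sector (0..n_sectors-1) containing a 2D point."""
    dx, dy = point[0] - product_center[0], point[1] - product_center[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sectors))

def is_waypoint_allowed(waypoint, worker_pos, product_center, n_sectors=4):
    """Allow movement only outside the worker's current sector."""
    return sector(waypoint, product_center, n_sectors) != sector(
        worker_pos, product_center, n_sectors
    )
```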
  5. The vision inspection system using a mobile robot of claim 2, wherein the vision sensor module is mounted on the mobile robot via a robot arm and captures inspection images of the product's exterior parts and interior parts through posture control of the robot arm.
  6. The vision inspection system using a mobile robot of claim 1, wherein the inspection server comprises:
    a communication unit that acquires the inspection image from the mobile robot;
    an image processing unit that stores a reference image corresponding to each part's inspection image and generates a converted corrected image by registering the inspection image against the reference image;
    a deep learning unit that builds an inspection algorithm for each inspected part of the process by deep-learning, in advance, the repeat position error range for each inspection position of the mobile robot through image augmentation of the inspection image;
    a database (DB) that stores programs and data for operating the inspection server; and
    a control unit that generates a new inspection algorithm by designating an image augmentation range for the deep learning on the basis of the transformation matrix acquired in the image registration process.
  7. The vision inspection system using a mobile robot of claim 6, wherein, during the image registration, the image processing unit calculates an image registration error of the inspection image with respect to the reference image, calculates the image deviation between the two images, and extracts the transformation matrix information obtainable when correcting the inspection image.
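As an illustration of the registration described in claim 7 (not the patented implementation), a transformation matrix can be extracted with standard feature matching; the sketch below uses OpenCV's ORB features and a RANSAC affine fit, one common choice:

```python
# Sketch: register an inspection image against its reference image and
# recover the transformation matrix that quantifies the image deviation.
import cv2
import numpy as np

def extract_transformation_matrix(reference_gray, inspection_gray):
    orb = cv2.ORB_create(1000)
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_ins, des_ins = orb.detectAndCompute(inspection_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ins, des_ref), key=lambda m: m.distance)

    src = np.float32([kp_ins[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # 2x3 affine matrix mapping the inspection image onto the reference;
    # its entries quantify the registration error (image deviation).
    matrix, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return matrix
```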
  8. The vision inspection system using a mobile robot of claim 7, wherein the transformation matrix information quantifies the image deviation and includes Translation for image movement, Rotation for image rotation, Scale for image enlargement/reduction, Tilt for image tilt conversion, and Shear for image shearing.
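For illustration, the five components named in claim 8 can be composed into a single matrix in homogeneous 2D coordinates. The parameterization below is an assumption (tilt is modeled as a perspective term, making the matrix 3x3); the specification may define these components differently:

```python
# Sketch: compose Translation, Rotation, Scale, Shear, and Tilt into one
# 3x3 transformation matrix (homogeneous 2D coordinates).
import numpy as np

def compose_matrix(tx, ty, theta, scale, shear_x, tilt_x=0.0, tilt_y=0.0):
    translation = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)
    scaling = np.diag([scale, scale, 1.0])
    shear = np.array([[1, shear_x, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    tilt = np.array([[1, 0, 0], [0, 1, 0], [tilt_x, tilt_y, 1]], dtype=float)
    return translation @ rotation @ scaling @ shear @ tilt
```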
  9. The vision inspection system using a mobile robot of claim 6, wherein the image processing unit extracts from the corrected image, as learning data, at least one inspected-part image corresponding to an inspected-part region of interest (ROI) set in the reference image.
  10. The vision inspection system using a mobile robot of any one of claims 6 to 9, wherein the deep learning unit performs image augmentation using the inspected-part image extracted by the image processing unit and the transformation matrix range, thereby increasing the number of learning images of the inspected part in which the repeat position error range is reflected.
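A minimal sketch of the augmentation step of claim 10, assuming the stored ranges are simple per-parameter intervals (field names such as `rotation_deg` are hypothetical):

```python
# Sketch: multiply the learning data by warping one inspected-part image
# with random parameters drawn inside the repeat-position-error bounds.
import cv2
import numpy as np

def augment(part_image, ranges, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    h, w = part_image.shape[:2]
    center = (w / 2, h / 2)
    out = []
    for _ in range(n_samples):
        angle = rng.uniform(*ranges["rotation_deg"])
        scale = rng.uniform(*ranges["scale"])
        m = cv2.getRotationMatrix2D(center, angle, scale)  # 2x3 affine
        m[0, 2] += rng.uniform(*ranges["tx_px"])           # add translation x
        m[1, 2] += rng.uniform(*ranges["ty_px"])           # add translation y
        out.append(cv2.warpAffine(part_image, m, (w, h)))
    return out
```

In this reading, `ranges` would be filled from the per-position distribution database described later in claim 17.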
  11. The vision inspection system using a mobile robot of claim 10, wherein the deep learning unit deep-learns the inspected-part image through the inspection algorithm of the corresponding part and outputs an inspection result of one of good (OK), defective (NG), and inspection error (NA).
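One plausible reading of the three-way output of claim 11 — and it is only an assumption, since the claim does not say how NA is produced — is an OK/NG classifier whose low-confidence predictions are deferred as inspection error (NA):

```python
# Sketch (assumption): map low-confidence OK/NG predictions to NA so that
# uncertain cases are deferred to operator review rather than forced.
import numpy as np

def judge(probabilities, na_threshold=0.8):
    """probabilities: softmax output over [OK, NG] for one part image."""
    labels = ("OK", "NG")
    best = int(np.argmax(probabilities))
    if probabilities[best] < na_threshold:
        return "NA"  # inspection error: defer to operator review
    return labels[best]
```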
  12. The vision inspection system using a mobile robot of claim 11, wherein the control unit generates a new inspection algorithm reflecting, in real time, an image augmentation range for deep learning based on the transformation matrix information acquired in the image registration process of the inspection image.
  13. The vision inspection system using a mobile robot of claim 11, wherein the control unit compares the inspection results of deep-learning retraining with the new inspection algorithm against the inspection results of the existing inspection algorithm and, when performance is judged to have improved, replaces or updates the existing inspection algorithm with the new inspection algorithm.
  14. A vision inspection method using a mobile robot for inspecting, in real time, the quality of a worker's manual assembly in a vehicle assembly process, the method comprising:
    acquiring, using a mobile robot, an inspection image captured at a specific inspection position requiring inspection around the product vehicle;
    registering the inspection image against a preset reference image and extracting an inspected-part image from the converted corrected image;
    storing an inspection algorithm trained by automatically calculating the image deviation of the inspection image from a reference image reflecting the robot's repeat position error for each inspection position and performing image augmentation reflecting the range of that deviation, and performing a vision inspection by deep learning with the inspected-part inspection algorithm corresponding to the inspected-part image; and
    acquiring an inspection result of one of good (OK), defective (NG), and inspection error (NA) according to the vision inspection.
  15. The vision inspection method using a mobile robot of claim 14, wherein acquiring the inspection result further comprises, when the inspection result is determined to be defective (NG) or an inspection error (NA), extracting misjudged or undeterminable images through the operator's review of the inspection result and storing them as evaluation data for verifying the performance of a new inspection algorithm.
  16. The vision inspection method using a mobile robot of claim 14 or 15, wherein extracting the inspected-part image further comprises generating a new inspection algorithm by designating an image augmentation range for the deep learning based on the transformation matrix acquired in the image registration process.
  17. The vision inspection method using a mobile robot of claim 16, wherein generating the new inspection algorithm comprises:
    calculating an image registration error of the inspection image with respect to the reference image;
    calculating the image deviation between the two images and extracting a transformation matrix obtainable when correcting the inspection image;
    calculating the distribution of the transformation matrix values extracted for the inspection position and storing their range in a database (DB);
    designating, based on the repeat position error range stored in the DB, the transformation matrix range for image augmentation during deep learning of the corresponding inspected-part image; and
    deep-learning the plurality of learning images augmented to reflect the mobile robot's repeat position error, generating a new inspection algorithm for the corresponding part, and retraining the inspected-part image with it.
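The range-DB step of claim 17 can be sketched as accumulating the transformation parameters observed at one inspection position and storing their distribution; the storage layout and field names below are hypothetical:

```python
# Sketch: summarize observed transformation parameters per inspection
# position into a range record usable as an augmentation bound.
import numpy as np

def build_range_db(observed_params):
    """observed_params: dict of lists, e.g. {"tx_px": [...], "rotation_deg": [...]}."""
    db = {}
    for name, values in observed_params.items():
        v = np.asarray(values, dtype=float)
        db[name] = {
            "mean": float(v.mean()),
            "std": float(v.std()),
            "range": (float(v.min()), float(v.max())),
        }
    return db
```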
  18. The vision inspection method using a mobile robot of claim 17, wherein designating the matrix range comprises designating, based on the repeat position error range of the transformation matrix, the ranges used when generating random values for the Translation, Rotation, Scale, Tilt, and Shear transformations applied to the inspected-part image by the augmentation during deep learning.
  19. The vision inspection method using a mobile robot of claim 18, wherein designating the matrix range comprises generating random numbers according to a Gaussian normal distribution so that deep learning is performed with greater weight on the image deviations that, for each position, are most likely to arise from the mobile robot's repeat error.
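A sketch of the Gaussian sampling of claim 19, assuming a normal distribution fitted to the deviations observed at each position so that deviations the robot is likely to repeat are drawn more often; the clipping to the stored range is an added safeguard, not taken from the claim:

```python
# Sketch: draw an augmentation parameter from a Gaussian centered on the
# most frequent deviation at this position, clipped to the stored range.
import numpy as np

def sample_param(stats, rng):
    """stats: {"mean": m, "std": s, "range": (lo, hi)} from the range DB."""
    value = rng.normal(stats["mean"], stats["std"])
    lo, hi = stats["range"]
    return float(np.clip(value, lo, hi))

rng = np.random.default_rng(42)
# Example: a rotation angle weighted toward the most frequent deviation.
angle = sample_param({"mean": 0.4, "std": 0.2, "range": (-1.0, 1.0)}, rng)
```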
  20. The vision inspection method using a mobile robot of claim 17, further comprising, after the deep-learning retraining, comparing the retraining result with the evaluation data and, when performance is judged to have improved, replacing or updating the existing inspection algorithm with the new inspection algorithm.
PCT/KR2023/008297 2022-10-04 2023-06-15 Vision inspection system and method using mobile robot WO2024075926A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220126235A KR20240047507A (en) 2022-10-04 2022-10-04 Vision inspection system and method using mobile robot
KR10-2022-0126235 2022-10-04

Publications (1)

Publication Number Publication Date
WO2024075926A1 true WO2024075926A1 (en) 2024-04-11

Family

ID=90608539

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/008297 WO2024075926A1 (en) 2022-10-04 2023-06-15 Vision inspection system and method using mobile robot

Country Status (2)

Country Link
KR (1) KR20240047507A (en)
WO (1) WO2024075926A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160140703A1 (en) * 2014-11-17 2016-05-19 Hyundai Motor Company System for inspecting vehicle body and method thereof
JP2016190316A (en) * 2015-03-30 2016-11-10 ザ・ボーイング・カンパニーThe Boeing Company Automated dynamic manufacturing systems and related methods
JP2018122400A (en) * 2017-02-01 2018-08-09 トヨタ自動車株式会社 Mobile robot, control method and control program of the mobile robot
KR102272305B1 (en) * 2020-10-14 2021-07-01 함만주 Mold manufacture system
KR102393068B1 (en) * 2020-11-03 2022-05-02 주식회사 성우하이텍 System and method for image-based part recognition

Also Published As

Publication number Publication date
KR20240047507A (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN107590835B (en) Mechanical arm tool quick-change visual positioning system and positioning method in nuclear environment
CN110770989B (en) Unmanned and maintainable switchgear or control device system and method for operating same
CN102795011B (en) For method and the corresponding sighting system in precalculated position multiple in aiming structure
CN110703800A (en) Unmanned aerial vehicle-based intelligent identification method and system for electric power facilities
WO2013077623A1 (en) Structure displacement measurement system and method
US8923602B2 (en) Automated guidance and recognition system and method of the same
WO2019164381A1 (en) Method for inspecting mounting state of component, printed circuit board inspection apparatus, and computer readable recording medium
KR20190044496A (en) Automatic apparatus
US5333242A (en) Method of setting a second robots coordinate system based on a first robots coordinate system
WO2020075954A1 (en) Positioning system and method using combination of results of multimodal sensor-based location recognition
KR102393068B1 (en) System and method for image-based part recognition
WO2024075926A1 (en) Vision inspection system and method using mobile robot
JPH11156764A (en) Locomotive robot device
JP2022172053A (en) Adas examination system using mmp, and method for the same
CN116337887A (en) Method and system for detecting defects on upper surface of casting cylinder body
WO2024122777A1 (en) Method and system for driving collaborative robot capable of preemptive response
CN109079777B (en) Manipulator hand-eye coordination operation system
Myers Industry begins to use visual pattern recognition
CN111571596B (en) Method and system for correcting errors of metallurgical plug-in assembly operation robot by using vision
WO2021095907A1 (en) Driving control method for variable agricultural robot
CN114187312A (en) Target object grabbing method, device, system, storage medium and equipment
CN115493513A (en) Visual system applied to space station mechanical arm
WO2023054813A1 (en) Fire detection and early response system using rechargeable mobile robot
CN114735044A (en) Intelligent railway vehicle inspection robot
WO2021206209A1 (en) Markerless-based ar implementation method and system for smart factory construction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23874997

Country of ref document: EP

Kind code of ref document: A1