US20200242413A1 - Machine vision and robotic installation systems and methods - Google Patents

Machine vision and robotic installation systems and methods

Info

Publication number
US20200242413A1
US20200242413A1 (US 2020/0242413 A1); application US 16/848,307
Authority
US
United States
Prior art keywords
machine vision
image data
work field
automated machine
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/848,307
Inventor
Tyler Edward Kurtz
Riley Harrison HansonSmith
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Boeing Co
Original Assignee
Boeing Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Boeing Co filed Critical Boeing Co
Priority to US16/848,307 priority Critical patent/US20200242413A1/en
Assigned to THE BOEING COMPANY reassignment THE BOEING COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HANSONSMITH, RILEY HARRISON, KURTZ, TYLER EDWARD
Publication of US20200242413A1 publication Critical patent/US20200242413A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0006Industrial image inspection using a design-rule based approach
    • G06K9/6262
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23PMETAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
    • B23P19/00Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
    • B23P19/04Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes for assembling or disassembling parts
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23PMETAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
    • B23P19/00Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
    • B23P19/04Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes for assembling or disassembling parts
    • B23P19/06Screw or nut setting or loosening machines
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23PMETAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
    • B23P19/00Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
    • B23P19/10Aligning parts to be fitted together
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23PMETAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
    • B23P19/00Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
    • B23P19/10Aligning parts to be fitted together
    • B23P19/102Aligning parts to be fitted together using remote centre compliance devices
    • B23P19/105Aligning parts to be fitted together using remote centre compliance devices using sensing means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1687Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G06K9/00503
    • G06K9/00664
    • G06K9/40
    • G06K9/4614
    • G06K9/4652
    • G06K9/4671
    • G06K9/6255
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0014Image feed-back for automatic industrial control, e.g. robot with camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/446Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40565Detect features of object, not position or orientation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06K2209/19
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06Recognition of objects for industrial automation

Definitions

  • the present disclosure relates to machine vision.
  • Machine vision and robotic installation systems and methods are disclosed.
  • Robotic installation methods comprise performing an automated machine vision method according to the present disclosure, in which the camera system is mounted to, mounted with, or mounted as an end effector of a robotic arm, and based on the determining, installing, using the robotic arm, a component in a predetermined configuration relative to the object.
  • the controller comprises non-transitory computer readable media having computer-readable instructions that, when executed, cause the controller to apply a filter to the image data to create filtered data, in which the filter comprises an aspect corresponding to a presumed feature of the one or more predetermined features, and based at least in part on applying the filter, determine if the object has the presumed feature.
  • Robotic installation systems comprise a machine vision system according to the present disclosure and a robotic arm, with the camera system mounted to, mounted with, or mounted as an end effector of the robotic arm.
  • the robotic arm is configured to install a component in a predetermined configuration relative to the object based in part on a determination by the controller that the object has the specific feature.
  • FIG. 1 is a flowchart schematically representing methods according to the present disclosure.
  • FIG. 2 is a diagram schematically representing systems according to the present disclosure.
  • FIG. 3 is a somewhat schematic illustration representing an example robotic installation system according to the present disclosure.
  • FIG. 4 is a flowchart schematically representing aircraft production and service methodology.
  • FIG. 5 is a block diagram schematically representing an aircraft.
  • FIG. 1 provides a flowchart schematically representing automated machine vision methods 100 for determining if an object 302 within a work field 304 has one or more predetermined features 306
  • FIG. 2 provides a schematic representation of machine vision systems 300 for determining if an object 302 within a work field 304 has one or more predetermined features 306
  • the schematic representations of methods 100 and systems 300 in FIGS. 1 and 2 are not limiting, and other methods 100 , steps of methods 100 , systems 300 , and elements of systems 300 are within the scope of the present disclosure, including methods 100 having greater than or fewer than the number of illustrated steps, as well as systems 300 having greater than or fewer than the number of illustrated elements, as understood from the discussions herein.
  • methods 100 are not required to have the schematically represented steps of FIG. 1 performed in the order illustrated.
  • Systems 300 may be described as being configured to perform or implement a method 100 according to the present disclosure; however, not all methods 100 are required to be performed or implemented by a system 300 according to the present disclosure, and not all systems 300 are required to perform or implement a method 100 according to the present disclosure.
  • methods 100 and systems 300 determine if an object 302 within a work field 304 has one or more predetermined features 306 (e.g., physical or structural features or specific location or orientation of the object 302 ). Accordingly, by confirming that features 306 of the object 302 are features that are expected to be present, or otherwise correspond to an object that is desired to be present at the location of the object 302 , methods 100 and systems 300 may make a determination if the object 302 is the correct object, for example, and ready to be worked on by a subsequent operation. Additionally or alternatively, methods 100 and systems 300 may make a determination if the object 302 is in a desired configuration, such as being fully installed, properly positioned, assembled with a distinct component, etc.
  • a method 100 or system 300 determines that features of the object 302 are not the features 306 expected to be present, or otherwise correspond to an object, or configuration thereof, that is desired to be present at the location of the object 302 , a subsequent operation may be avoided or an alternative operation may be performed, such as by replacing the object 302 with the correct object, by further manipulating the object 302 prior to subsequent processing, etc. Accordingly, some methods 100 and systems 300 may be described as quality control methods and systems.
  • Methods 100 and systems 300 may be described as automated methods and systems, insofar as, upon initiation or activation, the steps necessary to achieve the ultimate goal of a method 100 or a system 300 may be fully automated by a system or its component parts without requiring external input from a human user.
  • Work field 304 may be any appropriate work field, such as in a manufacturing or production environment.
  • a work field 304 may comprise a product, or portion thereof, such as a vehicle, an aircraft, or a machine, in a stage of assembly or manufacture.
  • a work field 304 may comprise a region of a product in which fasteners are installed to assemble two or more components together.
  • objects 302 may comprise fasteners; however, methods 100 and systems 300 may be implemented with or configured for any suitable configuration of objects 302 .
  • methods 100 comprise capturing 102 image data of the work field 304 from a camera system 308 , applying 104 a filter to the image data to create filtered data, in which the filter comprises an aspect corresponding to a specific feature of the one or more predetermined features 306 , and based at least in part on the applying 104 , determining 106 if the object 302 has the specific feature.
  • image data associated with the work field 304 is captured, a filter having an aspect that corresponds to a specific feature is applied to the image data, and from the filtered image data, a determination is made whether or not the object 302 has the specific feature.
  • For example, when the object 302 is a threaded fastener, the filter may have an aspect that corresponds to a thread configuration (e.g., thread pitch) of an object that is expected to be where the object 302 is positioned, and as a result of the filtering, it is determined whether or not the threaded fastener has the thread pitch that is expected to be present.
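The capture-filter-determine sequence above can be sketched generically. Everything below is an illustrative assumption rather than a detail from the disclosure: the windowed-correlation statistic, the example kernel, and the decision threshold simply show how a filter response can drive the determination.

```python
import numpy as np

def has_specific_feature(image: np.ndarray, kernel: np.ndarray,
                         threshold: float) -> bool:
    """Applying 104 / determining 106, sketched: slide a filter kernel
    over the image data and decide from the peak response whether the
    specific feature is present. The max-absolute-response statistic and
    the threshold value are assumptions for illustration."""
    kh, kw = kernel.shape
    responses = [
        float(np.sum(image[i:i + kh, j:j + kw] * kernel))
        for i in range(image.shape[0] - kh + 1)
        for j in range(image.shape[1] - kw + 1)
    ]
    return max(abs(r) for r in responses) >= threshold
```

With a kernel matching an expected stripe (thread-like) pattern, a striped image yields a strong response while a featureless image does not.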
  • systems 300 comprise a camera system 308 that is configured to capture image data of the work field 304 , and a controller 310 communicatively coupled to the camera system 308 to receive the image data from the camera system 308 .
  • the controller 310 may be described as being configured to perform or implement methods 100 according to the present disclosure. Additionally or alternatively, the controller 310 may comprise non-transitory computer readable media having computer-readable instructions that, when executed, cause the controller to apply a filter to the image data to create filtered data, in which the filter comprises an aspect corresponding to a specific feature of the one or more predetermined features 306 , and based at least in part on application of the filter, determine if the object 302 has the specific feature.
  • a controller 310 may be any suitable device or devices that are configured to perform the functions of controllers 310 discussed herein.
  • a controller 310 may include one or more of an electronic controller, a dedicated controller, a special-purpose controller, a personal computer, a special-purpose computer, a display device, a logic device, a memory device, and/or a memory device having computer readable media suitable for storing computer-executable instructions for implementing aspects of systems and/or methods according to the present disclosure.
  • controllers 310 may include, or be configured to read, non-transitory computer readable storage, or memory, media suitable for storing computer-executable instructions, or software, for implementing methods or steps of methods according to the present disclosure.
  • Examples of such media include CD-ROMs, disks, hard drives, flash memory, etc.
  • storage, or memory, devices and media having computer-executable instructions as well as computer-implemented methods and other methods according to the present disclosure are considered to be within the scope of subject matter deemed patentable in accordance with Section 101 of Title 35 of the United States Code.
  • the camera system 308 is positioned in a known or determined position and orientation relative to the work field 304 .
  • the size, shape, and orientation of the object 302 from the perspective of the camera system 308 affect the captured image data (e.g., the collection of pixels corresponding to the object 302 ) and thus the corresponding analysis of the image data for making the determination of whether the object 302 has a specific feature.
  • coordinate systems associated with the camera system 308 and the work field 304 need to be aligned or at least coordinated in a known manner for operative implementation of methods 100 and systems 300 according to these examples.
  • some methods 100 may be described as further comprising determining, or acquiring, a position and/or an orientation of the camera system 308 relative to the work field 304 and/or vice versa, and similarly, in some systems 300 , the computer-readable instructions, when executed, may further cause the controller to determine, or acquire, a position and/or an orientation of the camera system 308 relative to the work field 304 and/or vice versa.
  • one or both of the camera system 308 and the work field 304 may be positioned in known or determined positions, with the controller 310 of a system 300 , for example, taking into account the known or determined positions when determining whether the object 302 has one or more predetermined features.
  • This may include determining and/or establishing a coordinate system that describes a location and/or an orientation of the system 300 , determining and/or establishing a coordinate system that describes a location and/or an orientation of the work field 304 , determining and/or establishing a coordinate system that describes a location and/or an orientation of the camera system 308 , and/or determining and/or establishing a relationship, an offset, and/or a difference between two or more of the coordinate systems.
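The coordinate bookkeeping described above can be sketched with 4x4 homogeneous transforms. The two factory-frame poses below are assumed example values (e.g., as an indoor positioning system might report them); the composition pattern, not the numbers, is the point.

```python
import numpy as np

def pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed example poses of camera system 308 and work field 304, each
# expressed in a common factory frame.
T_factory_camera = pose(np.eye(3), np.array([2.0, 0.0, 1.0]))
T_factory_work = pose(np.eye(3), np.array([2.0, 3.0, 1.0]))

# Relationship (offset) between the two coordinate systems: the work
# field's pose expressed in camera coordinates.
T_camera_work = np.linalg.inv(T_factory_camera) @ T_factory_work
```

The translation component of `T_camera_work` gives the work field's position as seen from the camera, which is what the image-data analysis needs.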
  • one or both of the camera system 308 and the work field 304 may be associated with or monitored by a positioning system, such as an indoor positioning system within a manufacturing or production environment, that is configured to detect the precise position of the camera system 308 and/or the work field 304 .
  • camera system 308 may be fixed in space, and the work field 304 may be moved into the field of view of the camera system 308 , with the work field 304 being accurately located via precision tooling or loosely located with a global resync operation performed by the system 300 with either a vision system or a touch probe, for example.
  • the work field 304 may be fixed in space, and the camera system 308 may be moved such that the work field 304 is within the field of view of the camera system 308 , with the camera system 308 being accurately located via precision tooling or loosely located with a global resync operation performed by the system 300 with either a vision system or a touch probe, for example.
  • a mathematical model, or description, of the work field 304 may be utilized to estimate, to establish, and/or to determine a relative orientation between the camera system 308 and the work field 304 .
  • This mathematical model, or description, of the work field 304 may include and/or be a CAD model of the work field 304 or a 2-D projection of the CAD model of the work field 304 , for example.
  • the camera system 308 is supported by a camera mount 312 , such as a robotic arm. That is, the camera system 308 may be mounted to, mounted with, or mounted as, an end effector of a robotic arm.
  • the controller 310 is communicatively coupled to the camera mount to receive the location and orientation of the camera system 308 relative to the work field 304 . As discussed, taking into account, knowing, determining, and/or quantifying the location and orientation of the camera system 308 relative to the work field 304 facilitates analysis of the image data for making the determination of whether the object 302 has a specific feature.
  • the image data comprises color data.
  • a feature 306 may be a specific color.
  • the image data may not comprise color data.
  • the camera system 308 therefore may capture black and white or greyscale data.
  • Some methods 100 further comprise illuminating 108 the work field 304 during the capturing 102 , as schematically represented in FIG. 1 .
  • some systems 300 may further comprise an illumination device 314 that is configured to illuminate the work field 304 when the camera system 308 captures the image data, as schematically represented in FIG. 2 .
  • the illumination device 314 may comprise a flash device that is communicatively coupled to, or a component of, the camera system 308 , such that the flash illuminates the work field 304 when the camera system 308 captures the image data.
  • some methods 100 further comprise following the capturing 102 the image data and prior to the applying 104 the filter, subtracting 110 , from the image data, data corresponding to portions of the work field 304 that are not the object 302 based on a known color of the portions of the work field 304 that are not the object 302 .
  • the computer-readable instructions, when executed, therefore further cause the controller to subtract data corresponding to portions of the work field 304 that are not the object 302 .
  • the work field 304 may be uniform in color other than one or more objects 302 present within the work field 304 .
  • the work field 304 may comprise a sheet material in a raw or painted state, with the raw or painted state having a specific color. Accordingly, the data associated with the specific color may be subtracted from the image data, so that only, or generally only, data associated with one or more objects 302 within the work field 304 remains.
  • the subtracting of color data additionally or alternatively may be described as chroma keying.
  • Some methods 100 further comprise following the capturing 102 the image data and prior to the applying 104 the filter, transforming 112 the image data into HSV (hue, saturation, value) domain.
  • the computer-readable instructions, when executed, therefore further cause the controller to transform the image data into HSV domain prior to application of the filter. While not required in all methods 100 and systems 300 , such transformation of the image data into HSV domain may facilitate the subtraction of color data associated with the portions of the work field 304 that are not an object 302 .
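One way the HSV transform and the color subtraction (chroma keying) could fit together, sketched in plain Python: comparing hue after the HSV conversion makes the background match robust to brightness variation. The per-pixel `colorsys` conversion, the key hue, and the tolerance are illustrative assumptions, and hue wraparound is ignored for brevity.

```python
import colorsys
import numpy as np

def chroma_key_mask(rgb: np.ndarray, key_hue: float, tol: float) -> np.ndarray:
    """Return True where a pixel does NOT match the known background hue,
    i.e., where data for an object 302 may remain after subtracting the
    portions of the work field 304 that are not the object. rgb is an
    HxWx3 array in [0, 1]; key_hue and tol use colorsys's [0, 1] hue scale."""
    hue = np.empty(rgb.shape[:2])
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            # Transform each pixel into the HSV domain; keep only hue.
            hue[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])[0]
    # Simple hue distance; a production version would handle wraparound.
    return np.abs(hue - key_hue) > tol
```

For a uniformly green background, every background pixel is masked out and only object pixels survive.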
  • some methods 100 further comprise following the capturing 102 the image data and prior to the applying 104 the filter, binary thresholding 114 the image data based on a presumed feature of the one or more predetermined features 306 , and responsive to the binary thresholding 114 , identifying 116 one or more groups of pixels as candidates for the presumed feature.
  • the computer-readable instructions, when executed, therefore further cause the controller to binary threshold the image data based on a presumed feature of the one or more predetermined features 306 prior to application of the filter, and responsive to the image data being binary thresholded, identify one or more groups of pixels as candidates for the presumed feature.
  • the presumed feature may be one or more of a color, a shape, a size, indicia, and a thread configuration associated with an object that is expected to be within, or desired to be within, the work field 304 .
  • the presumed feature corresponds to an expected or desired object to be within the work field 304 based on a database associated with the work field 304 . For example, a product being assembled or manufactured may have a parts list saved within the database, in which features of the parts are identified.
  • the binary thresholding 114 based on the presumed feature facilitates the identification of pixels, or a subset of the image data, as candidates for the presumed feature.
  • a fastener, or portion of a fastener, that is intended to be used in a specific assembly of a product may have a specific color.
  • Some such methods 100 further comprise noise filtering 118 (i.e., applying a noise filter (e.g., a binary noise filter) to) the image data following the binary thresholding 114 , in which case, the identifying 116 the one or more groups of pixels is further responsive to the noise filtering 118 .
  • the computer-readable instructions, when executed, therefore further cause the controller to noise filter (i.e., apply a noise filter (e.g., a binary noise filter) to) the image data after the image data is binary thresholded, and identification of the one or more groups of pixels is further responsive to the image data being noise filtered.
  • the image data following the binary thresholding 114 may be noisy, and noise filtering 118 may clean up the image data and facilitate the ultimate determination of whether or not the object 302 is in fact the object that is desired to be present, or whether or not the object 302 is in a desired configuration, such as being fully installed, properly positioned, assembled with a distinct component, etc.
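The binary thresholding 114 and noise filtering 118 steps might look like the sketch below. The disclosure does not specify the noise filter, so a 3x3 majority rule stands in for it as an assumption.

```python
import numpy as np

def threshold_and_denoise(image: np.ndarray, thresh: float) -> np.ndarray:
    """Binary threshold the image data, then clean it with a simple
    binary noise filter: a pixel survives only if at least 5 of the 9
    pixels in its 3x3 neighborhood are above threshold. Surviving pixels
    form the candidate groups for the presumed feature."""
    binary = (image > thresh).astype(int)
    padded = np.pad(binary, 1)
    out = np.zeros_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            # 3x3 neighborhood sum in the zero-padded array.
            if padded[i:i + 3, j:j + 3].sum() >= 5:
                out[i, j] = 1
    return out
```

An isolated bright speck is rejected as noise, while the interior of a solid candidate region survives.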
  • the specific feature is a presumed texture of the one or more predetermined features.
  • the presumed texture corresponds to an expected or desired object to be within the work field 304 based on a database associated with the work field 304 .
  • a product being assembled or manufactured may have a parts list saved within the database, in which features of the parts are identified.
  • an object that is expected, or desired, to be within the work field 304 has a presumed feature, such as a specific texture
  • the application of the filter facilitates the ultimate determination of whether or not the object 302 has the presumed feature.
  • a fastener that is intended to be used in a specific assembly of a product may have threads that give the fastener a texture corresponding to the specific thread size, or pitch.
  • the filter is a Gabor filter.
  • For example, for a thread pitch associated with a desired fastener, and based on relative and/or absolute position and/or orientation data associated with the work field 304 and/or the camera system 308 (e.g., known or acquired during a method 100 and/or by a system 300 ), an appropriate Gabor filter may be selected, or derived, and applied to the image data, such as to the one or more groups of pixels that were identified as a result of the binary thresholding 114 .
  • the greater the response of the Gabor filter to the group(s) of pixels the greater the confidence that a group of pixels actually represents a feature (e.g., a specific thread pitch) of an object that is desired to be within the work field 304 .
  • the desired fastener may have a known thread pitch, which may be defined along a longitudinal axis of the known fastener.
  • a location of the object 302 within the system 300 , within the work field 304 , and/or relative to the camera system 308 may be known and/or acquired (such as discussed herein during methods 100 and/or utilizing systems 300 ).
  • This known thread pitch, which may exhibit a corresponding known wavelength, may be scaled, such as via a perspective projection of the object 302 , based upon the location of the object 302 to generate a scaled wavelength.
  • This scaled wavelength then may be utilized to create, generate, and/or select a Gabor filter, or a family of directional Gabor filters.
  • The Gabor filter, or the family of directional Gabor filters, then may be applied to the image data. If the image data includes the desired fastener (i.e., if the object 302 is the desired fastener), which has the known thread pitch, the Gabor filter will exhibit a strong response. Conversely, if the image data does not include the desired fastener (i.e., if the object 302 is not the desired fastener), the Gabor filter will exhibit a weak, or a weaker, response.
  • a longitudinal axis of the object 302 may be oriented at an angle relative, or with respect to, an imaging plane of the camera system 308 .
  • the above-described scaled wavelength further may be multiplied by an angular correction factor to produce and/or generate an angle-corrected and scaled wavelength.
  • This angle-corrected and scaled wavelength then may be utilized to create, generate, and/or select the Gabor filter, or the family of directional Gabor filters, which may be applied to the image data, as discussed above.
  • An example of the angular correction factor includes, or is, a cosine of the angle at which the longitudinal axis of the object 302 is oriented relative to the imaging plane.
  • the camera system 308 may be rotated about the object 302, about a location of the object 302, and/or about the longitudinal axis of the object 302. This rotation of the camera system 308 causes a rotation of the imaging plane, thereby providing three-dimensional information about the object 302 and permitting systems 300 to utilize the family of directional Gabor filters.
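The wavelength scaling and angular correction described above can be sketched in code. This is an illustrative sketch, not the patented implementation: the function names and the pinhole-projection scaling (pitch × focal length ÷ depth) are assumptions, and the Gabor kernel is built directly in NumPy rather than with a vision library.

```python
import numpy as np

def gabor_kernel(wavelength, theta=0.0, sigma=None, size=31):
    """Real part of a Gabor filter tuned to a spatial wavelength (in pixels)."""
    sigma = sigma or 0.56 * wavelength  # common bandwidth heuristic
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into the filter frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def scaled_wavelength(pitch_mm, focal_px, depth_mm, tilt_rad=0.0):
    """Project a known thread pitch into image space; the cosine factor is the
    angular correction for an axis tilted relative to the imaging plane."""
    return (pitch_mm * focal_px / depth_mm) * np.cos(tilt_rad)

def response(patch, kernel):
    """Magnitude of the filter response for a patch the size of the kernel."""
    return abs(np.sum(patch * kernel))
```

A filter whose wavelength matches the projected thread pitch responds much more strongly to a stripe pattern of that pitch than a mismatched filter does, which is the basis for the strong/weak response distinction above.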
  • the filtered data comprises, or may be described as, one or more blobs of pixels that are candidates for being representative of the object 302 .
  • Some such methods 100 further comprise, following the applying 104 the filter, analyzing 120 the one or more blobs (i.e., localized groupings) of pixels to determine presence of one or more blob features.
  • the computer-readable instructions, when executed, therefore further cause the controller to analyze the one or more blobs of pixels to determine presence of one or more blob features following application of the filter.
  • the one or more blob features may comprise one or more of blob area, blob eccentricity, blob dimensions, blob brightness, blob correlation, and blob homogeneity.
  • the one or more blob features are associated with the one or more predetermined features 306 in a database associated with the work field 304 .
  • a product being assembled or manufactured may have a parts list saved within the database, in which features of the parts are identified.
  • the analysis of the one or more blobs facilitates the ultimate determination of whether or not the object 302 has the one or more predetermined features.
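The blob features listed above (area, dimensions, eccentricity) can be computed from a binary mask with basic image moments. A minimal sketch, assuming a single connected blob per mask; function names are illustrative, and a production system would more likely use a library routine such as scikit-image's `regionprops`.

```python
import numpy as np

def blob_features(mask):
    """Compute simple blob descriptors from a binary mask (True = blob pixel)."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    height = ys.max() - ys.min() + 1          # bounding-box dimensions
    width = xs.max() - xs.min() + 1
    # Eccentricity from the central second moments of the pixel coordinates:
    # 0 for an isotropic blob, approaching 1 for an elongated one.
    yc, xc = ys - ys.mean(), xs - xs.mean()
    cov = np.cov(np.vstack([xc, yc]))
    evals = np.sort(np.linalg.eigvalsh(cov))  # ascending
    ecc = np.sqrt(1.0 - evals[0] / evals[1]) if evals[1] > 0 else 0.0
    return {"area": area, "height": height, "width": width, "eccentricity": ecc}
```

Descriptors like these can then be compared against the features recorded for parts in the database to support the determination described above.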
  • Some such methods 100 further comprise training 122 a machine learning model to identify the one or more predetermined features 306 associated with the one or more blob features.
  • the computer-readable instructions, when executed, therefore further cause the controller to train a machine learning model to identify the one or more predetermined features 306 associated with the one or more blob features.
  • the machine learning model may be a support vector machine (SVM).
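As a sketch of how such a model could map blob features to a pass/fail decision, below is a minimal linear SVM trained with hinge-loss subgradient descent. In practice a library implementation (e.g., scikit-learn's `SVC`) would typically be used; this self-contained version only illustrates the idea, and all names and hyperparameters are assumptions.

```python
import numpy as np

def train_linear_svm(X, y, epochs=200, lr=0.1, lam=0.01):
    """Tiny linear SVM trained with hinge-loss subgradient descent.
    X: (n, d) feature rows (e.g., blob area, eccentricity); y: labels in {-1, +1}."""
    rng = np.random.default_rng(0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # inside the margin: hinge gradient
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                # outside: regularization only
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    """Classify feature rows by the sign of the decision function."""
    return np.where(X @ w + b >= 0, 1, -1)
```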
  • the camera system 308 is a stereo camera system, and therefore the image data comprises two images.
  • Such methods 100 further comprise creating 124 a point cloud of the filtered data.
  • the computer-readable instructions, when executed, therefore further cause the controller 310 to create a point cloud of the filtered data.
  • Some such methods 100 further comprise, during the capturing 102 the image data, projecting 126 a light texture on the work field 304.
  • Corresponding systems 300 further comprise a projector 316 that is configured to project a light texture on the work field 304 .
  • the light texture comprises a pattern, while in other such methods 100 and systems 300 , the light texture is random.
  • creating 124 the point cloud may comprise generating 128 a disparity map from the two images based on the light texture.
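Generating a disparity map from the two images can be illustrated with naive sum-of-absolute-differences (SAD) block matching. This is a sketch under simplifying assumptions (rectified images, integer disparities, no occlusion handling), not the claimed method; real stereo pipelines would use dedicated routines such as OpenCV's `StereoBM`.

```python
import numpy as np

def disparity_map(left, right, max_disp=8, block=5):
    """Naive SAD block matching: for each pixel, find the horizontal shift that
    best aligns the right image to the left image. Returns integer disparities."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    cost = np.full((h, w), np.inf)
    kernel = np.ones(block) / block
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)      # shift right image by d pixels
        sad = np.abs(left - shifted)
        # Box-filter the absolute difference to aggregate over a block.
        sad = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, sad)
        sad = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, sad)
        better = sad < cost
        disp[better], cost[better] = d, sad[better]
    return disp
```

A projected random light texture, as described above, gives otherwise featureless surfaces the local contrast this matching step depends on.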
  • some methods 100 further comprise selecting 130 pixels associated with the object 302 , and comparing 132 the pixels associated with the object 302 to a computer model (e.g., a CAD model) of an expected or desired object from a database associated with the work field 304 .
  • a computer model e.g., a CAD model
  • the computer-readable instructions, when executed, therefore further cause the controller to select pixels associated with the object 302, and compare the pixels associated with the object 302 to a computer model of an expected or desired object from a database associated with the work field 304.
  • a computer model of a fastener may include a representation of a cylinder corresponding to the shaft of the fastener, and comparing the computer model with the pixels associated with the object 302 may include fitting the cylinder to the pixels by using normal and radius estimates.
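The cylinder-fitting step can be sketched as follows, under the simplifying assumption that an axis estimate is already available (e.g., from the normal estimates mentioned above): points are projected onto the plane orthogonal to the axis, and a circle is fitted algebraically (Kåsa fit) to recover the radius. Function names are illustrative.

```python
import numpy as np

def fit_cylinder_radius(points, axis):
    """Project 3-D points onto the plane orthogonal to a (unit) axis estimate
    and fit a circle to the 2-D projections (Kasa algebraic fit). Returns radius."""
    axis = axis / np.linalg.norm(axis)
    # Build an orthonormal basis (u, v) for the cross-section plane.
    u = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:                 # axis parallel to x: pick another
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    P = np.column_stack([points @ u, points @ v])  # 2-D projections
    # Kasa fit: solve [2x 2y 1][a b c]^T = x^2 + y^2 for center (a, b),
    # where c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * P, np.ones(len(P))])
    rhs = (P ** 2).sum(axis=1)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.sqrt(c + a**2 + b**2)
```

The recovered radius could then be compared with the shaft radius from the CAD model to decide whether the observed object matches the expected fastener.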
  • the work field 304 comprises a plurality of objects 302 . Accordingly, some methods 100 are performed in connection with each of the plurality of objects.
  • the computer-readable instructions, when executed, therefore further cause the controller 310 to determine if each of the plurality of objects 302 has the specific feature.
  • robotic installation methods 200 comprise performing an automated machine vision method 100 according to the present disclosure, in which the camera system 308 is mounted to, mounted with, or mounted as an end effector of a robotic arm 402 , and based on the determining 106 of the performed method 100 , instructing 202 the robotic arm 402 to install a component in a predetermined configuration relative to the object 302 , and installing 204 , using the robotic arm 402 , the component in the predetermined configuration relative to the object 302 .
  • robotic installation systems 400 comprise a machine vision system 300 according to the present disclosure, and a robotic arm 402 , in which the camera system 308 is mounted to, mounted with, or mounted as an end effector of the robotic arm 402 , and in which the robotic arm 402 is configured to install a component in a predetermined configuration relative to the object 302 based in part on a determination by the controller 310 if the object 302 has the specific feature and instructions received from the controller 310 . That is, the computer-readable instructions, when executed, further cause the controller to instruct the robotic arm to install the component in the predetermined configuration relative to the object.
  • In FIG. 3, an illustrative, non-exclusive example of a robotic installation system 400, in the form of robotic installation system 401, is illustrated.
  • the reference numerals from the schematic illustration of FIG. 2 are used to designate corresponding parts of robotic installation system 401 ; however, the example of FIG. 3 is non-exclusive and does not limit robotic installation systems 400 to the illustrated embodiment of FIG. 3 . That is, robotic installation systems 400 are not limited to the specific embodiment of the illustrated robotic installation system 401 , and robotic installation systems 400 may incorporate any number of the various aspects, configurations, characteristics, properties, etc. of robotic installation systems 400 that are illustrated in and discussed with reference to the schematic representation of FIG. 2 and/or the embodiments of FIG.
  • previously discussed features, variants, etc. of robotic installation systems 400 may not be discussed, illustrated, and/or labeled again with respect to robotic installation system 401; however, it is within the scope of the present disclosure that the previously discussed features, variants, etc. may be utilized with robotic installation system 401.
  • robotic installation system 401 comprises a robotic arm 402 having, as end effectors 404, a camera system 308 and a nut runner 406.
  • the robotic arm 402 is positioned relative to work field 304 that comprises a plurality of objects 302 in the form of fasteners extending from a sheet material 414 . More specifically, in the depicted example, a plurality of bolts 408 extend through the sheet material 414 , with a subset of the bolts 408 having nuts 410 installed thereon and defining a fastener pair 412 .
  • a controller 310 is schematically presented in communication with the camera system 308, the nut runner 406, and the robotic arm 402, with this operative communication schematically represented by lightning bolts; the communication may be wired and/or wireless.
  • robotic installation system 401 is configured to perform methods 100 and methods 200 according to the present disclosure. More specifically, robotic installation system 401 utilizes its machine vision system 300 to determine if the objects 302 have one or more predetermined features 306, such as whether the correct bolts 408 and nuts 410 are present, whether a bolt 408 is correctly or fully positioned for operative receipt of a nut 410, whether a nut is properly and operatively positioned on a corresponding bolt 408, which bolts 408 still need nuts 410, etc.
  • the robotic arm 402 may operatively install additional nuts 410 utilizing the nut runner 406 and/or may not install a nut 410 , for example, where an incorrect bolt 408 is positioned or where a bolt 408 is not operatively positioned to receive a nut 410 .
  • a method 500 may include one or more of a method 100 or method 200 according to the present disclosure, and an aircraft 600 may be manufactured or serviced utilizing a system 300 or a system 400 according to the present disclosure, with the aircraft 600 comprising a work field 304 , for example.
  • exemplary method 500 may include specification and design 504 of the aircraft 600 and material procurement 506 .
  • component and subassembly manufacturing 508 and system integration 510 of the aircraft 600 take place.
  • the aircraft 600 may go through certification and delivery 512 in order to be placed in service 514 .
  • routine maintenance and service 516 (which may also include modification, reconfiguration, refurbishment, and so on).
  • a system integrator may include without limitation any number of aircraft manufacturers and major-system subcontractors; a third party may include without limitation any number of vendors, subcontractors, and suppliers; and an operator may be an airline, leasing company, military entity, service organization, and so on.
  • the aircraft 600 produced by exemplary method 500 may include an airframe 602 with a plurality of systems 604 and an interior 606 .
  • high-level systems 604 include one or more of a propulsion system 608 , an electrical system 610 , a hydraulic system 612 , and an environmental system 614 . Any number of other systems also may be included.
  • an aerospace example is shown, the principles of the inventions disclosed herein may be applied to other industries, such as the automotive industry.
  • Apparatus and methods disclosed herein may be employed during any one or more of the stages of the production and service method 500 .
  • components or subassemblies corresponding to production stage 508 may be fabricated or manufactured in a manner similar to components or subassemblies produced while the aircraft 600 is in service.
  • one or more apparatus embodiments, method embodiments, or a combination thereof may be utilized during the production stages 508 and 510 , for example, by substantially expediting assembly of or reducing the cost of an aircraft 600 .
  • apparatus embodiments, method embodiments, or a combination thereof may be utilized while the aircraft 600 is in service, for example and without limitation, to maintenance and service 516 .
  • An automated machine vision method for determining if an object within a work field has one or more predetermined features, the automated machine vision method comprising:
  • the filter comprises an aspect corresponding to a specific feature of the one or more predetermined features
  • the automated machine vision method further comprises determining a relative orientation between the camera system and the work field.
  • A7.2 The automated machine vision method of any of paragraphs A7-A7.1, wherein the presumed feature is one or more of a color, a shape, a size, indicia, and a thread configuration.
  • identifying the one or more groups of pixels is further responsive to the noise filtering.
  • the one or more blob features comprise one or more of blob area, blob eccentricity, blob dimensions, blob brightness, blob correlation, and blob homogeneity.
  • A10.1.1.1 The automated machine vision method of paragraph A10.1.1, wherein the one or more blob features are associated with the one or more predetermined features in a/the database associated with the work field.
  • A10.1.2 The automated machine vision method of any of paragraphs A10-A10.1.1.1, further comprising:
  • A11 The automated machine vision method of any of paragraphs A-A10.1.2, wherein the camera system is a stereo camera system, wherein the image data comprises two images, and wherein the automated machine vision method further comprises:
  • A11.1.3 The automated machine vision method of any of paragraphs A11.1-A11.1.2, wherein the creating the point cloud comprises generating a disparity map from the two images based on the light texture.
  • A13 The automated machine vision method of any of paragraphs A-A12, wherein the work field comprises a plurality of objects, and wherein the method is performed in connection with each of the plurality of objects.
  • A15 The automated machine vision method of any of paragraphs A-A14.1, wherein the one or more predetermined features comprise one or more of a color, a size, a shape, indicia, and a thread configuration.
  • a robotic installation method comprising:
  • a machine vision system for determining if an object within a work field has one or more predetermined features, the machine vision system comprising:
  • a camera system configured to capture image data of the work field
  • controller communicatively coupled to the camera system to receive the image data from the camera system, wherein the controller comprises non-transitory computer readable media having computer-readable instructions that, when executed, cause the controller to:
  • controller is communicatively coupled to the camera mount to receive location and orientation data of the camera system relative to the work field.
  • identification of the one or more groups of pixels is further responsive to the image data being noise filtered.
  • blob features comprise one or more of blob area, blob eccentricity, blob dimensions, blob brightness, blob correlation, and blob homogeneity.
  • a projector configured to project a light texture on the work field.
  • a robotic installation system comprising:
  • the camera system is mounted to, mounted with, or mounted as an end effector of the robotic arm, and wherein the robotic arm is configured to install a component in a predetermined configuration relative to the object based in part on a determination by the controller that the object has the specific feature;
  • the terms “adapted” and “configured” mean that the element, component, or other subject matter is designed and/or intended to perform a given function. Thus, the use of the terms “adapted” and “configured” should not be construed to mean that a given element, component, or other subject matter is simply “capable of” performing a given function but that the element, component, and/or other subject matter is specifically selected, created, implemented, utilized, programmed, and/or designed for the purpose of performing the function. It is also within the scope of the present disclosure that elements, components, and/or other recited subject matter that is recited as being adapted to perform a particular function may additionally or alternatively be described as being configured to perform that function, and vice versa. Similarly, subject matter that is recited as being configured to perform a particular function may additionally or alternatively be described as being operative to perform that function.
  • the term “and/or” placed between a first entity and a second entity means one of (1) the first entity, (2) the second entity, and (3) the first entity and the second entity.
  • Multiple entries listed with “and/or” should be construed in the same manner, i.e., “one or more” of the entities so conjoined.
  • Other entities optionally may be present other than the entities specifically identified by the “and/or” clause, whether related or unrelated to those entities specifically identified.
  • a reference to “A and/or B,” when used in conjunction with open-ended language such as “comprising,” may refer, in one example, to A only (optionally including entities other than B); in another example, to B only (optionally including entities other than A); in yet another example, to both A and B (optionally including other entities).
  • These entities may refer to elements, actions, structures, steps, operations, values, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Data Mining & Analysis (AREA)
  • Robotics (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)
  • Automatic Assembly (AREA)

Abstract

Machine vision methods and systems determine if an object within a work field has one or more predetermined features. Methods comprise capturing image data of the work field, applying a filter to the image data, in which the filter comprises an aspect corresponding to a presumed feature, and based at least in part on the applying, determining if the object has the presumed feature. Systems comprise a camera system configured to capture image data of the work field, and a controller communicatively coupled to the camera system and programmed to apply a filter to the image data, and based at least in part on applying the filter, determine if the object has the specific feature. Robotic installation methods and systems that utilize machine vision methods and systems also are disclosed.

Description

    RELATED APPLICATION
  • This application is a continuation of and claims priority to U.S. application Ser. No. 15/939,127, entitled MACHINE VISION AND ROBOTIC INSTALLATION SYSTEMS AND METHODS and filed on Mar. 28, 2018, the complete disclosure of which is incorporated herein by reference.
  • FIELD
  • The present disclosure relates to machine vision.
  • BACKGROUND
  • Manufacturing is becoming increasingly more automated. Current robotic manufacturing systems are very efficient at repetitive tasks; however, they do not necessarily adapt to changing environments or to mistakes that are introduced by personnel or other upstream inputs. For example, a nut running robot (e.g., a robotic arm with a nut installation end effector) that is tasked with installation of nuts onto threaded fasteners will install a nut on an incorrectly sized or configured threaded fastener, resulting in human personnel having not only to recognize the mistake but also to subsequently fix the mistake.
  • SUMMARY
  • Machine vision and robotic installation systems and methods are disclosed.
  • Automated machine vision methods according to the present disclosure for determining if an object within a work field has one or more predetermined features comprise capturing image data of the work field from a camera system, applying a filter to the image data to create filtered data, in which the filter comprises an aspect corresponding to a presumed feature of the one or more predetermined features, and based at least in part on the applying, determining if the object has the presumed feature. Robotic installation methods according to the present disclosure comprise performing an automated machine vision method according to the present disclosure, in which the camera system is mounted to, mounted with, or mounted as an end effector of a robotic arm, and based on the determining, installing, using the robotic arm, a component in a predetermined configuration relative to the object.
  • Machine vision systems according to the present disclosure for determining if an object within a work field has one or more predetermined features comprise a camera system configured to capture image data of the work field, and a controller communicatively coupled to the camera system to receive the image data from the camera system. The controller comprises non-transitory computer readable media having computer-readable instructions that, when executed, cause the controller to apply a filter to the image data to create filtered data, in which the filter comprises an aspect corresponding to a presumed feature of the one or more predetermined features, and based at least in part on applying the filter, determine if the object has the presumed feature. Robotic installation systems according to the present disclosure comprise a machine vision system according to the present disclosure and a robotic arm, with the camera system mounted to, mounted with, or mounted as an end effector of the robotic arm. The robotic arm is configured to install a component in a predetermined configuration relative to the object based in part on a determination by the controller that the object has the specific feature.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flowchart schematically representing methods according to the present disclosure.
  • FIG. 2 is a diagram schematically representing systems according to the present disclosure.
  • FIG. 3 is a somewhat schematic illustration representing an example robotic installation system according to the present disclosure.
  • FIG. 4 is a flowchart schematically representing aircraft production and service methodology.
  • FIG. 5 is a block diagram schematically representing an aircraft.
  • DESCRIPTION
  • Generally, in the figures, elements that are likely to be included in a given example are illustrated in solid lines, while elements that are optional to a given example are illustrated in broken lines. However, elements that are illustrated in solid lines are not essential to all examples of the present disclosure, and an element shown in solid lines may be omitted from a particular example without departing from the scope of the present disclosure.
  • Machine vision and robotic installation systems and methods are disclosed herein. FIG. 1 provides a flowchart schematically representing automated machine vision methods 100 for determining if an object 302 within a work field 304 has one or more predetermined features 306, and FIG. 2 provides a schematic representation of machine vision systems 300 for determining if an object 302 within a work field 304 has one or more predetermined features 306. The schematic representations of methods 100 and systems 300 in FIGS. 1 and 2 are not limiting, and other methods 100, steps of methods 100, systems 300, and elements of systems 300 are within the scope of the present disclosure, including methods 100 having greater than or fewer than the number of illustrated steps, as well as systems 300 having greater than or fewer than the number of illustrated elements, as understood from the discussions herein. As also understood from the discussions herein, methods 100 are not required to have the schematically represented steps of FIG. 1 performed in the order illustrated. Systems 300 may be described as being configured to perform or implement a method 100 according to the present disclosure; however, not all methods 100 are required to be performed or implemented by a system 300 according to the present disclosure, and not all systems 300 are required to perform or implement a method 100 according to the present disclosure.
  • As mentioned, methods 100 and systems 300 determine if an object 302 within a work field 304 has one or more predetermined features 306 (e.g., physical or structural features or specific location or orientation of the object 302). Accordingly, by confirming that features 306 of the object 302 are features that are expected to be present, or otherwise correspond to an object that is desired to be present at the location of the object 302, methods 100 and systems 300 may make a determination if the object 302 is the correct object, for example, and ready to be worked on by a subsequent operation. Additionally or alternatively, methods 100 and systems 300 may make a determination if the object 302 is in a desired configuration, such as being fully installed, properly positioned, assembled with a distinct component, etc. Conversely, if a method 100 or system 300 determines that features of the object 302 are not the features 306 expected to be present, or otherwise correspond to an object, or configuration thereof, that is desired to be present at the location of the object 302, a subsequent operation may be avoided or an alternative operation may be performed, such as by replacing the object 302 with the correct object, by further manipulating the object 302 prior to subsequent processing, etc. Accordingly, some methods 100 and systems 300 may be described as quality control methods and systems.
  • Methods 100 and systems 300 may be described as automated methods and systems, in so far as upon initiation or activation, the steps necessary to result in the ultimate goal of a method 100 or a system 300 may be fully automated by a system or its component parts without requiring external input from a human user.
  • Work field 304 may be any appropriate work field, such as in a manufacturing or production environment. As illustrative, non-exclusive examples, a work field 304 may comprise a product, or portion thereof, such as a vehicle, an aircraft, or a machine, in a stage of assembly or manufacture. As a more specific example, a work field 304 may comprise a region of a product in which fasteners are installed to assemble two or more components together. In such an example, objects 302 may comprise fasteners; however, methods 100 and systems 300 may be implemented with or configured for any suitable configuration of objects 302.
  • With specific reference to FIG. 1 and general reference to FIG. 2, methods 100 comprise capturing 102 image data of the work field 304 from a camera system 308, applying 104 a filter to the image data to create filtered data, in which the filter comprises an aspect corresponding to a specific feature of the one or more predetermined features 306, and based at least in part on the applying 104, determining 106 if the object 302 has the specific feature. In other words, image data associated with the work field 304 is captured, a filter having an aspect that corresponds to a specific feature is applied to the image data, and from the filtered image data, a determination is made whether or not the object 302 has the specific feature. As an illustrative, non-exclusive example, when the object 302 is a threaded fastener, the filter may have an aspect that corresponds to a thread configuration (e.g., thread pitch) of an object that is expected to be where the object 302 is positioned, and as a result of the filtering, it is determined whether or not the threaded fastener has the thread pitch that is expected to be present.
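The capture, filter, and determine flow described above can be sketched as a single function. All names are illustrative assumptions: `feature_filter` stands in for a kernel whose aspect corresponds to the expected feature, and the threshold on the peak filter response stands in for the determining 106 step.

```python
import numpy as np

def has_specific_feature(image, feature_filter, threshold):
    """Skeleton of a method-100-style pipeline: filter image data, then decide.
    `image` stands in for captured image data of the work field."""
    kh, kw = feature_filter.shape
    h, w = image.shape
    best = 0.0
    # Applying 104: correlate the filter with the image to create filtered data
    # (a naive sliding-window correlation; real systems would use an FFT or a
    # vision-library convolution).
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            patch = image[i:i + kh, j:j + kw]
            best = max(best, abs(np.sum(patch * feature_filter)))
    # Determining 106: a strong peak response implies the feature is present.
    return best >= threshold
```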
  • With specific reference to FIG. 2, systems 300 comprise a camera system 308 that is configured to capture image data of the work field 304, and a controller 310 communicatively coupled to the camera system 308 to receive the image data from the camera system 308. The controller 310 may be described as being configured to perform or implement methods 100 according to the present disclosure. Additionally or alternatively, the controller 310 may comprise non-transitory computer readable media having computer-readable instructions that, when executed, cause the controller to apply a filter to the image data to create filtered data, in which the filter comprises an aspect corresponding to a specific feature of the one or more predetermined features 306, and based at least in part on application of the filter, determine if the object 302 has the specific feature.
  • A controller 310 may be any suitable device or devices that are configured to perform the functions of controllers 310 discussed herein. For example, a controller 310 may include one or more of an electronic controller, a dedicated controller, a special-purpose controller, a personal computer, a special-purpose computer, a display device, a logic device, a memory device, and/or a memory device having computer readable media suitable for storing computer-executable instructions for implementing aspects of systems and/or methods according to the present disclosure. Additionally or alternatively, controllers 310 may include, or be configured to read, non-transitory computer readable storage, or memory, media suitable for storing computer-executable instructions, or software, for implementing methods or steps of methods according to the present disclosure. Examples of such media include CD-ROMs, disks, hard drives, flash memory, etc. As used herein, storage, or memory, devices and media having computer-executable instructions as well as computer-implemented methods and other methods according to the present disclosure are considered to be within the scope of subject matter deemed patentable in accordance with Section 101 of Title 35 of the United States Code.
  • In some examples of methods 100 and systems 300, the camera system 308 is positioned in a known or determined position and orientation relative to the work field 304. For example, the size, shape, and orientation of the object 302 from the perspective of the camera system 308 affects the captured image data (e.g., the collection of pixels corresponding to the object 302) and thus the corresponding analysis of the image data for making the determination of whether the object 302 has a specific feature. Accordingly, coordinate systems associated with the camera system 308 and the work field 304 need to be aligned or at least coordinated in a known manner for operative implementation of methods 100 and systems 300 according to these examples. Accordingly, some methods 100 may be described as further comprising determining, or acquiring, a position and/or an orientation of the camera system 308 relative to the work field 304 and/or vice versa, and similarly, in some systems 300, the computer-readable instructions, when executed, may further cause the controller to determine, or acquire, a position and/or an orientation of the camera system 308 relative to the work field 304 and/or vice versa.
  • In some such examples, one or both of the camera system 308 and the work field 304 may be positioned in known or determined positions, with the controller 310 of a system 300, for example, taking into account the known or determined positions when determining whether the object 302 has one or more predetermined features. This may include determining and/or establishing a coordinate system that describes a location and/or an orientation of the system 300, determining and/or establishing a coordinate system that describes a location and/or an orientation of the work field 304, determining and/or establishing a coordinate system that describes a location and/or an orientation of the camera system 308, and/or determining and/or establishing a relationship, an offset, and/or a difference between two or more of the coordinate systems.
  • For example, one or both of the camera system 308 and the work field 304 may be associated with or monitored by a positioning system, such as an indoor positioning system within a manufacturing or production environment, that is configured to detect the precise position of the camera system 308 and/or the work field 304. Additionally or alternatively, camera system 308 may be fixed in space, and the work field 304 may be moved into the field of view of the camera system 308, with the work field 304 being accurately located via precision tooling or loosely located with a global resync operation performed by the system 300 with either a vision system or a touch probe, for example. Alternatively, the work field 304 may be fixed in space, and the camera system 308 may be moved such that the work field 304 is within the field of view of the camera system 308, with the camera system 308 being accurately located via precision tooling or loosely located with a global resync operation performed by the system 300 with either a vision system or a touch probe, for example. Alternatively, a mathematical model, or description, of the work field 304 may be utilized to estimate, to establish, and/or to determine a relative orientation between the camera system 308 and the work field 304. This mathematical model, or description, of the work field 304 may include and/or be a CAD model of the work field 304 or a 2-D projection of the CAD model of the work field 304, for example.
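As an illustrative, non-exclusive sketch of the coordinate bookkeeping described above, the relationship between a camera coordinate system and a work-field coordinate system may be represented as a homogeneous transform, so that a point known in work-field coordinates can be expressed in camera coordinates. The rotation angle, translation, and test point below are hypothetical values chosen for illustration, and NumPy is assumed:

```python
import numpy as np

def make_transform(rotation_deg_z, translation):
    """Homogeneous 4x4 transform: rotation about the z axis, then translation."""
    t = np.radians(rotation_deg_z)
    c, s = np.cos(t), np.sin(t)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation
    return T

# Hypothetical offset of the work-field frame relative to the camera frame.
cam_T_field = make_transform(90.0, [0.5, 0.0, 2.0])

# A point known in work-field coordinates, expressed in camera coordinates.
p_field = np.array([1.0, 0.0, 0.0, 1.0])
p_cam = cam_T_field @ p_field
```

A global resync operation or precision tooling, as described above, would serve to establish `cam_T_field` (a hypothetical name) in practice.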
  • In some examples of methods 100 and systems 300, the camera system 308 is supported by a camera mount 312, such as a robotic arm. That is, the camera system 308 may be mounted to, mounted with, or mounted as, an end effector of a robotic arm. In some such systems 300, the controller 310 is communicatively coupled to the camera mount to receive location and orientation data of the camera system 308 relative to the work field 304. As discussed, taking into account, knowing, determining, and/or quantifying the location and orientation of the camera system 308 relative to the work field 304 facilitates analysis of the image data for making the determination of whether the object 302 has a specific feature.
  • In some examples of methods 100 and systems 300, the image data comprises color data. For example, a feature 306 may be a specific color. However, in applications where the color of the object 302 is not needed, or is not otherwise indicative of whether the object 302 corresponds to a desired object, the image data may not comprise color data. In such examples, the camera system 308 may capture black-and-white or greyscale data.
  • Some methods 100 further comprise illuminating 108 the work field 304 during the capturing 102, as schematically represented in FIG. 1. Accordingly, some systems 300 may further comprise an illumination device 314 that is configured to illuminate the work field 304 when the camera system 308 captures the image data, as schematically represented in FIG. 2. For example, the illumination device 314 may comprise a flash device that is communicatively coupled to, or a component of, the camera system 308, such that the flash illuminates the work field 304 when the camera system 308 captures the image data.
  • As schematically represented in FIG. 1, some methods 100 further comprise following the capturing 102 the image data and prior to the applying 104 the filter, subtracting 110, from the image data, data corresponding to portions of the work field 304 that are not the object 302 based on a known color of the portions of the work field 304 that are not the object 302. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller to subtract data corresponding to portions of the work field 304 that are not the object 302. For example, in some applications, the work field 304 may be uniform in color other than one or more objects 302 present within the work field 304. As a more specific example, the work field 304 may comprise a sheet material in a raw or painted state, with the raw or painted state having a specific color. Accordingly, the data associated with the specific color may be subtracted from the image data, so that only, or generally only, data associated with one or more objects 302 within the work field 304 remains. The subtracting of color data additionally or alternatively may be described as chroma keying.
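The subtracting 110 described above may be sketched as follows; the image, known background color, and per-channel tolerance below are hypothetical values, and NumPy is assumed:

```python
import numpy as np

def subtract_background(image, background_color, tolerance=30):
    """Zero out pixels within `tolerance` (per channel) of the known background
    color, keeping only candidate object pixels."""
    diff = np.abs(image.astype(int) - np.asarray(background_color, dtype=int))
    is_background = np.all(diff <= tolerance, axis=-1)
    result = image.copy()
    result[is_background] = 0
    return result, ~is_background

# Hypothetical 2x2 RGB image: green-painted sheet with one red fastener pixel.
img = np.array([[[0, 200, 0], [10, 190, 5]],
                [[200, 20, 20], [0, 205, 0]]], dtype=np.uint8)
masked, object_mask = subtract_background(img, background_color=(0, 200, 0))
```

Here only the single pixel that differs markedly from the known sheet color survives as an object candidate.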
  • Some methods 100 further comprise following the capturing 102 the image data and prior to the applying 104 the filter, transforming 112 the image data into HSV (hue, saturation, value) domain. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller to transform the image data into HSV domain prior to application of the filter. While not required in all methods 100 and systems 300, such transformation of the image data into HSV domain may facilitate the subtraction of color data associated with the portions of the work field 304 that are not an object 302.
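A minimal illustration of why the HSV domain facilitates such color subtraction, using Python's standard `colorsys` module: two pixels of the same paint color at different illumination levels share a hue and saturation, differing only in value, so a hue-based subtraction is more robust to lighting than an RGB comparison. The pixel values are hypothetical:

```python
import colorsys

# The same red paint seen brightly lit and dimly lit (hypothetical RGB values).
h_bright, s_bright, v_bright = colorsys.rgb_to_hsv(1.0, 0.1, 0.1)
h_dim, s_dim, v_dim = colorsys.rgb_to_hsv(0.5, 0.05, 0.05)

# Hue (and saturation) are unchanged; only value reflects the illumination.
same_color = (h_bright == h_dim) and abs(s_bright - s_dim) < 1e-9
```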
  • Additionally or alternatively, some methods 100 further comprise following the capturing 102 the image data and prior to the applying 104 the filter, binary thresholding 114 the image data based on a presumed feature of the one or more predetermined features 306, and responsive to the binary thresholding 114, identifying 116 one or more groups of pixels as candidates for the presumed feature. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller to binary threshold the image data based on a presumed feature of the one or more predetermined features 306 prior to application of the filter, and responsive to the image data being binary thresholded, identify one or more groups of pixels as candidates for the presumed feature. As illustrative, non-exclusive examples, the presumed feature may be one or more of a color, a shape, a size, indicia, and a thread configuration associated with an object that is expected to be within, or desired to be within, the work field 304. In some examples, the presumed feature corresponds to an expected or desired object to be within the work field 304 based on a database associated with the work field 304. For example, a product being assembled or manufactured may have a parts list saved within the database, in which features of the parts are identified. Accordingly, if an object that is expected, or desired, to be within the work field 304 has a presumed feature, such as a specific color, then the binary thresholding 114 based on the presumed feature facilitates the identification of pixels, or a subset of the image data, as candidates for the presumed feature. As a more specific example, a fastener, or portion of a fastener, that is intended to be used in a specific assembly of a product may have a specific color. 
By binary thresholding the image data based on the specific color, specific groups of pixels corresponding to the specific color are identified within the image data for further processing for the ultimate determination of whether or not the object 302 is in fact the object that is desired to be present, or whether or not the object 302 is in a desired configuration, such as being fully installed, properly positioned, assembled with a distinct component, etc.
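The binary thresholding 114 and identifying 116 may be sketched as follows; the hue channel values and threshold band are hypothetical, and connected-component grouping is implemented directly for illustration (a library routine would normally be used):

```python
import numpy as np
from collections import deque

def binary_threshold(channel, lo, hi):
    """1 where the channel value falls inside the presumed-feature band, else 0."""
    return ((channel >= lo) & (channel <= hi)).astype(np.uint8)

def label_groups(mask):
    """4-connected component labelling: returns a list of pixel-coordinate groups."""
    seen = np.zeros(mask.shape, dtype=bool)
    groups = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                group, queue = [], deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    group.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                groups.append(group)
    return groups

# Hypothetical hue channel; the presumed fastener color falls in the band 8-15.
hue = np.array([[10, 12, 90, 90],
                [11, 11, 90, 13],
                [90, 90, 90, 12]])
mask = binary_threshold(hue, lo=8, hi=15)
groups = label_groups(mask)   # each group is a candidate for the presumed feature
```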
  • Some such methods 100 further comprise noise filtering 118 (i.e., applying a noise filter (e.g., a binary noise filter) to) the image data following the binary thresholding 114, in which case, the identifying 116 the one or more groups of pixels is further responsive to the noise filtering 118. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller to noise filter (i.e., apply a noise filter (e.g., a binary noise filter) to) the image data after the image data is binary thresholded, and identification of the one or more groups of pixels is further responsive to the image data being noise filtered. For example, depending on whether or not the work field 304 was illuminated and/or on the degree or quality of illumination of the work field 304, the image data following the binary thresholding 114 may be noisy, and noise filtering 118 may clean up the image data and facilitate the ultimate determination of whether or not the object 302 is in fact the object that is desired to be present, or whether or not the object 302 is in a desired configuration, such as being fully installed, properly positioned, assembled with a distinct component, etc.
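One plausible binary noise filter (the disclosure does not mandate a particular filter) is morphological opening, which removes isolated speckle while preserving larger candidate regions. A minimal NumPy sketch with a 3x3 structuring element:

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 structuring element (zero-padded borders)."""
    padded = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 structuring element."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def open_filter(mask):
    """Morphological opening: erosion then dilation removes speckle noise."""
    return dilate(erode(mask))

# A 3x3 solid candidate region survives opening; a single stray pixel does not.
noisy = np.zeros((7, 7), dtype=np.uint8)
noisy[1:4, 1:4] = 1          # genuine candidate region
noisy[5, 5] = 1              # speckle noise
cleaned = open_filter(noisy)
```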
  • In some methods 100 and systems 300, the specific feature, the presence of which is being determined and which corresponds to an aspect of the filter being applied to the image data when creating the filtered image data, is a presumed texture of the one or more predetermined features. In some such examples, the presumed texture corresponds to an expected or desired object to be within the work field 304 based on a database associated with the work field 304. For example, and as discussed, a product being assembled or manufactured may have a parts list saved within the database, in which features of the parts are identified. Accordingly, if an object that is expected, or desired, to be within the work field 304 has a presumed feature, such as a specific texture, then the application of the filter facilitates the ultimate determination of whether or not the object 302 has the presumed feature. As a more specific example, a fastener that is intended to be used in a specific assembly of a product may have threads that give the fastener a texture corresponding to the specific thread size, or pitch.
  • In some examples of methods 100 and systems 300, the filter is a Gabor filter. For example, for a thread pitch associated with a desired fastener, and based on relative and/or absolute position and/or orientation data associated with the work field 304 and/or the camera system 308 (e.g., known or acquired during a method 100 and/or by a system 300), an appropriate Gabor filter may be selected, or derived, and applied to the image data, such as to the one or more groups of pixels that were identified as a result of the binary thresholding 114. The greater the response of the Gabor filter to the group(s) of pixels, the greater the confidence that a group of pixels actually represents a feature (e.g., a specific thread pitch) of an object that is desired to be within the work field 304.
  • As an illustrative, non-exclusive example, the desired fastener may have a known thread pitch, which may be defined along a longitudinal axis of the known fastener. A location of the object 302 within the system 300, within the work field 304, and/or relative to the camera system 308 may be known and/or acquired (such as discussed herein during methods 100 and/or utilizing systems 300). This known thread pitch, which may exhibit a corresponding known wavelength, may be scaled, such as via a perspective projection of the object 302, based upon the location of the object 302 to generate a scaled wavelength. This scaled wavelength then may be utilized to create, generate, and/or select a Gabor filter, or a family of directional Gabor filters. The Gabor filter, or the family of directional Gabor filters, then may be applied to the image data. If the image data includes the desired fastener (i.e., if the object 302 is the desired fastener), which has the known thread pitch, the Gabor filter will exhibit a strong response. Conversely, if the image data does not include the desired fastener (i.e., if the object 302 is not the desired fastener), which has the known thread pitch, the Gabor filter will exhibit a weak, or a weaker, response.
  • As another illustrative, non-exclusive example, a longitudinal axis of the object 302 may be oriented at an angle relative, or with respect to, an imaging plane of the camera system 308. Under these conditions, the above-described scaled wavelength further may be multiplied by an angular correction factor to produce and/or generate an angle-corrected and scaled wavelength. This angle-corrected and scaled wavelength then may be utilized to create, generate, and/or select the Gabor filter, or the family of directional Gabor filters, which may be applied to the image data, as discussed above. An example of the angular correction factor includes, or is, a cosine of the angle at which the longitudinal axis of the object 302 is oriented relative to the imaging plane.
  • As yet another illustrative, non-exclusive example, the camera system 308 may be rotated about the object 302, about a location of the object 302, and/or about the longitudinal axis of the object 302. This rotation of the camera system 308 causes a rotation of the image plane, thereby providing three-dimensional information about the object 302 and permitting systems 300 to utilize the family of directional Gabor filters.
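The Gabor filtering described above may be sketched as follows; the thread pitch, tilt angle, and filter parameters are hypothetical values for illustration. The kernel's wavelength is set to the angle-corrected, scaled wavelength, and its response to a texture at the expected thread frequency is compared against its response to a markedly coarser texture, showing the strong-versus-weak response behavior described above:

```python
import numpy as np

def gabor_kernel(wavelength, theta=0.0, sigma=4.0, half=10):
    """Real (even-symmetric) Gabor kernel: Gaussian envelope times cosine carrier."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / wavelength)

# Hypothetical numbers: thread pitch projected to 8 px; fastener tilted 30 degrees
# relative to the imaging plane, so the wavelength is scaled by cos(angle).
pitch_px = 8.0
tilt = np.radians(30.0)
wavelength = pitch_px * np.cos(tilt)   # angle-corrected, scaled wavelength

kernel = gabor_kernel(wavelength)

# Synthetic textures on the same grid: one at the expected thread frequency,
# one three times coarser (i.e., the wrong thread pitch).
y, x = np.mgrid[-10:11, -10:11]
matching = np.cos(2 * np.pi * x / wavelength)
coarser = np.cos(2 * np.pi * x / (3 * wavelength))

resp_match = abs((kernel * matching).sum())
resp_coarse = abs((kernel * coarser).sum())
```

The matched texture produces a much larger filter response, which is the basis for the confidence measure described above.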
  • In some methods 100 and systems 300, the filtered data comprises, or may be described as, one or more blobs of pixels that are candidates for being representative of the object 302. Some such methods 100 further comprise, following the applying 104 the filter, analyzing 120 the one or more blobs (i.e., localized groupings) of pixels to determine presence of one or more blob features. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller to analyze the one or more blobs of pixels to determine presence of one or more blob features following application of the filter. For example, the one or more blob features may comprise one or more of blob area, blob eccentricity, blob dimensions, blob brightness, blob correlation, and blob homogeneity. In some such methods 100 and systems 300, the one or more blob features are associated with the one or more predetermined features 306 in a database associated with the work field 304. As discussed, a product being assembled or manufactured may have a parts list saved within the database, in which features of the parts are identified. Accordingly, if an object that is expected, or desired, to be within the work field 304 has a presumed feature that corresponds to an expected area, eccentricity, dimension, brightness, correlation, and/or homogeneity of the one or more blobs, then the analysis of the one or more blobs facilitates the ultimate determination of whether or not the object 302 has the one or more predetermined features.
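The blob analysis 120 may be sketched by computing blob area and eccentricity from second-order central moments of a blob's pixel coordinates; the blob shapes below are hypothetical:

```python
import numpy as np

def blob_features(pixels):
    """Area, centroid, and eccentricity of a blob from its pixel coordinates,
    via the eigenvalues of the second-order central moment matrix."""
    pts = np.asarray(pixels, dtype=float)
    area = len(pts)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    cov = d.T @ d / area
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # major, minor axis variance
    ecc = np.sqrt(1 - eigvals[1] / eigvals[0]) if eigvals[0] > 0 else 0.0
    return area, centroid, ecc

# Hypothetical blobs: a 3x3 square (low eccentricity) and a 1x9 line (high).
square = [(r, c) for r in range(3) for c in range(3)]
line = [(0, c) for c in range(9)]
area_square, _, ecc_square = blob_features(square)
_, _, ecc_line = blob_features(line)
```

A compact, round blob (e.g., a nut seen end-on) yields low eccentricity, while an elongated blob (e.g., an exposed bolt shank) yields eccentricity near one, supporting the database comparison described above.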
  • Some such methods 100 further comprise training 122 a machine learning model to identify the one or more predetermined features 306 associated with the one or more blob features. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller to train a machine learning model to identify the one or more predetermined features 306 associated with the one or more blob features. As an illustrative, non-exclusive example, the machine learning model may be a support vector machine (SVM).
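As an illustrative, non-exclusive sketch of the training 122, a minimal linear SVM may be trained by hinge-loss subgradient descent on hypothetical blob-feature vectors; a production system would typically use an established SVM library, and the feature values and labels below are invented for illustration:

```python
import numpy as np

def train_linear_svm(X, y, epochs=300, lr=0.1, lam=0.01):
    """Hinge-loss subgradient descent for a linear SVM; labels must be -1/+1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:       # margin violated: pull toward example
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # margin satisfied: only weight decay
                w -= lr * lam * w
    return w, b

# Hypothetical blob features per candidate: [normalised area, eccentricity].
X = np.array([[0.90, 0.10], [0.80, 0.20], [1.00, 0.15],   # nut installed
              [0.30, 0.90], [0.20, 0.95], [0.25, 0.85]])  # bare bolt tip
y = np.array([1, 1, 1, -1, -1, -1])

w, b = train_linear_svm(X, y)
predictions = np.where(X @ w + b >= 0, 1, -1)
```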
  • In some methods 100 and systems 300, the camera system 308 is a stereo camera system, and therefore the image data comprises two images. Such methods 100 further comprise creating 124 a point cloud of the filtered data. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller 310 to create a point cloud of the filtered data.
  • Some such methods 100 further comprise during the capturing 102 the image data, projecting 126 a light texture on the work field 304. Corresponding systems 300 further comprise a projector 316 that is configured to project a light texture on the work field 304. In some such methods 100 and systems 300, the light texture comprises a pattern, while in other such methods 100 and systems 300, the light texture is random. By projecting a light texture on the work field 304, the two images captured by the camera system 308 may be resolved to create the point cloud. For example, in some methods 100, creating 124 the point cloud may comprise generating 128 a disparity map from the two images based on the light texture.
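The generating 128 of a disparity map may be sketched with naive block matching on a hypothetical random projected texture; a real stereo pipeline would use calibrated, rectified imagery and a library matcher, so this is only a sketch of the principle that the projected texture makes the two views resolvable:

```python
import numpy as np

def disparity_map(left, right, max_disp=4, win=1):
    """Naive block matching: for each pixel, the horizontal shift minimising the
    sum of absolute differences (SAD) between windows in the two views."""
    rows, cols = left.shape
    disp = np.zeros((rows, cols), dtype=int)
    pad_l = np.pad(left.astype(int), win)
    pad_r = np.pad(right.astype(int), win)
    for r in range(rows):
        for c in range(cols):
            best, best_d = None, 0
            patch_l = pad_l[r:r + 2 * win + 1, c:c + 2 * win + 1]
            for d in range(max_disp + 1):
                if c - d < 0:
                    break
                patch_r = pad_r[r:r + 2 * win + 1, c - d:c - d + 2 * win + 1]
                sad = np.abs(patch_l - patch_r).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[r, c] = best_d
    return disp

# A random projected light texture, shifted 2 px between the two views.
rng = np.random.default_rng(0)
texture = rng.integers(0, 256, size=(8, 12))
left = texture
right = np.roll(texture, -2, axis=1)   # hypothetical 2 px stereo disparity
disp = disparity_map(left, right)
```

With the disparity known, depth (and hence the point cloud) follows from the calibrated focal length and baseline, since depth is inversely proportional to disparity.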
  • More specifically, some methods 100 further comprise selecting 130 pixels associated with the object 302, and comparing 132 the pixels associated with the object 302 to a computer model (e.g., a CAD model) of an expected or desired object from a database associated with the work field 304. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller to select pixels associated with the object 302, and compare the pixels associated with the object 302 to a computer model of an expected or desired object from a database associated with the work field 304. As an illustrative, non-exclusive example, a computer model of a fastener may include a representation of a cylinder corresponding to the shaft of the fastener, and comparing the computer model with the pixels associated with the object 302 may include fitting the cylinder to the pixels by using normal and radius estimates.
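The comparing 132 may be sketched, under the simplifying assumption that the fastener axis is already known, by estimating the shaft radius from the point-cloud pixels selected for the object and checking it against the CAD radius; the cylinder geometry and tolerance below are hypothetical:

```python
import numpy as np

def fit_cylinder_radius(points, axis_point, axis_dir):
    """Radius estimate: mean perpendicular distance of the points from the axis."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    along = rel @ axis_dir                       # component along the axis
    perp = rel - np.outer(along, axis_dir)       # perpendicular component
    return np.linalg.norm(perp, axis=1).mean()

# Hypothetical point cloud sampled on a cylinder of radius 3 about the z axis.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
z = np.linspace(0, 10, 50)
pts = np.column_stack([3 * np.cos(theta), 3 * np.sin(theta), z])

r_est = fit_cylinder_radius(pts, axis_point=np.zeros(3),
                            axis_dir=np.array([0.0, 0.0, 1.0]))

# Accept the match if the estimate is close to the CAD radius of the shaft.
cad_radius = 3.0
matches = abs(r_est - cad_radius) < 0.1
```

A full implementation would also estimate the axis itself, e.g., from surface normal estimates as described above.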
  • In some applications, the work field 304 comprises a plurality of objects 302. Accordingly, some methods 100 are performed in connection with each of the plurality of objects. In corresponding systems 300, the computer-readable instructions, when executed, therefore further cause the controller 310 to determine if each of the plurality of objects 302 has the specific feature.
  • With continued reference to FIGS. 1 and 2, also within the scope of the present disclosure are robotic installation methods 200 and robotic installation systems 400. As schematically represented in FIG. 1 and with general reference to FIG. 2, robotic installation methods 200 comprise performing an automated machine vision method 100 according to the present disclosure, in which the camera system 308 is mounted to, mounted with, or mounted as an end effector of a robotic arm 402, and based on the determining 106 of the performed method 100, instructing 202 the robotic arm 402 to install a component in a predetermined configuration relative to the object 302, and installing 204, using the robotic arm 402, the component in the predetermined configuration relative to the object 302. As schematically represented in FIG. 2, robotic installation systems 400 comprise a machine vision system 300 according to the present disclosure, and a robotic arm 402, in which the camera system 308 is mounted to, mounted with, or mounted as an end effector of the robotic arm 402, and in which the robotic arm 402 is configured to install a component in a predetermined configuration relative to the object 302 based in part on a determination by the controller 310 if the object 302 has the specific feature and instructions received from the controller 310. That is, the computer-readable instructions, when executed, further cause the controller to instruct the robotic arm to install the component in the predetermined configuration relative to the object.
  • Turning now to FIG. 3, an illustrative non-exclusive example of a robotic installation system 400 in the form of robotic installation system 401 is illustrated. Where appropriate, the reference numerals from the schematic illustration of FIG. 2 are used to designate corresponding parts of robotic installation system 401; however, the example of FIG. 3 is non-exclusive and does not limit robotic installation systems 400 to the illustrated embodiment of FIG. 3. That is, robotic installation systems 400 are not limited to the specific embodiment of the illustrated robotic installation system 401, and robotic installation systems 400 may incorporate any number of the various aspects, configurations, characteristics, properties, etc. of robotic installation systems 400 that are illustrated in and discussed with reference to the schematic representation of FIG. 2 and/or the embodiments of FIG. 3, as well as variations thereof, without requiring the inclusion of all such aspects, configurations, characteristics, properties, etc. For the purpose of brevity, each previously discussed component, part, portion, aspect, region, etc. or variants thereof may not be discussed, illustrated, and/or labeled again with respect to robotic installation system 401; however, it is within the scope of the present disclosure that the previously discussed features, variants, etc. may be utilized with robotic installation system 401.
  • As illustrated in FIG. 3, robotic installation system 401 comprises a robotic arm 402 having, as end effectors 404, a camera system 308 and a nut runner 406. The robotic arm 402 is positioned relative to work field 304 that comprises a plurality of objects 302 in the form of fasteners extending from a sheet material 414. More specifically, in the depicted example, a plurality of bolts 408 extend through the sheet material 414, with a subset of the bolts 408 having nuts 410 installed thereon and defining a fastener pair 412. A controller 310 is schematically presented in communication with the camera system 308, the nut runner 406, and the robotic arm 402, with this operative communication schematically represented by lightning bolts; the communication may be wired and/or wireless. Accordingly, robotic installation system 401 is configured to perform methods 100 and methods 200 according to the present disclosure. More specifically, robotic installation system 401 utilizes its machine vision system 300 to determine if the objects 302 have one or more predetermined features 306, such as whether the correct bolts 408 and nuts 410 are present, whether a bolt 408 is correctly or fully positioned for operative receipt of a nut 410, whether a nut 410 is properly and operatively positioned on a corresponding bolt 408, which bolts 408 still need nuts 410, etc. Then, based on such determination, the robotic arm 402 may operatively install additional nuts 410 utilizing the nut runner 406 and/or may not install a nut 410, for example, where an incorrect bolt 408 is positioned or where a bolt 408 is not operatively positioned to receive a nut 410.
  • Turning now to FIGS. 4 and 5, embodiments of the present disclosure may be described in the context of an aircraft manufacturing and service method 500 as shown in FIG. 4 and an aircraft 600 as shown in FIG. 5. That is, a method 500 may include one or more of a method 100 or method 200 according to the present disclosure, and an aircraft 600 may be manufactured or serviced utilizing a system 300 or a system 400 according to the present disclosure, with the aircraft 600 comprising a work field 304, for example.
  • During pre-production, exemplary method 500 may include specification and design 504 of the aircraft 600 and material procurement 506. During production, component and subassembly manufacturing 508 and system integration 510 of the aircraft 600 take place. Thereafter, the aircraft 600 may go through certification and delivery 512 in order to be placed in service 514. While in service with a customer, the aircraft 600 is scheduled for routine maintenance and service 516 (which may also include modification, reconfiguration, refurbishment, and so on).
  • Each of the processes of method 500 may be performed or carried out by a system integrator, a third party, and/or an operator (e.g., a customer). For the purposes of this description, a system integrator may include without limitation any number of aircraft manufacturers and major-system subcontractors; a third party may include without limitation any number of vendors, subcontractors, and suppliers; and an operator may be an airline, leasing company, military entity, service organization, and so on.
  • As shown in FIG. 5, the aircraft 600 produced by exemplary method 500 may include an airframe 602 with a plurality of systems 604 and an interior 606. Examples of high-level systems 604 include one or more of a propulsion system 608, an electrical system 610, a hydraulic system 612, and an environmental system 614. Any number of other systems also may be included. Although an aerospace example is shown, the principles of the inventions disclosed herein may be applied to other industries, such as the automotive industry.
  • Apparatus and methods disclosed herein may be employed during any one or more of the stages of the production and service method 500. For example, components or subassemblies corresponding to production stage 508 may be fabricated or manufactured in a manner similar to components or subassemblies produced while the aircraft 600 is in service. Also, one or more apparatus embodiments, method embodiments, or a combination thereof may be utilized during the production stages 508 and 510, for example, by substantially expediting assembly of or reducing the cost of an aircraft 600. Similarly, one or more apparatus embodiments, method embodiments, or a combination thereof may be utilized while the aircraft 600 is in service, for example and without limitation, during maintenance and service 516.
  • Illustrative, non-exclusive examples of inventive subject matter according to the present disclosure are described in the following enumerated paragraphs:
  • A. An automated machine vision method for determining if an object within a work field has one or more predetermined features, the automated machine vision method comprising:
  • capturing image data of the work field from a camera system;
  • applying a filter to the image data to create filtered data, wherein the filter comprises an aspect corresponding to a specific feature of the one or more predetermined features; and
  • based at least in part on the applying, determining if the object has the specific feature.
  • A1. The automated machine vision method of paragraph A, wherein at least one of:
  • (i) the camera system is positioned in a known position and orientation relative to the work field; and
  • (ii) the automated machine vision method further comprises determining a relative orientation between the camera system and the work field.
  • A2. The automated machine vision method of any of paragraphs A-A1, wherein the camera system is mounted to, mounted with, or mounted as, an end effector of a robotic arm.
  • A3. The automated machine vision method of any of paragraphs A-A2, wherein the image data comprises color data.
  • A4. The automated machine vision method of any of paragraphs A-A3, further comprising:
  • illuminating the work field during the capturing.
  • A5. The automated machine vision method of any of paragraphs A-A4, further comprising following the capturing the image data and prior to the applying the filter:
  • subtracting, from the image data, data corresponding to portions of the work field that are not the object based on a known color of the portions of the work field that are not the object.
  • A6. The automated machine vision method of any of paragraphs A-A5, further comprising following the capturing the image data and prior to the applying the filter:
  • transforming the image data into HSV domain.
  • A7. The automated machine vision method of any of paragraphs A-A6, further comprising following the capturing the image data and prior to the applying the filter:
  • binary thresholding the image data based on a presumed feature of the one or more predetermined features; and
  • responsive to the binary thresholding, identifying one or more groups of pixels as candidates for the presumed feature.
  • A7.1. The automated machine vision method of paragraph A7, wherein the presumed feature corresponds to an expected or desired object to be within the work field based on a database associated with the work field.
  • A7.2. The automated machine vision method of any of paragraphs A7-A7.1, wherein the presumed feature is one or more of a color, a shape, a size, indicia, and a thread configuration.
  • A7.3. The automated machine vision method of any of paragraphs A7-A7.2, further comprising:
  • noise filtering the image data following the binary thresholding;
  • wherein the identifying the one or more groups of pixels is further responsive to the noise filtering.
  • A8. The automated machine vision method of any of paragraphs A-A7.3, wherein the specific feature is a presumed texture of the one or more predetermined features.
  • A8.1. The automated machine vision method of paragraph A8, wherein the presumed texture corresponds to an/the expected or desired object to be within the work field based on a/the database associated with the work field.
  • A8.2. The automated machine vision method of any of paragraphs A8-A8.1, wherein the presumed texture corresponds to a fastener thread size.
  • A9. The automated machine vision method of any of paragraphs A-A8.2, wherein the filter is a Gabor filter.
  • A10. The automated machine vision method of any of paragraphs A-A9, wherein the filtered data comprises one or more blobs of pixels that are candidates for being representative of the object.
  • A10.1. The automated machine vision method of paragraph A10, further comprising: following the applying the filter, analyzing the one or more blobs of pixels to determine presence of one or more blob features.
  • A10.1.1. The automated machine vision method of paragraph A10.1, wherein the one or more blob features comprise one or more of blob area, blob eccentricity, blob dimensions, blob brightness, blob correlation, and blob homogeneity.
  • A10.1.1.1. The automated machine vision method of paragraph A10.1.1, wherein the one or more blob features are associated with the one or more predetermined features in a/the database associated with the work field.
  • A10.1.2. The automated machine vision method of any of paragraphs A10-A10.1.1.1, further comprising:
  • training a machine learning model to identify the one or more predetermined features associated with the one or more blob features.
  • A11. The automated machine vision method of any of paragraphs A-A10.1.2, wherein the camera system is a stereo camera system, wherein the image data comprises two images, and wherein the automated machine vision method further comprises:
  • creating a point cloud of the filtered data.
  • A11.1. The automated machine vision method of paragraph A11, further comprising:
  • during the capturing the image data, projecting a light texture on the work field.
  • A11.1.1. The automated machine vision method of paragraph A11.1, wherein the light texture comprises a pattern.
  • A11.1.2. The automated machine vision method of paragraph A11.1, wherein the light texture is random.
  • A11.1.3. The automated machine vision method of any of paragraphs A11.1-A11.1.2, wherein the creating the point cloud comprises generating a disparity map from the two images based on the light texture.
  • A11.1.3.1 The automated machine vision method of paragraph A11.1.3, further comprising:
  • selecting pixels associated with the object; and
  • comparing the pixels associated with the object to a computer model of an/the expected or desired object from a/the database associated with the work field.
  • A12. The automated machine vision method of any of paragraphs A-A11.1.3.1, wherein the object comprises a fastener or fastener pair.
  • A13. The automated machine vision method of any of paragraphs A-A12, wherein the work field comprises a plurality of objects, and wherein the method is performed in connection with each of the plurality of objects.
  • A14. The automated machine vision method of any of paragraphs A-A13, wherein the work field comprises a plurality of objects extending from a sheet material.
  • A14.1. The automated machine vision method of paragraph A14 when depending from paragraph A5, wherein the portions of the work field that are not the object correspond to the sheet material.
  • A15. The automated machine vision method of any of paragraphs A-A14.1, wherein the one or more predetermined features comprise one or more of a color, a size, a shape, indicia, and a thread configuration.
  • A16. The automated machine vision method of any of paragraphs A-A15, performed by the machine vision system of any of paragraphs C-C15.
  • B. A robotic installation method, comprising:
  • performing the automated machine vision method of any of paragraphs A-A16, wherein the camera system is mounted to, mounted with, or mounted as an/the end effector of a/the robotic arm;
  • based on the determining, instructing the robotic arm to install a component in a predetermined configuration relative to the object; and
  • installing, using the robotic arm, the component in the predetermined configuration relative to the object.
  • B1. The robotic installation method of paragraph B, wherein the object and the component comprise a/the fastener pair.
  • C. A machine vision system for determining if an object within a work field has one or more predetermined features, the machine vision system comprising:
  • a camera system configured to capture image data of the work field; and
  • a controller communicatively coupled to the camera system to receive the image data from the camera system, wherein the controller comprises non-transitory computer-readable media having computer-readable instructions that, when executed, cause the controller to:
      • apply a filter to the image data to create filtered data, wherein the filter comprises an aspect corresponding to a specific feature of the one or more predetermined features; and
      • based at least in part on application of the filter, determine if the object has the specific feature.
  • C1. The machine vision system of paragraph C, further comprising:
  • a camera mount, wherein the camera system is supported by the camera mount;
  • wherein the controller is communicatively coupled to the camera mount to receive location and orientation data of the camera system relative to the work field.
  • C1.1. The machine vision system of paragraph C1, wherein the camera mount is a robotic arm.
  • C2. The machine vision system of any of paragraphs C-C1.1, wherein the image data comprises color data.
  • C3. The machine vision system of any of paragraphs C-C2, further comprising: an illumination device configured to illuminate the work field when the camera system captures the image data.
  • C4. The machine vision system of any of paragraphs C-C3, wherein the computer-readable instructions, when executed, further cause the controller to:
  • prior to application of the filter, subtract, from the image data, data corresponding to portions of the work field that are not the object based on a known color of the portions of the work field that are not the object.
  • C5. The machine vision system of any of paragraphs C-C4, wherein the computer-readable instructions, when executed, further cause the controller to:
  • prior to application of the filter, transform the image data into HSV domain.
  • C6. The machine vision system of any of paragraphs C-C5, wherein the computer-readable instructions, when executed, further cause the controller to:
  • prior to application of the filter, binary threshold the image data based on a presumed feature of the one or more predetermined features; and
  • responsive to the image data being binary thresholded, identify one or more groups of pixels as candidates for the presumed feature.
  • C6.1. The machine vision system of paragraph C6, wherein the presumed feature corresponds to an expected or desired object to be within the work field based on a database associated with the work field.
  • C6.2. The machine vision system of any of paragraphs C6-C6.1, wherein the presumed feature is one or more of a color, a shape, a size, indicia, and a thread configuration.
  • C6.3. The machine vision system of any of paragraphs C6-C6.2, wherein the computer-readable instructions, when executed, further cause the controller to:
  • noise filter the image data after the image data is binary thresholded;
  • wherein identification of the one or more groups of pixels is further responsive to the image data being noise filtered.
  • C7. The machine vision system of any of paragraphs C-C6.3, wherein the specific feature is a presumed texture of the one or more predetermined features.
  • C7.1. The machine vision system of paragraph C7, wherein the presumed texture corresponds to an/the expected or desired object to be within the work field based on a/the database associated with the work field.
  • C7.2. The machine vision system of any of paragraphs C7-C7.1, wherein the presumed texture corresponds to a fastener thread size.
  • C8. The machine vision system of any of paragraphs C-C7.2, wherein the filter is a Gabor filter.
  • C9. The machine vision system of any of paragraphs C-C8, wherein the filtered data comprises one or more blobs of pixels that are candidates for being representative of the object.
  • C9.1. The machine vision system of paragraph C9, wherein the computer-readable instructions, when executed, further cause the controller to:
  • following application of the filter, analyze the one or more blobs of pixels to determine presence of one or more blob features.
  • C9.1.1. The machine vision system of paragraph C9.1, wherein the one or more blob features comprise one or more of blob area, blob eccentricity, blob dimensions, blob brightness, blob correlation, and blob homogeneity.
  • C9.1.1.1. The machine vision system of paragraph C9.1.1, wherein the one or more blob features are associated with the one or more predetermined features in a/the database associated with the work field.
  • C9.1.2. The machine vision system of any of paragraphs C9-C9.1.1.1, wherein the computer-readable instructions, when executed, further cause the controller to:
  • train a machine learning model to identify the one or more predetermined features associated with the one or more blob features.
  • C10. The machine vision system of any of paragraphs C-C9.1.2, wherein the camera system is a stereo camera system, wherein the image data comprises two images, and wherein the computer-readable instructions, when executed, further cause the controller to:
  • create a point cloud of the filtered data.
  • C10.1. The machine vision system of paragraph C10, further comprising:
  • a projector configured to project a light texture on the work field.
  • C10.1.1. The machine vision system of paragraph C10.1, wherein the light texture comprises a pattern.
  • C10.1.2. The machine vision system of paragraph C10.1, wherein the light texture is random.
  • C10.1.3. The machine vision system of any of paragraphs C10.1-C10.1.2, wherein creation of the point cloud comprises generation of a disparity map from the two images based on the light texture.
  • C10.1.3.1. The machine vision system of paragraph C10.1.3, wherein the computer-readable instructions, when executed, further cause the controller to:
  • select pixels associated with the object; and
  • compare the pixels associated with the object to a computer model of an/the expected or desired object from a/the database associated with the work field.
  • C11. The machine vision system of any of paragraphs C-C10.1.3.1, wherein the object comprises a fastener or fastener pair.
  • C12. The machine vision system of any of paragraphs C-C11, wherein the work field comprises a plurality of objects, and wherein the computer-readable instructions, when executed, further cause the controller to determine if each of the plurality of objects has the specific feature.
  • C13. The machine vision system of any of paragraphs C-C12, wherein the work field comprises a plurality of objects extending from a sheet material.
  • C13.1. The machine vision system of paragraph C13 when depending from paragraph C4, wherein the portions of the work field that are not the object correspond to the sheet material.
  • C14. The machine vision system of any of paragraphs C-C13.1, wherein the one or more predetermined features comprise one or more of a color, a size, a shape, indicia, and a thread configuration.
  • C15. The machine vision system of any of paragraphs C-C14, configured to perform the automated machine vision method of any of paragraphs A-A16.
  • C16. The use of the machine vision system of any of paragraphs C-C15 to determine if the object within the work field has the one or more predetermined features.
  • D. A robotic installation system, comprising:
  • the machine vision system of any of paragraphs C-C15; and
  • a/the robotic arm, wherein the camera system is mounted to, mounted with, or mounted as an end effector of the robotic arm, and wherein the robotic arm is configured to install a component in a predetermined configuration relative to the object based in part on a determination by the controller that the object has the specific feature;
  • wherein the computer-readable instructions, when executed, further cause the controller to instruct the robotic arm to install the component in the predetermined configuration relative to the object.
  • D1. The robotic installation system of paragraph D, wherein the object and the component comprise a/the fastener pair.
  • D2. The use of the robotic installation system of any of paragraphs D-D1 to install the component in the predetermined configuration relative to the object.
  • As used herein, the terms “adapted” and “configured” mean that the element, component, or other subject matter is designed and/or intended to perform a given function. Thus, the use of the terms “adapted” and “configured” should not be construed to mean that a given element, component, or other subject matter is simply “capable of” performing a given function but that the element, component, and/or other subject matter is specifically selected, created, implemented, utilized, programmed, and/or designed for the purpose of performing the function. It is also within the scope of the present disclosure that elements, components, and/or other recited subject matter that is recited as being adapted to perform a particular function may additionally or alternatively be described as being configured to perform that function, and vice versa. Similarly, subject matter that is recited as being configured to perform a particular function may additionally or alternatively be described as being operative to perform that function.
  • As used herein, the term “and/or” placed between a first entity and a second entity means one of (1) the first entity, (2) the second entity, and (3) the first entity and the second entity. Multiple entries listed with “and/or” should be construed in the same manner, i.e., “one or more” of the entities so conjoined. Other entities optionally may be present other than the entities specifically identified by the “and/or” clause, whether related or unrelated to those entities specifically identified. Thus, as a non-limiting example, a reference to “A and/or B,” when used in conjunction with open-ended language such as “comprising,” may refer, in one example, to A only (optionally including entities other than B); in another example, to B only (optionally including entities other than A); in yet another example, to both A and B (optionally including other entities). These entities may refer to elements, actions, structures, steps, operations, values, and the like.
  • The various disclosed elements of apparatuses and steps of methods disclosed herein are not required to all apparatuses and methods according to the present disclosure, and the present disclosure includes all novel and non-obvious combinations and subcombinations of the various elements and steps disclosed herein. Moreover, one or more of the various elements and steps disclosed herein may define independent inventive subject matter that is separate and apart from the whole of a disclosed apparatus or method. Accordingly, such inventive subject matter is not required to be associated with the specific apparatuses and methods that are expressly disclosed herein, and such inventive subject matter may find utility in apparatuses and/or methods that are not expressly disclosed herein.

Claims (20)

1. An automated machine vision method for determining if an object within a work field has one or more predetermined features, the automated machine vision method comprising:
capturing image data of the work field from a camera system;
following the capturing the image data, binary thresholding the image data based on a presumed feature of the one or more predetermined features;
responsive to the binary thresholding, identifying one or more groups of pixels as candidates for the presumed feature;
following the identifying, applying a filter to the identified one or more groups of pixels to create filtered data, wherein the filter comprises an aspect corresponding to the presumed feature; and
based at least in part on the applying, determining if the object has the presumed feature.
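For illustration only, the thresholding and candidate-identification steps of claim 1 (together with the noise filtering of claim 10) might be sketched as follows. This is a NumPy/SciPy sketch, not the claimed implementation; the function name `candidate_groups`, the use of morphological opening as the noise filter, and connected-component labeling as the grouping step are all assumptions:

```python
import numpy as np
from scipy import ndimage  # assumed available for labeling and morphology

def candidate_groups(gray, thresh, min_area=4):
    """Binary-threshold an image, noise-filter the result, and identify
    groups of pixels as candidates for a presumed feature."""
    binary = gray > thresh                    # binary thresholding (claim 1)
    binary = ndimage.binary_opening(binary)   # noise filtering (claim 10)
    labels, n = ndimage.label(binary)         # group the surviving pixels
    groups = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:               # discard tiny residual groups
            groups.append({"area": int(ys.size),
                           "centroid": (float(ys.mean()), float(xs.mean()))})
    return groups
```

A subsequent filter, such as the Gabor filter of claim 11, would then be applied only to the identified groups rather than to the full image.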
2. The automated machine vision method of claim 1, wherein the presumed feature is a specific color.
3. The automated machine vision method of claim 1, wherein the presumed feature is a specific shape.
4. The automated machine vision method of claim 1, wherein the presumed feature is a specific size.
5. The automated machine vision method of claim 1, wherein the presumed feature is specific indicia.
6. The automated machine vision method of claim 1, wherein the presumed feature is a specific texture.
7. The automated machine vision method of claim 1, wherein at least one of:
(i) the camera system is positioned in a known position and orientation relative to the work field; and
(ii) the automated machine vision method further comprises determining a relative orientation between the camera system and the work field.
8. The automated machine vision method of claim 1, further comprising following the capturing the image data and prior to the applying the filter:
transforming the image data into HSV domain; and
subtracting, from the image data, data corresponding to portions of the work field that are not the object based on a known color of the portions of the work field that are not the object.
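The preprocessing of claim 8 — transforming the image data into the HSV domain and then subtracting pixels whose hue matches the known background color — could be approximated in NumPy as below. The vectorized conversion, the hue-wraparound distance, and the tolerance parameter `tol` are illustrative assumptions, not the patented method:

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB -> HSV; img is float in [0, 1] with shape (H, W, 3).
    Hue is returned in [0, 1)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)
    c = v - img.min(axis=-1)                  # chroma
    c_safe = np.where(c > 0, c, 1.0)          # avoid division by zero
    v_safe = np.where(v > 0, v, 1.0)
    s = np.where(v > 0, c / v_safe, 0.0)
    h = np.zeros_like(v)                      # piecewise hue by max channel
    h = np.where((c > 0) & (v == r), ((g - b) / c_safe) % 6.0, h)
    h = np.where((c > 0) & (v == g) & (v != r), (b - r) / c_safe + 2.0, h)
    h = np.where((c > 0) & (v == b) & (v != r) & (v != g),
                 (r - g) / c_safe + 4.0, h)
    return np.stack([h / 6.0, s, v], axis=-1)

def subtract_background(hsv, bg_hue, tol=0.05):
    """Zero out pixels whose hue lies within `tol` of the known background
    hue; returns the masked image and the keep-mask."""
    d = np.abs(hsv[..., 0] - bg_hue)
    d = np.minimum(d, 1.0 - d)                # hue wraps around the circle
    keep = d > tol                            # True = not background
    return hsv * keep[..., None], keep
```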
9. The automated machine vision method of claim 1, wherein the presumed feature corresponds to an expected or desired object to be within the work field based on a database associated with the work field.
10. The automated machine vision method of claim 1, further comprising:
noise filtering the image data following the binary thresholding;
wherein the identifying the one or more groups of pixels is further responsive to the noise filtering.
11. The automated machine vision method of claim 1, wherein the filter is a Gabor filter.
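A Gabor filter, as recited in claim 11, combines a Gaussian envelope with a sinusoidal carrier tuned to a spatial frequency and orientation — for example, a frequency corresponding to a fastener's thread pitch. A minimal NumPy sketch, with the isotropic envelope, kernel size, and σ chosen arbitrarily for illustration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier
    at spatial frequency `freq` (cycles/pixel) and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated axis
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * xr)
    return envelope * carrier

def gabor_response(gray, freq, theta):
    """Mean absolute correlation with the kernel — high where the image
    texture matches the kernel's frequency and orientation."""
    k = gabor_kernel(freq, theta)
    windows = sliding_window_view(gray, k.shape)        # valid-mode windows
    resp = np.einsum("ijkl,kl->ij", windows, k)
    return float(np.abs(resp).mean())
```

A bank of such kernels over several orientations and frequencies is the usual way to probe for a presumed texture such as a thread pattern.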
12. The automated machine vision method of claim 1, wherein the filtered data comprises one or more blobs of pixels that are candidates for being representative of the object, and wherein the automated machine vision method further comprises:
following the applying the filter, analyzing the one or more blobs of pixels to determine presence of one or more blob features, wherein the one or more blob features comprise one or more of blob area, blob eccentricity, blob dimensions, blob brightness, blob correlation, and blob homogeneity.
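Several of the blob features listed in claim 12 — area, dimensions, eccentricity, brightness — can be computed from pixel coordinates and second-order moments. The sketch below is illustrative only and assumes a single boolean blob mask covering at least two pixels:

```python
import numpy as np

def blob_features(mask, gray):
    """Area, bounding-box dimensions, eccentricity (from the covariance of
    pixel coordinates), and mean brightness for one boolean blob mask."""
    ys, xs = np.nonzero(mask)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    cov = np.cov(np.stack([ys, xs]).astype(float))      # 2x2 coord covariance
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]      # major, minor axis
    if evals[0] > 0:
        ratio = np.clip(evals[1] / evals[0], 0.0, 1.0)
        ecc = float(np.sqrt(1.0 - ratio))               # 0 = circular, 1 = line
    else:
        ecc = 0.0
    return {"area": int(ys.size), "dims": (h, w),
            "eccentricity": ecc,
            "brightness": float(gray[ys, xs].mean())}
```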
13. The automated machine vision method of claim 12, wherein the one or more blob features are associated with the one or more predetermined features in a database associated with the work field.
14. The automated machine vision method of claim 1, wherein the camera system is a stereo camera system, wherein the image data comprises two images, and wherein the automated machine vision method further comprises:
during the capturing the image data, projecting a light texture on the work field;
creating a point cloud of the filtered data, wherein the creating the point cloud comprises generating a disparity map from the two images based on the light texture;
selecting pixels associated with the object; and
comparing the pixels associated with the object to a computer model of an expected or desired object from a database associated with the work field.
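Once a disparity map exists, the point-cloud creation of claim 14 reduces to the standard stereo reprojection Z = f·B/d, X = (u − cx)·Z/f, Y = (v − cy)·Z/f. A sketch of that last step only — generating the disparity map itself from the two textured images is assumed done elsewhere, and the parameter names are illustrative:

```python
import numpy as np

def disparity_to_points(disp, focal, baseline, cx, cy):
    """Reproject a disparity map (pixels) into an N x 3 point cloud,
    skipping invalid (zero-disparity) pixels.
    focal: focal length in pixels; baseline: stereo baseline in meters;
    (cx, cy): principal point in pixels."""
    v, u = np.nonzero(disp > 0)            # valid pixels only
    d = disp[v, u].astype(float)
    z = focal * baseline / d               # depth from disparity
    x = (u - cx) * z / focal               # back-project through the pinhole
    y = (v - cy) * z / focal
    return np.stack([x, y, z], axis=1)
```

The resulting points for the selected object pixels could then be registered against a computer model of the expected object.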
15. The automated machine vision method of claim 1, wherein the object comprises a fastener.
16. A robotic installation method, comprising:
performing the automated machine vision method of claim 1, wherein the camera system is mounted to, mounted with, or mounted as an end effector of a robotic arm;
based on the determining, instructing the robotic arm to install a component in a predetermined configuration relative to the object; and
installing, using the robotic arm, the component in the predetermined configuration relative to the object.
17. The robotic installation method of claim 16, wherein the object and the component comprise a fastener pair.
18. A machine vision system for determining if an object within a work field has one or more predetermined features, the machine vision system comprising:
a camera system configured to capture image data of the work field; and
a controller communicatively coupled to the camera system to receive the image data from the camera system, wherein the controller comprises non-transitory computer-readable media having computer-readable instructions that, when executed, cause the controller to:
binary threshold the image data based on a presumed feature of the one or more predetermined features;
responsive to binary thresholding the image data, identify one or more groups of pixels as candidates for the presumed feature;
following the identification of the one or more groups of pixels, apply a filter to the identified one or more groups of pixels to create filtered data, wherein the filter comprises an aspect corresponding to the presumed feature; and
based at least in part on application of the filter, determine if the object has the presumed feature.
19. The machine vision system of claim 18, wherein the presumed feature comprises one or more of a specific color, a specific shape, a specific size, specific indicia, or a specific texture.
20. A robotic installation system, comprising:
the machine vision system of claim 18; and
a robotic arm, wherein the camera system is mounted to, mounted with, or mounted as an end effector of the robotic arm, and wherein the robotic arm is configured to install a component in a predetermined configuration relative to the object based in part on a determination by the controller that the object has the presumed feature;
wherein the computer-readable instructions, when executed, further cause the controller to instruct the robotic arm to install the component in the predetermined configuration relative to the object.
US16/848,307 2018-03-28 2020-04-14 Machine vision and robotic installation systems and methods Abandoned US20200242413A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/848,307 US20200242413A1 (en) 2018-03-28 2020-04-14 Machine vision and robotic installation systems and methods

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/939,127 US10657419B2 (en) 2018-03-28 2018-03-28 Machine vision and robotic installation systems and methods
US16/848,307 US20200242413A1 (en) 2018-03-28 2020-04-14 Machine vision and robotic installation systems and methods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/939,127 Continuation US10657419B2 (en) 2018-03-28 2018-03-28 Machine vision and robotic installation systems and methods

Publications (1)

Publication Number Publication Date
US20200242413A1 true US20200242413A1 (en) 2020-07-30

Family

ID=65443701

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/939,127 Active 2038-10-30 US10657419B2 (en) 2018-03-28 2018-03-28 Machine vision and robotic installation systems and methods
US16/848,307 Abandoned US20200242413A1 (en) 2018-03-28 2020-04-14 Machine vision and robotic installation systems and methods

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/939,127 Active 2038-10-30 US10657419B2 (en) 2018-03-28 2018-03-28 Machine vision and robotic installation systems and methods

Country Status (4)

Country Link
US (2) US10657419B2 (en)
EP (1) EP3557478B1 (en)
JP (1) JP2019194565A (en)
CN (1) CN110315529A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024043775A1 (en) 2022-08-25 2024-02-29 Sime Darby Plantation Intellectual Property Sdn Bhd Autonomous method and system for detecting, grabbing, picking and releasing of objects

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
US11593931B2 (en) 2020-06-09 2023-02-28 Howmedica Osteonics Corp. Surgical kit inspection systems and methods for inspecting surgical kits having parts of different types
KR102463522B1 (en) * 2020-11-04 2022-11-04 전남대학교산학협력단 Image-based structure motion measurement method using high-order image decomposition filter
EP4024034A1 (en) * 2021-01-05 2022-07-06 The Boeing Company Methods and apparatus for measuring fastener concentricity
CN117798654B (en) * 2024-02-29 2024-05-03 山西漳电科学技术研究院(有限公司) Intelligent adjusting system for center of steam turbine shafting

Family Cites Families (36)

Publication number Priority date Publication date Assignee Title
US6317953B1 (en) 1981-05-11 2001-11-20 Lmi-Diffracto Vision target based assembly
GB2204814B (en) 1987-04-23 1991-05-08 Nissan Motor Assembly method for component parts
US5579444A (en) * 1987-08-28 1996-11-26 Axiom Bildverarbeitungssysteme Gmbh Adaptive vision-based controller
US5727300A (en) 1995-02-07 1998-03-17 The Boeing Company Fastener verification system
US6448549B1 (en) * 1995-08-04 2002-09-10 Image Processing Systems, Inc. Bottle thread inspection system and method of operating the same
JPH11300670A (en) * 1998-04-21 1999-11-02 Fanuc Ltd Article picking-up device
GB0125079D0 (en) 2001-10-18 2001-12-12 Cimac Automation Ltd Auto motion:robot guidance for manufacturing
US7003161B2 (en) * 2001-11-16 2006-02-21 Mitutoyo Corporation Systems and methods for boundary detection in images
US7343034B2 (en) * 2003-10-30 2008-03-11 Illinois Tool Works Inc Method of inspecting threaded fasteners and a system therefor
US7755761B2 (en) 2004-11-12 2010-07-13 The Boeing Company Self-normalizing contour drilling machine
US20070188606A1 (en) 2006-02-16 2007-08-16 Kevin Atkinson Vision-based position tracking system
US7362437B2 (en) 2006-03-28 2008-04-22 The Boeing Company Vision inspection system device and method
US8050486B2 (en) 2006-05-16 2011-11-01 The Boeing Company System and method for identifying a feature of a workpiece
WO2008076942A1 (en) * 2006-12-15 2008-06-26 Braintech Canada, Inc. System and method of identifying objects
US8224121B2 (en) 2007-06-15 2012-07-17 The Boeing Company System and method for assembling substantially distortion-free images
US8978967B2 (en) 2007-10-31 2015-03-17 The Boeing Campany Intelligent fastener system
US9302785B2 (en) 2007-11-29 2016-04-05 The Boeing Company Engine installation using machine vision for alignment
US7876216B2 (en) 2008-06-25 2011-01-25 The Boeing Company Object location and reporting system for harsh RF environments
JP5333344B2 (en) * 2009-06-19 2013-11-06 株式会社安川電機 Shape detection apparatus and robot system
JP5685027B2 (en) * 2010-09-07 2015-03-18 キヤノン株式会社 Information processing apparatus, object gripping system, robot system, information processing method, object gripping method, and program
FI20106090A0 (en) * 2010-10-21 2010-10-21 Zenrobotics Oy Procedure for filtering target image images in a robotic system
GB2486658A (en) 2010-12-21 2012-06-27 Crane Electronics Torque tool positioning system
US9098908B2 (en) * 2011-10-21 2015-08-04 Microsoft Technology Licensing, Llc Generating a depth map
JP5507595B2 (en) * 2012-02-17 2014-05-28 ファナック株式会社 Article assembling apparatus using robot
US9299118B1 (en) 2012-04-18 2016-03-29 The Boeing Company Method and apparatus for inspecting countersinks using composite images from different light sources
US10473558B2 (en) * 2012-11-13 2019-11-12 Ues, Inc. Automated high speed metallographic system
US9761002B2 (en) 2013-07-30 2017-09-12 The Boeing Company Stereo-motion method of three-dimensional (3-D) structure information extraction from a video for fusion with 3-D point cloud data
US9568906B2 (en) 2013-09-18 2017-02-14 The Boeing Company Robotic object coating system
JP2015147256A (en) * 2014-02-04 2015-08-20 セイコーエプソン株式会社 Robot, robot system, control device, and control method
US9789549B2 (en) 2015-07-07 2017-10-17 The Boeing Company Robotic system and drilling end effector for robotic system
US10002431B2 (en) * 2015-11-03 2018-06-19 The Boeing Company Locating a feature for robotic guidance
US10066925B2 (en) 2016-02-02 2018-09-04 The Boeing Company Point cloud processing apparatus and method
US10596700B2 (en) * 2016-09-16 2020-03-24 Carbon Robotics, Inc. System and calibration, registration, and training methods
US10217031B2 (en) * 2016-10-13 2019-02-26 International Business Machines Corporation Identifying complimentary physical components to known physical components
CN107030693B (en) * 2016-12-09 2019-09-13 南京理工大学 A kind of hot line robot method for tracking target based on binocular vision
US10671835B2 (en) * 2018-03-05 2020-06-02 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Object recognition


Also Published As

Publication number Publication date
EP3557478A2 (en) 2019-10-23
EP3557478B1 (en) 2023-12-27
US20190303721A1 (en) 2019-10-03
CN110315529A (en) 2019-10-11
US10657419B2 (en) 2020-05-19
JP2019194565A (en) 2019-11-07
EP3557478A3 (en) 2020-01-15

Similar Documents

Publication Publication Date Title
US20200242413A1 (en) Machine vision and robotic installation systems and methods
US9862077B2 (en) System, an apparatus and a method for laser projection-assisted fastener installation
EP2653828B1 (en) Method and apparatus for inspecting precision countersinks in aircraft structures by machine vision
JP7143868B2 (en) Parts mounting equipment, work management method, work management system, verification device, verification method and program
JPWO2016083897A5 (en)
US10831177B2 (en) Systems and methods for automated welding
US20170103508A1 (en) Imaging system for an automated production line
Dharmara et al. Robotic assembly of threaded fasteners in a non-structured environment
WO2019197981A1 (en) System for the detection of defects on a surface of at least a portion of a body and method thereof
CN115937059A (en) Part inspection system with generative training models
Piero et al. Virtual commissioning of camera-based quality assurance systems for mixed model assembly lines
CN113706501B (en) Intelligent monitoring method for aircraft assembly
WO2020142498A1 (en) Robot having visual memory
CN111486790A (en) Full-size detection method and device for battery
Wang et al. Study on the Target Recognition and Location Technology of industrial Sorting Robot based on Machine Vision.
Nikam et al. Circuit board defect detection using image processing and microcontroller
Xiang et al. Automatic vehicle identification in coating production line based on computer vision
CN113689495A (en) Hole center detection method based on deep learning and hole center detection device thereof
Podrekar et al. Automated visual inspection of pharmaceutical tablets in heavily cluttered dynamic environments
GB2590468A (en) Analysing surfaces of vehicles
Dominguez et al. Object recognition and inspection in difficult industrial environments
Wang et al. Target Recognition based on Machine Vision for Industrial Sorting Robot
Rababaah et al. Automatic visual inspection system for stamped sheet metals (AVIS 3 M)
JP2009085900A (en) System and method for testing component
Williams Vorsprung durch photonik: Machine vision technology is improving quality control in automotive manufacturing, writes Andrew Williams

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE BOEING COMPANY, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KURTZ, TYLER EDWARD;HANSONSMITH, RILEY HARRISON;REEL/FRAME:052393/0264

Effective date: 20180326

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION