CN110501342B - Cheese yarn rod positioning visual detection method


Info

Publication number
CN110501342B
Authority
CN
China
Prior art keywords
yarn
image
coordinate
visual
yarn rod
Prior art date
Legal status
Active
Application number
CN201910767686.XA
Other languages
Chinese (zh)
Other versions
CN110501342A (en
Inventor
王文胜
李天剑
卢影
冉宇辰
黄民
Current Assignee
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date
Filing date
Publication date
Application filed by Beijing Information Science and Technology University
Priority to CN201910767686.XA
Publication of CN110501342A
Application granted
Publication of CN110501342B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C9/00 Measuring inclination, e.g. by clinometers, by levels
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing based on image processing techniques


Abstract

The invention relates to a visual detection method for positioning cheese yarn rods, comprising the following steps: a visual detection system is arranged, comprising a vision system and a robot, the vision system comprising a vision camera and an illumination light source; the vision camera acquires an image of each yarn rod and transmits it to the robot, which performs the image processing; visual positioning calibration is carried out for each yarn rod, and the centring reference coordinate of each rod seen by the vision camera is calibrated and recorded; a command requesting detection of a new yarn rod is sent, and the number of the currently requested rod is written; whether the rod number at the current position equals the requested number is judged in real time: when the positions differ, the request is repeated; when they match, the camera is moved above the rod requested by the vision system and the current rod number is returned to confirm the position; when the vision system reads a current rod number equal to the requested number, it detects the rod and records the resulting coordinates, completing the yarn rod positioning visual detection. The invention realizes on-line detection of the yarn rod head and improves detection efficiency and stability.

Description

Cheese yarn rod positioning visual detection method
Technical Field
The invention relates to the field of machine vision, and in particular to a visual detection method for positioning cheese yarn rods.
Background
One process in cheese dyeing is yarn loading, i.e. loading spindles onto the yarn rods of a yarn cage. With continued use of the yarn cage, the yarn rods inevitably become deflected, and when the deflection is too large it interferes with automatic spindle loading. The larger a rod's offset, the longer loading and unloading take and the lower the accuracy; a problem on a single yarn cage disrupts subsequent workshop operations and, in severe cases, can cause safety accidents. When the deflection exceeds a certain amount, the rod must be corrected. At present this correction is still done manually: workers measure and record data for every rod, compare the measurements to find the rods with the largest offsets, and adjust them accordingly. As theoretical exploration and applied research in high-precision image sensors, digital signal processor technology, computer vision, pattern recognition, and related fields mature, vision-based automatic inspection systems for the textile industry are attracting increasing attention.
In use, end-face defects such as rust, cracks, damage, scabs, holes, surface delamination, and pitting can appear on the rod head; these degrade the accuracy of visual detection and positioning and may even cause detection errors. Defect detection based on computer vision is non-contact, fast, accurate, and highly reliable, and has therefore been widely studied and applied to surface inspection of food grading, printed matter, glass, steel, liquid-crystal displays, and the like. However, existing visual inspection systems cannot be applied directly to on-line positioning detection of yarn rod heads, mainly because: 1) the mechanical structure of the yarn cage and rods differs from other positioning-detection targets; 2) the environment of a spinning workshop differs from other application environments, so the background interference differs as well; 3) the rod-head end face exhibits diverse defects, such as corrosion, scratches, crushing, and impact damage; 4) the industrial site imposes demanding real-time requirements.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a cheese yarn rod positioning visual detection method that automatically performs positioning detection of the yarn rod heads on a yarn cage through machine vision, thereby solving the problem that the degree of rod deflection is difficult to judge.
To achieve this purpose, the invention adopts the following technical scheme. A cheese yarn rod positioning visual detection method comprises the following steps: 1) arranging a visual detection system comprising a vision system and a robot, the vision system comprising a vision camera and an illumination light source; after the vision camera acquires an image of each yarn rod, the image information is transmitted to the robot, which performs the image processing; 2) the robot performs image processing on the received image information, carries out visual positioning calibration for each yarn rod, and calibrates and records the centring reference coordinate of each rod; 3) visual detection of the yarn rods whose original centring coordinates have been calibrated and recorded: the vision system sends a command to detect a new yarn rod and the currently requested rod number is written to the vision system; whether the rod number at the current position equals the requested number is judged in real time, the request being repeated when the positions differ; when they match, the camera moves above the requested rod and the current rod number is returned to confirm the position; the vision system, on reading a current rod number equal to the requested number, detects the rod and records the resulting coordinates, completing the yarn rod positioning visual detection.
Further, in step 1), the image processing comprises the following steps: 1.1) calibrating the vision camera: aligning the camera centre by means of a standard circle and computing the correspondence between pixel values and the diameter of the standard circle; 1.2) acquiring an original image containing the head of the yarn rod to be detected; 1.3) applying automatic threshold segmentation to the original image and extracting connected domains to obtain a connected-domain image of the rod head; 1.4) extracting the centroid of the rod-head connected domain, giving the centre-position coordinate of the rod head in the image coordinate system; 1.5) converting coordinate systems: transforming the image coordinate into a coordinate in the actual coordinate system and comparing it with the original position coordinate to obtain the actual offset; 1.6) performing defect evaluation: analysing the connected-domain image, computing area and shape features, comparing them with those of a standard rod head to judge whether the head is defective, grading the defect severity by area ratio, and assigning a confidence to the rod coordinate.
Further, in step 1.3), connected domains meeting preset conditions on area, roundness, and rectangularity are screened and retained as the connected domain of the rod head to be detected.
Further, in step 1.3), the connected-domain image of the rod head to be detected is obtained with an image segmentation algorithm comprising the following steps: 1.3.1) filtering the original image and then applying contrast stretching; 1.3.2) applying adaptive threshold segmentation to the preprocessed image to obtain an image of candidate rod-head regions; 1.3.3) extracting connected domains from the candidate-region image and computing the size and position of each candidate region in the candidate set.
Further, in step 1.5), the coordinate-system conversion comprises the following steps: 1.5.1) placing a standard circle of known size on the top of the yarn rod so that the circle and the rod top lie in the same horizontal plane; 1.5.2) acquiring an image of the standard circle using the camera calibration method and applying a Hough transform to obtain the circle's radius in pixels; 1.5.3) computing the ratio of the pixel radius to the actual radius to obtain the calibration scale; 1.5.4) multiplying the centre-position coordinate in the rod-head image coordinate system by the scale to obtain the final actual position coordinate.
Further, in step 1.6), the defect evaluation comprises the following steps: 1.6.1) screening the connected-domain image by area and extracting the connected domains within the preset range for the yarn rod; 1.6.2) analysing the distance between each domain's centroid and the image centre, and checking whether the area of the domain lying within 20 pixels of the image centre is in range; 1.6.3) if its area differs from the preset value by no more than 50 pixels, judging roundness and rectangularity to analyse the defect severity; 1.6.4) if the difference exceeds 50 pixels, checking whether the centres of the second and third connected domains lie within 10 pixels of each other; if so, applying dilation to merge the two domains and then repeating the processing of step 1.6.2).
Further, in step 2), the positioning method comprises the following steps: 2.1) the robot sends a positioning request signal to the vision system and updates the identification serial number for visual identification; 2.2) on receiving the positioning request, the vision camera moves to the preset original standard alignment position, captures and identifies an image, feeds the identified coordinate result back to the robot, and refreshes the identification serial number together with the positioning data; the robot compares the fed-back identification result with the requested serial number; 2.3) the centring coordinate is corrected according to the comparison result: whether the deviation between the camera's current position coordinate and the identified rod coordinate is within a preset allowance is judged; if so, the camera moves to the next rod to be positioned; otherwise it moves to the corrected position, the deviation is recalculated, and the reference coordinate is updated cyclically until the deviation is within the set allowance. When the fed-back identification result matches the requested serial number, the robot considers that the vision system has completed positioning and data processing, and the X/Y-axis deviations in the data area are valid.
Further, the vision camera in the vision system is a Daheng Imaging Mercury-series GigE digital camera.
Further, the illumination light source is an annular LED source with adjustable brightness.
Owing to the above technical scheme, the invention has the following advantages. 1. By adopting an industrial camera and visual image processing, it overcomes the low efficiency, operator fatigue, and inconsistent quality of manual inspection, and meets the textile industry's need for greater automation. 2. The method applies image segmentation and target association to images acquired in real time to obtain the image of the rod head at the centre of the camera's field of view; it then obtains the rod's position and the severity of defects such as corrosion, crushing, impact damage, and scratches through grey-level adaptive threshold segmentation, connected-domain extraction, centroid computation, coordinate transformation, and defect judgement, solving the problem of detecting yarn rod deflection.
Drawings
FIG. 1 is a schematic flow chart of a positioning method of the present invention;
FIG. 2 is a schematic flow chart of the detection method of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
The invention provides a cheese yarn rod positioning visual detection method that operates in single-rod mode, i.e. the 120 rods of a whole yarn cage are detected one by one. Because the yarn rods are about 2 m long, adjusting the camera's field of view and focal length reduces reflections from the background chassis, blurring and darkening the background while highlighting the rod head. Illumination from the annular LED source brightens the rod top, exposing it as fully as possible while further suppressing background interference, making detection more accurate.
The method comprises the following steps:
1) A visual detection system is arranged, comprising a vision system and a robot; the vision system comprises a vision camera and an illumination light source. After the vision camera acquires an image of each yarn rod, the image information is transmitted to the robot, which performs the image processing.
the image processing comprises the following steps:
1.1) calibrating a visual camera, aligning the center of the visual camera through a standard circle, and calculating the corresponding relation between a pixel value and the diameter of the standard circle;
1.2) acquiring an original acquisition image containing an image of a head of a yarn rod to be detected;
1.3) carrying out automatic threshold segmentation on the original collected image, and extracting a connected domain to obtain a connected domain image of the head of the yarn rod to be detected;
the method comprises the following steps of screening characteristic conditions such as area, roundness and rectangle degree to obtain a connected domain which meets preset conditions as a connected domain of a yarn rod head to be detected; the roundness is the ratio of the area of the connected domain to the minimum circumscribed circle area of the connected domain, and the rectangularity is the ratio of the area of the connected domain to the minimum circumscribed rectangle area.
1.4) Extract the centroid of the rod-head connected domain, giving the centre-position coordinate of the rod head in the image coordinate system.
The centroid may be obtained by analysing the connected domain directly, or by extracting the circular contour from the original image of step 1.2) or the connected-domain image of step 1.3) via a Hough transform and taking the centre of that contour.
1.5) Convert coordinate systems: transform the image coordinate into a coordinate in the actual coordinate system and compare it with the original position coordinate to obtain the actual offset.
1.6) Perform defect evaluation: analyse the connected-domain image, compute features such as area and shape, compare them with those of a standard rod head to judge whether the head is defective, grade the defect severity by area ratio, and assign a confidence to the rod coordinate.
In step 1.3), the connected-domain image of the rod head to be detected is obtained with an image segmentation algorithm comprising the following steps:
1.3.1) Filter the original image (mean filtering or another filter), then apply contrast stretching.
1.3.2) Apply adaptive threshold segmentation to the preprocessed image to obtain an image of candidate rod-head regions; the adaptive threshold may be chosen by the OTSU (Otsu) method, by the bimodal method, or by other methods.
1.3.3) Extract connected domains from the candidate-region image and compute the size and position of each candidate region in the candidate set.
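Step 1.3.2) names the Otsu method for adaptive thresholding. A minimal NumPy sketch of Otsu's between-class-variance criterion (illustrative, not the patent's production code) looks like this:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold on an 8-bit image that maximizes
    the between-class variance of the two resulting pixel classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                # grey-level probabilities
    omega = np.cumsum(p)                 # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))   # cumulative mean up to t
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # undefined at empty classes
    return int(np.argmax(sigma_b))
```

The binary image for connected-domain extraction is then simply `img > t`. On a brightly lit rod head against a darkened background the histogram is close to bimodal, which is the regime where Otsu's criterion works well.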
in the step 1.5), the coordinate system conversion includes the following steps:
1.5.1) adopting a standard circle with known size to be arranged at the top end of the yarn rod to ensure that the standard circle and the top end of the yarn rod are in the same horizontal plane;
1.5.2) acquiring and obtaining a standard circle image by using a camera calibration method, and performing Hough transformation to obtain the size of a standard circle radius pixel;
1.5.3) calculating the ratio of the obtained standard circle radius pixel size to the actual size to obtain a calibration scale;
1.5.4) multiplying the central position coordinate in the step 1.4) by a scale to obtain the final actual position coordinate.
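The scale computation of steps 1.5.3) and 1.5.4) reduces to simple arithmetic. The sketch below assumes the 50 mm standard circle described later in the embodiment; the pixel measurements are hypothetical:

```python
def pixel_to_mm_scale(diameter_px, diameter_mm=50.0):
    """Calibration scale (mm per pixel) from the standard circle's measured
    pixel diameter; the embodiment uses a 50 mm reference circle."""
    return diameter_mm / diameter_px

def image_to_actual_offset(center_px, reference_px, scale):
    """Convert the rod-head centre's offset from the centring reference,
    measured in pixels, into an actual offset in millimetres."""
    dx = (center_px[0] - reference_px[0]) * scale
    dy = (center_px[1] - reference_px[1]) * scale
    return dx, dy
```

For example, if the 50 mm circle spans 500 pixels, the scale is 0.1 mm/pixel, and a 30-pixel offset along x corresponds to a 3 mm actual deviation.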
In step 1.6), the defect evaluation comprises the following steps:
1.6.1) Screen the connected-domain image by area and extract the connected domains within the preset range for the yarn rod.
1.6.2) Analyse the distance between each domain's centroid and the image centre, and check whether the area of the domain lying within 20 pixels of the image centre is in range.
1.6.3) If its area differs from the preset value by no more than 50 pixels, judge roundness and rectangularity to analyse the defect severity, where roundness is the ratio of the domain's area to the area of its minimum circumscribed circle and rectangularity is the ratio of the domain's area to the area of its minimum circumscribed rectangle.
1.6.4) If the difference exceeds 50 pixels, check whether the centres of the second and third connected domains lie within 10 pixels of each other; if so, apply dilation to merge the two domains and then repeat the processing of step 1.6.2).
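The branching of steps 1.6.1)-1.6.4) can be sketched as a small decision function. Region extraction and the dilation-based merge are abstracted away here; the function names, verdict strings, and the shape of the `regions` argument are all illustrative, not the patent's code, with the pixel tolerances taken from the text.

```python
import math

def evaluate_defect(regions, image_center, area_nominal,
                    center_tol=20, area_tol=50, merge_tol=10):
    """Simplified sketch of the defect-evaluation branching.

    `regions` is a list of (centroid_xy, area_px) tuples sorted largest
    first. Returns a coarse verdict string; the real method would go on
    to roundness/rectangularity analysis or to the dilation-merge step.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # 1.6.2) is there a region whose centroid lies within 20 px of centre?
    near_center = [r for r in regions if dist(r[0], image_center) <= center_tol]
    if not near_center:
        return "no-region-at-center"
    area = near_center[0][1]
    # 1.6.3) area within 50 px of nominal: go on to shape analysis.
    if abs(area - area_nominal) <= area_tol:
        return "check-roundness-rectangularity"
    # 1.6.4) otherwise, see whether the 2nd and 3rd regions are close
    # enough (within 10 px) to be merged by dilation and rechecked.
    if len(regions) >= 3 and dist(regions[1][0], regions[2][0]) <= merge_tol:
        return "merge-and-recheck"
    return "defective"
```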
2) The robot performs image processing on the received image information, carries out visual positioning calibration for each yarn rod, and calibrates and records the centring reference coordinate of each rod.
As shown in fig. 1, the positioning method comprises the following steps:
2.1) The robot sends a positioning request signal to the vision system and updates the identification serial number for visual identification.
2.2) On receiving the positioning request, the vision camera moves to the preset original standard alignment position, captures and identifies an image, feeds the identified coordinate result back to the robot, and refreshes the identification serial number together with the positioning data; the robot compares the fed-back identification result with the requested serial number.
2.3) The centring coordinate is corrected according to the comparison result (the centring coordinate is the position coordinate aligned with the centre point of the camera image, i.e. the coordinate used as the reference for the final rod error comparison). Whether the deviation between the camera's current position coordinate and the identified rod coordinate is within a preset allowance (0.1 mm in this embodiment) is judged; if so, the camera moves to the next rod to be positioned; otherwise it moves to the corrected position, the deviation is recalculated, and the reference coordinate is updated cyclically until the deviation is within the set allowance.
When the fed-back identification result matches the requested serial number, the robot considers that the vision system has completed positioning and data processing, and the X/Y-axis deviations in the data area are valid.
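The request/compare/correct cycle of steps 2.1)-2.3) can be sketched as the loop below. `move_fn` and `identify_fn` are placeholders for the robot-motion and vision interfaces, which the patent does not specify; the 0.1 mm allowance is the embodiment's value.

```python
import dataclasses

@dataclasses.dataclass
class Identification:
    serial: int        # rod number reported by the vision system
    offset_mm: tuple   # (dx, dy): detected rod centre minus camera centre

def position_rod(move_fn, identify_fn, rod_number, tol_mm=0.1, max_iters=20):
    """Iteratively re-centre the camera on rod `rod_number` until the
    measured offset is within tol_mm. `move_fn(dx, dy)` commands a relative
    camera move; `identify_fn()` returns an Identification. Both are
    stand-ins for the unspecified robot/vision interfaces."""
    for _ in range(max_iters):
        ident = identify_fn()
        if ident.serial != rod_number:
            continue                  # serial mismatch: keep requesting
        dx, dy = ident.offset_mm
        if abs(dx) <= tol_mm and abs(dy) <= tol_mm:
            return dx, dy             # centring coordinate accepted
        move_fn(dx, dy)               # correct the position and re-measure
    raise RuntimeError("positioning did not converge")
```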
3) Visual detection of the yarn rods whose original centring coordinates have been calibrated and recorded: as shown in fig. 2, the vision system sends a command to detect a new yarn rod and the currently requested rod number is written to the vision system. Whether the rod number at the current position matches the number of the rod to be detected is judged in real time; the request is repeated when the positions differ, and when they match the camera moves above the requested rod and the current rod number is returned to confirm the position. The vision system, on reading a current rod number equal to the requested number, detects the rod and records the resulting coordinates, completing the yarn rod positioning visual detection.
In the above steps, the vision camera is a Daheng Imaging Mercury-series GigE industrial camera with an 8 mm focal length. The distance between the detection object (the yarn rod head) and the lens is 200-300 mm, and the camera is mounted parallel to the robot's Z-axis truss so that the camera's field-of-view coordinate system coincides with the robot's motion coordinate system, with an angular error of no more than 1 degree. The illumination source is an annular LED source with adjustable brightness, used to light the rod head and blur the background.
In the above steps, the yarn cages and rods of the dyeing and finishing workshop are all produced to a standard: each cage carries 120 rods, numbered in order; the plane circle at the top of each rod is 7 mm in diameter with a 5 mm threaded hole, and the material is reflective stainless steel. A wavy ring on the disc at the bottom of each rod, used to help fix the cheese, diffusely reflects incident light and causes serious background glare. Adjusting the brightness of the annular light source suppresses this background-reflection interference, blurring the background while enhancing the brightness of the rod top.
In each of the above steps, the vision system is calibrated before being used as a measuring tool. If the camera is remounted for any reason, its pose cannot be guaranteed to match the previous installation exactly, and the top height of the rods varies somewhat from cage to cage, so the pixel/distance scale must be calibrated before each use.
In summary, the invention uses a machine-vision light source to control the ambient brightness and takes images acquired in real time by a high-resolution industrial camera (a Daheng Imaging Mercury-series GigE digital camera, 8 mm focal length) as the main information source for locating and analysing the yarn rods, realizing on-line automatic positioning detection of the finished rods. The recognition result in the image is a pixel deviation on the X/Y axes, which the calibration converts into an actual deviation; the same pixel deviation corresponds to different actual deviations at different mounting heights, and the height relationship between each camera and each yarn cage cannot be guaranteed identical at every installation. A fixed scale is therefore needed to calibrate the image. A standard circle 50 mm in diameter serves as the scale: it is fitted over the top surface of the rod so as to be flush with it. The pixel/distance scale is calibrated by measuring the circle's diameter in pixels in the camera image, and this scale converts the data measured in both the visual positioning calibration and the detection procedures.
Since the vision camera and the robot each have their own coordinate system, there are two coordinate systems with a deviation between them, and they must be calibrated against each other before the visual positioning calibration. Calibrating the origin of a coordinate system is complex, and the procedure does not require the origin's position to be known. Therefore, during positioning calibration, the deviation value of the current rod is recorded, stored in the robot, and compared against the robot coordinate system, so that the current coordinate becomes (0, 0) and serves as the reference value, i.e. a new origin. During visual detection this reference coordinate is recalled, and the output detection value is the deviation of the rod-top circle centre from it.
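The zeroing scheme described above amounts to storing the calibration-time coordinate as a new origin and reporting later detections relative to it. A minimal sketch (function and variable names are illustrative):

```python
def make_reference(calibrated_xy):
    """Store the coordinate found during positioning calibration as the new
    origin (0, 0) and return a function reporting the deviation of any
    later detection from that origin."""
    rx, ry = calibrated_xy

    def deviation(detected_xy):
        # Output of visual detection: offset of the rod-top circle centre
        # from the stored reference.
        return detected_xy[0] - rx, detected_xy[1] - ry

    return deviation
```

The calibrated rod itself then reads as (0, 0), and any non-zero result during detection directly quantifies the rod's deflection.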
Those of skill in the art will appreciate that the method steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or combinations of both, and that the components and steps of the examples have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of electronic hardware and software. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

Claims (6)

1. A cheese yarn rod positioning visual detection method, characterized by comprising the following steps:
1) arranging a visual detection system comprising a vision system and a robot, the vision system comprising a vision camera and an illumination light source; after the vision camera acquires an image of each yarn rod, transmitting the image information to the robot, which performs the image processing;
2) the robot performing image processing on the received image information, carrying out visual positioning calibration for each yarn rod, and calibrating and recording the centring reference coordinate of each rod;
3) visual detection of the yarn rods whose original centring coordinates have been calibrated and recorded: the vision system sending a command to detect a new yarn rod and the currently requested rod number being written to the vision system; whether the rod number at the current position equals the requested number being judged in real time, the request being repeated when the positions differ; when they match, the camera moving above the requested rod and the current rod number being returned to confirm the position; the vision system, on reading a current rod number equal to the requested number, detecting the rod and recording the resulting coordinates, completing the yarn rod positioning visual detection;
in the step 1), the image processing comprises the following steps:
1.1) calibrating the vision camera: aligning the center of the vision camera by means of a standard circle, and calculating the correspondence between pixel values and the diameter of the standard circle;
1.2) acquiring an original image containing the head of the yarn rod to be detected;
1.3) performing automatic threshold segmentation on the original image and extracting connected domains to obtain a connected-domain image of the head of the yarn rod to be detected;
1.4) extracting the centroid of the connected domain of the yarn rod head to be detected, the centroid being the center position coordinate of the yarn rod head in the image coordinate system;
1.5) performing coordinate system conversion: converting coordinates in the image coordinate system into coordinates in the actual coordinate system, and comparing them with the original position coordinate to obtain the actual offset;
1.6) performing defect evaluation: carrying out feature analysis on the obtained connected-domain image, calculating area and shape features, comparing them with those of a standard yarn rod head to judge whether the yarn rod head is defective, evaluating the defect degree through the area ratio, and giving the credibility of the yarn rod coordinate;
in the step 1.6), the defect evaluation comprises the following steps:
1.6.1) performing area screening on the obtained connected-domain image, and extracting the connected domains within the preset value range for the yarn rod;
1.6.2) analyzing the distance between the centroid position of each connected domain and the center position of the image, and judging whether the area of a connected domain lying within 20 pixels of the image center is within the range;
1.6.3) if the difference from the preset value is within 50 pixels, judging roundness and rectangularity to analyze the defect degree;
1.6.4) if the difference is not within 50 pixels, judging whether the distance between the center positions of the second and third connected domains is within 10 pixels; if so, performing dilation to merge the two connected domains and then repeating the processing of step 1.6.2);
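A minimal sketch of the screening logic in steps 1.6.1) to 1.6.4), assuming a simple dict representation for connected domains; the thresholds (20, 50, 10 pixels) come from the claim, while the function names and data layout are illustrative:

```python
import math

# Illustrative sketch of the defect-evaluation screening in steps 1.6.1)-1.6.4).
# A connected domain is a dict with a centroid (pixels) and an area (pixels^2).

CENTER_DIST_MAX = 20   # 1.6.2): domain must lie within 20 px of the image center
AREA_TOL = 50          # 1.6.3): area may differ from the preset value by <= 50 px
MERGE_DIST_MAX = 10    # 1.6.4): merge domains whose centers are within 10 px

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def screen(domains, image_center, preset_area):
    """Return 'analyze' (go on to roundness/rectangularity), 'merge', or 'reject'."""
    # 1.6.1)-1.6.2): keep domains whose centroid is near the image center.
    near = [d for d in domains if dist(d["centroid"], image_center) <= CENTER_DIST_MAX]
    # 1.6.3): if the area difference is within tolerance, analyze shape features.
    if near and abs(near[0]["area"] - preset_area) <= AREA_TOL:
        return "analyze"
    # 1.6.4): otherwise try merging the second and third domains if their centers are close.
    if len(domains) >= 3 and dist(domains[1]["centroid"], domains[2]["centroid"]) <= MERGE_DIST_MAX:
        return "merge"
    return "reject"

domains = [{"centroid": (102, 99), "area": 1480}]
print(screen(domains, (100, 100), preset_area=1500))  # -> analyze
```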
in the step 2), the positioning method comprises the following steps:
2.1) the robot sends a positioning request signal to the vision system and updates the identification serial number for visual identification;
2.2) upon receiving the positioning request signal, the vision camera moves to the preset original standard alignment position, performs photographing and identification, feeds the identified coordinate result back to the robot, and refreshes the identification serial number together with the positioning data; the robot compares the fed-back identification result with the requested serial number;
2.3) correcting the centering coordinate according to the comparison result: judging whether the deviation between the current position coordinate of the vision camera and the identified yarn rod coordinate is within a preset allowable range; if so, moving to the next yarn rod coordinate to be positioned and continuing positioning; otherwise, moving to the corrected position, continuing to calculate the deviation, and cyclically updating the reference coordinate until the deviation is within the set allowable range;
when the fed-back identification result is consistent with the requested serial number, the robot considers that the vision system has completed the positioning work and data processing, and the X/Y-axis deviation in the data area is true and valid.
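The correction loop of step 2.3) can be sketched as an iterative move-and-remeasure procedure; the `measure` callback, tolerance value, and iteration limit below are illustrative assumptions, not specified in the patent:

```python
# Illustrative sketch of the correction loop in step 2.3): move toward the
# identified coordinate and remeasure until the deviation is within tolerance.

def correct_centering(camera_xy, measure, tolerance=0.5, max_iter=10):
    """measure(xy) returns the identified yarn rod coordinate seen from position xy."""
    x, y = camera_xy
    for _ in range(max_iter):
        tx, ty = measure((x, y))
        dx, dy = tx - x, ty - y
        if abs(dx) <= tolerance and abs(dy) <= tolerance:
            return (x, y)          # deviation within the allowable range
        x, y = x + dx, y + dy      # move to the corrected position and repeat
    raise RuntimeError("centering did not converge")

# Toy measurement: the rod sits at (10, 4) and each measurement reports it exactly.
rod = (10.0, 4.0)
print(correct_centering((0.0, 0.0), lambda xy: rod))  # -> (10.0, 4.0)
```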
2. The method of claim 1, wherein: in the step 1.3), connected domains are screened by area, roundness and rectangularity feature conditions, and a connected domain satisfying the preset conditions is obtained and taken as the connected domain of the yarn rod head to be detected.
3. The method of claim 1, wherein: in the step 1.3), the connected-domain image of the yarn rod head to be detected is obtained by an image segmentation algorithm comprising the following steps:
1.3.1) filtering the original image and then performing contrast stretching;
1.3.2) applying an adaptive threshold segmentation method to the preprocessed image to obtain an image of candidate yarn rod head regions;
1.3.3) extracting connected domains from the obtained candidate region image, and calculating the size and position information of each candidate workpiece region in the candidate region set.
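As one possible instance of step 1.3.2), the sketch below selects an automatic global threshold with Otsu's method (the patent does not name the specific algorithm, so this choice is an assumption) and then takes the foreground centroid as in step 1.4); the synthetic image is illustrative:

```python
# Sketch: automatic threshold selection (Otsu's method, assumed here) followed
# by a foreground centroid, on a synthetic grayscale image.

def otsu_threshold(pixels):
    """pixels: flat iterable of 0-255 grayscale values; returns the Otsu threshold."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (m_b - m_f) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic 16x16 image: dark background (30) with a bright 4x4 "rod head" (200).
W = H = 16
img = [[30] * W for _ in range(H)]
for y in range(6, 10):
    for x in range(8, 12):
        img[y][x] = 200

t = otsu_threshold(p for row in img for p in row)
fg = [(y, x) for y in range(H) for x in range(W) if img[y][x] > t]
cy = sum(y for y, _ in fg) / len(fg)
cx = sum(x for _, x in fg) / len(fg)
print(t, (cy, cx))  # -> 30 (7.5, 9.5)
```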
4. The method of claim 1, wherein: in the step 1.5), the coordinate system conversion comprises the following steps:
1.5.1) placing a standard circle of known size at the top end of the yarn rod, ensuring that the standard circle and the yarn rod top are in the same horizontal plane;
1.5.2) acquiring an image of the standard circle by the camera calibration method, and performing a Hough transform to obtain the radius of the standard circle in pixels;
1.5.3) calculating the ratio of the obtained pixel radius to the actual size to obtain the calibration scale;
1.5.4) multiplying the center position coordinate of the yarn rod head in the image coordinate system by the scale to obtain the final actual position coordinate.
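The conversion in steps 1.5.2) to 1.5.4) reduces to a ratio and a multiplication; the 20 mm / 400 px figures below are illustrative, not from the patent:

```python
# Illustrative sketch of steps 1.5.2)-1.5.4): derive a mm-per-pixel scale from a
# standard circle of known radius, then convert an image-space offset to mm.

def calibration_scale(actual_radius_mm, pixel_radius):
    # Step 1.5.3): ratio of the actual size to the measured pixel size.
    return actual_radius_mm / pixel_radius

def to_actual(offset_px, scale):
    # Step 1.5.4): multiply image coordinates by the scale.
    return (offset_px[0] * scale, offset_px[1] * scale)

# Suppose the Hough transform measured a 20 mm standard circle as 400 px.
scale = calibration_scale(20.0, 400)   # 0.05 mm per pixel
print(to_actual((120, -60), scale))    # approximately (6.0, -3.0) mm
```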
5. The method of any of claims 1 to 4, wherein: a Daheng Imaging Mercury-series GigE digital camera is adopted as the vision camera in the vision system.
6. The method of any of claims 1 to 4, wherein: the illumination light source is an annular LED light source with adjustable brightness.
CN201910767686.XA 2019-08-20 2019-08-20 Cheese yarn rod positioning visual detection method Active CN110501342B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910767686.XA CN110501342B (en) 2019-08-20 2019-08-20 Cheese yarn rod positioning visual detection method

Publications (2)

Publication Number Publication Date
CN110501342A CN110501342A (en) 2019-11-26
CN110501342B true CN110501342B (en) 2022-06-07

Family

ID=68588394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910767686.XA Active CN110501342B (en) 2019-08-20 2019-08-20 Cheese yarn rod positioning visual detection method

Country Status (1)

Country Link
CN (1) CN110501342B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110961289B (en) * 2019-12-09 2021-06-29 国网智能科技股份有限公司 Transformer substation insulator anti-pollution flashover coating spraying tool and spraying method
CN110992358B (en) * 2019-12-18 2023-10-20 北京机科国创轻量化科学研究院有限公司 Method and device for positioning yarn rod of yarn cage, storage medium and processor
CN112781521A (en) * 2020-12-11 2021-05-11 北京信息科技大学 Software operator shape recognition method based on visual markers
CN115138592B (en) * 2021-03-30 2023-07-04 中国科学院长春光学精密机械与物理研究所 Sorting device parameter calibration method
CN114348372B (en) * 2022-01-06 2023-08-01 青岛双清智能科技有限公司 Method and device for identifying and checking varieties of cheeses by using paper tube patterns
CN114473140A (en) * 2022-02-22 2022-05-13 上海电力大学 Molten pool image parallel acquisition method based on time division multiplexing
CN115096178B (en) * 2022-05-11 2023-06-13 中国矿业大学 Lifting container positioning method based on machine vision
CN115761000B (en) * 2022-11-06 2023-08-29 卢米纳科技(深圳)有限公司 Cleaning calibration method and system based on visual laser
CN116086313A (en) * 2022-12-05 2023-05-09 瑞声科技(南京)有限公司 Rotor position calibration method and related device of direct-drive transmission system
CN117147699B (en) * 2023-10-31 2024-01-02 江苏蓝格卫生护理用品有限公司 Medical non-woven fabric detection method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102284769A (en) * 2011-08-05 2011-12-21 上海交通大学 System and method for initial welding position identification of robot based on monocular vision sensing
CN109187581A (en) * 2018-07-12 2019-01-11 中国科学院自动化研究所 The bearing finished products plate defects detection method of view-based access control model
CN109489591A (en) * 2018-12-17 2019-03-19 吉林大学 Plane scratch length non-contact measurement method based on machine vision
CN109934802A (en) * 2019-02-02 2019-06-25 浙江工业大学 A kind of Fabric Defects Inspection detection method based on Fourier transformation and morphological image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
EP3259908B1 (en) * 2015-02-18 2021-07-14 Siemens Healthcare Diagnostics Inc. Image-based tray alignment and tube slot localization in a vision system

Non-Patent Citations (1)

Title
Research on bottle-bottom defect detection method for an empty-bottle inspection robot; Fan Tao; Journal of Electronic Measurement and Instrumentation; 2017-09-30; Vol. 31, No. 9; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant