CN114494169A - Industrial flexible object detection method based on machine vision - Google Patents

Industrial flexible object detection method based on machine vision

Info

Publication number
CN114494169A
CN114494169A (application CN202210053072.7A)
Authority
CN
China
Prior art keywords
target
pixel
object detection
detection method
flexible object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210053072.7A
Other languages
Chinese (zh)
Inventor
王堃 (Wang Kun)
陈涛 (Chen Tao)
陈思光 (Chen Siguang)
张载龙 (Zhang Zailong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202210053072.7A
Publication of CN114494169A
Legal status: Pending

Classifications

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 5/30: Image enhancement or restoration using local operators; erosion or dilatation, e.g. thinning
    • G06T 5/70: Image enhancement or restoration; denoising; smoothing
    • G06T 7/187: Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/10024: Indexing scheme for image analysis or enhancement; image acquisition modality; color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an industrial flexible object detection method based on machine vision, which mainly comprises the following steps: step 1, collecting an original color image of the target as the original data set; step 2, performing data enhancement processing on the original data set to obtain a black-and-white binary image that separates the target area from the white background; step 3, extracting the connected regions of the target, computing the area and arc length of each connected region, and retaining the parts whose connected-region area is larger than a fixed value S or whose connected-region arc length is larger than a fixed value L; step 4, drawing the edge contour curve from the connected regions, and extracting the center line of each connected region and the parallel straight lines of the target edge contour; step 5, constructing line segments representing the target width from the center line and the parallel lines; and step 6, mapping the pixel length of the line segments representing the target width to an actual distance.

Description

Industrial flexible object detection method based on machine vision
Technical Field
The invention relates to a machine vision-based industrial flexible object detection method, and belongs to the technical field of industrial detection.
Background
The rapid development of computer technology has injected great vigor into industrial production and daily life; in particular, the application of computer vision technology has greatly improved production efficiency and capacity in some industrial fields. Computer vision has long been a compelling research hotspot. In principle, a machine vision system mainly consists of three parts: image acquisition, image processing and analysis, and output or display. It involves digital image processing, mechanical engineering, control, light source illumination, optical imaging, sensor, analog and digital video, computer software and hardware, and human-computer interface technologies, among others. Only the coordinated application of these technologies can constitute a complete machine vision application system.
Five typical applications of machine vision in industry can be roughly summarized, and together they basically cover the role of machine vision technology in industrial production: image recognition, image detection, visual positioning, object measurement, and object sorting.
In image recognition, machine vision is used to process, analyze, and understand images so as to recognize targets and objects of various patterns; a typical example is the recognition of two-dimensional codes. In image detection, almost all products need to be inspected; manual inspection has many shortcomings, such as low accuracy, accuracy that cannot be guaranteed over long working periods, and low speed, which easily reduces the efficiency of the whole production process. Machine vision is therefore very widely used in image detection, for example: color register positioning and color comparison in printing, print quality inspection of beverage bottle caps in packaging, and bar code and character recognition on product packaging. Visual positioning requires that the machine vision system quickly and accurately find the part to be inspected and confirm its position. Object measurement is a major strength of industrial machine vision: the non-contact measurement technology offers high precision and high speed, and its non-contact, non-abrasive nature eliminates the hidden danger of secondary damage that contact measurement may cause. Object sorting builds on the recognition and detection stages, processing images through the machine vision system to realize sorting; it is commonly used for food sorting, automatic sorting of parts by surface defects, cotton fiber sorting, and so on.
Existing mature cases in the field of object measurement basically detect rigid objects, whose shape and size are generally fixed and which deform very little under external force, so the detection process and results are not greatly affected; examples include gear and automobile part inspection. Width detection for flexible targets with irregular contours has remained relatively blank, and few patents address this application.
In view of the above, it is necessary to provide a method for detecting an industrial flexible object based on machine vision to solve the above problems.
Disclosure of Invention
The invention aims to provide an industrial flexible object detection method based on machine vision so as to reduce the error rate of detection and improve the real-time performance of detection.
In order to achieve the above object, the present invention provides a machine vision-based industrial flexible object detection method, which mainly includes:
step 1, collecting an original color image of the target as the original data set;
step 2, performing data enhancement processing on the original data set to obtain a black-and-white binary image that separates the target area from the white background;
step 3, extracting the connected regions of the target, computing the area and arc length of each connected region, and retaining the parts whose connected-region area is larger than a fixed value S or whose connected-region arc length is larger than a fixed value L;
step 4, drawing the edge contour curve from the connected regions, and extracting the center line of each connected region and the parallel straight lines of the target edge contour;
step 5, constructing line segments representing the target width from the center line and the parallel lines;
and step 6, mapping the pixel length of the line segments representing the target width to an actual distance.
As a further development of the invention, in step 1 the initial data set is captured in real time by a digital imaging device.
As a further improvement of the present invention, in step 2, the data enhancement processing includes median filtering and graying, the median filtering is used for eliminating salt and pepper noise; graying uses a weighted average graying method.
As a further improvement of the present invention, the calculation formula of the weighted average graying method is as follows:
Gray(i, j) = wR × R(i, j) + wG × G(i, j) + wB × B(i, j), where wR, wG, and wB are the weights of the red, green, and blue channels, respectively.
In a further refinement of the present invention, wR is 0.229, wG is 0.578, and wB is 0.114.
As a further improvement of the present invention, in step 3, a connected region of the image is a region composed of adjacent pixels having the same pixel value.
As a further improvement of the present invention, step 4 comprises:
step 4.1, the algorithm for extracting the center line of the connected region is the Zhang-Suen thinning algorithm; using a 3×3 pixel window centered on the pixel under examination, the target is continuously eroded and thinned from its periphery toward its center until only a single-pixel-wide skeleton remains, which gives the center line of the connected region;
step 4.2, Hough line detection is applied to map curves or straight lines of the same shape in image space to single points in Hough space, so that the problem of detecting an arbitrary shape is converted into the problem of counting coincident points in Hough space.
As a further improvement of the present invention, step 5 comprises:
step 5.1, traversing the edge points of the target contour in turn, searching the connecting lines between the current position and all contour pixel points that do not lie on the same edge as that position, and recording their lengths;
step 5.2, judging whether each connecting line passes through the center line of the connected region; if so, it is retained, otherwise it is discarded; the minimum of the retained lengths is taken as the target width at the current pixel position.
As a further improvement of the present invention, in step 6, the mapping from pixel distance to actual distance adopts the Zhang Zhengyou calibration method; a monocular camera is used for shooting, and the positional relationship between a point in space and its imaging point in the camera is established by means of the pinhole imaging model, where actual distances are measured in millimeters and distances in the pixel coordinate system are measured in pixels.
The invention has the beneficial effects that: the industrial flexible object detection method based on machine vision combines mature computer vision technology with related improved algorithms to realize width detection of industrial flexible objects, with a low error rate and high real-time performance.
Drawings
FIG. 1 is a flow chart of the industrial flexible object detection method based on machine vision of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
The existing method for detecting industrial flexible targets is generally a projection method: product samples with regular contours are selected from the products cut on the production line and mounted on a sample holder, the targets are measured by a magnifier scale method or a digital projector, width measurements are taken at several positions on each target, and the average of the width measurements at all points is taken as the final width value of the target. This method is time-consuming and has poor real-time performance, and the external force applied while selecting and clamping the target products secondarily affects the width of the target, introducing a certain error into the final result.
The invention adopts a computer vision detection method with high real-time performance: products on the production line are photographed in real time, data enhancement and black-and-white binarization are applied, feature representations are extracted using the center-line method and the Hough parallel-line method, and the width value is calculated in combination with the distance mapping between the world coordinate system and the camera coordinate system. Repeated experiments show that the proposed method gives accurate measurements with timely feedback, greatly improving detection accuracy and efficiency.
As shown in fig. 1, the present invention discloses a method for detecting an industrial flexible object based on machine vision, which mainly comprises:
step 1, collecting an original color image of the target as the original data set;
step 2, performing data enhancement processing on the original data set to obtain a black-and-white binary image that separates the target area from the white background;
step 3, extracting the connected regions of the target, computing the area and arc length of each connected region, and retaining the parts whose connected-region area is larger than a fixed value S or whose connected-region arc length is larger than a fixed value L;
step 4, drawing the edge contour curve from the connected regions, and extracting the center line of each connected region and the parallel straight lines of the target edge contour;
step 5, constructing line segments representing the target width from the center line and the parallel lines;
and step 6, mapping the pixel length of the line segments representing the target width to an actual distance.
Further, the initial data set in step 1 is captured in real time by a digital imaging device.
Further, the data enhancement processing in step 2 comprises median filtering and graying; the median filtering is used to eliminate salt-and-pepper noise. Graying uses the weighted average method, that is, Gray(i, j) = wR × R(i, j) + wG × G(i, j) + wB × B(i, j), where wR, wG, and wB are the weights of the red, green, and blue channels, respectively; in the present invention wR is 0.229, wG is 0.578, and wB is 0.114.
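A minimal sketch of this data-enhancement step using OpenCV and NumPy (the median-filter kernel size and the Otsu thresholding used for binarization are assumptions of this example; the invention does not specify them):

```python
import cv2
import numpy as np

def enhance_to_binary(bgr_image, w_r=0.229, w_g=0.578, w_b=0.114):
    """Median filtering, weighted-average graying and binarization (illustrative sketch)."""
    # Median filter to suppress salt-and-pepper noise (kernel size 5 is an assumed value).
    filtered = cv2.medianBlur(bgr_image, 5)
    # Weighted-average graying: Gray = wR*R + wG*G + wB*B (OpenCV stores channels as B, G, R).
    b, g, r = cv2.split(filtered.astype(np.float32))
    gray = (w_r * r + w_g * g + w_b * b).clip(0, 255).astype(np.uint8)
    # Binarize so that the target area stands out from the white background
    # (Otsu's threshold is an assumed choice; the patent does not name a thresholding rule).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return binary

# Example usage (file name is hypothetical):
# binary = enhance_to_binary(cv2.imread("target.png"))
```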
Further, a connected region of the image in step 3 refers to a region composed of adjacent pixels having the same pixel value; the 8-neighborhood definition is used in the present invention.
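One way the connected-region filtering of step 3 might be realized with 8-neighborhood connectivity (a sketch; the concrete threshold values S and L are not specified by the invention, so the defaults below are placeholders):

```python
import cv2
import numpy as np

def filter_connected_regions(binary, area_threshold_S=500.0, arc_length_threshold_L=200.0):
    """Keep connected regions whose area > S or whose contour arc length > L (sketch)."""
    num_labels, labels = cv2.connectedComponents(binary, connectivity=8)
    kept = np.zeros_like(binary)
    for label in range(1, num_labels):  # label 0 is the background
        region = np.uint8(labels == label) * 255
        contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        area = sum(cv2.contourArea(c) for c in contours)
        arc_length = sum(cv2.arcLength(c, True) for c in contours)
        if area > area_threshold_S or arc_length > arc_length_threshold_L:
            kept[labels == label] = 255
    return kept
```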
Further, the contour center-line extraction algorithm in step 4 is the Zhang-Suen thinning algorithm. Using a 3×3 pixel window centered on the pixel under examination, the method continuously erodes and thins the target from its periphery toward its center until only a single-pixel-wide skeleton remains, which gives the center line of the connected region.
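The Zhang-Suen thinning of step 4 can be written as a self-contained routine, for example as follows (a pure NumPy sketch; opencv-contrib's cv2.ximgproc.thinning provides an equivalent implementation):

```python
import numpy as np

def zhang_suen_thinning(binary):
    """Erode a binary target (foreground > 0) down to a one-pixel-wide center line."""
    img = (binary > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in range(2):
            p = np.pad(img, 1)
            # The 8 neighbours of every pixel, clockwise from the one above (P2..P9).
            p2 = p[:-2, 1:-1]; p3 = p[:-2, 2:]; p4 = p[1:-1, 2:]; p5 = p[2:, 2:]
            p6 = p[2:, 1:-1]; p7 = p[2:, :-2]; p8 = p[1:-1, :-2]; p9 = p[:-2, :-2]
            nb = [p2, p3, p4, p5, p6, p7, p8, p9]
            b = sum(n.astype(np.int32) for n in nb)  # number of foreground neighbours
            seq = nb + [p2]
            a = sum(((seq[k] == 0) & (seq[k + 1] == 1)).astype(np.int32) for k in range(8))  # 0->1 transitions
            if step == 0:
                cond = (p2 * p4 * p6 == 0) & (p4 * p6 * p8 == 0)
            else:
                cond = (p2 * p4 * p8 == 0) & (p2 * p6 * p8 == 0)
            mask = (img == 1) & (b >= 2) & (b <= 6) & (a == 1) & cond
            if mask.any():
                img[mask] = 0
                changed = True
    return (img * 255).astype(np.uint8)
```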
Hough line detection is then applied to the picture in step 4. Its basic idea is to use the transformation between image space and Hough space to map curves or straight lines of the same shape in image space to single points in Hough space, so that the problem of detecting an arbitrary shape is converted into the problem of counting coincident points in Hough space. When applying the Hough transform in the present invention, the distance step is set to 1 pixel, the angle step to π/180 radians, the accumulator threshold to 5, and the minimum line-segment length to 3; pixels on the same straight line separated by a break of 2 or more pixels are considered to belong to different segments.
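With the parameters named in this paragraph, the probabilistic Hough transform in OpenCV could be invoked as below (a sketch; the Canny thresholds are assumed values, and reading the 2-pixel break as maxLineGap=1 is an interpretation of the text, which only states that a gap of 2 or more pixels splits a line into different segments):

```python
import cv2
import numpy as np

def detect_edge_lines(binary):
    """Detect straight segments on the target edge contour (sketch of the Hough step)."""
    edges = cv2.Canny(binary, 50, 150)      # edge map; Canny thresholds are assumed values
    lines = cv2.HoughLinesP(edges,
                            rho=1,              # distance step: 1 pixel
                            theta=np.pi / 180,  # angle step: pi/180 radian
                            threshold=5,        # accumulator threshold
                            minLineLength=3,    # minimum segment length
                            maxLineGap=1)       # a break of 2 or more pixels -> separate segments
    return [] if lines is None else [tuple(l[0]) for l in lines]  # list of (x1, y1, x2, y2)
```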
Further, in step 5, the slope k_i and the midpoint (x_cen, y_cen) of each line segment are calculated. All parallel line segments are traversed; the midpoint of the current segment is connected with the midpoints of the other segments, and it is judged whether the connecting line is a perpendicular between the two straight lines; if so, it is retained, otherwise it is discarded, and the shortest perpendicular is selected iteratively.
Further, in step 5, the edge points of the target contour are traversed in turn; the connecting lines l_i between the current position (x, y) and all contour pixel points (x_i, y_j) that do not lie on the same edge are searched, and the length of each l_i is recorded. It is then judged whether l_i passes through the center line of the connected region; if so, it is retained, otherwise it is discarded. The minimum length among all retained l_i is taken as the target width at the current pixel position.
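A hedged sketch of this width search follows: for every contour point, the connecting segments to the other contour points are tested for intersection with the center line (skeleton), and the shortest surviving segment is taken as the width at that point. Sampling each candidate segment at roughly one point per pixel is an implementation assumption; testing intersection with the skeleton also implicitly discards segments that stay on the same edge.

```python
import numpy as np

def width_at_contour_points(contour_pts, skeleton):
    """For each contour point (x, y), the length of the shortest connecting line
    to another contour point that crosses the center line (skeleton: binary image)."""
    widths = {}
    pts = np.asarray(contour_pts)
    for i, (x, y) in enumerate(pts):
        best = None
        for j, (xj, yj) in enumerate(pts):
            if i == j:
                continue
            length = float(np.hypot(xj - x, yj - y))
            if best is not None and length >= best:
                continue  # cannot improve the current minimum
            # Sample the candidate segment and test whether it crosses the center line.
            n = max(int(length) + 1, 2)
            xs = np.linspace(x, xj, n).round().astype(int)
            ys = np.linspace(y, yj, n).round().astype(int)
            if skeleton[ys, xs].any():
                best = length
        if best is not None:
            widths[(int(x), int(y))] = best
    return widths
```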
Further, in step 6, the mapping from pixel distance to actual distance adopts the Zhang Zhengyou calibration method. In the invention a monocular camera is used for shooting, and the positional relationship between a point in space and its imaging point in the camera is established by means of the pinhole imaging model; actual distances are measured in millimeters, and distances in the pixel coordinate system are measured in pixels. The basic formula of the final coordinate-system transformation is:
s·m = A[R t]·M
where s is an arbitrary scale factor; m is the pixel coordinate of a calibration checkerboard corner point; M is the world coordinate of the checkerboard corner; A is the camera intrinsic matrix; [R t] is the extrinsic matrix, in which R is the rotation matrix and t is the translation matrix, corresponding respectively to the rotation and translation of the calibration board.
The camera internal reference matrix is as follows:
    A = | fx  γ   u0 |
        | 0   fy  v0 |
        | 0   0   1  |
where u0 is the abscissa of the principal point; v0 is the ordinate of the principal point; γ is the skew parameter between the u and v axes introduced by industrial manufacturing, which can be approximated as 0; fx is the effective focal length in the horizontal direction; fy is the effective focal length in the vertical direction; all of the above parameters are in pixels.
In the invention, the reprojection error is used as the evaluation index of the calibration; it is the error between the original two-dimensional point and the two-dimensional point obtained by re-projecting the corresponding three-dimensional point of the world coordinate system onto the image coordinate system according to the camera intrinsic matrix. The calculation formula is as follows:
error = ( Σ ||x' - x|| ) / total_points
where error is the reprojection error; for each checkerboard corner point, x' is the two-dimensional point coordinate after re-projection and x is the original two-dimensional point coordinate; total_points is the number of all corner points on the checkerboard, and the sum runs over all corner points.
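A sketch of this calibration and its evaluation using OpenCV's implementation of Zhang Zhengyou's method (the checkerboard geometry, the square size, and the mean-distance form of the error are assumptions of this example, not values taken from the invention):

```python
import cv2
import numpy as np

def calibrate_and_reprojection_error(image_paths, board_size=(9, 6), square_mm=25.0):
    """Monocular calibration plus reprojection-error evaluation (illustrative sketch)."""
    # World coordinates of the checkerboard corners (Z = 0 plane), in millimetres.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
    obj_points, img_points, image_size = [], [], None
    for path in image_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    # A is the intrinsic matrix [[fx, gamma, u0], [0, fy, v0], [0, 0, 1]].
    _, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
    # Reprojection error: mean distance between re-projected and detected corner points.
    total_err, total_points = 0.0, 0
    for objp_i, imgp_i, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        projected, _ = cv2.projectPoints(objp_i, rvec, tvec, A, dist)
        diff = projected.reshape(-1, 2) - imgp_i.reshape(-1, 2)
        total_err += np.sum(np.linalg.norm(diff, axis=1))
        total_points += len(objp_i)
    return A, dist, total_err / total_points

# With A known, a width measured in pixels on the object plane can be converted to
# millimetres through a scale factor derived from the calibration target, for example
# (assumed usage): width_mm = width_px * (square_mm / mean_corner_spacing_px)
```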
In summary, the industrial flexible object detection method based on machine vision of the invention combines mature computer vision technology with related improved algorithms to realize width detection of industrial flexible objects, with a low error rate and high real-time performance.
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the present invention.

Claims (9)

1. A machine vision-based industrial flexible object detection method is characterized in that: the industrial flexible object detection method mainly comprises the following steps:
step 1, collecting an original color image of the target as the original data set;
step 2, performing data enhancement processing on the original data set to obtain a black-and-white binary image that separates the target area from the white background;
step 3, extracting the connected regions of the target, computing the area and arc length of each connected region, and retaining the parts whose connected-region area is larger than a fixed value S or whose connected-region arc length is larger than a fixed value L;
step 4, drawing the edge contour curve from the connected regions, and extracting the center line of each connected region and the parallel straight lines of the target edge contour;
step 5, constructing line segments representing the target width from the center line and the parallel lines;
and step 6, mapping the pixel length of the line segments representing the target width to an actual distance.
2. The machine-vision-based industrial flexible object detection method of claim 1, wherein: in step 1, an initial data set is captured in real time by a digital imaging device.
3. The machine-vision-based industrial flexible object detection method of claim 1, wherein: in step 2, the data enhancement processing comprises median filtering and graying, wherein the median filtering is used for eliminating salt and pepper noise; graying uses a weighted average graying method.
4. The machine-vision-based industrial flexible object detection method of claim 1, wherein: the calculation formula of the weighted average graying method is as follows:
Gray(i, j) = wR × R(i, j) + wG × G(i, j) + wB × B(i, j), where wR, wG, and wB are the weights of the red, green, and blue channels, respectively.
5. The machine-vision-based industrial flexible object detection method of claim 4, wherein: the wR is 0.229, wG is 0.578, and wB is 0.114.
6. The machine-vision-based industrial flexible object detection method of claim 1, wherein: in step 3, a connected region of the image is a region composed of adjacent pixels having the same pixel value.
7. The machine-vision-based industrial flexible object detection method according to claim 1, wherein the step 4 comprises:
step 4.1, the algorithm for extracting the center line of the connected region is the Zhang-Suen thinning algorithm; using a 3×3 pixel window centered on the pixel under examination, the target is continuously eroded and thinned from its periphery toward its center until only a single-pixel-wide skeleton remains, which gives the center line of the connected region;
step 4.2, Hough line detection is applied to map curves or straight lines of the same shape in image space to single points in Hough space, so that the problem of detecting an arbitrary shape is converted into the problem of counting coincident points in Hough space.
8. The machine-vision-based industrial flexible object detection method according to claim 1, wherein step 5 comprises:
step 5.1, traversing the edge points of the target contour in turn, searching the connecting lines between the current position and all contour pixel points that do not lie on the same edge as that position, and recording their lengths;
step 5.2, judging whether each connecting line passes through the center line of the connected region; if so, it is retained, otherwise it is discarded; the minimum of the retained lengths is taken as the target width at the current pixel position.
9. The machine-vision-based industrial flexible object detection method of claim 1, wherein: in step 6, the mapping from pixel distance to actual distance adopts the Zhang Zhengyou calibration method; a monocular camera is used for shooting, and the positional relationship between a point in space and its imaging point in the camera is established by means of the pinhole imaging model, where actual distances are measured in millimeters and distances in the pixel coordinate system are measured in pixels.
CN202210053072.7A 2022-01-18 2022-01-18 Industrial flexible object detection method based on machine vision Pending CN114494169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210053072.7A CN114494169A (en) 2022-01-18 2022-01-18 Industrial flexible object detection method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210053072.7A CN114494169A (en) 2022-01-18 2022-01-18 Industrial flexible object detection method based on machine vision

Publications (1)

Publication Number Publication Date
CN114494169A true CN114494169A (en) 2022-05-13

Family

ID=81512312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210053072.7A Pending CN114494169A (en) 2022-01-18 2022-01-18 Industrial flexible object detection method based on machine vision

Country Status (1)

Country Link
CN (1) CN114494169A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method based on
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
WO2021248686A1 (en) * 2020-06-10 2021-12-16 南京翱翔信息物理融合创新研究院有限公司 Projection enhancement-oriented gesture interaction method based on machine vision
CN112330663A (en) * 2020-11-25 2021-02-05 中国烟草总公司郑州烟草研究院 Computer vision tobacco shred width detection method based on variable diameter circle

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402871A (en) * 2023-03-28 2023-07-07 苏州大学 Monocular distance measurement method and system based on scene parallel elements and electronic equipment
CN116402871B (en) * 2023-03-28 2024-05-10 苏州大学 Monocular distance measurement method and system based on scene parallel elements and electronic equipment
CN116721100A (en) * 2023-08-09 2023-09-08 聚时科技(深圳)有限公司 Industrial vision narrow-area cutting method and device, electronic equipment and storage medium
CN116721100B (en) * 2023-08-09 2023-11-24 聚时科技(深圳)有限公司 Industrial vision narrow-area cutting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination