CN105457908A - Sorting and quick locating method and system for small-size glass panels on basis of monocular CCD - Google Patents

Sorting and quick locating method and system for small-size glass panels on basis of monocular CCD Download PDF

Info

Publication number
CN105457908A
CN105457908A CN201510771640.7A
Authority
CN
China
Prior art keywords
row
coordinate
image
glass panel
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510771640.7A
Other languages
Chinese (zh)
Other versions
CN105457908B (en)
Inventor
孙高磊
程涛
冯平
彭涛
刘新辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201510771640.7A priority Critical patent/CN105457908B/en
Publication of CN105457908A publication Critical patent/CN105457908A/en
Application granted granted Critical
Publication of CN105457908B publication Critical patent/CN105457908B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B07: SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C: POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00: Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34: Sorting according to other particular properties
    • B07C5/342: Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C5/3422: Sorting according to other particular properties according to optical properties, e.g. colour using video scanning devices, e.g. TV-cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention is suitable for locating glass panels and provides a sorting and rapid positioning method for small-size glass panels based on a monocular CCD. The method comprises: A. acquiring images of the slots holding the glass panels with the monocular CCD, performing gray-level transformation on the acquired images and then preprocessing them to obtain gray-level images; B. calculating the mean gray level of each row of pixels of the gray-level images and determining the row coordinates of the glass panels; C. performing binary segmentation on the gray-level images and extracting the edge coordinates of the regions of interest corresponding to the glass panels; D. determining the center coordinates of the slots from the row coordinates and the region-of-interest edge coordinates to obtain the suction positions, and controlling a manipulator to move to the suction positions and pick up the panels by suction. Based on gray-level transformation and edge detection, the invention quickly finds the center position of each glass panel lying in a slot within the field of view and, together with the camera's field-of-view center position, quickly finds the glass panel that currently needs to be grabbed, so that small-size glass panels can be sorted and rapidly located.

Description

Sorting and rapid positioning method and system for small-size glass panels based on a monocular CCD
Technical field
The invention belongs to the field of image positioning, and particularly relates to a sorting and rapid positioning method and system for small-size glass panels based on a monocular CCD.
Background technology
At present, positioning methods fall broadly into two classes: mechanical positioning and machine-vision positioning. Mechanical positioning is fairly simple but not very adaptive, especially for glass panels of differing sizes, whereas machine-vision positioning offers high precision, high speed, strong adaptability and non-contact operation, and supports real-time inspection, so it is being applied more and more widely. According to the number of CCDs, machine-vision positioning can be divided into monocular and multi-camera methods; according to the dimensionality of the object space, it can further be divided into two-dimensional and three-dimensional positioning methods. Multi-camera methods are usually used for more complex multi-dimensional spatial positioning. However, when a monocular CCD is currently used for sorting and rapid positioning, the mechanical positioning of the manipulator is not sufficiently adaptive and easily contacts the glass panels, causing scratches.
Summary of the invention
The technical problem to be solved by the invention is to provide a sorting and rapid positioning method and system for small-size glass panels based on a monocular CCD, aiming at the problem that, when a monocular CCD is used for sorting and rapid positioning, the mechanical positioning of the manipulator is not sufficiently adaptive and easily contacts the glass panels, causing scratches.
The invention is realized as follows: a sorting and rapid positioning method for small-size glass panels based on a monocular CCD, comprising the steps of:
Step A: acquiring an image of the slots holding the glass panels with the monocular CCD, performing gray-level conversion on the acquired image, and then preprocessing the converted image to obtain a gray-level image;
Step B: calculating the mean gray level of each row of pixels of the gray-level image, and then determining the row coordinate of a glass panel from the row gray means;
Step C: performing binary segmentation on the gray-level image, and extracting the edge coordinates of the region of interest corresponding to the glass panel;
Step D: determining the slot center coordinate from the row coordinate and the region-of-interest edge coordinates, taking the slot center coordinate as the suction position of the glass panel, and then controlling a manipulator to move to the suction position of the glass panel and pick it up by suction.
Further, step A specifically comprises:
Step A1: controlling the manipulator to move above the bin slots, and controlling the monocular CCD camera fixed on the manipulator to acquire an image of the slots holding the glass panels;
Step A2: performing gray-level conversion on the image acquired in step A1, and then preprocessing it to obtain a gray-level image; the preprocessing includes filtering and denoising.
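For illustration, a minimal sketch of steps A1 and A2 is given below, assuming the image handling is done with OpenCV and NumPy; the 5x5 median filter is an illustrative stand-in for the otherwise unspecified filtering and denoising.

```python
import cv2

def preprocess(bgr_image):
    """Step A2 sketch: gray-level conversion followed by simple denoising."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)  # gray-level conversion
    gray = cv2.medianBlur(gray, 5)                       # filtering / denoising (illustrative kernel size)
    return gray
```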
Further, step B specifically comprises:
Step B1: calculating the sum of the gray values of each row of pixels of the gray-level image;
Let I(i, j) denote the pixel in row i and column j of the gray-level image, r the height of the gray-level image, c the width of the gray-level image, and Row(i) the sum of the gray values of the pixels in row i; then: Row(i) = Σ_{j=0}^{c} I(i, j), where 0 ≤ i ≤ r, 0 ≤ j ≤ c;
Step B2: calculating the mean gray level of each row of pixels from the sum of the gray values of each row;
Let RowAve(i) denote the mean gray level of the pixels in row i; then:
RowAve(i) = Row(i)/c;
Step B3: finding the maximum of the row gray means, and confirming the row coordinate of the glass panel from this maximum.
Further, step B3 specifically comprises:
Step B31: calculating the overall mean of the row gray means of the gray-level image;
Let RowAverage denote the overall mean of the row gray means of the gray-level image; then:
RowAverage = Σ_{i=0}^{r} RowAve(i) / r;
Step B32: calculating the gray-level deviation of each row of pixels;
Let Delta(i) denote the gray-level deviation of the pixels in row i, i.e. the amount by which the row departs from the overall row gray mean: Delta(i) = RowAve(i) - RowAverage;
Step B33: traversing the row gray means, taking the maximum, and setting a threshold from the maximum;
Let Delta denote the threshold and MaxRowAve the maximum; then:
Delta = (MaxRowAve - RowAverage) * 0.8;
Step B34: judging whether the gray-level deviation exceeds the threshold, determining the gray-level maximum from the result, and thereby determining the row coordinate of the glass panel;
If Delta < Delta(i), row i is determined to be the row where the gray-level maximum is located, i.e. the row where the glass panel is located, and the row coordinate of the glass panel is obtained.
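A minimal sketch of steps B1 to B34, assuming a NumPy uint8 gray-level image of shape (r, c); the 0.8 factor is the experimentally chosen value mentioned in the description.

```python
import numpy as np

def find_panel_rows(gray):
    """Step B sketch: locate the rows whose mean gray level stands out from the background."""
    r, c = gray.shape
    row_sum = gray.sum(axis=1, dtype=np.float64)   # Row(i)
    row_ave = row_sum / c                          # RowAve(i)
    row_average = row_ave.mean()                   # RowAverage
    delta = (row_ave.max() - row_average) * 0.8    # Delta threshold (experimental factor)
    # rows whose deviation Delta(i) = RowAve(i) - RowAverage exceeds the threshold
    return np.where(row_ave - row_average > delta)[0]
```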
Further, step C specifically comprises:
Step C1: performing binary segmentation on the gray-level image to obtain a binarized gray-level image;
Step C2: performing BLOB analysis on the binarized gray-level image to obtain a region-of-interest image;
Step C3: performing edge extraction on the region-of-interest image, and obtaining the region-of-interest edge coordinates from the extracted edges;
Let ColGrayVa1(i) denote the coordinate of the left edge endpoint of row i of the region-of-interest image, and ColGrayVa2(i) the coordinate of the right edge endpoint of row i of the region-of-interest image; then:
Row i of the region-of-interest image is traversed from the left; when ColGrayVa1(i) = 255 is met, the loop is exited and the point's coordinate is recorded, and the row is traversed from the right; when ColGrayVa2(i) = 255 is met, the point's coordinate is recorded, and row i+1 of the region-of-interest image is then traversed from the left.
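A minimal sketch of steps C1 and C3, assuming OpenCV and NumPy; Otsu's method stands in for the unspecified binarization threshold, and the BLOB filtering of step C2 is omitted for brevity.

```python
import cv2
import numpy as np

def binarize(gray):
    """Step C1 sketch: binary segmentation (Otsu used as an illustrative threshold)."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def row_edges(binary, i):
    """Step C3 sketch: left/right edge endpoints of row i of the binarized ROI image."""
    cols = np.where(binary[i] == 255)[0]
    if cols.size == 0:
        return None
    return int(cols[0]), int(cols[-1])   # ColGrayVa1(i), ColGrayVa2(i)
```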
Further, step D specifically comprises:
Step D1: determining the slot center of the row from the row coordinate and the region-of-interest edge coordinates;
Let RowCenter(i) denote the slot row center of row i, ColCenter(i) the column center of row i, ColGrayVa1(i) the left edge endpoint coordinate of row i of the region-of-interest image, and ColGrayVa2(i) the right edge endpoint coordinate of row i of the region-of-interest image; then:
RowCenter(i) = i;
ColCenter(i) = (ColGrayVa1(i) + ColGrayVa2(i)) / 2;
Step D2: determining the suction position of the glass panel from the camera field-of-view center coordinate and the slot center coordinate;
Let CameraRowCenter denote the row coordinate of the camera field-of-view center, CameraColCenter the column coordinate of the camera field-of-view center, r the height of the gray-level image, and c the width of the gray-level image; then:
CameraRowCenter = r/2; CameraColCenter = c/2;
If and only if abs(CameraRowCenter - RowCenter(i)) and abs(CameraColCenter - ColCenter(i)) are both minimal, row i is determined to be the row of the glass panel to be picked up; combining that row coordinate with the camera's three-dimensional coordinate gives the suction position of the glass panel;
Step D3: controlling the manipulator to move to the suction position and pick up the panel by suction.
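A minimal sketch of steps D1 and D2; the two "both minimal" conditions are combined here into a single Manhattan distance to the field-of-view center, which is one possible reading of the selection criterion, and the mapping to the manipulator's three-dimensional suction position is left out.

```python
def pick_slot(gray, panel_rows, edges_per_row):
    """Step D sketch: slot centre per detected row, then the slot closest to the view centre."""
    r, c = gray.shape
    cam_row_center, cam_col_center = r / 2, c / 2        # CameraRowCenter, CameraColCenter
    best, best_dist = None, float("inf")
    for i in panel_rows:
        left, right = edges_per_row[i]
        row_center = i                                   # RowCenter(i)
        col_center = (left + right) / 2                  # ColCenter(i)
        dist = abs(cam_row_center - row_center) + abs(cam_col_center - col_center)
        if dist < best_dist:
            best, best_dist = (row_center, col_center), dist
    return best   # pixel coordinates of the slot centre to be picked up
```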
The invention further provides a sorting and rapid positioning system for small-size glass panels based on a monocular CCD, comprising:
an acquisition and processing unit, configured to acquire an image of the slots holding the glass panels with the monocular CCD, perform gray-level conversion on the acquired image, and then preprocess the converted image to obtain a gray-level image;
a calculation unit, configured to calculate the mean gray level of each row of pixels of the gray-level image, and then determine the row coordinate of a glass panel from the row gray means;
an edge extraction unit, configured to perform binary segmentation on the gray-level image and extract the edge coordinates of the region of interest corresponding to the glass panel;
a positioning and suction unit, configured to determine the slot center coordinate from the row coordinate and the region-of-interest edge coordinates, take the slot center coordinate as the suction position of the glass panel, and then control a manipulator to move to the suction position of the glass panel and pick it up by suction.
Further, the acquisition and processing unit is specifically configured to:
first, control the manipulator to move above the bin slots, and control the monocular CCD camera fixed on the manipulator to acquire an image of the slots holding the glass panels;
then, perform gray-level conversion on the acquired image and preprocess it to obtain a gray-level image; the preprocessing includes filtering and denoising.
Further, the calculation unit is specifically configured to:
first, calculate the sum of the gray values of each row of pixels of the gray-level image;
let I(i, j) denote the pixel in row i and column j of the gray-level image, r the height of the gray-level image, c the width of the gray-level image, and Row(i) the sum of the gray values of the pixels in row i; then: Row(i) = Σ_{j=0}^{c} I(i, j), where 0 ≤ i ≤ r, 0 ≤ j ≤ c;
next, calculate the mean gray level of each row of pixels from the row sums;
let RowAve(i) denote the mean gray level of the pixels in row i; then:
RowAve(i) = Row(i)/c;
finally, find the maximum of the row gray means and confirm the row coordinate of the glass panel from this maximum.
Further, the positioning and suction unit is specifically configured to:
first, determine the slot center of the row from the row coordinate and the region-of-interest edge coordinates;
let RowCenter(i) denote the slot row center of row i, ColCenter(i) the column center of row i, ColGrayVa1(i) the left edge endpoint coordinate of row i of the region-of-interest image, and ColGrayVa2(i) the right edge endpoint coordinate of row i of the region-of-interest image; then:
RowCenter(i) = i;
ColCenter(i) = (ColGrayVa1(i) + ColGrayVa2(i)) / 2;
next, determine the suction position of the glass panel from the camera field-of-view center coordinate and the slot center coordinate;
let CameraRowCenter denote the row coordinate of the camera field-of-view center, CameraColCenter the column coordinate of the camera field-of-view center, r the height of the gray-level image, and c the width of the gray-level image; then:
CameraRowCenter = r/2; CameraColCenter = c/2;
if and only if abs(CameraRowCenter - RowCenter(i)) and abs(CameraColCenter - ColCenter(i)) are both minimal, row i is determined to be the row of the glass panel to be picked up; combining that row coordinate with the camera's three-dimensional coordinate gives the suction position of the glass panel;
finally, control the manipulator to move to the suction position and pick up the panel by suction.
Compared with the prior art, the invention has the following beneficial effects: built on monocular vision positioning and two-dimensional positioning, and based on gray-level conversion and edge detection, the invention can quickly find within the field of view the center position of every glass panel lying in a slot; combined with the camera's field-of-view center position, the glass panel that currently needs to be grabbed can be found quickly, realizing sorting and rapid positioning of small-size panels. Furthermore, by using machine vision, the invention avoids the secondary scratches caused when mechanical positioning contacts the glass panels, can compensate for slot errors, achieves automated and rapid sorting, and can adapt to inspection of multiple panel models.
Brief description of the drawings
Fig. 1 is a flowchart of a sorting and rapid positioning method for small-size glass panels based on a monocular CCD provided by an embodiment of the invention.
Fig. 2 is a gray-level image of the magazine provided by an embodiment of the invention.
Fig. 3 is a diagram of the row gray values of the magazine provided by an embodiment of the invention.
Fig. 4 is a diagram of the region-of-interest edges provided by an embodiment of the invention.
Fig. 5 is a diagram of the panel suction positions provided by an embodiment of the invention.
Fig. 6 is a structural diagram of a sorting and rapid positioning system for small-size glass panels based on a monocular CCD provided by an embodiment of the invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the invention, not to limit it.
The sorting and rapid positioning method for small-size glass panels based on a monocular CCD is built on monocular vision positioning and two-dimensional positioning; based on gray-level conversion and edge detection, it finds within the field of view the center of every glass panel lying in a slot, and combines this with the camera's field-of-view center position to find the glass panel to be grabbed next. The overall idea is: project the gray-level image captured by the camera on the manipulator row by row and compute the row gray means; extract the row where a glass panel lies from the maximum; binarize the original image and extract the region of interest; perform edge extraction on that region to obtain the edge coordinates; combine them with the panel's row coordinate to obtain the row and column of the panel; and combine the camera field of view and coordinates to obtain the slot center coordinate of the panel to be sorted, then sort.
Based on the above idea, the invention proposes a sorting and rapid positioning method for small-size glass panels based on a monocular CCD, as shown in Fig. 1, comprising the steps of:
S1: acquire an image of the slots holding the glass panels with the monocular CCD, perform gray-level conversion on the acquired image, and then preprocess the converted image to obtain a gray-level image;
S2: calculate the mean gray level of each row of pixels of the gray-level image, then determine the row coordinate of a glass panel from the row gray means. In this step, the two-dimensional image information of the initial gray-level image is converted into one-dimensional information. Although this step obtains the row where a panel lies, the corresponding slot is not necessarily located at the picture center and cannot be determined from the image center alone; the two endpoints of the slot must also be obtained to derive the slot center corresponding to the panel row, so step S3 is also required.
S3: perform binary segmentation on the gray-level image and extract the region-of-interest edge coordinates corresponding to the glass panel;
S4: determine the slot center coordinate from the row coordinate and the region-of-interest edge coordinates, take the slot center coordinate as the suction position of the glass panel, and then control the manipulator to move to the suction position of the glass panel and pick it up by suction.
Specifically, step S1 comprises:
S11: control the manipulator to move above the bin slots, and control the monocular CCD camera fixed on the manipulator to acquire an image of the slots holding the glass panels. The acquired image is shown in Fig. 2.
S12: perform gray-level conversion on the image acquired in step S11, then preprocess it to obtain a gray-level image; the preprocessing includes filtering, denoising and the like.
Specifically, step S2 comprises:
S21: calculate the sum of the gray values of each row of pixels of the gray-level image;
Let I(i, j) denote the pixel in row i and column j of the gray-level image, r the height of the gray-level image, c the width of the gray-level image, and Row(i) the sum of the gray values of the pixels in row i; then: Row(i) = Σ_{j=0}^{c} I(i, j), where 0 ≤ i ≤ r, 0 ≤ j ≤ c;
S22: calculate the mean gray level of each row of pixels from the row sums;
Let RowAve(i) denote the mean gray level of the pixels in row i; then:
RowAve(i) = Row(i)/c;
S23: find the maximum of the row gray means and confirm the row coordinate of the glass panel from this maximum.
Specifically, step S23 comprises:
S231: calculate the overall mean of the row gray means of the gray-level image;
Let RowAverage denote the overall mean of the row gray means; then:
RowAverage = Σ_{i=0}^{r} RowAve(i) / r;
S232: calculate the gray-level deviation of each row of pixels;
Let Delta(i) denote the gray-level deviation of the pixels in row i, i.e. the amount by which the row departs from the overall row gray mean: Delta(i) = RowAve(i) - RowAverage;
S233: traverse the row gray means, take the maximum, and set a threshold from the maximum;
Let Delta denote the threshold and MaxRowAve the maximum; then:
Delta = (MaxRowAve - RowAverage) * 0.8. The factor 0.8 in this step is obtained experimentally and can be changed accordingly for different slots.
S234: judge whether the gray-level deviation exceeds the threshold, determine the gray-level maximum from the result, and thereby determine the row coordinate of the glass panel;
If Delta < Delta(i), row i is determined to be the row where the gray-level maximum, i.e. the glass panel, is located, and the row coordinate of the glass panel is obtained.
In particular, step S3 specifically comprises:
S31: perform binary segmentation on the gray-level image to obtain a binarized gray-level image;
S32: perform BLOB analysis on the binarized gray-level image to obtain a region-of-interest image;
S33: perform edge extraction on the region-of-interest image and obtain the region-of-interest edge coordinates from the extracted edges;
Let ColGrayVa1(i) denote the coordinate of the left edge endpoint of row i of the region-of-interest image, and ColGrayVa2(i) the coordinate of the right edge endpoint of row i of the region-of-interest image; then:
Row i of the region-of-interest image is traversed from the left; when ColGrayVa1(i) = 255 is met, the loop is exited and the point's coordinate is recorded, and the row is traversed from the right; when ColGrayVa2(i) = 255 is met, the point's coordinate is recorded, and row i+1 of the region-of-interest image is then traversed from the left. In this step, after the traversal of row i ends, row i+1 is started and the above traversal is repeated to obtain the left and right edge endpoint coordinates, until the whole region-of-interest image has been traversed.
In particular, step S4 further comprises:
S41: determine the slot center of the row from the row coordinate and the region-of-interest edge coordinates;
Let RowCenter(i) denote the slot row center of row i, ColCenter(i) the column center of row i, ColGrayVa1(i) the left edge endpoint coordinate of row i of the region-of-interest image, and ColGrayVa2(i) the right edge endpoint coordinate of row i of the region-of-interest image; then:
RowCenter(i) = i;
ColCenter(i) = (ColGrayVa1(i) + ColGrayVa2(i)) / 2;
S42: determine the suction position of the glass panel from the camera field-of-view center coordinate and the slot center coordinate;
Let CameraRowCenter denote the row coordinate of the camera field-of-view center, CameraColCenter the column coordinate of the camera field-of-view center, r the height of the gray-level image, and c the width of the gray-level image; then:
CameraRowCenter = r/2; CameraColCenter = c/2;
If and only if abs(CameraRowCenter - RowCenter(i)) and abs(CameraColCenter - ColCenter(i)) are both minimal, row i is determined to be the row of the glass panel to be picked up; combining that row coordinate with the camera's three-dimensional coordinate gives the suction position of the glass panel;
S43: control the manipulator to move to the suction position of the glass panel and pick it up by suction.
In the invention, the two endpoint positions of the slot are extracted to obtain the slot center; combined with the camera's coordinate and field of view, the center coordinate of the glass panel currently to be picked up is obtained, and the manipulator is driven to that position to pick it up by suction, completing the positioning.
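For illustration, the sketches above can be tied together into a single flow; locate_and_pick below is a hypothetical helper that only returns the slot center in pixel coordinates, since the conversion to the manipulator's suction position requires the camera's three-dimensional pose, which lies outside the image-processing part.

```python
def locate_and_pick(bgr_image):
    """End-to-end sketch: image in, pixel coordinates of the slot centre to pick out."""
    gray = preprocess(bgr_image)
    binary = binarize(gray)
    panel_rows = find_panel_rows(gray)
    edges = {}
    for i in panel_rows:
        endpoints = row_edges(binary, i)
        if endpoints is not None:
            edges[i] = endpoints
    # slot closest to the camera field-of-view centre, i.e. the panel to pick next
    return pick_slot(gray, list(edges.keys()), edges)
```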
The invention further provides a sorting and rapid positioning system for small-size glass panels based on a monocular CCD, as shown in Fig. 6, comprising:
an acquisition and processing unit 1, configured to acquire an image of the slots holding the glass panels with the monocular CCD, perform gray-level conversion on the acquired image, and then preprocess the converted image to obtain a gray-level image;
a calculation unit 2, configured to calculate the mean gray level of each row of pixels of the gray-level image, and then determine the row coordinate of a glass panel from the row gray means;
an edge extraction unit 3, configured to perform binary segmentation on the gray-level image and extract the region-of-interest edge coordinates corresponding to the glass panel;
a positioning and suction unit 4, configured to determine the slot center coordinate from the row coordinate and the region-of-interest edge coordinates, take the slot center coordinate as the suction position of the glass panel, and then control the manipulator to move to the suction position of the glass panel and pick it up by suction.
Further, the acquisition and processing unit 1 is specifically configured to:
first, control the manipulator to move above the bin slots, and control the monocular CCD camera fixed on the manipulator to acquire an image of the slots holding the glass panels;
then, perform gray-level conversion on the acquired image and preprocess it to obtain a gray-level image; the preprocessing includes filtering, denoising and the like.
Further, the calculation unit 2 is specifically configured to:
first, calculate the sum of the gray values of each row of pixels of the gray-level image;
let I(i, j) denote the pixel in row i and column j of the gray-level image, r the height of the gray-level image, c the width of the gray-level image, and Row(i) the sum of the gray values of the pixels in row i; then: Row(i) = Σ_{j=0}^{c} I(i, j), where 0 ≤ i ≤ r, 0 ≤ j ≤ c;
next, calculate the mean gray level of each row of pixels from the row sums;
let RowAve(i) denote the mean gray level of the pixels in row i; then:
RowAve(i) = Row(i)/c;
finally, find the maximum of the row gray means and confirm the row coordinate of the glass panel from this maximum. In this implementation the process further comprises:
calculate the overall mean of the row gray means of the gray-level image;
let RowAverage denote the overall mean of the row gray means; then:
RowAverage = Σ_{i=0}^{r} RowAve(i) / r;
calculate the gray-level deviation of each row of pixels;
let Delta(i) denote the gray-level deviation of the pixels in row i, i.e. the amount by which the row departs from the overall row gray mean: Delta(i) = RowAve(i) - RowAverage;
traverse the row gray means, take the maximum, and set a threshold from the maximum;
let Delta denote the threshold and MaxRowAve the maximum; then:
Delta = (MaxRowAve - RowAverage) * 0.8;
judge whether the gray-level deviation exceeds the threshold, determine the gray-level maximum from the result, and thereby determine the row coordinate of the glass panel;
if Delta < Delta(i), row i is determined to be the row where the gray-level maximum, i.e. the glass panel, is located, and the row coordinate of the glass panel is obtained.
Further, the edge extraction unit 3 is specifically configured to:
first, perform binary segmentation on the gray-level image to obtain a binarized gray-level image;
then, perform BLOB analysis on the binarized gray-level image to obtain a region-of-interest image;
finally, perform edge extraction on the region-of-interest image and obtain the region-of-interest edge coordinates from the extracted edges;
let ColGrayVa1(i) denote the coordinate of the left edge endpoint of row i of the region-of-interest image, and ColGrayVa2(i) the coordinate of the right edge endpoint of row i of the region-of-interest image; then:
row i of the region-of-interest image is traversed from the left; when ColGrayVa1(i) = 255 is met, the loop is exited and the point's coordinate is recorded, and the row is traversed from the right; when ColGrayVa2(i) = 255 is met, the point's coordinate is recorded, and row i+1 of the region-of-interest image is then traversed from the left.
Further, the positioning and suction unit 4 is specifically configured to:
first, determine the slot center of the row from the row coordinate and the region-of-interest edge coordinates, and obtain its coordinate;
let RowCenter(i) denote the slot row center of row i, ColCenter(i) the column center of row i, ColGrayVa1(i) the left edge endpoint coordinate of row i of the region-of-interest image, and ColGrayVa2(i) the right edge endpoint coordinate of row i of the region-of-interest image; then:
RowCenter(i) = i;
ColCenter(i) = (ColGrayVa1(i) + ColGrayVa2(i)) / 2;
next, determine the suction position of the glass panel from the camera field-of-view center coordinate and the slot center coordinate;
let CameraRowCenter denote the row coordinate of the camera field-of-view center, CameraColCenter the column coordinate of the camera field-of-view center, r the height of the gray-level image, and c the width of the gray-level image; then:
CameraRowCenter = r/2; CameraColCenter = c/2;
if and only if abs(CameraRowCenter - RowCenter(i)) and abs(CameraColCenter - ColCenter(i)) are both minimal, row i is determined to be the row of the glass panel to be picked up; combining that row coordinate with the camera's three-dimensional coordinate gives the suction position of the glass panel;
finally, control the manipulator to move to the suction position and pick up the panel by suction.
The foregoing are only preferred embodiments of the invention and are not intended to limit it; any modification, equivalent replacement and improvement made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Claims (10)

1. A sorting and rapid positioning method for small-size glass panels based on a monocular CCD, characterized in that the sorting and rapid positioning method comprises:
Step A: acquiring an image of the slots holding the glass panels with the monocular CCD, performing gray-level conversion on the acquired image, and then preprocessing the converted image to obtain a gray-level image;
Step B: calculating the mean gray level of each row of pixels of the gray-level image, and then determining the row coordinate of a glass panel from the row gray means;
Step C: performing binary segmentation on the gray-level image, and extracting the edge coordinates of the region of interest corresponding to the glass panel;
Step D: determining the slot center coordinate from the row coordinate and the region-of-interest edge coordinates, taking the slot center coordinate as the suction position of the glass panel, and then controlling a manipulator to move to the suction position of the glass panel and pick it up by suction.
2. The sorting and rapid positioning method as claimed in claim 1, characterized in that step A specifically comprises:
Step A1: controlling the manipulator to move above the bin slots, and controlling the monocular CCD camera fixed on the manipulator to acquire an image of the slots holding the glass panels;
Step A2: performing gray-level conversion on the image acquired in step A1, and then preprocessing it to obtain a gray-level image; the preprocessing includes filtering and denoising.
3. The sorting and rapid positioning method as claimed in claim 1, characterized in that step B specifically comprises:
Step B1: calculating the sum of the gray values of each row of pixels of the gray-level image;
Let I(i, j) denote the pixel in row i and column j of the gray-level image, r the height of the gray-level image, c the width of the gray-level image, and Row(i) the sum of the gray values of the pixels in row i; then: Row(i) = Σ_{j=0}^{c} I(i, j), where 0 ≤ i ≤ r, 0 ≤ j ≤ c;
Step B2: calculating the mean gray level of each row of pixels from the sum of the gray values of each row;
Let RowAve(i) denote the mean gray level of the pixels in row i; then:
RowAve(i) = Row(i)/c;
Step B3: finding the maximum of the row gray means, and confirming the row coordinate of the glass panel from this maximum.
4. The sorting and rapid positioning method as claimed in claim 3, characterized in that step B3 specifically comprises:
Step B31: calculating the overall mean of the row gray means of the gray-level image;
Let RowAverage denote the overall mean of the row gray means of the gray-level image; then:
RowAverage = Σ_{i=0}^{r} RowAve(i) / r;
Step B32: calculating the gray-level deviation of each row of pixels;
Let Delta(i) denote the gray-level deviation of the pixels in row i, i.e. the amount by which the row departs from the overall row gray mean: Delta(i) = RowAve(i) - RowAverage;
Step B33: traversing the row gray means, taking the maximum, and setting a threshold from the maximum;
Let Delta denote the threshold and MaxRowAve the maximum; then:
Delta = (MaxRowAve - RowAverage) * 0.8;
Step B34: judging whether the gray-level deviation exceeds the threshold, determining the gray-level maximum from the result, and thereby determining the row coordinate of the glass panel;
If Delta < Delta(i), row i is determined to be the row where the gray-level maximum is located, i.e. the row where the glass panel is located, and the row coordinate of the glass panel is obtained.
5. The sorting and rapid positioning method as claimed in claim 1, characterized in that step C specifically comprises:
Step C1: performing binary segmentation on the gray-level image to obtain a binarized gray-level image;
Step C2: performing BLOB analysis on the binarized gray-level image to obtain a region-of-interest image;
Step C3: performing edge extraction on the region-of-interest image, and obtaining the region-of-interest edge coordinates from the extracted edges;
Let ColGrayVa1(i) denote the coordinate of the left edge endpoint of row i of the region-of-interest image, and ColGrayVa2(i) the coordinate of the right edge endpoint of row i of the region-of-interest image; then:
Row i of the region-of-interest image is traversed from the left; when ColGrayVa1(i) = 255 is met, the loop is exited and the point's coordinate is recorded, and the row is traversed from the right; when ColGrayVa2(i) = 255 is met, the point's coordinate is recorded, and row i+1 of the region-of-interest image is then traversed from the left.
6. The sorting and rapid positioning method as claimed in claim 1, characterized in that step D specifically comprises:
Step D1: determining the slot center of the row from the row coordinate and the region-of-interest edge coordinates, and obtaining its coordinate;
Let RowCenter(i) denote the slot row center of row i, ColCenter(i) the column center of row i, ColGrayVa1(i) the left edge endpoint coordinate of row i of the region-of-interest image, and ColGrayVa2(i) the right edge endpoint coordinate of row i of the region-of-interest image; then:
RowCenter(i) = i;
ColCenter(i) = (ColGrayVa1(i) + ColGrayVa2(i)) / 2;
Step D2: determining the suction position of the glass panel from the camera field-of-view center coordinate and the slot center coordinate;
Let CameraRowCenter denote the row coordinate of the camera field-of-view center, CameraColCenter the column coordinate of the camera field-of-view center, r the height of the gray-level image, and c the width of the gray-level image; then:
CameraRowCenter = r/2; CameraColCenter = c/2;
If and only if abs(CameraRowCenter - RowCenter(i)) and abs(CameraColCenter - ColCenter(i)) are both minimal, row i is determined to be the row of the glass panel to be picked up; combining that row coordinate with the camera's three-dimensional coordinate gives the suction position of the glass panel;
Step D3: controlling the manipulator to move to the suction position and pick up the panel by suction.
7. A sorting and rapid positioning system for small-size glass panels based on a monocular CCD, characterized in that the sorting and rapid positioning system comprises:
an acquisition and processing unit, configured to acquire an image of the slots holding the glass panels with the monocular CCD, perform gray-level conversion on the acquired image, and then preprocess the converted image to obtain a gray-level image;
a calculation unit, configured to calculate the mean gray level of each row of pixels of the gray-level image, and then determine the row coordinate of a glass panel from the row gray means;
an edge extraction unit, configured to perform binary segmentation on the gray-level image and extract the edge coordinates of the region of interest corresponding to the glass panel;
a positioning and suction unit, configured to determine the slot center coordinate from the row coordinate and the region-of-interest edge coordinates, take the slot center coordinate as the suction position of the glass panel, and then control a manipulator to move to the suction position of the glass panel and pick it up by suction.
8. The sorting and rapid positioning system as claimed in claim 7, characterized in that the acquisition and processing unit is specifically configured to:
first, control the manipulator to move above the bin slots, and control the monocular CCD camera fixed on the manipulator to acquire an image of the slots holding the glass panels;
then, perform gray-level conversion on the acquired image and preprocess it to obtain a gray-level image; the preprocessing includes filtering and denoising.
9. The sorting and rapid positioning system as claimed in claim 7, characterized in that the calculation unit is specifically configured to:
first, calculate the sum of the gray values of each row of pixels of the gray-level image;
let I(i, j) denote the pixel in row i and column j of the gray-level image, r the height of the gray-level image, c the width of the gray-level image, and Row(i) the sum of the gray values of the pixels in row i; then: Row(i) = Σ_{j=0}^{c} I(i, j), where 0 ≤ i ≤ r, 0 ≤ j ≤ c;
next, calculate the mean gray level of each row of pixels from the row sums;
let RowAve(i) denote the mean gray level of the pixels in row i; then:
RowAve(i) = Row(i)/c;
finally, find the maximum of the row gray means and confirm the row coordinate of the glass panel from this maximum.
10. The sorting and rapid positioning system as claimed in claim 7, characterized in that the positioning and suction unit is specifically configured to:
first, determine the slot center of the row from the row coordinate and the region-of-interest edge coordinates, and obtain its coordinate;
let RowCenter(i) denote the slot row center of row i, ColCenter(i) the column center of row i, ColGrayVa1(i) the left edge endpoint coordinate of row i of the region-of-interest image, and ColGrayVa2(i) the right edge endpoint coordinate of row i of the region-of-interest image; then:
RowCenter(i) = i;
ColCenter(i) = (ColGrayVa1(i) + ColGrayVa2(i)) / 2;
next, determine the suction position of the glass panel from the camera field-of-view center coordinate and the slot center coordinate;
let CameraRowCenter denote the row coordinate of the camera field-of-view center, CameraColCenter the column coordinate of the camera field-of-view center, r the height of the gray-level image, and c the width of the gray-level image; then:
CameraRowCenter = r/2; CameraColCenter = c/2;
if and only if abs(CameraRowCenter - RowCenter(i)) and abs(CameraColCenter - ColCenter(i)) are both minimal, row i is determined to be the row of the glass panel to be picked up; combining that row coordinate with the camera's three-dimensional coordinate gives the suction position of the glass panel;
finally, control the manipulator to move to the suction position and pick up the panel by suction.
CN201510771640.7A 2015-11-12 2015-11-12 The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD Active CN105457908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510771640.7A CN105457908B (en) 2015-11-12 2015-11-12 The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510771640.7A CN105457908B (en) 2015-11-12 2015-11-12 The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD

Publications (2)

Publication Number Publication Date
CN105457908A true CN105457908A (en) 2016-04-06
CN105457908B CN105457908B (en) 2018-04-13

Family

ID=55596316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510771640.7A Active CN105457908B (en) 2015-11-12 2015-11-12 The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD

Country Status (1)

Country Link
CN (1) CN105457908B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106378663A (en) * 2016-11-28 2017-02-08 重庆大学 Machine tool auxiliary tool setting system based on machine vision
CN106493086A (en) * 2016-10-21 2017-03-15 北京源著智能科技有限公司 The sorting of sheet material processes method and system
CN107798683A (en) * 2017-11-10 2018-03-13 珠海格力智能装备有限公司 Product specific region edge detection method, device and terminal
CN109035135A (en) * 2018-07-13 2018-12-18 常州宏大智能装备产业发展研究院有限公司 A kind of on-line automatic pattern adjustment method of fabric based on machine vision
CN109143624A (en) * 2018-08-28 2019-01-04 武汉华星光电技术有限公司 Panel adsorbent equipment and the automatic absorbing method for using the device
CN110517318A (en) * 2019-08-28 2019-11-29 昆山国显光电有限公司 Localization method and device, storage medium
CN111797695A (en) * 2020-06-10 2020-10-20 盐城工业职业技术学院 Automatic identification method and system for twisted yarn
CN114603715A (en) * 2022-03-10 2022-06-10 郴州旗滨光伏光电玻璃有限公司 Glass punching method, device and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080247619A1 (en) * 2007-03-29 2008-10-09 Fujifilm Corporation Method, device and computer-readable recording medium containing program for extracting object region of interest
CN101334263A (en) * 2008-07-22 2008-12-31 东南大学 Circular target circular center positioning method
US8224078B2 (en) * 2000-11-06 2012-07-17 Nant Holdings Ip, Llc Image capture and identification system and process
US20130223729A1 (en) * 2012-02-28 2013-08-29 Snell Limited Identifying points of interest in an image
EP2639745A1 (en) * 2012-03-16 2013-09-18 Thomson Licensing Object identification in images or image sequences
CN104751147A (en) * 2015-04-16 2015-07-01 成都汇智远景科技有限公司 Image recognition method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8224078B2 (en) * 2000-11-06 2012-07-17 Nant Holdings Ip, Llc Image capture and identification system and process
US20080247619A1 (en) * 2007-03-29 2008-10-09 Fujifilm Corporation Method, device and computer-readable recording medium containing program for extracting object region of interest
CN101334263A (en) * 2008-07-22 2008-12-31 东南大学 Circular target circular center positioning method
US20130223729A1 (en) * 2012-02-28 2013-08-29 Snell Limited Identifying points of interest in an image
EP2639745A1 (en) * 2012-03-16 2013-09-18 Thomson Licensing Object identification in images or image sequences
CN104751147A (en) * 2015-04-16 2015-07-01 成都汇智远景科技有限公司 Image recognition method

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106493086B (en) * 2016-10-21 2019-03-05 北京源著智能科技有限公司 The sorting processing method and system of plate
CN106493086A (en) * 2016-10-21 2017-03-15 北京源著智能科技有限公司 The sorting of sheet material processes method and system
CN106378663A (en) * 2016-11-28 2017-02-08 重庆大学 Machine tool auxiliary tool setting system based on machine vision
CN107798683A (en) * 2017-11-10 2018-03-13 珠海格力智能装备有限公司 Product specific region edge detection method, device and terminal
CN109035135A (en) * 2018-07-13 2018-12-18 常州宏大智能装备产业发展研究院有限公司 A kind of on-line automatic pattern adjustment method of fabric based on machine vision
US10803344B2 (en) 2018-08-28 2020-10-13 Wuhan China Star Optoelectronics Technology Co., Ltd. Panel adsorption device and automatic adsorption method using the same
WO2020042548A1 (en) * 2018-08-28 2020-03-05 武汉华星光电技术有限公司 Panel adsorption apparatus and automatic adsorption method using apparatus
CN109143624B (en) * 2018-08-28 2020-06-16 武汉华星光电技术有限公司 Panel adsorption device and automatic adsorption method adopting same
CN109143624A (en) * 2018-08-28 2019-01-04 武汉华星光电技术有限公司 Panel adsorbent equipment and the automatic absorbing method for using the device
CN110517318A (en) * 2019-08-28 2019-11-29 昆山国显光电有限公司 Localization method and device, storage medium
CN110517318B (en) * 2019-08-28 2022-05-17 昆山国显光电有限公司 Positioning method and device, and storage medium
CN111797695A (en) * 2020-06-10 2020-10-20 盐城工业职业技术学院 Automatic identification method and system for twisted yarn
CN111797695B (en) * 2020-06-10 2023-09-29 盐城工业职业技术学院 Automatic identification method and system for twist of folded yarn
CN114603715A (en) * 2022-03-10 2022-06-10 郴州旗滨光伏光电玻璃有限公司 Glass punching method, device and computer readable storage medium

Also Published As

Publication number Publication date
CN105457908B (en) 2018-04-13

Similar Documents

Publication Publication Date Title
CN105457908A (en) Sorting and quick locating method and system for small-size glass panels on basis of monocular CCD
CN103839038A (en) People counting method and device
Geiger et al. Are we ready for autonomous driving? the kitti vision benchmark suite
CN103295016B (en) Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics
CN105574543B (en) A kind of vehicle brand type identifier method and system based on deep learning
CN105163110A (en) Camera cleanliness detection method and system and shooting terminal
CN103477352A (en) Gesture recognition using depth images
Nourani-Vatani et al. A study of feature extraction algorithms for optical flow tracking
CN105894534B (en) A kind of improvement moving target detecting method based on ViBe
CN103824070A (en) Rapid pedestrian detection method based on computer vision
CN108229352B (en) Standing detection method based on deep learning
CN105741324A (en) Moving object detection identification and tracking method on moving platform
CN102855758A (en) Detection method for vehicle in breach of traffic rules
CN102436590A (en) Real-time tracking method based on on-line learning and tracking system thereof
CN109359577B (en) System for detecting number of people under complex background based on machine learning
CN105760846A (en) Object detection and location method and system based on depth data
CN104408725A (en) Target recapture system and method based on TLD optimization algorithm
Babahajiani et al. Object recognition in 3D point cloud of urban street scene
CN103679730A (en) Video abstract generating method based on GIS
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
Ji et al. Integrating visual selective attention model with HOG features for traffic light detection and recognition
CN104318588A (en) Multi-video-camera target tracking method based on position perception and distinguish appearance model
CN103093198A (en) Crowd density monitoring method and device
CN103426172A (en) Vision-based target tracking method and device
CN202058221U (en) Passenger flow statistic device based on binocular vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant