CN108090895B - Container lockhole contour extraction method based on image processing - Google Patents


Info

Publication number
CN108090895B
Authority
CN
China
Prior art keywords
pixel
image
region
hole
super
Prior art date
Legal status
Active
Application number
CN201711221634.XA
Other languages
Chinese (zh)
Other versions
CN108090895A (en)
Inventor
高飞
葛一粟
王孖豪
卢书芳
张元鸣
陆佳炜
肖刚
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201711221634.XA priority Critical patent/CN108090895B/en
Publication of CN108090895A publication Critical patent/CN108090895A/en
Application granted granted Critical
Publication of CN108090895B publication Critical patent/CN108090895B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a container lock hole contour extraction method based on image processing, which achieves accurate extraction of the container lock hole contour and improves the positioning accuracy of the container. The method offers high precision, high speed, simple operation and real-time processing; it replaces the inefficient and cumbersome manual positioning process, reduces the errors and time cost caused by manual positioning, and solves the problems of high labor intensity, low efficiency and poor reliability of manual operation.

Description

Container lockhole contour extraction method based on image processing
Technical Field
The invention relates to a container lock hole contour extraction method based on image processing, and in particular to a method that processes images of container corner fittings with image recognition techniques to obtain the lock hole contour.
Background
With the continuous development of global trade, the shipping volume of containers keeps increasing. From the economic analysis of voyages, shortening the residence time of a ship in port reduces berthing costs, increases the cargo throughput of the port and thereby improves economic benefit. The traditional approach, in which workers visually guide the spreader to grab the container, severely limits the loading and unloading speed, and manual operation suffers from low efficiency, fatigue and low accuracy, so it cannot meet actual production requirements. Since the key problem is the accurate positioning of the container, a container lock hole contour extraction method based on image processing becomes a better solution.
In order to solve the problem of positioning the container lock hole, many schemes have been proposed in academia and industry at home and abroad. The technical schemes closest to the invention include the following. First, a method that detects and positions container lock holes by binocular stereo vision (Image-based container lock hole positioning technology [J]. Hoisting and Conveying Machinery, 2015(10):67-69); however, that work mainly covers the idea and mathematical derivation of the binocular stereo vision technique, is difficult to apply in a real scene, treats lock hole positioning and recognition only incompletely, and cannot cope well with obvious external interference. Second, a machine-vision-based container lock hole recognition algorithm (Research on a container lock hole recognition algorithm based on machine vision [J]. Chinese Engineering Machinery Science, 2016, 14(05)), which identifies the approximate position of the container body through a color-space histogram, obtains the exact container edges by projection, detects the lock hole positions at the edge intersections according to the identified edges, and finally positions the lock holes by notch filtering; at low running time this algorithm can recognize containers that occupy a large part of the image and are relatively new, but in practical scenes containers are affected by their working conditions, with rust and dust on the body, so the container position is hard to locate by body color, and under uneven illumination the colors of some corner fittings and lock holes are hard to distinguish, so the accuracy of locating lock holes by notch filtering is not high. Third, a vision-system-based method (MIZhaosurpass, Research on a container lock hole recognition algorithm [C] // National Mechanical Engineering Forum, 2015), which processes container images by gray-level histogram equalization, binarizes the H channel of the HSV color space to obtain the approximate region of the container body, locates the notch positions on the edge region of the body, and then detects the circular edge of the notch region with the Hough algorithm to position the lock holes; in a real scene, however, the lock hole contour is a rounded rectangle, so Hough circle detection is not effective for the lock hole region and is easily affected by external illumination, which lowers the recognition accuracy or even makes the lock hole unrecognizable, so such a system cannot meet the requirement of accurate container lock hole recognition.
In summary, the current container lock hole contour extraction schemes have the following disadvantages:
(1) most methods first identify the container body and then estimate the approximate lock hole region from the container edges; however, a container exposed to the sun for a long time is easily stained, and misidentification of the container easily compromises accurate positioning of the lock hole;
(2) these methods do not actually recognize the precise contour of the container lock hole, and the accuracy of the recognition system is insufficient, which easily causes container grabbing failures, damages the spreader and the container, and causes unnecessary losses;
(3) affected by the illumination conditions, the lock hole positions on the container are unevenly illuminated, so that some areas inside the lock hole are bright while others are shaded by the corner fitting, which reduces the lock hole recognition accuracy.
Container transportation plays an important role in global trade, and the construction of automated ports is an important way to promote the development of international trade; accurate positioning of the container lock hole is the basis of the automatic container grabbing function. However, owing to the illumination conditions, some lock holes are unevenly illuminated, which easily leads to low lock hole positioning accuracy and large system errors. The lock hole contour extraction strategy of the invention can identify the accurate contour of the lock hole even in images with uneven illumination and stained corner fitting surfaces, and thus achieves accurate positioning of the container lock hole.
Disclosure of Invention
In order to solve the problem of positioning the container lock hole, the invention provides a container lock hole contour extraction method based on image processing; the technical scheme adopted to solve this problem comprises the following steps:
Step 1: collecting an image of the container below with a camera mounted on the lifting appliance;
Step 2: obtaining the coarse positioning ranges of the upper and lower lock holes with the HOG+SVM-based container lock hole coarse positioning method, the width of the coarse lock hole region being width and its height being height, both in pixels; the coarse region is denoted the lock hole coarse-positioning image F1, whose size is A_keyhole:
A_keyhole = width × height (1)
Step 3: performing superpixel segmentation of the image F1 with the region segmentation method Graph-Based Segmentation, dividing the image into n1 superpixel regions and obtaining the set S = {Ri(xi, yi) | i = 1, 2, 3, …, n1}, where Ri is a superpixel region consisting of mi pixels, xi and yi respectively denote the abscissa and ordinate of the center point of region Ri, Ri = {pj | j = 1, 2, 3, …, mi}, and pj denotes a pixel of region Ri;
[formula (2), given only as an image in the original document]
Step 4: computing the saliency value of each superpixel region in the set S with the histogram contrast saliency method HC, obtaining the superpixel saliency value set S_sal = {vi | i = 1, 2, 3, …, n1}, where vi is the saliency value of superpixel region Ri; denoting the superpixel region with the highest saliency value in S_sal as R_sal = {pj | j = 1, 2, 3, …, n_sal}, where n_sal denotes the number of pixels in R_sal;
Step 5: computing the Manhattan distance from the center point of each superpixel in the set S to the center point of the superpixel R_sal, and sorting the superpixels by this distance in ascending order to obtain the superpixel set S1 = {Rw | w = 1, 2, 3, …, n1};
Step 6: performing lock hole region stitching: initializing the lock hole region set S2 = ∅ and the region filling degree c_old = 0; the details are as follows:
Step 6.1: taking the element R1 out of the superpixel set S1 and adding it to the set S2, where it is denoted Rn; deleting the element R1 from S1 and re-sorting the set S1 by distance in ascending order;
Step 6.2: computing the number of pixels N in the set S2:
N = m1+m2+…+mn (3)
where n is the number of superpixels in the set S2 and mn is the number of pixels of the n-th superpixel in S2;
Step 6.3: extracting the minimum and maximum abscissa of the pixels in the set S2, denoted x1 and x2 respectively, and the minimum and maximum ordinate of the pixels in the set S2, denoted y1 and y2 respectively; computing the minimum bounding rectangle R(w, h) of the image region formed by the pixels of S2, where w denotes the width of the rectangle R, h denotes the height of the rectangle R, and A is the area of the rectangle:
w=x2-x1 (4)
h=y2-y1 (5)
A=w×h (6)
Step 6.4: computing the region filling degree c_new after region merging:
c_new = N/A (7)
Step 6.5: if the condition c_new ≥ c_old is satisfied, assigning the value of c_new to c_old; otherwise, deleting the element Rn from S2; repeating steps 6.1 to 6.5 until S1 = ∅ or the number of iterations is greater than λ1, where λ1 denotes a preset maximum number of iterations;
Step 7: obtaining the lock hole region set S2 = {pu | u = 1, 2, 3, …, n2}, whose circumscribed rectangle is R_box(x_box, y_box, w_box, h_box), where pu denotes a pixel of the set S2, n2 denotes the number of pixels in S2, x_box and y_box respectively denote the abscissa and ordinate of the upper-left corner of the rectangle R_box, and w_box and h_box respectively denote its width and height;
Step 8: computing the circumscribed rectangle of the lock hole, R_hole(x_hole, y_hole, w_hole, h_hole), using formula (8) or (9), where x_hole and y_hole respectively denote the abscissa and ordinate of the upper-left corner of the circumscribed rectangle of the lock hole, and w_hole and h_hole respectively denote its width and height;
[formula (8), given only as an image in the original document]
[formula (9), given only as an image in the original document]
Step 9: processing the image F1 with the GrabCut graph-cut algorithm by setting approximate foreground and background regions of the image: if the condition n2 ≥ λ2 is satisfied, where λ2 is a preset lock hole size threshold, setting the region inside the circumscribed rectangle as the possible foreground region and the region outside the rectangle as the background region; otherwise, setting the pixels of the set S2 as the possible foreground region, the pixels of the region R_sal as the foreground region, and the rest of the image F1 as the background region;
Step 10: iterating the GrabCut graph-cut algorithm λ3 times, where λ3 is a given number of graph-cut iterations, to obtain the lock hole contour image F2.
Advantages of the Invention
The accurate extraction of the container lock hole contour is realized and the positioning accuracy of the container is improved. The method offers high precision, high speed, simple operation and real-time processing; it replaces the inefficient and cumbersome manual positioning process, reduces the errors and time cost caused by manual positioning, and solves the problems of high labor intensity, low efficiency and poor reliability of manual operation.
Drawings
Fig. 1 is a photograph of an upper portion of a container taken in accordance with an exemplary embodiment of the present invention.
FIG. 2 is a rough keyhole positioning image obtained by HOG + SVM processing in step 2 according to the present invention.
FIG. 3 is a super pixel region image after the region segmentation in step 3 according to the present invention.
Fig. 4 is a saliency region image after the saliency calculation in step 4 according to the present invention.
Fig. 5 is a keyhole area image obtained by stitching keyhole areas in step 6 according to the present invention.
Fig. 6 is an image of the precise contour of the keyhole obtained in step 10 according to the present invention.
Detailed Description
The following describes a specific embodiment of the container lock hole contour extraction method of the invention in detail with reference to an implementation example. The method comprises the following steps:
Step 1: collecting an image of the container below with a camera mounted on the lifting appliance;
Step 2: obtaining the coarse positioning ranges of the upper and lower lock holes with the HOG+SVM-based container lock hole coarse positioning method, the width of the coarse lock hole region being width and its height being height, both in pixels; the coarse region is denoted the lock hole coarse-positioning image F1, whose size is A_keyhole:
A_keyhole = width × height (1)
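For illustration, a minimal Python sketch of the HOG+SVM coarse positioning of step 2 is given below. It assumes a linear SVM (e.g. sklearn.svm.LinearSVC) trained offline on lock hole versus background patches; the window size, stride and HOG parameters are assumptions of this sketch, not values specified in the patent.

```python
import numpy as np
from skimage.feature import hog

WIN = 64    # assumed square detection window size in pixels (not specified in the patent)
STEP = 16   # assumed sliding-window stride

def coarse_keyhole_region(gray, clf):
    """Slide a window over the grayscale image, score each patch with
    HOG features + a pre-trained linear SVM, and return the best-scoring
    window as the coarse lock hole rectangle (x, y, width, height)."""
    best, best_score = None, -np.inf
    h, w = gray.shape
    for y in range(0, h - WIN + 1, STEP):
        for x in range(0, w - WIN + 1, STEP):
            patch = gray[y:y + WIN, x:x + WIN]
            feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
            score = clf.decision_function([feat])[0]
            if score > best_score:
                best, best_score = (x, y, WIN, WIN), score
    return best   # crop the image with this rectangle to obtain F1
```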
Step 3: performing superpixel segmentation of the image F1 with the region segmentation method Graph-Based Segmentation, dividing the image into n1 superpixel regions and obtaining the set S = {Ri(xi, yi) | i = 1, 2, 3, …, n1}, where Ri is a superpixel region consisting of mi pixels, xi and yi respectively denote the abscissa and ordinate of the center point of region Ri, Ri = {pj | j = 1, 2, 3, …, mi}, and pj denotes a pixel of region Ri;
[formula (2), given only as an image in the original document]
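The Graph-Based Segmentation named in step 3 is the Felzenszwalb–Huttenlocher algorithm, which is available in scikit-image. A minimal sketch of building the superpixel set S (the scale, sigma and min_size parameters are assumptions of this sketch, not values from the patent):

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def superpixel_set(f1_rgb):
    """Segment F1 with Felzenszwalb-Huttenlocher graph-based segmentation and
    return the label image plus, per region Ri, its pixel count mi and center (xi, yi)."""
    labels = felzenszwalb(f1_rgb, scale=100, sigma=0.8, min_size=20)
    regions = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        regions.append({"label": int(lab),
                        "m": int(xs.size),                  # mi
                        "center": (xs.mean(), ys.mean())})  # (xi, yi)
    return labels, regions
```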
and 4, step 4: calculating significance values of all super-pixel regions in the set S by using a histogram contrast significance calculation method HC to obtain a significant value set S of the super-pixelssal={vi|i=1,2,3,…,n1},viIs a super pixel region RiA significance value of; will SsalThe super pixel region with the highest medium saliency value is marked as Rsal={pj|j=1,2,3,…,nsalIn which n issalRepresents RsalThe number of pixels in;
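Step 4 relies on the histogram contrast (HC) saliency measure. The sketch below computes a simplified region-level contrast (size-weighted Lab color distance of each superpixel's mean color to all other regions); it illustrates the idea but is not necessarily the exact HC formulation used by the inventors.

```python
import numpy as np
from skimage.color import rgb2lab

def region_saliency(f1_rgb, labels):
    """Return a saliency value vi for every superpixel label: the size-weighted
    Lab color distance of its mean color to the mean colors of all other regions."""
    lab = rgb2lab(f1_rgb)
    ids = np.unique(labels)
    means = np.array([lab[labels == i].mean(axis=0) for i in ids])
    sizes = np.array([(labels == i).sum() for i in ids], dtype=float)
    sal = np.array([np.dot(sizes, np.linalg.norm(means - m, axis=1)) for m in means])
    sal /= sal.max() if sal.max() > 0 else 1.0   # normalise S_sal to [0, 1]
    return dict(zip(ids.tolist(), sal.tolist())) # the argmax over values gives R_sal
```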
and 5: calculating all superpixel center points to superpixel R in set SsalThe Manhattan distance of the central point is sorted from small to large according to the distance to obtain a super-pixel set S1={Rw|w=1,2,3,…,n1};
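Step 5 reduces to a sort by the L1 (Manhattan) distance between superpixel centers; a short sketch reusing the region records and saliency values from the sketches above:

```python
def sort_by_manhattan(regions, saliency):
    """Order the superpixels by the L1 distance of their centers to the center
    of the most salient region R_sal (ascending), giving the set S1."""
    r_sal = max(regions, key=lambda r: saliency[r["label"]])
    cx, cy = r_sal["center"]
    return sorted(regions,
                  key=lambda r: abs(r["center"][0] - cx) + abs(r["center"][1] - cy))
```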
Step 6: performing lock hole region stitching: initializing the lock hole region set S2 = ∅ and the region filling degree c_old = 0; the details are as follows:
step 6.1: from the super-pixel set S1Taking out the element R1Is added to the set S2In S2Is marked with Rn(ii) a And reacting the element R1From S1Deletion in, for set S1Reordering according to the distance length from small to large;
step 6.2: computing a set S2Number of pixels N:
Figure BDA0001485338310000081
wherein n is the set S2Number of super pixels, mnIs S2The number of pixels of the super-middle pixel;
step 6.3: extracting the set S2The minimum value and the maximum value of the abscissa of the middle pixel are respectively marked as x1And x2Set S2The minimum value and the maximum value of the vertical coordinate of the middle pixel are respectively marked as y1And y2(ii) a Calculate the set S2The middle pixel constitutes the minimum bounding rectangle R (w, h) of the image, where w represents the width of the rectangle R, h represents the height of the rectangle R, the area of the rectangle is a:
w=x2-x1 (4)
h=y2-y1 (5)
A=w×h (6)
step 6.4: calculating the region filling degree c after region mergingnew
Figure BDA0001485338310000082
Step 6.5: if the condition c_new ≥ c_old is satisfied, assigning the value of c_new to c_old; otherwise, deleting the element Rn from S2; repeating steps 6.1 to 6.5 until S1 = ∅ or the number of iterations is greater than λ1, where λ1 denotes a preset maximum number of iterations; in this example λ1 = 10.
Step 7: obtaining the lock hole region set S2 = {pu | u = 1, 2, 3, …, n2}, whose circumscribed rectangle is R_box(x_box, y_box, w_box, h_box), where pu denotes a pixel of the set S2, n2 denotes the number of pixels in S2, x_box and y_box respectively denote the abscissa and ordinate of the upper-left corner of the circumscribed rectangle of the region set S2, and w_box and h_box respectively denote the width and height of that circumscribed rectangle;
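Steps 6 and 7 can be read as a greedy region-growing loop: the nearest superpixels are appended to S2 as long as the fill degree of the merged region's bounding box does not decrease. The sketch below follows the reconstruction above and assumes, in particular, that the fill degree is the pixel count divided by the bounding-box area and that the loop stops when S1 is exhausted or after λ1 iterations.

```python
import numpy as np

def stitch_keyhole_region(s1, labels, max_iter=10):
    """Greedy lock hole region stitching (steps 6-7).

    s1     -- superpixel records sorted by distance to R_sal (output of sort_by_manhattan)
    labels -- label image from the segmentation step
    Returns the boolean pixel mask of S2 and its circumscribed rectangle R_box.
    """
    kept = []        # labels currently in S2
    c_old = 0.0      # region filling degree
    for it, region in enumerate(s1):
        if it >= max_iter:                    # lambda1: preset maximum number of iterations
            break
        candidate = kept + [region["label"]]
        mask = np.isin(labels, candidate)
        ys, xs = np.nonzero(mask)
        w = int(xs.max() - xs.min())          # formula (4)
        h = int(ys.max() - ys.min())          # formula (5)
        area = max(w * h, 1)                  # formula (6), guarded against zero-area boxes
        c_new = mask.sum() / area             # assumed fill degree N / A, cf. formula (7)
        if c_new >= c_old:                    # keep Rn and update the fill degree
            kept, c_old = candidate, c_new
        # otherwise Rn is discarded (deleted from S2 again)
    mask = np.isin(labels, kept)
    ys, xs = np.nonzero(mask)
    r_box = (int(xs.min()), int(ys.min()),
             int(xs.max() - xs.min()), int(ys.max() - ys.min()))
    return mask, r_box                        # S2 pixels and R_box(x, y, w, h)
```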
and 8: calculating the circumscribed rectangle R of the lock hole by adopting the formula (8) or (9)hole(xhole,yhole,whole,hhole),xholeAnd yholeRespectively showing the abscissa and ordinate of the upper left corner of the rectangle circumscribed by the keyhole, wholeAnd hholeRespectively representing the width and the height of a circumscribed rectangle of the lock hole;
Figure BDA0001485338310000091
Figure BDA0001485338310000092
and step 9: image F is cut by using GrubCut graph cut algorithm1Processing is carried out, and approximate foreground and background areas of the image are set; setting the area in the external rectangle as possible foreground area and the area outside the rectangle as background area, and meeting the condition n2≥λ2Wherein λ is2For a predetermined keyhole size threshold, in this example, λ21000; otherwise, the reverse is carried outSet S2Middle pixel being a possible foreground region, region RsalMiddle pixel is foreground region, image F1The rest of the image is a background area;
step 10: iterating GrubCut graph cut algorithm lambda3Next, a keyhole contour image F can be obtained2,λ3For a predetermined number of graph cut iterations, in this example, λ3=3。
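Steps 9 and 10 map directly onto OpenCV's GrabCut. The sketch below mirrors the branch described above, with λ2 = 1000 and λ3 = 3 as in this example; the rectangle R_hole is taken as an input because formulas (8) and (9) are only given as images in the original document, and the masks can come from the earlier sketches.

```python
import cv2
import numpy as np

LAMBDA2 = 1000   # lock hole size threshold (value from this example)
LAMBDA3 = 3      # number of GrabCut iterations (value from this example)

def keyhole_contour(f1_bgr, s2_mask, r_sal_mask, r_hole):
    """Run GrabCut on F1 and return the binary lock hole contour image F2.

    s2_mask, r_sal_mask -- boolean masks of the stitched region S2 and of R_sal
    r_hole              -- circumscribed lock hole rectangle (x, y, w, h), ints
    """
    mask = np.full(f1_bgr.shape[:2], cv2.GC_BGD, np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    if s2_mask.sum() >= LAMBDA2:
        # Rectangle initialisation: inside r_hole = probable foreground.
        cv2.grabCut(f1_bgr, mask, r_hole, bgd, fgd, LAMBDA3,
                    cv2.GC_INIT_WITH_RECT)
    else:
        # Mask initialisation: S2 = probable foreground, R_sal = sure foreground.
        mask[s2_mask] = cv2.GC_PR_FGD
        mask[r_sal_mask] = cv2.GC_FGD
        cv2.grabCut(f1_bgr, mask, None, bgd, fgd, LAMBDA3,
                    cv2.GC_INIT_WITH_MASK)
    f2 = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return f2.astype(np.uint8)
```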
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments but rather by the equivalents thereof as may occur to those skilled in the art upon consideration of the present inventive concept.

Claims (1)

1. A container lock hole contour extraction method based on image processing comprises the following steps:
Step 1: collecting an image of the container below with a camera mounted on the lifting appliance;
Step 2: obtaining the coarse positioning ranges of the upper and lower lock holes with the HOG+SVM-based container lock hole coarse positioning method, the width of the coarse lock hole region being width and its height being height, both in pixels; the coarse region is denoted the lock hole coarse-positioning image F1, whose size is A_keyhole:
A_keyhole = width × height (1);
Step 3: performing superpixel segmentation of the image F1 with the region segmentation method Graph-Based Segmentation, dividing the image into n1 superpixel regions and obtaining the set S = {Ri(xi, yi) | i = 1, 2, 3, …, n1}, where Ri is a superpixel region consisting of mi pixels, xi and yi respectively denote the abscissa and ordinate of the center point of region Ri, Ri = {pj | j = 1, 2, 3, …, mi}, and pj denotes a pixel of region Ri;
[formula (2), given only as an image in the original document]
Step 4: computing the saliency value of each superpixel region in the set S with the histogram contrast saliency method HC, obtaining the superpixel saliency value set S_sal = {vi | i = 1, 2, 3, …, n1}, where vi is the saliency value of superpixel region Ri; denoting the superpixel region with the highest saliency value in S_sal as R_sal = {pj | j = 1, 2, 3, …, n_sal}, where n_sal denotes the number of pixels in R_sal;
Step 5: computing the Manhattan distance from the center point of each superpixel in the set S to the center point of the superpixel R_sal, and sorting the superpixels by this distance in ascending order to obtain the superpixel set S1 = {Rw | w = 1, 2, 3, …, n1};
Step 6: performing lock hole region stitching: initializing the lock hole region set S2 = ∅ and the region filling degree c_old = 0;
Step 7: obtaining the lock hole region set S2 = {pu | u = 1, 2, 3, …, n2}, whose circumscribed rectangle is R_box(x_box, y_box, w_box, h_box), where pu denotes a pixel of the set S2, n2 denotes the number of pixels in S2, x_box and y_box respectively denote the abscissa and ordinate of the upper-left corner of the circumscribed rectangle of the region set S2, and w_box and h_box respectively denote the width and height of that circumscribed rectangle;
Step 8: computing the circumscribed rectangle of the lock hole, R_hole(x_hole, y_hole, w_hole, h_hole), using formula (8) or (9), where x_hole and y_hole respectively denote the abscissa and ordinate of the upper-left corner of the circumscribed rectangle of the lock hole, and w_hole and h_hole respectively denote its width and height;
[formula (8), given only as an image in the original document]
[formula (9), given only as an image in the original document]
Step 9: processing the image F1 with the GrabCut graph-cut algorithm by setting foreground and background regions of the image: if the condition n2 ≥ λ2 is satisfied, where λ2 is a preset lock hole size threshold, setting the region inside the circumscribed rectangle as the possible foreground region and the region outside the rectangle as the background region; if the condition n2 ≥ λ2 is not satisfied, setting the pixels of the set S2 as the possible foreground region, the pixels of the region R_sal as the foreground region, and the rest of the image F1 as the background region;
Step 10: iterating the GrabCut graph-cut algorithm λ3 times to obtain the lock hole contour image F2, where λ3 is a preset number of graph-cut iterations;
wherein the step 6 is as follows:
Step 6.1: taking the element R1 out of the superpixel set S1 and adding it to the set S2, where it is denoted Rn; deleting the element R1 from S1 and re-sorting the set S1 by distance in ascending order;
Step 6.2: computing the number of pixels N in the set S2:
N = m1+m2+…+mn (3)
where n is the number of superpixels in the set S2 and mn is the number of pixels of the n-th superpixel in S2;
Step 6.3: extracting the minimum and maximum abscissa of the pixels in the set S2, denoted x1 and x2 respectively, and the minimum and maximum ordinate of the pixels in the set S2, denoted y1 and y2 respectively; computing the minimum bounding rectangle R(w, h) of the image region formed by the pixels of S2, where w denotes the width of the rectangle R, h denotes the height of the rectangle R, and A is the area of the rectangle:
w=x2-x1 (4)
h=y2-y1 (5)
A=w×h (6)
Step 6.4: computing the region filling degree c_new after region merging:
c_new = N/A (7)
Step 6.5: if the condition c_new ≥ c_old is satisfied, assigning the value of c_new to c_old; otherwise, deleting the element Rn from S2; repeating steps 6.1 to 6.5 until S1 = ∅ or the number of iterations is greater than λ1, where λ1 represents a preset maximum number of iterations.
CN201711221634.XA 2017-11-28 2017-11-28 Container lockhole contour extraction method based on image processing Active CN108090895B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711221634.XA CN108090895B (en) 2017-11-28 2017-11-28 Container lockhole contour extraction method based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711221634.XA CN108090895B (en) 2017-11-28 2017-11-28 Container lockhole contour extraction method based on image processing

Publications (2)

Publication Number Publication Date
CN108090895A CN108090895A (en) 2018-05-29
CN108090895B true CN108090895B (en) 2021-07-06

Family

ID=62173276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711221634.XA Active CN108090895B (en) 2017-11-28 2017-11-28 Container lockhole contour extraction method based on image processing

Country Status (1)

Country Link
CN (1) CN108090895B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165649B (en) * 2018-09-03 2022-04-15 苏州巨能图像检测技术有限公司 High-precision container hole detection method based on visual detection
CN110197499B (en) * 2019-05-27 2021-02-02 江苏警官学院 Container safety hoisting monitoring method based on computer vision


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787481A (en) * 2016-04-05 2016-07-20 湖南人文科技学院 Target detection algorithm based on targeted potential areas analysis and application thereof
CN105956619A (en) * 2016-04-27 2016-09-21 浙江工业大学 Container lockhole coarse positioning and tracking method
CN106097332A (en) * 2016-06-07 2016-11-09 浙江工业大学 A kind of container profile localization method based on Corner Detection
CN106952269A (en) * 2017-02-24 2017-07-14 北京航空航天大学 The reversible video foreground object sequence detection dividing method of neighbour and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Positioning hole recognition method based on dark-region connected domains; 肖刚, 张元鸣 et al.; Computer Engineering and Applications; 2011-12-31; full text *
Container corner fitting edge detection algorithm based on iterative fitting; 高飞, 卢书芳 et al.; Computer Measurement & Control; 2017-09-30; full text *
Research on the application of digital image processing in container inspection; 张广军; China Master's Theses Full-text Database, Information Science and Technology; 2012-07-15; full text *

Also Published As

Publication number Publication date
CN108090895A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN113370977B (en) Intelligent vehicle forward collision early warning method and system based on vision
CN111291603B (en) Lane line detection method, device, system and storage medium
JP6369131B2 (en) Object recognition apparatus and object recognition method
CN110827235B (en) Steel plate surface defect detection method
US10445868B2 (en) Method for detecting a defect on a surface of a tire
CN101620732A (en) Visual detection method of road driving line
CN108090895B (en) Container lockhole contour extraction method based on image processing
CN107832674B (en) Lane line detection method
CN107680086B (en) Method for detecting material contour defects with arc-shaped edges and linear edges
CN104966047A (en) Method and device for identifying vehicle license
Galsgaard et al. Circular hough transform and local circularity measure for weight estimation of a graph-cut based wood stack measurement
CN108734172A (en) Target identification method, system based on linear edge feature
Malik et al. Comparative analysis of edge detection between gray scale and color image
Holz et al. Fast edge-based detection and localization of transport boxes and pallets in rgb-d images for mobile robot bin picking
CN113343962B (en) Visual perception-based multi-AGV trolley working area maximization implementation method
CN115424240A (en) Ground obstacle detection method, system, medium, equipment and terminal
CN105469401B (en) A kind of headchute localization method based on computer vision
CN109741306B (en) Image processing method applied to dangerous chemical storehouse stacking
CN113128346B (en) Target identification method, system and device for crane construction site and storage medium
Diao et al. Vision-based detection of container lock holes using a modified local sliding window method
CN111563871B (en) Image processing method, device and equipment, visual guide unstacking method and system
Jiang et al. Apple recognition based on machine vision
CN108717699B (en) Ultrasonic image segmentation method based on continuous minimum segmentation
Xu et al. A lane detection method combined fuzzy control with ransac algorithm
CN100403769C (en) Image crisperding method during composing process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant