CN114219842A - Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation - Google Patents
- Publication number
- CN114219842A (application CN202111525088.5A)
- Authority
- CN
- China
- Prior art keywords
- target container
- container
- image
- lifting appliance
- binocular
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
Abstract
The invention provides a visual identification, distance measurement and positioning method for automatic port container loading and unloading operations, comprising the following steps: step S1: constructing binocular camera systems and installing them on both sides of the moving trolley on the beam of the container bridge crane; step S2: identifying the target container and determining its planar position; step S3: after the planar position of the target container is obtained, performing a preliminary distance measurement with the binocular camera system, calculating the height of the target container accurately by combining its structural dimension information, and determining the spatial position coordinates of the target container through inverse perspective transformation; step S4: obtaining the planar position of the lifting appliance by identifying marks pre-painted on it, and measuring the distance between the lifting appliance and the moving trolley of the container bridge crane with a monocular camera or a laser ranging sensor, thereby determining the spatial position coordinates of the lifting appliance. The invention effectively reduces camera deployment difficulty and maintenance cost.
Description
Technical Field
The invention belongs to the field of computer vision and image processing, and particularly relates to a visual identification, distance measurement and positioning method for automatic port container loading and unloading operations.
Background
At present, in the process of grabbing and stacking containers at a port, the lifting appliance is generally controlled manually to dock with the container, which suffers from low efficiency, low precision, and dependence on worker experience. Automated, precise docking of the lifting appliance with the container is therefore essential. To realize this, the key problem to be solved is accurately identifying and positioning the target container and the lifting appliance, i.e. obtaining their accurate position coordinates in a spatial coordinate system. Existing mature positioning methods usually use 2D and 3D lidar scanning to acquire three-dimensional point clouds of the container and the lifting appliance; this approach offers high positioning accuracy and speed, but lidar hardware is expensive and the acquired information is limited to position alone. Compared with lidar-based container and lifting appliance positioning, a vision-based identification and positioning scheme has lower hardware cost and yields richer information: besides the positions of the container and the lifting appliance, it can also acquire the container color, container number, damage degree and similar information, which benefits automated grabbing and stacking of port containers.
Port container grabbing and stacking operations place demanding requirements on container and lifting appliance positioning accuracy (2-5 cm) and position update frequency (20-50 Hz), and the height from the top of the crane to the ground is large (20-30 m for a tire crane, 30-50 m for a rail crane), which in turn places high demands on vision-based identification and positioning of containers and lifting appliances. To meet the positioning accuracy requirement, current visual positioning schemes usually adopt binocular vision to measure distance and position the target container; however, the accuracy of binocular ranging degrades rapidly as the working distance increases, so the binocular camera usually has to be deployed below the lifting appliance to reduce the distance between the camera and the target container and guarantee ranging accuracy. This, however, requires modifying the lifting appliance and routing communication and power supply cables to the cameras, resulting in high maintenance cost; it also cannot acquire the position of the lifting appliance itself, and thus fails to meet practical application requirements.
In conclusion, the method for identifying and positioning the container and the lifting appliance by using the laser radar has the problems of high cost and limited acquired information; the method for identifying and positioning the container by using the camera arranged on the lifting appliance has the problems of high maintenance cost, incapability of acquiring the position of the lifting appliance and the like. Aiming at the problems, the invention provides a method for deploying a binocular camera on a trolley above a cross beam and realizing accurate identification, distance measurement and positioning of a target container by fusing port container structural information.
Disclosure of Invention
In order to solve the problems, the invention provides a visual identification, distance measurement and positioning method in the automatic loading and unloading operation of a port container, which comprises the following steps:
step S1: constructing binocular camera systems and installing them on both sides of the moving trolley on the beam of the container bridge crane;
step S2: identifying and plane positioning the target container;
step S3: after the plane position of the target container is obtained, the primary distance measurement of the target container is realized by using a binocular camera system, the accurate calculation of the height of the target container is realized by combining the structural size information of the target container, and the space position coordinate of the target container is determined through the inverse perspective transformation;
step S4: the plane positioning of the lifting appliance is realized by identifying the pre-coating mark on the lifting appliance, and the distance between the lifting appliance and the moving trolley of the container bridge crane is obtained by utilizing a monocular camera or a laser ranging sensor, so that the spatial position coordinate of the lifting appliance is determined.
Further, step S1 specifically includes the following steps:
step 1-1: fixing two cameras of the same model and specification on the datum plane of a rigid support to form one binocular camera system, making the imaging planes of the two cameras coplanar and their optical axes parallel, with the installation datum line on the support as a common perpendicular; the baseline distance between the two cameras is D, and cameras with a frame rate higher than 60 FPS and a resolution higher than 2K are selected;
step 1-2: two groups of binocular camera systems are deployed on the same horizontal height of two sides of a movable trolley on a crane beam and are perpendicular to the advancing direction of the trolley, so that each group of binocular camera systems can shoot at least one end part of the top surfaces of a target container and a lifting appliance.
Further, step S2 specifically includes the following steps:
step 2-1: using two groups of binocular cameras to respectively acquire RGB images of a container and a lifting appliance below, and filtering and denoising the acquired RGB images;
step 2-2: converting the RGB image subjected to filtering and noise reduction into an HSV color space to obtain a preprocessed image, wherein the hue H value range of the image in the conversion process is 0-360 degrees, the saturation S value range is 0-1, and the brightness V value range is 0-1;
step 2-3: extracting image blocks of size P×P from a pre-collected standard container image as the container color template, and calculating the statistical histogram of the container color template in the HSV color space;
step 2-4: taking a region of size N×M at the center of the image preprocessed in step 2-2 as the region of interest, traversing this region with a window of size n×n and step length l, and calculating the matching degree between the color-space statistical histogram of the image in each window and that of the container color template; all windows whose matching degree exceeds a threshold T_M are collected to obtain the image segmentation result of the target container;
step 2-5: taking the minimum circumscribed rectangle of the target container image segmentation result obtained in the step 2-4, expanding K pixels outwards to serve as a new region of interest, and carrying out edge detection in the region;
step 2-6: combining effective edge pixels obtained by edge detection to form the outline characteristics of the target container;
step 2-7: performing polygon fitting on the contour features of the target container, extracting all convex quadrilaterals, and removing those whose area is smaller than a threshold T_S1; sequentially calculating the included angles of all adjacent sides of each remaining convex quadrilateral, and if all the included angles lie within the threshold range [T_D1, T_D2], judging the quadrilateral to be a rectangle; all rectangles meeting this condition form the candidate circumscribed rectangle set of the target container;
step 2-8: calculating the areas and aspect ratios of all rectangles in the candidate circumscribed rectangle set of the target container, removing rectangles whose area is larger than a threshold T_S2, and screening out the rectangle conforming to the aspect ratio of a standard container as the circumscribed rectangular frame of the target container;
step 2-9: taking the upper-left corner coordinates (u_1, v_1) and the lower-right corner coordinates (u_2, v_2) of the circumscribed rectangular frame of the target container, and calculating the position coordinates of the center point of the target container in the image coordinate system, ((u_1+u_2)/2, (v_1+v_2)/2), i.e. the planar position of the target container;
step 2-10: utilizing an external rectangular frame of the target container, and segmenting the target container from the acquired image in a mask mode for subsequent ranging and space positioning;
step 2-11: the two cameras in the binocular camera system respectively perform the operations of steps 2-1 to 2-10 to realize identification and planar positioning of the target container.
Further, step S3 specifically includes the following steps:
step 3-1: respectively extracting key point characteristics from target container images acquired by two cameras in the binocular camera system, performing characteristic matching on the key points, and screening out mismatching points.
Step 3-2: calculating the disparity d of the target container between the two cameras of the binocular camera system by a stereo matching method, and calculating the preliminary distance between the target container and the cameras according to the binocular vision principle as

Z_pre = α·D / d

where D is the baseline distance between the optical centers of the two cameras, α is an internal camera parameter obtained by camera calibration, and d = u_l − u_r, where u_l denotes the horizontal coordinate of the target container in the left camera image coordinate system and u_r the horizontal coordinate in the right camera image coordinate system;
step 3-3: the installation height of the cameras in the binocular camera system relative to the yard ground is H_C, and the binocular ranging result is Z_pre, so the preliminary estimated height of the target container in the yard is H_pre = H_C − Z_pre; the height dimension h and number of stacked layers n of the target container are obtained from

(h, n) = (p, i), if |H_pre − i·p| < δ;
(h, n) = (q, i), if |H_pre − i·q| < δ;
(h, n) = (F, F), otherwise;

where p and q denote the heights of the two existing standard container specifications, δ denotes the tolerance of binocular ranging, N denotes the maximum number of stacked layers of a single column of target containers, and i denotes the possible number of stacked layers of a single-column container, a positive integer in [1, N]; if the preliminary estimated height H_pre satisfies neither of the first two conditions, i.e. the binocular ranging error exceeds the allowable range, the preliminary estimation fails, (F, F) may take any negative value, and target container identification and binocular ranging should be performed again;
step 3-4: calculating to obtain the accurate height of the target container in the storage yard according to the height dimension h of the target container and the stacking layer number n obtained in the step 3-3
Step 3-5: according to the accurate height H of the target container in the yard and the installation height H_C of the cameras in the binocular camera system relative to the yard ground, calculating the actual distance between the target container and the binocular camera as Z = H_C − H, realizing accurate ranging of the target container;
step 3-6: according to the actual distance Z between the target container and the binocular camera and the planar position coordinates (u, v) of the target container, obtaining the spatial position coordinates (X, Y, Z) of the target container in the camera coordinate system through inverse perspective transformation. The calculation is

X = Z·u / f, Y = Z·v / f

where f denotes the focal length of the camera and (u, v) are measured relative to the image center;
further, step S4 specifically includes the following steps:
step 4-1: coating marks with specific colors and/or regular geometric shapes on the top corners or edges of the lower top surface of the lifting appliance;
step 4-2: using a color threshold segmentation method, segmenting the pixels in the image preprocessed in step 2-2 that satisfy the color threshold T_RGB, removing noise points in the image, and filling gaps in the connected regions to obtain color-threshold-segmented images;
step 4-3: respectively performing regular geometric shape detection on each color-threshold-segmented image obtained in step 4-2, obtaining the center-point coordinates of each regular geometric shape, and calculating the area S_i of each regular geometric shape in the image, where i is the serial number of the regular geometric shape mark;
step 4-4: according to the center-point coordinates of the regular geometric shapes, calculating the center position coordinates (u_L, v_L) of the lower top surface of the lifting appliance as the mean of the mark center-point coordinates;
step 4-5: calculating the distance Z_L between the lower top surface of the lifting appliance and the trolley;
step 4-6: finally, obtaining the spatial position coordinates (X_L, Y_L, Z_L) of the lifting appliance by the inverse perspective transformation formula:

X_L = Z_L·u_L / f, Y_L = Z_L·v_L / f
further, the distance Z between the top surface of the lower part of the lifting appliance and the trolley is obtained through calculation in the steps 4 to 5LObtained by the following method:
according to SiAnd calibrating the obtained distance to area ratio in advanceIs calculated to obtain Zl:
Wherein I represents the number of circular or rectangular marks.
Further, the distance Z_L between the lower top surface of the lifting appliance and the trolley calculated in step 4-5 may also be obtained by a laser ranging sensor.
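The area-based ranging above can be sketched in a few lines. This is a purely illustrative sketch, not part of the claimed method: the function name, the input list of mark areas, and the calibration constant k are hypothetical, and it assumes the pinhole relation k = S·Z_L² stated above.

```python
import math

def spreader_distance_from_mark_areas(areas_px, k):
    """Estimate the spreader-to-trolley distance Z_L from painted-mark areas.

    Under a pinhole model the imaged area S of a planar mark scales as 1/Z^2,
    so with a ratio k = S * Z^2 calibrated once at a known distance:
        Z_L = sqrt(k / mean(S_i))
    `areas_px` (mark areas in pixels) and `k` are hypothetical inputs.
    """
    mean_area = sum(areas_px) / len(areas_px)
    return math.sqrt(k / mean_area)
```

For example, if calibration at a 10 m distance gave a mark area of 400 px (k = 400·10² = 40000), then a later mean mark area of 100 px implies a distance of 20 m.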
Advantageous effects: two groups of binocular camera systems are deployed on the horizontally movable trolley above the crane beam, which effectively reduces camera deployment difficulty and maintenance cost. Moreover, the binocular cameras and the laser ranging sensor mounted on the trolley simultaneously realize identification, ranging and positioning of both the target container and the lifting appliance, reducing the number of cameras required. To meet the positioning accuracy requirement, the scheme makes full use of the structural information of port containers to improve the ranging accuracy of the vision system.
Drawings
FIG. 1 is a schematic structural diagram of a binocular camera constructed according to the present invention;
FIG. 2 is a front view of a schematic of a deployment scenario of a binocular camera and laser range sensor;
FIG. 3 is a left side view of a schematic of a deployment scenario of a binocular camera and laser range sensor;
FIG. 4 is a flow chart involved in the present invention;
FIG. 5 is a schematic view of the principle of binocular ranging;
FIG. 6 is a schematic diagram of the position of the marker on the lower part of the spreader, in which the circle is the position of the red marker in the embodiment;
fig. 7 is a schematic diagram of the marker position of the lower part of the spreader, and the rectangular black bar in the diagram is the green marker position in the embodiment.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in FIGS. 1-3, the visual identification, distance measurement and positioning method for automatic port container loading and unloading operations of the present invention is based on the following hardware: a trolley 2 travels on the beam of the container bridge crane; binocular camera systems are respectively arranged on the front, rear, left and right sides of the trolley; a laser ranging sensor 3 is installed at the bottom of the trolley; and a lifting appliance 5 is suspended from the bottom of the trolley by lifting ropes 4, the lifting appliance being used to lift the target container to the designated position.
Based on the hardware system, the method for visual identification, distance measurement and positioning in the automatic loading and unloading operation of the port container of the embodiment comprises the following steps:
step 1: a binocular camera system is built and arranged on two sides of a trolley which moves on a beam of the container bridge crane;
step 1-1: fixing two cameras 6 with the same model and specification on a datum plane of a rigid support 7 to form a group of binocular camera systems, enabling imaging planes of the two cameras to be coplanar, enabling optical axes 1 to be parallel, enabling installation datum lines on the support to be used as common vertical lines, enabling the distance between the datum lines of the two cameras to be D, and enabling the frame rate of the cameras to be higher than 60FPS and the resolution to be higher than 2K; as shown in fig. 1;
step 1-2: two groups of binocular camera systems are deployed on the same horizontal height of two sides of a movable trolley of the crane and are vertical to the advancing direction of the trolley, so that each group of binocular camera systems can shoot at least one end 8 of the top surfaces of the target container and the lifting appliance. As shown in fig. 2 and 3;
the flow chart of the method of the invention is shown in fig. 4, and comprises the following steps:
step 2: identifying and positioning a target container on a plane;
step 2-1: respectively acquiring images of the container and the lifting appliance below using the two groups of binocular cameras, and filtering and denoising the acquired images. The embodiment of the invention adopts median filtering for noise reduction: a rectangular sliding window of 3×3 pixels is moved over the image, and the pixel at the window center is replaced by the median of all pixels in the window, thereby removing noise in the image;
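The 3×3 median filtering of step 2-1 can be sketched in NumPy (an illustrative equivalent of a library call such as OpenCV's `medianBlur`; the function name is hypothetical):

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter: replace each pixel by the median of its window.

    Edges are handled by replicating the border (np.pad mode 'edge'), so the
    output has the same shape as the input.
    """
    padded = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views of the image and take the per-pixel median.
    windows = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)]
    return np.median(np.stack(windows), axis=0).astype(img.dtype)
```

A single impulse-noise pixel (salt noise) is removed, since it is outvoted by its eight neighbours.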
step 2-2: converting the filtered and denoised RGB image into the HSV color space to obtain the preprocessed image. The conversion is:

min = min(R, G, B)
max = max(R, G, B)
V = max
S = (max − min) / max (S = 0 when max = 0)
H = 60° × (G − B) / (max − min), if max = R
H = 60° × (2 + (B − R) / (max − min)), if max = G
H = 60° × (4 + (R − G) / (max − min)), if max = B
H = H + 360°, if H < 0

R, G and B respectively denote the red, green and blue color components of the image, each with value range 0-1. After conversion, the hue H of the image ranges from 0° to 360°, the saturation S from 0 to 1, and the value V from 0 to 1.
step 2-3: extracting image blocks of size P×P (P takes 32, unit: pixel, the same below) from a pre-collected standard container image as the container color template, and calculating the statistical histogram of the container color template in the HSV color space, used for calculating the matching degree between the container color template and the acquired images;
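The per-pixel RGB-to-HSV conversion of step 2-2 can be written directly from the formulas above (an illustrative sketch; real pipelines would use a vectorized library conversion):

```python
def rgb_to_hsv(r, g, b):
    """Convert one RGB pixel (components in 0-1) to HSV with
    H in [0, 360), S in [0, 1], V in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0                      # achromatic pixel: hue undefined, use 0
    elif mx == r:
        h = 60.0 * ((g - b) / (mx - mn))
    elif mx == g:
        h = 60.0 * (2.0 + (b - r) / (mx - mn))
    else:
        h = 60.0 * (4.0 + (r - g) / (mx - mn))
    if h < 0:
        h += 360.0
    return h, s, v
```

Pure red maps to H = 0°, pure green to H = 120°, pure blue to H = 240°, and any gray to S = 0.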
step 2-4: taking the region of size N×M at the center of the image preprocessed in step 2-2 as the region of interest, and traversing this region with a window of size n×n (8×8 in the embodiment of the invention) and step length l (2 in the embodiment of the invention). The matching degree of the color-space histogram between the image in each window and the container color template is calculated, and all windows whose matching degree exceeds a threshold T_M are collected to obtain the image segmentation result of the target container. The matching degree is calculated using the Bhattacharyya distance as follows:

d(a, b) = sqrt( 1 − (1 / sqrt(ā · b̄ · N²)) · Σ_i sqrt(a_i · b_i) )

where a_i, b_i respectively denote a pair of corresponding components in the HSV color-space histograms of the color template and the window to be matched, ā and b̄ denote their mean values, N denotes the number of all components in the histogram, and d(a, b) denotes the matching degree between the color template and the window to be matched calculated with the Bhattacharyya distance;
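The histogram construction and Bhattacharyya matching degree of step 2-4 can be sketched as follows (illustrative only; the `hist` helper and its parameters are hypothetical, and the distance formula matches the normalized form above, where 0 means identical histograms and 1 means no overlap):

```python
import math

def hist(values, bins, lo, hi):
    """Simple 1-D statistical histogram over [lo, hi) with `bins` bins."""
    h = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        h[idx] += 1
    return h

def bhattacharyya(a, b):
    """Bhattacharyya distance between two histograms a and b:
    sqrt(1 - sum(sqrt(a_i*b_i)) / sqrt(mean(a)*mean(b)*N^2))."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    if mean_a == 0 or mean_b == 0:
        return 1.0                      # an empty histogram matches nothing
    s = sum(math.sqrt(x * y) for x, y in zip(a, b))
    return math.sqrt(max(0.0, 1.0 - s / math.sqrt(mean_a * mean_b * n * n)))
```

Identical histograms give distance 0; completely disjoint ones give distance 1, so a window is accepted when its distance to the template is small enough.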
step 2-5: taking the minimum circumscribed rectangle from the target container image segmentation result obtained in the step 2-4, expanding K pixels outwards to serve as a new region of interest, and carrying out edge detection in the region, wherein a Canny edge detection algorithm is adopted in the embodiment of the invention;
step 2-6: combining effective edge pixels obtained by edge detection to form the outline characteristics of the target container;
step 2-7: performing polygon fitting on the contour features of the target container, extracting all convex quadrilaterals, and removing those whose area is smaller than a threshold T_S1 (T_S1 is set according to the image resolution). The included angles of all adjacent sides of each convex quadrilateral are calculated in turn, and if all the included angles lie within the threshold range [T_D1, T_D2] ([80°, 100°] in this embodiment), the convex quadrilateral can be judged to be a rectangle. All rectangles meeting this condition form the candidate circumscribed rectangle set of the target container;
step 2-8: calculating the areas and aspect ratios of all rectangles in the candidate circumscribed rectangle set of the target container, removing rectangles whose area is larger than a threshold T_S2 (T_S2 is set according to the image resolution), and screening out the rectangle conforming to the aspect ratio of a standard container as the circumscribed rectangular frame of the target container;
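The angle and aspect-ratio screening of steps 2-7 and 2-8 can be sketched as below. This is an illustrative sketch: the function names and the aspect-ratio range are hypothetical (the patent only says "conforming to the length-width ratio of the standard container"), while the [80°, 100°] angle range is the embodiment's value.

```python
import math

def corner_angles(quad):
    """Interior angles (degrees) at each vertex of a quadrilateral
    given as four (x, y) points in order."""
    angles = []
    for i in range(4):
        p0, p1, p2 = quad[i - 1], quad[i], quad[(i + 1) % 4]
        v1 = (p0[0] - p1[0], p0[1] - p1[1])
        v2 = (p2[0] - p1[0], p2[1] - p1[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angles.append(math.degrees(math.acos(dot / norm)))
    return angles

def is_container_rect(quad, angle_range=(80.0, 100.0), aspect_range=(2.2, 2.8)):
    """Screen a convex quadrilateral: all corner angles within the threshold
    range and bounding-box aspect ratio near a standard container top face."""
    if not all(angle_range[0] <= a <= angle_range[1] for a in corner_angles(quad)):
        return False
    xs = [p[0] for p in quad]
    ys = [p[1] for p in quad]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    ratio = max(w, h) / min(w, h)
    return aspect_range[0] <= ratio <= aspect_range[1]
```

A near-right-angled quadrilateral with a ~2.5:1 bounding box passes; a 60° parallelogram is rejected by the angle test.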
step 2-9: taking the upper-left corner coordinates (u_1, v_1) and the lower-right corner coordinates (u_2, v_2) of the circumscribed rectangular frame of the target container, and calculating the position coordinates of the center point of the target container in the image coordinate system, ((u_1+u_2)/2, (v_1+v_2)/2), i.e. the planar position of the target container;
step 2-10: utilizing an external rectangular frame of the target container, and segmenting the target container from the acquired image in a mask mode for subsequent ranging and space positioning;
step 2-11: two cameras in the binocular camera system respectively perform the operations of the steps 2-1 to 2-10 to realize the identification and plane positioning of the target container;
and step 3: ranging and space positioning of the target container;
step 3-1: respectively extracting key point features from target container images acquired by two cameras in a binocular camera system (in the embodiment of the invention, SIFT algorithm is adopted to extract the key point features), and performing feature matching on the key points (in the embodiment of the invention, BF algorithm is adopted to perform feature matching). Screening out mismatching points by using a left-right consistency detection and polar line constraint detection method;
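The left-right consistency screening of step 3-1 can be sketched on disparity values as below (an illustrative sketch only: the patent's embodiment uses SIFT keypoints with BF matching, whereas this minimal version assumes integer pixel coordinates and per-pixel disparity maps):

```python
def left_right_consistent(disp_left, disp_right, x, y, tol=1):
    """Left-right consistency check used to reject mismatches: a pixel (x, y)
    with disparity d in the left map should land at x - d in the right map
    with (approximately) the same disparity."""
    d = disp_left[y][x]
    xr = x - d
    if xr < 0 or xr >= len(disp_right[y]):
        return False                  # projects outside the right image
    return abs(disp_right[y][xr] - d) <= tol

def filter_matches(matches, disp_left, disp_right):
    """Keep only (x, y) left-image keypoint locations that survive the check."""
    return [(x, y) for (x, y) in matches
            if left_right_consistent(disp_left, disp_right, x, y)]
```

Matches whose left and right disparities disagree (or that project outside the image) are discarded as mismatches before the ranging of step 3-2.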
step 3-2: calculating the disparity d of the target container between the two cameras by the stereo matching method and, according to the binocular vision principle shown in fig. 5, calculating the preliminary distance between the target container and the cameras as

Z_pre = α·D / d

where D is the distance between the optical centers of the two cameras (the baseline distance) and α is an internal parameter of the camera, obtainable by camera calibration; d = u_l − u_r, where u_l denotes the horizontal coordinate of the target container in the left camera image coordinate system and u_r the horizontal coordinate in the right camera image coordinate system;
step 3-3: the installation height of the cameras in the binocular camera system relative to the yard ground is H_C, and the binocular ranging result is Z_pre, so the preliminary estimated height of the target container in the yard is H_pre = H_C − Z_pre. The height dimension h and the number of stacked layers n of the container can be obtained from

(h, n) = (p, i), if |H_pre − i·p| < δ;
(h, n) = (q, i), if |H_pre − i·q| < δ;
(h, n) = (F, F), otherwise;

where p and q denote the heights of the two existing standard container specifications, p = 2591 mm and q = 2896 mm, and δ denotes the tolerance of binocular ranging, δ = 150 mm in the embodiment of the invention. N denotes the maximum number of stacked layers of a single-column container, N = 7 in the embodiment of the invention; i denotes the possible number of stacked layers of a single-column container, a positive integer in [1, N]. If the preliminary estimated height H_pre satisfies neither of the first two conditions, i.e. the binocular ranging error exceeds the allowable range, the preliminary estimation fails, (F, F) may take any negative value, and target container identification and binocular ranging should be performed again;
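The height-snapping logic of step 3-3 can be sketched directly from the formula above (illustrative; the function name is hypothetical, the defaults are the embodiment's values in millimetres, and (F, F) is represented as (-1, -1)):

```python
def estimate_height_and_layers(H_C, Z_pre, p=2591, q=2896, delta=150, N=7):
    """Recover the container height h and stack count n from the preliminary
    binocular range Z_pre and camera installation height H_C (all in mm).

    Returns (h, n), or (-1, -1) when no stack hypothesis fits within delta,
    meaning identification and ranging should be repeated.
    """
    H_pre = H_C - Z_pre              # preliminary estimated stack height
    for i in range(1, N + 1):
        if abs(H_pre - i * p) < delta:
            return p, i              # stack of i standard-height containers
        if abs(H_pre - i * q) < delta:
            return q, i              # stack of i high-cube containers
    return -1, -1                    # estimation failed
```

For example, with the cameras 32 m above the ground and a measured range of 24.287 m, H_pre = 7713 mm is within 150 mm of 3 × 2591 mm, so the stack is three standard-height containers and the refined range becomes H_C − 3·2591.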
step 3-4: according to the height dimension h and the number of stacked layers n of the target container obtained in step 3-3, calculating the accurate height of the target container in the yard as H = n·h;
Step 3-5: according to the accurate height H of the target container in the yard and the installation height H_C of the cameras in the binocular camera system relative to the yard ground, calculating the actual distance between the target container and the binocular camera as Z = H_C − H, realizing accurate ranging of the target container;
step 3-6: according to the actual distance Z_a between the target container and the binocular camera and the plane position coordinates (u_o, v_o) of the target container, the space position coordinates (X_a, Y_a, Z_a) of the target container in the camera coordinate system are obtained through inverse perspective transformation; the calculation process is as follows:

X_a = u_o · Z_a / f, Y_a = v_o · Z_a / f
wherein f represents the focal length of the camera;
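The inverse perspective of step 3-6 can be sketched under the pinhole camera model, in which image-plane coordinates (taken relative to the principal point) scale by Z/f to give camera-frame coordinates. Symbol and function names are illustrative assumptions.

```python
def inverse_perspective(u_o: float, v_o: float, z_a: float, f_px: float):
    """Map the image-plane centre (u_o, v_o) of the target container and its
    measured distance z_a to camera-frame coordinates (X_a, Y_a, Z_a).

    f_px: camera focal length expressed in pixels.
    """
    x_a = u_o * z_a / f_px
    y_a = v_o * z_a / f_px
    return x_a, y_a, z_a
```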
step 4: identifying, ranging and positioning the lifting appliance;
step 4-1: the top corners or the frames at the lower part of the lifting appliance are coated with marks with specific colors and/or regular geometric shapes, for example, the top corners at the lower part of the lifting appliance are coated with regular geometric shape marks, the embodiment of the invention adopts red (RGB value is [255, 0, 0]) round, and at least two marks are ensured to be positioned at the top corners which are opposite to each other and are equidistant to the center of the lifting appliance, as shown in FIG. 6; or the lower edge of the hanger is coated with a regular geometric shape, the embodiment of the invention adopts a green rectangle (RGB value is 0, 255, 0), and at least two marks are positioned at the center of two adjacent edges of the hanger, as shown in fig. 7;
step 4-2: by means of a color threshold segmentation method, the pixels in the image preprocessed in step 2-2 that satisfy the color threshold T_RGB are segmented; the segmented image is then processed by morphological operations to remove noise and fill the vacancies in the connected regions;
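The colour thresholding of step 4-2 can be sketched as a NumPy mask; in a real pipeline OpenCV's `cv2.inRange` followed by morphological opening/closing (`cv2.morphologyEx`) would typically be used, and the morphology step is only indicated by a comment here. The tolerance-based threshold is an assumed concrete form of T_RGB.

```python
import numpy as np

def colour_mask(img: np.ndarray, target_rgb, tol: int) -> np.ndarray:
    """Boolean HxW mask of pixels within `tol` of `target_rgb` per channel.

    img: HxWx3 uint8 RGB image.
    """
    diff = np.abs(img.astype(int) - np.array(target_rgb))
    mask = (diff <= tol).all(axis=-1)
    # a morphological open/close on `mask` would follow here to remove
    # isolated noise and fill holes inside connected regions
    return mask
```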
step 4-3: regular geometric shape detection is carried out on each color threshold segmentation image obtained in step 4-2, the coordinates of the center point of each regular geometric shape are obtained, and the area S_i of each regular geometric shape in the image is calculated, where i represents the serial number of the regular geometric shape mark;
step 4-4: according to the coordinates of the center points of the regular geometric shapes, the coordinates (u_L, v_L) of the center position of the top surface of the lower part of the lifting appliance are calculated;
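Because the marks of step 4-1 are placed symmetrically about the spreader centre (opposite corners, or centres of adjacent edges), step 4-4 reduces to averaging the mark centre coordinates. A minimal sketch, with the (u, v) tuple layout assumed:

```python
def spreader_centre(mark_centres):
    """Centre (u_L, v_L) of the spreader's lower top surface in the image,
    taken as the mean of the detected mark centre coordinates."""
    us = [u for u, _ in mark_centres]
    vs = [v for _, v in mark_centres]
    return sum(us) / len(us), sum(vs) / len(vs)
```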
And 4-5: distance Z between top surface of lower part of lifting appliance and trolleyLThis can be achieved in two ways:
1) Z_L is calculated from S_i and the pre-calibrated distance-to-area ratio:
Wherein I represents the number of circular or rectangular marks.
2) Z_L is acquired by a laser ranging sensor.
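Option 1) of step 4-5 rests on the pinhole-model fact that the imaged area of a planar mark scales as 1/Z². The patent's exact "distance-to-area ratio" formula is not reproduced in this text, so the sketch below is one consistent realisation: a single calibration pair (Z_0, S_0) fixes the ratio, each mark yields an estimate, and the estimates over the I marks are averaged. All names are assumptions.

```python
import math

def spreader_distance(areas_px2, z0_mm: float, s0_px2: float) -> float:
    """Estimate Z_L from the imaged mark areas S_i.

    z0_mm / s0_px2: distance and imaged area of one mark recorded during
    calibration; by the pinhole model Z_i = Z_0 * sqrt(S_0 / S_i).
    """
    estimates = [z0_mm * math.sqrt(s0_px2 / s) for s in areas_px2]
    return sum(estimates) / len(estimates)  # average over the I marks
```

Averaging over several marks damps per-mark detection noise, which is presumably why the patent requires at least two marks.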
step 4-6: finally, the space position coordinates (X_L, Y_L, Z_L) of the lifting appliance are obtained by adopting the inverse perspective transformation formula, as follows:
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present application and not for limiting its scope of protection. Although the present application is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that numerous variations, modifications, and equivalents will occur to those skilled in the art upon reading the present application, and these are within the scope of the claims appended hereto.
Claims (7)
1. A visual identification, distance measurement and positioning method in automatic loading and unloading operation of a port container is characterized by comprising the following steps:
step S1: constructing a binocular camera system, and arranging and installing it on both sides of the moving trolley on the beam of the container bridge crane;
step S2: identifying and plane positioning the target container;
step S3: after the plane position of the target container is obtained, the primary distance measurement of the target container is realized by using a binocular camera system, the accurate calculation of the height of the target container is realized by combining the structural size information of the target container, and the space position coordinate of the target container is determined through the inverse perspective transformation;
step S4: the plane positioning of the lifting appliance is realized by identifying the pre-coating mark on the lifting appliance, and the distance between the lifting appliance and the moving trolley of the container bridge crane is obtained by utilizing a monocular camera or a laser ranging sensor, so that the spatial position coordinate of the lifting appliance is determined.
2. The method of claim 1, wherein the method comprises the steps of: step S1 specifically includes the following steps:
step 1-1: fixing two cameras with the same model and specification on a datum plane of a rigid support to form a group of binocular camera systems, enabling imaging planes of the two cameras to be coplanar, enabling optical axes to be parallel, taking an installation datum line on the support as a common vertical line, enabling a baseline distance between the two cameras to be D, and selecting the camera with a frame rate higher than 60FPS and a resolution higher than 2K;
step 1-2: two groups of binocular camera systems are deployed on the same horizontal height of two sides of a movable trolley on a crane beam and are perpendicular to the advancing direction of the trolley, so that each group of binocular camera systems can shoot at least one end part of the top surfaces of a target container and a lifting appliance.
3. The method of claim 1, wherein the method comprises the steps of: step S2 specifically includes the following steps:
step 2-1: using two groups of binocular cameras to respectively acquire RGB images of a container and a lifting appliance below, and filtering and denoising the acquired RGB images;
step 2-2: converting the RGB image subjected to filtering and noise reduction into an HSV color space to obtain a preprocessed image, wherein the hue H value range of the image in the conversion process is 0-360 degrees, the saturation S value range is 0-1, and the brightness V value range is 0-1;
step 2-3: extracting image blocks with the size of P from a pre-collected standard container image to be used as a container color template, and calculating a statistical histogram of the container color template in an HSV color space;
step 2-4: taking the area of size N×M at the central position of the image preprocessed in step 2-2 as the region of interest, traversing the region with a window of size n×n using step length l, calculating the matching degree of the color space statistical histogram between the image in each window and the container color template, and collecting all windows whose matching degree exceeds the threshold T_M to obtain the image segmentation result of the target container;
step 2-5: taking the minimum circumscribed rectangle of the target container image segmentation result obtained in the step 2-4, expanding K pixels outwards to serve as a new region of interest, and carrying out edge detection in the region;
step 2-6: combining effective edge pixels obtained by edge detection to form the outline characteristics of the target container;
step 2-7: carrying out polygon fitting on the contour features of the target container, extracting all convex quadrangles, and removing those with areas smaller than the threshold T_S1; the angle values of all included angles of each remaining convex quadrangle are calculated in turn, and if the angle values of all included angles satisfy the threshold range [T_D1, T_D2], the quadrangle is judged to be a rectangle; all rectangles meeting the condition form the candidate circumscribed rectangular frame set of the target container;
step 2-8: calculating the areas and aspect ratios of all rectangles in the candidate circumscribed rectangular frame set of the target container, removing rectangles with areas larger than the threshold T_S2, and screening out the rectangle conforming to the aspect ratio of a standard container as the circumscribed rectangular frame of the target container;
step 2-9: taking the coordinates (u_1, v_1) of the upper left corner and (u_2, v_2) of the lower right corner of the circumscribed rectangular frame of the target container, and calculating the position coordinates of the center point of the target container in the image coordinate system, (u_o, v_o) = ((u_1 + u_2)/2, (v_1 + v_2)/2), i.e. the plane position of the target container;
step 2-10: utilizing an external rectangular frame of the target container, and segmenting the target container from the acquired image in a mask mode for subsequent ranging and space positioning;
step 2-11: and (3) respectively carrying out the operations of the steps 2-1 to 2-10 by two cameras in the binocular camera system to realize the identification and the plane positioning of the target container.
4. The method of claim 1, wherein the method comprises the steps of: step S3 specifically includes the following steps:
step 3-1: respectively extracting key point characteristics from target container images acquired by two cameras in the binocular camera system, performing characteristic matching on the key points, and screening out mismatching points.
Step 3-2: calculating the parallax value d between two camera target containers in a binocular camera system by a stereo matching method, and calculating the distance between the target containers and the cameras according to the binocular vision principle
where D is the baseline distance between the centers of the two cameras and α is an internal parameter of the camera obtained by camera calibration; the disparity d = u_l − u_r, where u_l represents the horizontal coordinate value of the target container in the left camera image coordinate system and u_r represents the horizontal coordinate value of the target container in the right camera image coordinate system;
step 3-3: the installation height of the cameras in the binocular camera system relative to the ground of the storage yard is H_C, and the binocular distance measurement result is Z_m, so the preliminary estimated height of the target container in the yard is H_e = H_C − Z_m; the height dimension h and the number of stacked layers n of the target container are obtained by the following formula:

(h, n) = (p, i), if |H_e − i·p| ≤ δ for some i ∈ [1, N];
(h, n) = (q, i), if |H_e − i·q| ≤ δ for some i ∈ [1, N];
(h, n) = (F, F), otherwise;
wherein p and q respectively represent the heights of standard containers of the two existing specifications, δ represents the tolerance of binocular ranging, N represents the maximum number of stacked layers of a single-column target container, and i represents the number of stacked layers that may occur in a single-column container, a positive integer in the range [1, N]; if the preliminary estimated height meets neither of the first two conditions in the formula, i.e. the binocular ranging error exceeds the allowable range, the preliminary estimation fails, (F, F) may take any negative value, and the target container identification and binocular ranging should be carried out again;
step 3-4: according to the height dimension h of the target container and the number of stacked layers n obtained in step 3-3, the accurate height of the target container in the storage yard is calculated as H_a = n·h;
Step 3-5: according to the precise height of the target container in the yardAnd the installation height H of the camera in the binocular camera system relative to the ground of the storage yardCCalculating the actual distance between the target container and the binocular cameraRealizing accurate distance measurement of a target container;
step 3-6: according to the actual distance Z_a between the target container and the binocular camera and the plane position coordinates (u_o, v_o) of the target container, the space position coordinates (X_a, Y_a, Z_a) of the target container in the camera coordinate system are obtained through inverse perspective transformation; the calculation process is as follows:

X_a = u_o · Z_a / f, Y_a = v_o · Z_a / f
where f denotes the focal length of the camera.
5. The method of claim 3, wherein the method comprises the steps of: step S4 specifically includes the following steps:
step 4-1: coating marks with specific colors and/or regular geometric shapes on the top corners or edges of the lower top surface of the lifting appliance;
step 4-2: by means of a color threshold segmentation method, the pixels in the image preprocessed in step 2-2 that satisfy the color threshold T_RGB are segmented, noise in the image is removed, and the vacancies in the connected regions are filled to obtain color threshold segmentation images;
step 4-3: regular geometric shape detection is carried out on each color threshold segmentation image obtained in step 4-2, the coordinates of the center point of each regular geometric shape are obtained, and the area S_i of each regular geometric shape in the image is calculated, where i is the serial number of the regular geometric shape mark;
step 4-4: according to the coordinates of the center points of the regular geometric shapes, the coordinates (u_L, v_L) of the center position of the top surface of the lower part of the lifting appliance are calculated;
And 4-5: calculating to obtain the distance Z between the top surface of the lower part of the lifting appliance and the trolleyL;
step 4-6: finally, the space position coordinates (X_L, Y_L, Z_L) of the lifting appliance are obtained by adopting the inverse perspective transformation formula, as follows:
6. The method of claim 5, wherein in step 4-5 the distance Z_L between the top surface of the lower part of the lifting appliance and the trolley is obtained by the following method:
Z_L is calculated from S_i and the pre-calibrated distance-to-area ratio:
Wherein I represents the number of circular or rectangular marks.
7. The method of claim 5, wherein in step 4-5 the distance Z_L between the top surface of the lower part of the lifting appliance and the trolley is obtained by a laser ranging sensor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111525088.5A CN114219842B (en) | 2021-12-14 | 2021-12-14 | Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111525088.5A CN114219842B (en) | 2021-12-14 | 2021-12-14 | Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114219842A true CN114219842A (en) | 2022-03-22 |
CN114219842B CN114219842B (en) | 2022-08-12 |
Family
ID=80701689
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111525088.5A Active CN114219842B (en) | 2021-12-14 | 2021-12-14 | Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114219842B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114758001A (en) * | 2022-05-11 | 2022-07-15 | 北京国泰星云科技有限公司 | PNT-based automatic traveling method for tire crane |
CN114972541A (en) * | 2022-06-17 | 2022-08-30 | 北京国泰星云科技有限公司 | Tire crane three-dimensional anti-collision method based on three-dimensional laser radar and binocular camera fusion |
CN115077385A (en) * | 2022-07-05 | 2022-09-20 | 北京斯年智驾科技有限公司 | Position and attitude measuring method and system for unmanned container truck |
CN116452467A (en) * | 2023-06-16 | 2023-07-18 | 山东曙岳车辆有限公司 | Container real-time positioning method based on laser data |
CN116681721A (en) * | 2023-06-07 | 2023-09-01 | 东南大学 | Linear track detection and tracking method based on vision |
WO2024060792A1 (en) * | 2022-09-22 | 2024-03-28 | 中车资阳机车有限公司 | Lock hole locating system and method for split-type container spreader |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102923578A (en) * | 2012-11-13 | 2013-02-13 | 扬州华泰特种设备有限公司 | Automatic control system of efficient handing operation of container crane |
CN204778452U (en) * | 2015-07-13 | 2015-11-18 | 常州基腾电气有限公司 | A buffer stop for crane sling |
CN105271004A (en) * | 2015-10-26 | 2016-01-27 | 上海海事大学 | Lifting device space positioning device adopting monocular vision and method |
CN105480848A (en) * | 2015-12-21 | 2016-04-13 | 上海新时达电气股份有限公司 | Port crane lifting system and stacking method thereof |
CN106096606A (en) * | 2016-06-07 | 2016-11-09 | 浙江工业大学 | A kind of container profile localization method based on fitting a straight line |
CN109319526A (en) * | 2018-11-16 | 2019-02-12 | 西安中科光电精密工程有限公司 | A kind of container entrucking of bagged material and stocking system and method |
CN110659634A (en) * | 2019-08-23 | 2020-01-07 | 上海撬动网络科技有限公司 | Container number positioning method based on color positioning and character segmentation |
CN111243016A (en) * | 2018-11-28 | 2020-06-05 | 周口师范学院 | Automatic identification and positioning method for container |
CN112102405A (en) * | 2020-08-26 | 2020-12-18 | 东南大学 | Robot stirring-grabbing combined method based on deep reinforcement learning |
CN112194011A (en) * | 2020-08-31 | 2021-01-08 | 南京理工大学 | Tower crane automatic loading method based on binocular vision |
CN113651242A (en) * | 2021-10-18 | 2021-11-16 | 苏州汇川控制技术有限公司 | Control method and device for container crane and storage medium |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102923578A (en) * | 2012-11-13 | 2013-02-13 | 扬州华泰特种设备有限公司 | Automatic control system of efficient handing operation of container crane |
CN204778452U (en) * | 2015-07-13 | 2015-11-18 | 常州基腾电气有限公司 | A buffer stop for crane sling |
CN105271004A (en) * | 2015-10-26 | 2016-01-27 | 上海海事大学 | Lifting device space positioning device adopting monocular vision and method |
CN105480848A (en) * | 2015-12-21 | 2016-04-13 | 上海新时达电气股份有限公司 | Port crane lifting system and stacking method thereof |
CN106096606A (en) * | 2016-06-07 | 2016-11-09 | 浙江工业大学 | A kind of container profile localization method based on fitting a straight line |
CN109319526A (en) * | 2018-11-16 | 2019-02-12 | 西安中科光电精密工程有限公司 | A kind of container entrucking of bagged material and stocking system and method |
CN111243016A (en) * | 2018-11-28 | 2020-06-05 | 周口师范学院 | Automatic identification and positioning method for container |
CN110659634A (en) * | 2019-08-23 | 2020-01-07 | 上海撬动网络科技有限公司 | Container number positioning method based on color positioning and character segmentation |
CN112102405A (en) * | 2020-08-26 | 2020-12-18 | 东南大学 | Robot stirring-grabbing combined method based on deep reinforcement learning |
CN112194011A (en) * | 2020-08-31 | 2021-01-08 | 南京理工大学 | Tower crane automatic loading method based on binocular vision |
CN113651242A (en) * | 2021-10-18 | 2021-11-16 | 苏州汇川控制技术有限公司 | Control method and device for container crane and storage medium |
Non-Patent Citations (1)
Title |
---|
LIANG Xiaobo et al.: "Research and design of container crane automatic loading and unloading ***", Computer Applications * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114758001A (en) * | 2022-05-11 | 2022-07-15 | 北京国泰星云科技有限公司 | PNT-based automatic traveling method for tire crane |
CN114758001B (en) * | 2022-05-11 | 2023-01-24 | 北京国泰星云科技有限公司 | PNT-based automatic traveling method for tyre crane |
CN114972541A (en) * | 2022-06-17 | 2022-08-30 | 北京国泰星云科技有限公司 | Tire crane three-dimensional anti-collision method based on three-dimensional laser radar and binocular camera fusion |
CN114972541B (en) * | 2022-06-17 | 2024-01-26 | 北京国泰星云科技有限公司 | Tire crane stereoscopic anti-collision method based on fusion of three-dimensional laser radar and binocular camera |
CN115077385A (en) * | 2022-07-05 | 2022-09-20 | 北京斯年智驾科技有限公司 | Position and attitude measuring method and system for unmanned container truck |
CN115077385B (en) * | 2022-07-05 | 2023-09-26 | 北京斯年智驾科技有限公司 | Unmanned container pose measuring method and measuring system thereof |
WO2024060792A1 (en) * | 2022-09-22 | 2024-03-28 | 中车资阳机车有限公司 | Lock hole locating system and method for split-type container spreader |
CN116681721A (en) * | 2023-06-07 | 2023-09-01 | 东南大学 | Linear track detection and tracking method based on vision |
CN116681721B (en) * | 2023-06-07 | 2023-12-29 | 东南大学 | Linear track detection and tracking method based on vision |
CN116452467A (en) * | 2023-06-16 | 2023-07-18 | 山东曙岳车辆有限公司 | Container real-time positioning method based on laser data |
CN116452467B (en) * | 2023-06-16 | 2023-09-22 | 山东曙岳车辆有限公司 | Container real-time positioning method based on laser data |
Also Published As
Publication number | Publication date |
---|---|
CN114219842B (en) | 2022-08-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114219842B (en) | Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation | |
CN111127318B (en) | Panoramic image splicing method in airport environment | |
US8724885B2 (en) | Integrated image processor | |
CN110807355A (en) | Pointer instrument detection and reading identification method based on mobile robot | |
CN108694741A (en) | A kind of three-dimensional rebuilding method and device | |
CN113011388B (en) | Vehicle outer contour size detection method based on license plate and lane line | |
CN108897246B (en) | Stack box control method, device, system and medium | |
CN115880372A (en) | Unified calibration method and system for external hub positioning camera of automatic crane | |
KR20230091906A (en) | Offset detection method based on binocular vision and symmetry | |
CN105469401B (en) | A kind of headchute localization method based on computer vision | |
Han et al. | Target positioning method in binocular vision manipulator control based on improved canny operator | |
CN114782357A (en) | Self-adaptive segmentation system and method for transformer substation scene | |
CN107038703A (en) | A kind of goods distance measurement method based on binocular vision | |
CN113379684A (en) | Container corner line positioning and automatic container landing method based on video | |
CN105005985B (en) | Backlight image micron order edge detection method | |
CN106097331B (en) | A kind of container localization method based on lockhole identification | |
CN114897981A (en) | Hanger pose identification method based on visual detection | |
CN113793315A (en) | Monocular vision-based camera plane and target plane included angle estimation method | |
CN113674360A (en) | Covariant-based line structured light plane calibration method | |
CN113963107A (en) | Large target three-dimensional reconstruction method and system based on binocular vision | |
Zhang et al. | Depth inpainting algorithm of RGB-D camera combined with color image | |
CN111951334A (en) | Identification and positioning method and lifting method for stacking steel billets based on binocular vision technology | |
CN110992261A (en) | Method for quickly splicing images of unmanned aerial vehicle of power transmission line | |
JP7191352B2 (en) | Method and computational system for performing object detection | |
CN109905692A (en) | A kind of machine new vision system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |