CN114219842B - Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation - Google Patents

Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation

Info

Publication number
CN114219842B
Authority
CN
China
Prior art keywords
target container
container
image
lifting appliance
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111525088.5A
Other languages
Chinese (zh)
Other versions
CN114219842A (en)
Inventor
李俊
冯云剑
闫兴达
徐翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN202111525088.5A
Publication of CN114219842A
Application granted
Publication of CN114219842B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers, comprising the following steps. Step S1: construct binocular camera systems and install them at the two ends of the container bridge crane. Step S2: identify the target container and locate it in the image plane. Step S3: after the planar position of the target container is obtained, perform preliminary ranging of the target container with the binocular camera system, accurately calculate the container height by combining the structural size information of the target container, and determine the spatial position coordinates of the target container through inverse perspective transformation. Step S4: locate the lifting appliance in the image plane by identifying pre-painted markers on it, and obtain the distance between the lifting appliance and the moving trolley of the container bridge crane with a monocular camera or a laser ranging sensor, thereby determining the spatial position coordinates of the lifting appliance. The invention effectively reduces camera deployment difficulty and maintenance cost.

Description

Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation
Technical Field
The invention belongs to the fields of computer vision and image processing, and particularly relates to a visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers.
Background
At present, when containers are grabbed and stacked at a port, the spreader is generally docked with the container under manual control, which suffers from low efficiency, low precision, and dependence on operator experience. Automated precise docking of the spreader with the container is therefore essential, and its prerequisite is the accurate identification and positioning of the target container and the spreader, i.e., obtaining their precise coordinates in a spatial coordinate system. Existing mature positioning methods usually acquire three-dimensional point clouds of the container and the spreader by 2D and 3D lidar scanning. Such methods are accurate and fast, but lidar hardware is expensive and the acquired information is limited to position alone. Compared with lidar-based positioning, a vision-based identification and positioning scheme has lower hardware cost and yields richer information: besides the positions of the container and the spreader, it can also capture the container color, container number, degree of damage, and so on, which facilitates automated grabbing and stacking operations of port containers.
Port grabbing and stacking operations demand high positioning accuracy for the container and the spreader (2-5 cm) and a high position update frequency (20-50 Hz), while the distance from the top of the crane to the ground is large (20-30 m for a rubber-tyred crane and 30-50 m for a rail-mounted crane). These conditions place high demands on vision-based identification and positioning of the container and spreader. To meet the accuracy requirement, current visual positioning schemes usually use binocular vision for ranging and positioning of the target container; however, binocular ranging accuracy degrades rapidly as the working distance increases, so the binocular camera usually has to be deployed below the spreader to shorten the distance to the target container and preserve ranging accuracy. This in turn requires modifying the spreader and routing communication and power cables for the cameras, which raises maintenance cost; moreover, the position of the spreader itself still cannot be obtained, so the approach cannot meet practical application requirements.
In summary, identifying and positioning the container and spreader with lidar is costly and information-poor, while identifying the container with cameras mounted on the spreader incurs high maintenance cost and cannot provide the spreader position. To address these problems, the invention deploys binocular cameras on the trolley above the crane beam and fuses structural information of port containers to achieve accurate identification, ranging and positioning of the target container.
Disclosure of Invention
In order to solve the above problems, the invention provides a visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers, comprising the following steps:
step S1: constructing binocular camera systems and installing them on the two sides of the movable trolley on the beam of the container bridge crane;
step S2: identifying the target container and locating it in the image plane;
step S3: after the planar position of the target container is obtained, performing preliminary ranging of the target container with the binocular camera system, accurately calculating the height of the target container by combining the structural size information of standard containers, and determining the spatial position coordinates of the target container through inverse perspective transformation;
step S4: locating the lifting appliance in the image plane by identifying pre-painted markers on it, and obtaining the distance between the lifting appliance and the moving trolley of the container bridge crane with a monocular camera or a laser ranging sensor, thereby determining the spatial position coordinates of the lifting appliance.
Further, step S1 specifically includes the following steps:
step 1-1: fixing two cameras of the same model and specification on the datum plane of a rigid support to form one binocular camera system, so that the imaging planes of the two cameras are coplanar and their optical axes are parallel, with the installation datum line on the support as their common perpendicular; the baseline distance between the two cameras is D, and cameras with a frame rate above 60 FPS and a resolution above 2K are selected;
step 1-2: deploying two such binocular camera systems at the same height on the two sides of the movable trolley on the crane beam, perpendicular to the direction of travel of the trolley, so that each binocular camera system can image at least one end of the top surfaces of the target container and the lifting appliance.
Further, step S2 specifically includes the following steps:
step 2-1: using the two binocular camera systems to acquire RGB images of the container and the lifting appliance below, and filtering and denoising the acquired RGB images;
step 2-2: converting the filtered and denoised RGB image into the HSV color space to obtain a preprocessed image, where during conversion the hue H ranges over 0-360°, the saturation S over 0-1, and the value V over 0-1;
step 2-3: extracting an image block of size P×P from a pre-collected standard container image as the container color template, and computing the statistical histogram of the container color template in the HSV color space;
step 2-4: taking the region of size N×M at the center of the image preprocessed in step 2-2 as the region of interest, traversing it with a window of size n×n and step length l, and computing the color-space histogram matching degree between the image in each window and the container color template; all windows whose matching degree exceeds the threshold T_M are merged to obtain the image segmentation result of the target container;
step 2-5: taking the minimum bounding rectangle of the target container segmentation result obtained in step 2-4, expanding it outward by K pixels as a new region of interest, and performing edge detection within this region;
step 2-6: combining the valid edge pixels obtained by edge detection into the contour features of the target container;
step 2-7: performing polygon fitting on the contour features of the target container, extracting all convex quadrilaterals, and removing those whose area is smaller than the threshold T_S1; then computing in turn the angles between all adjacent sides of each remaining convex quadrilateral, and if all angles fall within the threshold range [T_D1, T_D2], the quadrilateral is judged to be a rectangle; all rectangles satisfying this condition form the candidate set of bounding rectangles of the target container;
step 2-8: computing the areas and aspect ratios of all rectangles in the candidate set, removing rectangles whose area is larger than the threshold T_S2, and screening out the rectangle that conforms to the aspect ratio of a standard container as the bounding rectangle of the target container;
step 2-9: taking the upper-left corner coordinates (u_1, v_1) and the lower-right corner coordinates (u_2, v_2) of the bounding rectangle of the target container, the position of the container center point in the image coordinate system is computed as (u_T, v_T) = ((u_1 + u_2)/2, (v_1 + v_2)/2), i.e. the planar position of the target container;
step 2-10: using the bounding rectangle of the target container, segmenting the target container from the acquired image by means of a mask for subsequent ranging and spatial positioning;
step 2-11: the two cameras of the binocular camera system each perform the operations of steps 2-1 to 2-10 to realize identification and planar positioning of the target container.
Further, step S3 specifically includes the following steps:
step 3-1: extracting keypoint features from the target container images acquired by the two cameras of the binocular camera system, matching the keypoints, and screening out mismatches;
step 3-2: computing the disparity d of the target container between the two cameras of the binocular camera system by stereo matching, and computing the preliminary distance between the target container and the cameras according to the binocular vision principle as

    Z_est = α · D / d

where D is the baseline distance between the centers of the two cameras, α is an intrinsic camera parameter obtained by camera calibration, and d = u_l − u_r, where u_l is the horizontal coordinate of the target container in the left camera image coordinate system and u_r is the horizontal coordinate in the right camera image coordinate system;
step 3-3: with the installation height of the binocular cameras above the yard ground being H_C and the binocular ranging result being Z_est, the preliminary estimated height of the target container in the yard is H_est = H_C − Z_est; the height h of the target container and the number n of stacked layers are then obtained from

    (h, n) = (p, i),  if |H_est − i·p| ≤ δ for some i in [1, N]
             (q, i),  if |H_est − i·q| ≤ δ for some i in [1, N]
             (F, F),  otherwise

where p and q are the heights of the two existing standard container specifications, δ is the tolerance of binocular ranging, N is the maximum number of stacked layers of a single column of containers, and i is a possible number of stacked layers of a single column, a positive integer in the range [1, N]; if the preliminary height estimate H_est satisfies neither of the first two conditions, i.e. the binocular ranging error exceeds the allowable range, the preliminary estimation fails, (F, F) may take any negative value, and target container identification and binocular ranging should be performed again;
step 3-4: from the height h and the number of stacked layers n obtained in step 3-3, the accurate height of the target container in the yard is

    H_T = n · h;

step 3-5: from the accurate height H_T of the target container in the yard and the installation height H_C of the binocular cameras above the yard ground, the actual distance between the target container and the binocular cameras is computed as Z_T = H_C − H_T, realizing accurate ranging of the target container;
step 3-6: from the actual distance Z_T between the target container and the binocular cameras and the planar position coordinates (u_T, v_T) of the target container, the spatial position coordinates (X_T, Y_T, Z_T) of the target container in the camera coordinate system are obtained by inverse perspective transformation as

    X_T = u_T · Z_T / f
    Y_T = v_T · Z_T / f

where f is the focal length of the camera;
further, step S4 specifically includes the following steps:
step 4-1: painting markers of specific colors and/or regular geometric shapes on the corners or edges of the lower top surface of the lifting appliance;
step 4-2: using color threshold segmentation to segment the pixels in the image preprocessed in step 2-2 that satisfy the color threshold T_RGB, removing noise points in the image, and filling gaps in the connected regions to obtain color threshold segmentation images;
step 4-3: performing regular geometric shape detection on each color threshold segmentation image obtained in step 4-2, obtaining the center point coordinates of each regular geometric shape, and computing the area S_i of each shape in the image, where i is the index of the shape marker;
step 4-4: from the center point coordinates of the regular geometric shapes, computing the coordinates (u_L, v_L) of the center of the lower top surface of the lifting appliance;
step 4-5: computing the distance Z_L between the lower top surface of the lifting appliance and the trolley;
step 4-6: finally, obtaining the spatial position coordinates (X_L, Y_L, Z_L) of the lifting appliance by the inverse perspective transformation formula

    X_L = u_L · Z_L / f
    Y_L = v_L · Z_L / f.
further, the distance Z_L between the lower top surface of the lifting appliance and the trolley computed in step 4-5 is obtained as follows: from S_i and the distance-area ratio k calibrated in advance, Z_L is computed as

    Z_L = (1/I) · Σ_{i=1}^{I} k / √S_i

where I is the number of circular or rectangular markers.
Further, the distance Z between the top surface of the lower part of the lifting appliance and the trolley is obtained through calculation in the steps 4 to 5 L And may also be obtained by a laser ranging sensor.
Advantageous effects: two binocular camera systems are deployed on the horizontally movable trolley above the crane beam, which effectively reduces camera deployment difficulty and maintenance cost. Moreover, the binocular cameras and the laser ranging sensor mounted on the trolley simultaneously realize identification, ranging and positioning of both the target container and the lifting appliance, reducing the number of cameras required. To meet the positioning accuracy requirement, the technical scheme makes full use of the structural information of port containers to improve the ranging accuracy of the vision system.
Drawings
FIG. 1 is a schematic structural diagram of a binocular camera constructed according to the present invention;
FIG. 2 is a front view of a schematic of a deployment scenario of a binocular camera and laser range sensor;
FIG. 3 is a left side view of a schematic of a deployment scenario of a binocular camera and laser range sensor;
FIG. 4 is a flow chart of the method of the present invention;
FIG. 5 is a schematic view of the principle of binocular ranging;
FIG. 6 is a schematic diagram of the marker positions on the lower part of the spreader; the circles are the positions of the red markers in the embodiment;
fig. 7 is a schematic diagram of the marker positions on the lower part of the spreader; the black rectangular bars are the positions of the green markers in the embodiment.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in FIGS. 1-3, the visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers is based on the following hardware: a trolley 2 runs on the beam of the container bridge crane; binocular camera systems are mounted on the front, rear, left and right sides of the trolley; a laser ranging sensor 3 is installed at the bottom of the trolley; and a spreader 5 is suspended from the bottom of the trolley by lifting ropes 4, the spreader being used to lift the target container to the designated position.
Based on the hardware system, the method for visual identification, distance measurement and positioning in the automatic loading and unloading operation of the port container of the embodiment comprises the following steps:
step 1: building binocular camera systems and deploying them on the two sides of the trolley that moves on the beam of the container bridge crane;
step 1-1: fixing two cameras 6 of the same model and specification on the datum plane of a rigid support 7 to form one binocular camera system, so that the imaging planes of the two cameras are coplanar and their optical axes 1 are parallel, with the installation datum line on the support as their common perpendicular; the distance between the datum lines of the two cameras is D, the camera frame rate is above 60 FPS, and the resolution is above 2K, as shown in FIG. 1;
step 1-2: deploying two such binocular camera systems at the same height on the two sides of the movable trolley of the crane, perpendicular to the direction of travel of the trolley, so that each binocular camera system can image at least one end 8 of the top surfaces of the target container and the spreader, as shown in FIGS. 2 and 3.
the flow chart of the method of the invention is shown in fig. 4, and comprises the following steps:
step 2: identifying the target container and locating it in the image plane;
step 2-1: using the two binocular camera systems to acquire images of the container and the spreader below, and filtering and denoising the acquired images. The embodiment of the invention uses median filtering for denoising: a rectangular sliding window of 3×3 pixels is moved over the image, and the pixel at the center of the window is replaced by the median of all pixels in the window, thereby removing noise from the image;
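A minimal sketch of this median-filtering step (assuming OpenCV; the function name and the BGR input format are illustrative):

```python
import cv2

def denoise(frame_bgr):
    # 3x3 median filter: each pixel is replaced by the median of its
    # 3x3 neighborhood, which suppresses salt-and-pepper noise while
    # preserving container edges better than a mean filter would.
    return cv2.medianBlur(frame_bgr, 3)
```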
step 2-2: converting the filtered and denoised RGB image into the HSV color space to obtain the preprocessed image. The conversion is as follows:

    min = min(R, G, B)
    max = max(R, G, B)

    H = 0°,                                if max = min
    H = 60° × (G − B)/(max − min),         if max = R and G ≥ B
    H = 60° × (G − B)/(max − min) + 360°,  if max = R and G < B
    H = 60° × (B − R)/(max − min) + 120°,  if max = G
    H = 60° × (R − G)/(max − min) + 240°,  if max = B

    S = 0 if max = 0, otherwise (max − min)/max
    V = max

R, G and B denote the red, green and blue color components of the image, each ranging over 0-1. After conversion, the hue H ranges over 0-360°, the saturation S over 0-1, and the value V over 0-1;
step 2-3: extracting an image block of size P×P (P = 32, unit: pixel, the same below) from a pre-collected standard container image as the container color template, and computing its statistical histogram in the HSV color space for calculating the matching degree between the container color template and the acquired image;
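The conversion and the template histogram of step 2-3 could look as follows in OpenCV (a sketch, not the patented implementation; note that OpenCV's 8-bit HSV uses H in [0, 180) and S, V in [0, 255] rather than the 0-360°/0-1 ranges above, and the bin counts are illustrative):

```python
import cv2

def hsv_template_histogram(template_bgr):
    # Convert the 32x32 container color template to HSV and build its
    # statistical histogram over the H and S channels.
    tmpl_hsv = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([tmpl_hsv], [0, 1], None, [30, 32],
                        [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist
```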
step 2-4: taking the region of size N×M at the center of the image preprocessed in step 2-2 as the region of interest, and traversing it with a window of size n×n (8×8 in the embodiment of the invention) and step length l (2 in the embodiment of the invention). The color-space histogram matching degree between the image in each window and the container color template is computed, and all windows whose matching degree exceeds the threshold T_M are merged to obtain the image segmentation result of the target container. The matching degree is computed with the Bhattacharyya distance as follows:

    d(a, b) = sqrt( 1 − (1 / sqrt(ā · b̄ · N²)) · Σ_{i=1}^{N} sqrt(a_i · b_i) )

where a_i and b_i are a pair of corresponding components of the HSV color-space histograms of the color template and of the window to be matched, ā and b̄ are their mean values, N is the number of components in each histogram, and d(a, b) is the matching degree between the color template and the window computed with the Bhattacharyya distance;
step 2-5: taking the minimum bounding rectangle of the target container segmentation result obtained in step 2-4, expanding it outward by K pixels as a new region of interest, and performing edge detection within this region; the embodiment of the invention uses the Canny edge detection algorithm;
step 2-6: combining the valid edge pixels obtained by edge detection into the contour features of the target container;
step 2-7: performing polygon fitting on the contour features of the target container, extracting all convex quadrilaterals, and removing those whose area is smaller than the threshold T_S1 (T_S1 is set according to the image resolution). The angles between all adjacent sides of each convex quadrilateral are computed in turn, and if all angles fall within the threshold range [T_D1, T_D2] ([80°, 100°] in this embodiment), the convex quadrilateral is judged to be a rectangle. All rectangles satisfying this condition form the candidate set of bounding rectangles of the target container;
step 2-8: computing the areas and aspect ratios of all rectangles in the candidate set, removing rectangles whose area is larger than the threshold T_S2 (T_S2 is set according to the image resolution), and screening out the rectangle that conforms to the aspect ratio of a standard container as the bounding rectangle of the target container;
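Steps 2-5 to 2-8 could be sketched as follows (the area thresholds are placeholders to be set per image resolution; the right-angle test uses [80°, 100°] as in the embodiment):

```python
import cv2
import numpy as np

def container_box_candidates(mask, k=20, t_s1=1e3, t_s2=1e6,
                             t_d1=80.0, t_d2=100.0):
    # Expand the minimum bounding rectangle of the segmentation by K pixels,
    # detect Canny edges inside it, fit polygons, and keep convex
    # quadrilaterals whose corner angles are all close to 90 degrees.
    x, y, w, h = cv2.boundingRect(mask)
    x0, y0 = max(x - k, 0), max(y - k, 0)
    roi = mask[y0:y + h + k, x0:x + w + k]
    edges = cv2.Canny(roi, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        quad = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(quad) != 4 or not cv2.isContourConvex(quad):
            continue
        area = cv2.contourArea(quad)
        if not (t_s1 < area < t_s2):
            continue
        pts = quad.reshape(4, 2).astype(float)
        angles = []
        for i in range(4):                     # angle at each vertex
            v1 = pts[i - 1] - pts[i]
            v2 = pts[(i + 1) % 4] - pts[i]
            cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
        if all(t_d1 <= a <= t_d2 for a in angles):
            candidates.append(quad)
    # Candidates are then screened by the standard container aspect ratio
    # (step 2-8) to select the final bounding rectangle.
    return candidates
```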
step 2-9: taking the upper-left corner coordinates (u_1, v_1) and the lower-right corner coordinates (u_2, v_2) of the bounding rectangle of the target container, the position of the container center point in the image coordinate system is computed as (u_T, v_T) = ((u_1 + u_2)/2, (v_1 + v_2)/2), i.e. the planar position of the target container;
step 2-10: using the bounding rectangle of the target container, segmenting the target container from the acquired image by means of a mask for subsequent ranging and spatial positioning;
step 2-11: the two cameras of the binocular camera system each perform the operations of steps 2-1 to 2-10 to realize identification and planar positioning of the target container;
step 3: ranging and spatial positioning of the target container;
step 3-1: extracting keypoint features from the target container images acquired by the two cameras of the binocular camera system (the embodiment of the invention uses the SIFT algorithm) and matching the keypoints (the embodiment uses brute-force (BF) matching); mismatches are screened out by left-right consistency checking and epipolar constraint checking;
step 3-2: computing the disparity d of the target container between the two cameras by stereo matching and, as shown in FIG. 5, computing the preliminary distance between the target container and the cameras according to the binocular vision principle as

    Z_est = α · D / d

where D is the distance between the centers of the two cameras (the baseline distance) and α is an intrinsic camera parameter obtained by camera calibration; d = u_l − u_r, where u_l is the horizontal coordinate of the target container in the left camera image coordinate system and u_r is the horizontal coordinate in the right camera image coordinate system;
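A sketch of steps 3-1/3-2 (SIFT plus brute-force matching, with cross-checking as a stand-in for the left-right consistency test; epipolar screening is omitted, and alpha is assumed to be the calibrated focal length in pixels):

```python
import cv2
import numpy as np

def preliminary_distance(img_left, img_right, alpha, baseline_d):
    # Detect and match keypoints on the masked container images, then
    # estimate the disparity d = u_l - u_r and the preliminary distance
    # Z_est = alpha * D / d from the binocular geometry.
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = bf.match(des_l, des_r)
    disparities = [kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0]
                   for m in matches]
    d = float(np.median(disparities))   # median is robust to mismatches
    return alpha * baseline_d / d
```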
step 3-3: with the installation height of the binocular cameras above the yard ground being H_C and the binocular ranging result being Z_est, the preliminary estimated height of the target container in the yard is H_est = H_C − Z_est; the height h of the container and the number n of stacked layers can then be obtained from

    (h, n) = (p, i),  if |H_est − i·p| ≤ δ for some i in [1, N]
             (q, i),  if |H_est − i·q| ≤ δ for some i in [1, N]
             (F, F),  otherwise

where p and q are the heights of the two existing standard container specifications, p = 2591 mm and q = 2896 mm; δ is the tolerance of binocular ranging, δ = 150 mm in the embodiment of the invention; N is the maximum number of stacked layers of a single column of containers, N = 7 in the embodiment; and i is a possible number of stacked layers of a single column, a positive integer in the range [1, N]. If the preliminary height estimate H_est satisfies neither of the first two conditions, i.e. the binocular ranging error exceeds the allowable range, the preliminary estimation fails, (F, F) may take any negative value, and target container identification and binocular ranging should be performed again;
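Snapping the noisy height estimate to a whole number of standard-height layers reduces to a small search; a sketch with the embodiment's constants, returning negative values for the failure case (F, F):

```python
def container_height_and_layers(h_est_mm, p=2591, q=2896,
                                delta=150, n_max=7):
    # Try both standard container heights and 1..N stacked layers; accept
    # the first stack height within the binocular ranging tolerance delta.
    for h in (p, q):
        for i in range(1, n_max + 1):
            if abs(h_est_mm - i * h) <= delta:
                return h, i            # (h, n): height and stacked layers
    return -1, -1                      # (F, F): re-run identification/ranging
```

For example, an estimate of 7800 mm snaps to (2591, 3), i.e. three stacked 2591 mm containers, since |7800 − 3 × 2591| = 27 mm ≤ 150 mm.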
step 3-4: from the height h and the number of stacked layers n obtained in step 3-3, the accurate height of the target container in the yard is

    H_T = n · h;

step 3-5: from the accurate height H_T of the target container in the yard and the installation height H_C of the binocular cameras above the yard ground, the actual distance between the target container and the binocular cameras is computed as Z_T = H_C − H_T, realizing accurate ranging of the target container;
step 3-6: from the actual distance Z_T between the target container and the binocular cameras and the planar position coordinates (u_T, v_T) of the target container, the spatial position coordinates (X_T, Y_T, Z_T) of the target container in the camera coordinate system are obtained by inverse perspective transformation as

    X_T = u_T · Z_T / f
    Y_T = v_T · Z_T / f

where f is the focal length of the camera;
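Steps 3-4 to 3-6 then reduce to a few lines (a sketch; u and v are assumed to be measured from the principal point, and f is the focal length in pixels):

```python
def locate_container(h, n, h_camera, u, v, f):
    # Accurate stack height, actual camera-to-container distance, and
    # inverse perspective transform into camera coordinates (X_T, Y_T, Z_T).
    h_target = n * h
    z = h_camera - h_target
    return u * z / f, v * z / f, z
```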
step 4: identification, ranging and positioning of the spreader;
step 4-1: painting markers of specific colors and/or regular geometric shapes on the corners or edges of the lower part of the spreader. For example, regular geometric markers are painted on the corners of the lower part of the spreader; the embodiment of the invention uses red circles (RGB value [255, 0, 0]) and ensures that at least two markers lie on diagonally opposite corners equidistant from the center of the spreader, as shown in FIG. 6. Alternatively, regular geometric markers are painted on the lower edges of the spreader; the embodiment uses green rectangles (RGB value [0, 255, 0]) with at least two markers located at the centers of two adjacent edges of the spreader, as shown in FIG. 7;
step 4-2: using color threshold segmentation to segment the pixels in the image preprocessed in step 2-2 that satisfy the color threshold T_RGB, then processing the segmented image with morphological operations to remove noise and fill gaps in the connected regions;
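A sketch of the marker segmentation of step 4-2 for the red circular markers (the HSV bounds are illustrative assumptions; red wraps around the hue axis in OpenCV, hence the two ranges; morphological opening removes noise points and closing fills gaps):

```python
import cv2
import numpy as np

def segment_red_markers(hsv):
    lo1, hi1 = np.array([0, 120, 80]), np.array([10, 255, 255])
    lo2, hi2 = np.array([170, 120, 80]), np.array([180, 255, 255])
    mask = cv2.inRange(hsv, lo1, hi1) | cv2.inRange(hsv, lo2, hi2)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop noise points
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill region gaps
    return mask
```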
step 4-3: performing regular geometric shape detection on each color threshold segmentation image obtained in step 4-2, obtaining the center point coordinates of each regular geometric shape, and computing the area S_i of each shape in the image, where i is the index of the shape marker;
step 4-4: from the center point coordinates of the regular geometric shapes, computing the coordinates (u_L, v_L) of the center of the lower top surface of the spreader;
And 4-5: distance Z between top surface of lower part of lifting appliance and trolley L Can pass through twoThe method comprises the following steps:
1) according to S i And calibrating the obtained distance to area ratio in advance
Figure BDA0003409984430000092
Is calculated to obtain Z L
Figure BDA0003409984430000093
Wherein I represents the number of circular or rectangular marks.
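A sketch of method 1: since the image area of a planar marker scales as 1/Z², the distance follows from a pre-calibrated ratio of the form k = Z0·√S0 taken at a known distance Z0; the averaging over the I markers is part of this reconstruction's assumptions:

```python
import numpy as np

def spreader_distance(areas, k):
    # areas: image areas S_i of the I detected markers; k: distance-area
    # ratio calibrated in advance at a known spreader distance.
    return float(np.mean([k / np.sqrt(s) for s in areas]))
```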
2) Z_L is obtained by the laser ranging sensor;
step 4-6: finally, the spatial position coordinates (X_L, Y_L, Z_L) of the spreader are obtained by the inverse perspective transformation formula

    X_L = u_L · Z_L / f
    Y_L = v_L · Z_L / f.
finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application and not for limiting the scope of protection thereof, and although the present application is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: numerous variations, modifications, and equivalents will occur to those skilled in the art upon reading the present application and are within the scope of the claims appended hereto.

Claims (6)

1. A visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers, characterized by comprising the following steps:
step S1: building two binocular camera systems and deploying them on the two sides of the movable trolley on the beam of the container bridge crane;
step S2: identifying the target container and locating it in the image plane;
step S3: after the planar position of the target container is obtained, performing preliminary ranging of the target container with the binocular camera system, accurately calculating the height of the target container by combining the structural size information of the target container, and determining the spatial position coordinates of the target container through inverse perspective transformation, where the structural size information of the container refers to the heights of the two existing standard container specifications;
step S4: locating the lifting appliance in the image plane by identifying pre-painted markers on it, and obtaining the distance between the lifting appliance and the moving trolley of the container bridge crane with a monocular camera or a laser ranging sensor, thereby determining the spatial position coordinates of the lifting appliance;
step S2 specifically includes the following steps:
step 2-1: using the two binocular camera systems to acquire RGB images of the container and the lifting appliance below, and filtering and denoising the acquired RGB images;
step 2-2: converting the filtered and denoised RGB image into the HSV color space to obtain a preprocessed image, where during conversion the hue H ranges over 0-360°, the saturation S over 0-1, and the value V over 0-1;
step 2-3: extracting an image block of size P×P from a pre-collected standard container image as the container color template, and computing the statistical histogram of the container color template in the HSV color space;
step 2-4: taking the region of size N×M at the center of the image preprocessed in step 2-2 as the region of interest, traversing it with a window of size n×n and step length l, and computing the color-space histogram matching degree between the image in each window and the container color template; all windows whose matching degree exceeds the threshold T_M are merged to obtain the image segmentation result of the target container;
step 2-5: taking the minimum bounding rectangle of the target container segmentation result obtained in step 2-4, expanding it outward by K pixels as a new region of interest, and performing edge detection within this region;
step 2-6: combining the valid edge pixels obtained by edge detection into the contour features of the target container;
step 2-7: performing polygon fitting on the contour features of the target container, extracting all convex quadrilaterals, and removing those whose area is smaller than the threshold T_S1; then computing in turn the angles between all adjacent sides of each remaining convex quadrilateral, and if all angles fall within the threshold range [T_D1, T_D2], the quadrilateral is judged to be a rectangle; all rectangles satisfying this condition form the candidate set of bounding rectangles of the target container;
step 2-8: computing the areas and aspect ratios of all rectangles in the candidate set, removing rectangles whose area is larger than the threshold T_S2, and screening out the rectangle that conforms to the aspect ratio of a standard container as the bounding rectangle of the target container;
step 2-9: taking the upper-left corner coordinates (u_1, v_1) and the lower-right corner coordinates (u_2, v_2) of the bounding rectangle of the target container, the position of the container center point in the image coordinate system is computed as (u_T, v_T) = ((u_1 + u_2)/2, (v_1 + v_2)/2), i.e. the planar position of the target container;
step 2-10: using the bounding rectangle of the target container, segmenting the target container from the acquired image by means of a mask for subsequent ranging and spatial positioning;
step 2-11: the two cameras of the binocular camera system each perform the operations of steps 2-1 to 2-10 to realize identification and planar positioning of the target container.
2. The visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers according to claim 1, characterized in that step S1 specifically includes the following steps:
step 1-1: fixing two cameras of the same model and specification on the datum plane of a rigid support to form one binocular camera system, so that the imaging planes of the two cameras are coplanar and their optical axes are parallel, with the installation datum line on the support as their common perpendicular; the baseline distance between the two cameras is D, and cameras with a frame rate above 60 FPS and a resolution above 2K are selected;
step 1-2: deploying two such binocular camera systems at the same height on the two sides of the movable trolley on the crane beam, perpendicular to the direction of travel of the trolley, so that each binocular camera system can image at least one end of the top surfaces of the target container and the lifting appliance.
3. The visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers according to claim 1, characterized in that step S3 specifically includes the following steps:
step 3-1: extracting keypoint features from the target container images acquired by the two cameras of the binocular camera system, matching the keypoints, and screening out mismatches;
step 3-2: computing the disparity d of the target container between the two cameras of the binocular camera system by stereo matching, and computing the preliminary distance between the target container and the cameras according to the binocular vision principle as

    Z_est = α · D / d

where D is the baseline distance between the centers of the two cameras, α is an intrinsic camera parameter obtained by camera calibration, and d = u_l − u_r, where u_l is the horizontal coordinate of the target container in the left camera image coordinate system and u_r is the horizontal coordinate in the right camera image coordinate system;
step 3-3: with the installation height of the binocular cameras above the yard ground being H_C and the binocular ranging result being Z_est, the preliminary estimated height of the target container in the yard is H_est = H_C − Z_est; the height h of the target container and the number n of stacked layers are obtained from

    (h, n) = (p, i),  if |H_est − i·p| ≤ δ for some i in [1, N]
             (q, i),  if |H_est − i·q| ≤ δ for some i in [1, N]
             (F, F),  otherwise

where p and q are the heights of the two existing standard container specifications, δ is the tolerance of binocular ranging, N is the maximum number of stacked layers of a single column of containers, and i is a possible number of stacked layers of a single column, a positive integer in the range [1, N]; if the preliminary height estimate H_est satisfies neither of the first two conditions, i.e. the binocular ranging error exceeds the allowable range, the preliminary estimation fails, (F, F) may take any negative value, and target container identification and binocular ranging should be performed again;
step 3-4: from the height h and the number of stacked layers n obtained in step 3-3, the accurate height of the target container in the yard is

    H_T = n · h;

step 3-5: from the accurate height H_T of the target container in the yard and the installation height H_C of the binocular cameras above the yard ground, the actual distance between the target container and the binocular cameras is computed as Z_T = H_C − H_T, realizing accurate ranging of the target container;
step 3-6: from the actual distance Z_T between the target container and the binocular cameras and the planar position coordinates (u_T, v_T) of the target container, the spatial position coordinates (X_T, Y_T, Z_T) of the target container in the camera coordinate system are obtained by inverse perspective transformation as

    X_T = u_T · Z_T / f
    Y_T = v_T · Z_T / f

where f denotes the focal length of the camera.
4. The visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers according to claim 3, characterized in that step S4 specifically includes the following steps:
step 4-1: painting markers of specific colors and/or regular geometric shapes on the corners or edges of the lower top surface of the lifting appliance;
step 4-2: using color threshold segmentation to segment the pixels in the image preprocessed in step 2-2 that satisfy the color threshold T_RGB, removing noise points in the image, and filling gaps in the connected regions to obtain color threshold segmentation images;
step 4-3: performing regular geometric shape detection on each color threshold segmentation image obtained in step 4-2, obtaining the center point coordinates of each regular geometric shape, and computing the area S_i of each shape in the image, where i is the index of the shape marker;
step 4-4: from the center point coordinates of the regular geometric shapes, computing the coordinates (u_L, v_L) of the center of the lower top surface of the lifting appliance;
step 4-5: computing the distance Z_L between the lower top surface of the lifting appliance and the trolley;
step 4-6: finally, obtaining the spatial position coordinates (X_L, Y_L, Z_L) of the lifting appliance by the inverse perspective transformation formula

    X_L = u_L · Z_L / f
    Y_L = v_L · Z_L / f.
5. The visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers according to claim 4, characterized in that the distance Z_L between the lower top surface of the lifting appliance and the trolley computed in step 4-5 is obtained as follows: from S_i and the distance-area ratio k calibrated in advance, Z_L is computed as

    Z_L = (1/I) · Σ_{i=1}^{I} k / √S_i

where I is the number of circular or rectangular markers.
6. The visual identification, distance measurement and positioning method for the automatic loading and unloading of port containers according to claim 4, characterized in that the distance Z_L between the lower top surface of the lifting appliance and the trolley computed in step 4-5 is obtained by a laser ranging sensor.
CN202111525088.5A 2021-12-14 2021-12-14 Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation Active CN114219842B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111525088.5A CN114219842B (en) 2021-12-14 2021-12-14 Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111525088.5A CN114219842B (en) 2021-12-14 2021-12-14 Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation

Publications (2)

Publication Number Publication Date
CN114219842A (en) 2022-03-22
CN114219842B (en) 2022-08-12

Family

ID=80701689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111525088.5A Active CN114219842B (en) 2021-12-14 2021-12-14 Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation

Country Status (1)

Country Link
CN (1) CN114219842B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758001B (en) * 2022-05-11 2023-01-24 北京国泰星云科技有限公司 PNT-based automatic traveling method for tyre crane
CN114972541B (en) * 2022-06-17 2024-01-26 北京国泰星云科技有限公司 Tire crane stereoscopic anti-collision method based on fusion of three-dimensional laser radar and binocular camera
CN115077385B (en) * 2022-07-05 2023-09-26 北京斯年智驾科技有限公司 Unmanned container pose measuring method and measuring system thereof
CN115578237A (en) * 2022-09-22 2023-01-06 中车资阳机车有限公司 Lock hole positioning system and method for split container spreader
CN116681721B (en) * 2023-06-07 2023-12-29 东南大学 Linear track detection and tracking method based on vision
CN116452467B (en) * 2023-06-16 2023-09-22 山东曙岳车辆有限公司 Container real-time positioning method based on laser data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105271004A (en) * 2015-10-26 2016-01-27 上海海事大学 Lifting device space positioning device adopting monocular vision and method
CN105480848A (en) * 2015-12-21 2016-04-13 上海新时达电气股份有限公司 Port crane lifting system and stacking method thereof
CN106096606A (en) * 2016-06-07 2016-11-09 浙江工业大学 A kind of container profile localization method based on fitting a straight line
CN109319526A (en) * 2018-11-16 2019-02-12 西安中科光电精密工程有限公司 A kind of container entrucking of bagged material and stocking system and method
CN112194011A (en) * 2020-08-31 2021-01-08 南京理工大学 Tower crane automatic loading method based on binocular vision
CN113651242A (en) * 2021-10-18 2021-11-16 苏州汇川控制技术有限公司 Control method and device for container crane and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102923578A (en) * 2012-11-13 2013-02-13 扬州华泰特种设备有限公司 Automatic control system of efficient handing operation of container crane
CN204778452U (en) * 2015-07-13 2015-11-18 常州基腾电气有限公司 A buffer stop for crane sling
CN111243016B (en) * 2018-11-28 2024-03-19 周口师范学院 Automatic container identification and positioning method
CN110659634A (en) * 2019-08-23 2020-01-07 上海撬动网络科技有限公司 Container number positioning method based on color positioning and character segmentation
CN112102405B (en) * 2020-08-26 2022-11-15 东南大学 Robot stirring-grabbing combined method based on deep reinforcement learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and design of automatic loading and unloading *** for container cranes; Liang Xiaobo et al.; Journal of Computer Applications; 2015-06-20; Vol. 35; pp. 229-231 *

Also Published As

Publication number Publication date
CN114219842A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN114219842B (en) Visual identification, distance measurement and positioning method in port container automatic loading and unloading operation
CN111127318B (en) Panoramic image splicing method in airport environment
US8724885B2 (en) Integrated image processor
CN112070838B (en) Object identification and positioning method and device based on two-dimensional-three-dimensional fusion characteristics
CN113011388B (en) Vehicle outer contour size detection method based on license plate and lane line
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN103761765A (en) Three-dimensional object model texture mapping algorithm based on mapping boundary optimization
CN115880372A (en) Unified calibration method and system for external hub positioning camera of automatic crane
CN105469401B (en) A kind of headchute localization method based on computer vision
CN114782357A (en) Self-adaptive segmentation system and method for transformer substation scene
Han et al. Target positioning method in binocular vision manipulator control based on improved canny operator
CN107038703A (en) A kind of goods distance measurement method based on binocular vision
CN113379684A (en) Container corner line positioning and automatic container landing method based on video
CN105005985B (en) Backlight image micron order edge detection method
CN116664591A (en) Automatic container stacking method based on vision system
CN115909216A (en) Cargo ship hatch detection method and system based on laser radar and monocular camera
CN106097331B (en) A kind of container localization method based on lockhole identification
CN114897981A (en) Hanger pose identification method based on visual detection
CN113793315A (en) Monocular vision-based camera plane and target plane included angle estimation method
CN113674360A (en) Covariant-based line structured light plane calibration method
CN113963107A (en) Large target three-dimensional reconstruction method and system based on binocular vision
CN112464789A (en) Power transmission line extraction method based on line characteristics
CN113091627B (en) Method for measuring vehicle height in dark environment based on active binocular vision
JP7191352B2 (en) Method and computational system for performing object detection
CN109905692A (en) A kind of machine new vision system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant