CN113658244A - Method for identifying three-dimensional geometric dimension of navigation ship in bridge area - Google Patents
- Publication number
- CN113658244A (application number CN202110741113.7A)
- Authority
- CN
- China
- Prior art keywords
- ship
- navigation
- camera
- pixel
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/60—Image analysis; analysis of geometric attributes
- G06N3/045—Neural networks; combinations of networks
- G06N3/08—Neural networks; learning methods
- G06T7/181—Segmentation; edge detection involving edge growing or edge linking
- G06T7/85—Camera calibration; stereo camera calibration
- G06T2207/10021—Image acquisition modality: stereoscopic video; stereoscopic image sequence
- G06T2207/20036—Special algorithmic details: morphological image processing
Abstract
The invention provides a method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area, comprising the following steps: a camera is arranged at mid-span on the underside of the girder of the bridge's navigation span to acquire images of navigating ships, and calibration buoys are arranged on both sides of the channel at a position from the camera plane determined by the required identification distance; a ship detection data set is produced from the acquired images, a ship detector based on a deep convolutional network is trained to obtain the rough position of a ship, and the precise contour of the ship is then obtained by combining morphological operations with contour extraction; the space in which the ship navigates is modeled with a projective transformation equation, and the conversion relation between the ship's three-dimensional geometric dimensions and its contour in the image is determined, from which the three-dimensional geometric dimensions of the ship are calculated. The method accurately computes the three-dimensional geometric dimensions of a navigating ship with a single camera, reducing equipment cost while preserving identification accuracy.
Description
Technical Field
The invention relates to the field of bridge engineering health monitoring, and in particular to a method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area.
Background
Bridges are the throat of the transportation artery. In recent years bridge construction has surged: more and more bridges have entered service, and their role in improving China's transportation efficiency and driving rapid economic growth cannot be overlooked. Shipping has likewise flourished with economic development. With more bridges and denser ship traffic, ship-bridge collision accidents have become increasingly frequent. According to incomplete statistics, more than 140 serious ship-bridge collisions causing casualties and major property losses occurred from 1959 to 2011, more than 40 of them in China, and the frequency has risen sharply since 2011. Although ship-bridge collisions are accidental, once one occurs it gravely threatens both the normal operation of the bridge and the safety of the ship. The existing countermeasure is to install cameras above the piers and below the main girder in key areas of the bridge and issue collision warnings through manual monitoring, which has considerably eased the problem. Manual monitoring, however, has two unavoidable weaknesses: first, a person concentrating on observation quickly develops visual fatigue, which greatly reduces the accuracy of judgment; second, subjective individual differences cannot be overcome, as people with different technical backgrounds and working experience apply different risk-evaluation standards to ship-bridge collision.
With the development of sensing technology, many researchers have tried to solve the problem of ship identification in bridge areas. These methods, however, have two problems: first, the equipment is expensive and the signals are not intuitive; second, most studies treat a navigating ship as a point and cannot acquire its three-dimensional geometric dimensions in real time. Providing a robust method that accurately identifies the three-dimensional geometric dimensions of a ship, and thereby an automated, intelligent solution for collision prevention in bridge areas, is an urgent research problem.
Disclosure of Invention
In view of these shortcomings, the invention aims to provide a method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area, solving the problems of expensive equipment and high computational cost in existing approaches.
The technical scheme adopted by the invention is as follows: a method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area comprises the following steps:
A camera is arranged at mid-span on the underside of the girder of the bridge's navigation span to acquire images of navigating ships. The camera is set up as follows: the lens plane faces the channel and is tilted downward by the angle

θ = arctan(H / R) (1)

where H is the height of the camera lens above the water surface and R is the required identification distance;
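A quick numerical check of the tilt angle: the arctan(H / R) form reproduces the 3.4 degrees worked out in embodiment 2 (a minimal sketch; the function name is illustrative):

```python
import math

def camera_tilt_deg(H: float, R: float) -> float:
    """Downward tilt of the lens plane: theta = arctan(H / R), in degrees."""
    return math.degrees(math.atan2(H, R))

# Values from embodiment 2: lens 12 m above the water, 200 m identification distance.
theta = camera_tilt_deg(12.0, 200.0)
print(round(theta, 1))  # 3.4
```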
Two calibration buoys are arranged on each side of the channel at a suitable position whose distance from the camera plane is set by the required identification distance; this distance is the minimum warning distance determined by the bridge grade and the navigation requirements, and navigating ships are identified at this distance;
The camera collects pictures of various navigating ships, and the ships are annotated with a manual labeling tool to produce a data set;
the ship images in the data set are augmented with a data augmentation method and then divided into a training set and a test set;
a ship detector is constructed based on a deep convolutional neural network and trained with the training set; the network weights are stored once testing with the test set meets the accuracy requirement;
The camera collects video, and the pixel coordinates of the calibration buoys are manually marked in the first frame; the two buoy pixel coordinates on each side of the channel determine a pixel line, i.e. the pixel position of that channel boundary, and the region enclosed by the two boundary lines corresponds to the real channel;
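The channel-boundary step can be sketched with elementary line geometry (an illustrative sketch: function names and all pixel coordinates are hypothetical):

```python
def line_through(p, q):
    """Coefficients (a, b, c) of the pixel line a*x + b*y + c = 0 through two buoy pixels."""
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1, x1 - x2, x2 * y1 - x1 * y2)

def side(line, pt):
    """Signed side of a point relative to a line (zero on the line)."""
    a, b, c = line
    return a * pt[0] + b * pt[1] + c

def in_channel(left_buoys, right_buoys, pt):
    """True if pixel pt lies between the two boundary lines."""
    lef = line_through(*left_buoys)
    rig = line_through(*right_buoys)
    # A point is inside when it falls on the same side of each boundary
    # as the opposite boundary's buoys do.
    return (side(lef, pt) * side(lef, right_buoys[0]) >= 0 and
            side(rig, pt) * side(rig, left_buoys[0]) >= 0)

# Two buoys per side, marked in the first video frame (hypothetical coordinates).
inside = in_channel(((100, 700), (300, 200)), ((1800, 700), (1600, 200)), (900, 450))
```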
the ship detector loads the stored network weights and detects the channel region captured by the camera to obtain the rough position of a ship, which is described in the form of an enclosing rectangular box;
the detected rough ship position is cropped from the channel image, morphological operations are applied to fill holes in the ship region, and contour extraction is then used to obtain the precise contour of the ship;
The space in which the navigating ship travels is modeled with the projective transformation equation (2):

d_1 = r · W_c / R (2)

where r and R are the equivalent focal length of the camera (in pixels) and the horizontal distance from the calibration buoys to the camera, and W_c and d_1 are the actual width of the channel and the pixel width of the channel in the image, respectively.
The conversion relation between the three-dimensional geometric dimensions of the ship and its contour in the image is then determined.
Actual width W_s of the ship:

W_s = d_1^s · W_c / d_1 (3)

where d_1^s is the pixel width of the ship in the image when the bow has traveled to the required identification distance R;
Actual length L_s and height H_s of the ship:

L_s = R - R',  H_s = h_2^s · R' / r (4)

where R' is the horizontal distance from the bow to the camera at the moment the stern reaches the required identification distance R, and h_2^s and d_2^s are the pixel height and pixel width of the ship in the image when the bow is at R'; R' is calculated by the following formula:

R' = r · W_s / d_2^s (5)
The three-dimensional geometric dimensions of the navigating ship are thereby calculated.
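The size-recovery chain can be sketched end to end, assuming the standard pinhole relation (pixel size = focal length × real size / distance) for the projective transformation; function name and all numeric inputs are hypothetical:

```python
def ship_dimensions(W_c, d_1, d_1s, h_2s, d_2s, R):
    """Recover ship width, length and height from pixel measurements.

    W_c : actual channel width (m); d_1 : channel pixel width at distance R.
    d_1s: ship pixel width with the bow at R.
    h_2s, d_2s: ship pixel height/width when the stern is at R (bow at R').
    """
    r = d_1 * R / W_c              # equivalent focal length in pixels, from Eq. (2)
    W_s = d_1s * W_c / d_1         # ship width, Eq. (3)
    R_prime = r * W_s / d_2s       # bow-to-camera distance, Eq. (5)
    L_s = R - R_prime              # ship length, Eq. (4)
    H_s = h_2s * R_prime / r       # ship height, Eq. (4)
    return W_s, L_s, H_s

# Hypothetical measurements: a 150 m channel spans 600 px at R = 200 m.
W_s, L_s, H_s = ship_dimensions(W_c=150.0, d_1=600.0, d_1s=64.0, h_2s=40.0, d_2s=80.0, R=200.0)
print(W_s, L_s, H_s)  # 16.0 40.0 8.0
```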
The data augmentation method for the data set is as follows: to reflect the actual illumination changes in the bridge area, random brightness transformation, fogging, and rain-adding processing are applied to the pictures. The random brightness transformation follows formula (6):
Y = 0.299×R×(1 + randr%) + 0.587×G×(1 + randg%) + 0.114×B×(1 + randb%) (6)
where Y is the brightness of a picture pixel, R, G and B are its red, green and blue color components, and randr, randg and randb are random red, green and blue disturbance components, each a random integer between -100 and 100;
The fogging and rain-adding processing uses the inverse of a defogging and rain-removal network based on a deep convolutional neural network. When the transformation images are prepared, five levels are defined for each degradation according to its severity (level-1 to level-5 fogging, and level-1 to level-5 rain), and the inverse network is trained to obtain a fogging model and a rain model. During data augmentation the transformations are applied jointly: random brightness transformation, leveled random fogging, and rain-adding are performed on an image simultaneously.
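The brightness part of the augmentation, formula (6), can be sketched per pixel as follows (fogging and rain-adding need the trained inverse networks and are not shown; the function name and use of Python's `random` module are illustrative assumptions):

```python
import random

def jitter_brightness(pixel, rng=random):
    """Per-pixel luminance after formula (6), with random per-channel disturbances."""
    R, G, B = pixel
    randr = rng.randint(-100, 100)  # red disturbance, percent
    randg = rng.randint(-100, 100)  # green disturbance, percent
    randb = rng.randint(-100, 100)  # blue disturbance, percent
    Y = (0.299 * R * (1 + randr / 100)
         + 0.587 * G * (1 + randg / 100)
         + 0.114 * B * (1 + randb / 100))
    return Y
```

Because each disturbance lies in [-100, 100] percent, every channel factor lies in [0, 2], so the jittered luminance stays between 0 and twice the original luminance.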
The ship detector based on a deep convolutional neural network is constructed as follows: it comprises a ship multi-scale feature generation module, a ship reference-frame generation module, a ship feature fusion module and a ship classification-and-regression module.
The ship multi-scale feature generation module consists of six convolution stages, each comprising a convolution operation, an activation operation, batch normalization and dropout, producing six ship feature maps at different scales;
the ship reference-frame generation module produces the center-point coordinates and the widths and heights of the enclosing rectangular boxes: the center points are generated from the pixel coordinates of the six multi-scale ship feature maps, and the box widths and heights are determined proportionally from the sizes of the multi-scale feature maps;
the ship feature fusion module upsamples the last of the six multi-scale feature maps generated by the multi-scale feature generation module and adds it point-to-point to the fifth, fourth and third-stage feature maps; the ship classification-and-regression module performs ship-type classification and enclosing-box parameter regression. According to the ships common in bridge areas, ships are classified into four types: sand dredgers, container ships, bulk carriers and passenger ships. In box-parameter regression, the four parameters (center-point coordinates, width and height) are encoded before participating in network training and testing, and decoded afterwards to recover the box.
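The reference-frame (anchor) generation described above can be sketched as follows. This is a minimal illustration, not the patent's network: the feature-map sizes, the input image size and the scale proportions are hypothetical, and one square box per cell is used for brevity:

```python
def make_anchors(feature_sizes, image_size=300,
                 scales=(0.1, 0.2, 0.3, 0.45, 0.6, 0.75)):
    """One anchor per feature-map cell: center from the cell's pixel position,
    width/height proportional to the feature map's scale."""
    anchors = []
    for (fw, fh), s in zip(feature_sizes, scales):
        step_x, step_y = image_size / fw, image_size / fh
        for j in range(fh):
            for i in range(fw):
                cx, cy = (i + 0.5) * step_x, (j + 0.5) * step_y
                w = h = s * image_size  # box size grows with coarser feature maps
                anchors.append((cx, cy, w, h))
    return anchors

# Six hypothetical feature-map resolutions, from fine to coarse.
anchors = make_anchors([(38, 38), (19, 19), (10, 10), (5, 5), (3, 3), (1, 1)])
```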
Morphological operations and contour extraction at the rough ship position proceed as follows: the image cropped at the rough ship position is split into its three RGB channels, and each channel is binarized, giving three binary images; one morphological dilation and one morphological erosion are applied to each, giving three rough closed ship shapes. The three shapes are integrated with the following strategy: a pixel's final value is 255 when the corresponding pixels of two or three of the three images are 255, and 0 otherwise, giving the final closed ship shape. This shape is approximated with polygonal curves to obtain a series of candidate contours, the contours are sorted by enclosed area, and the largest is taken as the final precise contour of the ship.
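The two-or-three-of-three integration strategy is a per-pixel majority vote, sketched here on small nested-list masks (a real implementation would use NumPy or OpenCV arrays; names are illustrative):

```python
def majority_vote(masks):
    """Fuse three binary masks (values 0/255): a pixel is kept (255)
    when at least two of the three channel masks mark it as foreground."""
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(1 for m in masks if m[y][x] == 255)
            out[y][x] = 255 if votes >= 2 else 0
    return out

# Three 2x2 binarized channel masks (hypothetical values).
r = [[255, 0], [255, 255]]
g = [[255, 0], [0, 255]]
b = [[0, 0], [255, 255]]
fused = majority_vote([r, g, b])  # [[255, 0], [255, 255]]
```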
The invention has the following advantages: automatic identification of the three-dimensional geometric dimensions of a navigating ship relies on only a single camera. The method is convenient and accurate and improves the accuracy and stability of ship positioning in the bridge area. The whole process is automated, significantly reducing human participation in detection. The invention also meets the requirements of online monitoring, early warning and real-time data processing for collision prevention in bridge areas. It improves the automation, intelligence, accuracy and robustness of three-dimensional size identification of navigating ships and provides a solution for automated anti-collision monitoring in bridge engineering.
Drawings
FIG. 1 is a schematic view of the camera installation position;
FIG. 2 is a schematic view of the test results of the rough ship detection network;
FIG. 3 shows the contour extraction result for a navigating ship.
Detailed Description
Example 1
A method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area comprises the following steps:
A camera is arranged at mid-span on the underside of the girder of the bridge's navigation span to acquire images of navigating ships. The camera is set up as follows: the lens plane faces the channel and is tilted downward by the angle

θ = arctan(H / R) (1)

where H is the height of the camera lens above the water surface and R is the required identification distance;
Two calibration buoys are arranged on each side of the channel at a suitable position whose distance from the camera plane is set by the required identification distance; this distance is the minimum warning distance determined by the bridge grade and the navigation requirements, and navigating ships are identified at this distance;
The camera collects pictures of various navigating ships, and the ships are annotated with a manual labeling tool to produce a data set;
the ship images in the data set are augmented with a data augmentation method and then divided into a training set and a test set;
a ship detector is constructed based on a deep convolutional neural network and trained with the training set; the network weights are stored once testing with the test set meets the accuracy requirement;
The camera collects video, and the pixel coordinates of the calibration buoys are manually marked in the first frame; the two buoy pixel coordinates on each side of the channel determine a pixel line, i.e. the pixel position of that channel boundary, and the region enclosed by the two boundary lines corresponds to the real channel. Short-range camera calibration with a black-and-white chessboard calibration board reduces accuracy because it supplies the known conditions of an underdetermined equation; supplying the known conditions instead through calibration buoys lying in the same plane as the navigating ship avoids a complex calibration procedure and improves the accuracy of three-dimensional size identification;
the ship detector loads the stored network weights and detects the channel region captured by the camera to obtain the rough position of a ship, which is described in the form of an enclosing rectangular box;
the detected rough ship position is cropped from the channel image, morphological operations are applied to fill holes in the ship region, and contour extraction is then used to obtain the precise contour of the ship;
The space in which the navigating ship travels is modeled with the projective transformation equation (2):

d_1 = r · W_c / R (2)

where r and R are the equivalent focal length of the camera (in pixels) and the horizontal distance from the calibration buoys to the camera, and W_c and d_1 are the actual width of the channel and the pixel width of the channel in the image, respectively.
The conversion relation between the three-dimensional geometric dimensions of the ship and its contour in the image is then determined.
Actual width W_s of the ship:

W_s = d_1^s · W_c / d_1 (3)

where d_1^s is the pixel width of the ship in the image when the bow has traveled to the required identification distance R;
Actual length L_s and height H_s of the ship:

L_s = R - R',  H_s = h_2^s · R' / r (4)

where R' is the horizontal distance from the bow to the camera at the moment the stern reaches the required identification distance R, and h_2^s and d_2^s are the pixel height and pixel width of the ship in the image when the bow is at R'; R' is calculated by the following formula:

R' = r · W_s / d_2^s (5)
The three-dimensional geometric dimensions of the navigating ship are thereby calculated.
The data augmentation method for the data set is as follows: to reflect the actual illumination changes in the bridge area, random brightness transformation, fogging, and rain-adding processing are applied to the pictures. The random brightness transformation follows formula (6):
Y = 0.299×R×(1 + randr%) + 0.587×G×(1 + randg%) + 0.114×B×(1 + randb%) (6)
where Y is the brightness of a picture pixel, R, G and B are its red, green and blue color components, and randr, randg and randb are random red, green and blue disturbance components, each a random integer between -100 and 100;
The fogging and rain-adding processing uses the inverse of a defogging and rain-removal network based on a deep convolutional neural network. When the transformation images are prepared, five levels are defined for each degradation according to its severity (level-1 to level-5 fogging, and level-1 to level-5 rain), and the inverse network is trained to obtain a fogging model and a rain model. During data augmentation the transformations are applied jointly: random brightness transformation, leveled random fogging, and rain-adding are performed on an image simultaneously. Considering the difficulty of acquiring images of navigating ships in fog and rain, this embodiment adopts the inverse of a deep-convolutional-network defogging and rain-removal method and grades the degree of fogging and rain into five levels, yielding a quantitative fog- and rain-adding augmentation method.
The ship detector based on a deep convolutional neural network is constructed as follows: it comprises a ship multi-scale feature generation module, a ship reference-frame generation module, a ship feature fusion module and a ship classification-and-regression module.
The ship multi-scale feature generation module consists of six convolution stages, each comprising a convolution operation, an activation operation, batch normalization and dropout, producing six ship feature maps at different scales;
the ship reference-frame generation module produces the center-point coordinates and the widths and heights of the enclosing rectangular boxes: the center points are generated from the pixel coordinates of the six multi-scale ship feature maps, and the box widths and heights are determined proportionally from the sizes of the multi-scale feature maps;
the ship feature fusion module upsamples the last of the six multi-scale feature maps generated by the multi-scale feature generation module and adds it point-to-point to the fifth, fourth and third-stage feature maps, which effectively improves the utilization of the multi-scale features of the ship image and significantly raises detection accuracy;
the ship classification-and-regression module performs ship-type classification and enclosing-box parameter regression. According to the ships common in bridge areas, ships are classified into four types: sand dredgers, container ships, bulk carriers and passenger ships. In box-parameter regression, the four parameters (center-point coordinates, width and height) are encoded before participating in network training and testing, and decoded afterwards to recover the box.
Morphological operations and contour extraction at the rough ship position proceed as follows: the image cropped at the rough ship position is split into its three RGB channels, and each channel is binarized, giving three binary images; one morphological dilation and one morphological erosion are applied to each, giving three rough closed ship shapes. The three shapes are integrated with the following strategy: a pixel's final value is 255 when the corresponding pixels of two or three of the three images are 255, and 0 otherwise, giving the final closed ship shape. This shape is approximated with polygonal curves to obtain a series of candidate contours, the contours are sorted by enclosed area, and the largest is taken as the final precise contour of the ship. The prior art often operates on a single-channel grayscale image for ship contour extraction, which discards information. This embodiment instead binarizes and morphologically processes the three RGB channels separately and integrates them with the three-channel voting strategy to form the final closed ship shape, effectively improving the robustness of contour extraction.
Example 2
This embodiment uses a bridge on the Yangtze River to further describe the method of embodiment 1 for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area:
the bridge for the test is a nine-span continuous beam highway-railway dual-purpose bridge crossing the Yangtze river, the span is 128m, and the square position of the camera 1 is shown in figure 1. A camera 1 is arranged at the bottom-span middle part of a bridge navigation span beam to acquire image information of navigation ships, and the camera setting method comprises the following steps: the camera lens plane faces the channel, the lens plane inclines downwards, and the inclination angle is:
h is the height from the camera lens to the water surface, and R is the identification required distance. The inclination angle was calculated to be 3.4 degrees based on the water surface height H of 12m and R of 200 m.
According to the required identification distance, two calibration buoys are arranged on each side of the channel at a suitable position from the camera plane: the buoys are placed 200 m (horizontal distance) from the camera on both sides of the channel, and each buoy measures 700 mm × 900 mm. The identification distance is the minimum warning distance determined by the bridge grade and the navigation requirements, and navigating ships are identified at this distance;
The installed camera collects pictures of various navigating ships, and the ships are annotated with a manual labeling tool to produce a data set;
the ship images in the data set are augmented with a data augmentation method and then divided into a training set and a test set;
the ship detector is constructed based on a deep convolutional neural network, a training set is used for network training, a test set is used for testing, the network weight is stored after the requirement of test precision is met, a software platform used for training is Pythroch, a hardware platform is a CPU of Intel Xeon E5-2620 v4 and a GPU of Nvidia GTX 1080Ti, and the training is carried out for 12000 times. The test results are shown in fig. 2;
the camera collects video, and the pixel coordinates of the calibration buoys are manually marked in the first frame; the two buoy pixel coordinates on each side of the channel determine a pixel line, i.e. the pixel position of that channel boundary, and the region enclosed by the two boundary pixel lines corresponds to the real channel;
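The patent does not specify how a pixel is tested against the two boundary lines; one minimal way, sketched below with hypothetical buoy coordinates, is a cross-product sign test: a pixel between the two lines falls on opposite sides of them when both lines are directed the same way.

```python
def line_side(p, a, b):
    """Sign of the 2D cross product (b - a) x (p - a): which side of
    the line through a and b the pixel p falls on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def in_channel(p, left_buoys, right_buoys):
    """True if pixel p lies between the two boundary pixel lines, each
    defined by the two buoy pixel coordinates on that side (both lines
    directed the same way, e.g. away from the camera)."""
    s_left = line_side(p, *left_buoys)
    s_right = line_side(p, *right_buoys)
    return s_left * s_right <= 0  # opposite signs (or on a boundary) => inside

# Hypothetical buoy pixel coordinates: left boundary through (0,0)-(0,10),
# right boundary through (100,0)-(100,10).
print(in_channel((50, 5), ((0, 0), (0, 10)), ((100, 0), (100, 10))))  # -> True
```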
the ship detector loads the network weights and detects ships in the channel region captured by the camera to obtain each ship's rough position, described as a bounding rectangle;
the detected rough ship position is cropped from the channel image, morphological operations are applied to fill holes in the ship region, and a contour-extraction technique is then used to obtain the precise ship contour; the contour extraction result is shown in fig. 3;
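The patent does not name a library for the morphological step; the hole filling can be sketched as a 3×3 morphological closing (dilation followed by erosion) in plain NumPy. The array values and kernel size here are illustrative assumptions:

```python
import numpy as np

def _neighborhood_op(mask, op):
    """Apply op (np.max for dilation, np.min for erosion) over each
    3x3 neighborhood, zero-padding the border."""
    padded = np.pad(mask, 1, constant_values=0)
    h, w = mask.shape
    stacked = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return op(stacked, axis=0)

def close_mask(mask):
    """Morphological closing: dilation then erosion; fills small holes."""
    return _neighborhood_op(_neighborhood_op(mask, np.max), np.min)

# A ring-shaped ship mask with a one-pixel hole in the middle:
ring = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 1, 1, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 1, 1, 0],
                 [0, 0, 0, 0, 0]])
print(close_mask(ring)[2, 2])  # -> 1, the centre hole is filled
```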
the space in which the navigating ship is located is modelled with a projective transformation equation, formula (2):

Wc / d1 = R / r, i.e. r = R × d1 / Wc (2)

where r and R are the equivalent focal length of the camera and the horizontal distance from the calibration buoys to the camera, and Wc and d1 are the actual width of the channel and the pixel width of the channel in the image, respectively.
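The equivalent focal length r follows directly from formula (2) once the buoy calibration fixes Wc, d1 and R. A minimal sketch; the numeric values are hypothetical, not from the patent:

```python
def equivalent_focal_length(R: float, Wc: float, d1: float) -> float:
    """Formula (2): Wc / d1 = R / r  =>  r = R * d1 / Wc  (in pixels).

    R:  horizontal distance from the calibration buoys to the camera (m)
    Wc: actual channel width (m)
    d1: pixel width of the channel in the image at distance R
    """
    return R * d1 / Wc

# Hypothetical calibration: a 150 m channel spanning 600 px at R = 200 m.
r = equivalent_focal_length(200.0, 150.0, 600.0)
print(r)  # -> 800.0 (pixels)
```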
The conversion relation between the three-dimensional geometric dimensions of the ship and the ship contour in the image is then determined. Actual ship width Ws:

Ws = (R/r) × d1s = (Wc/d1) × d1s (3)

where d1s is the pixel width of the ship in the image when the bow reaches the required identification distance R.
Actual ship length Ls and height Hs:

Ls = R − R', Hs = (R'/r) × h2s (4)

where R' is the horizontal distance from the bow to the camera when the stern reaches the required identification distance R, and h2s and d2s are the pixel height and pixel width of the ship in the image at that moment. R' is obtained from:

R' = r × Ws / d2s (5)
The three-dimensional geometric dimensions of the navigating ship are thereby calculated. Analysing the extracted ship contour with the constructed conversion relation gives a length, width and height of 5.3 m, 14.46 m and 4.31 m for the ship to be identified, with identification errors of 3.64%, 9.62% and 4.22% respectively, which meets engineering requirements.
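The chain of formulas (3)-(5) above can be sketched end to end. The pixel measurements below are hypothetical inputs chosen only to exercise the arithmetic; they are not the patent's test data:

```python
def ship_dimensions(R, r, Wc, d1, d1s, d2s, h2s):
    """Recover ship width, length and height from formulas (3)-(5).

    R:   required identification distance (m)
    r:   equivalent focal length (px), from formula (2)
    Wc:  actual channel width (m); d1: channel pixel width at distance R
    d1s: ship pixel width when the bow reaches R
    d2s, h2s: ship pixel width / height when the stern reaches R
    """
    Ws = (Wc / d1) * d1s      # formula (3): actual width
    R_prime = r * Ws / d2s    # formula (5): bow-to-camera distance
    Ls = R - R_prime          # formula (4): actual length
    Hs = (R_prime / r) * h2s  # formula (4): actual height
    return Ws, Ls, Hs

# Hypothetical measurements: r and Wc/d1 consistent with formula (2).
Ws, Ls, Hs = ship_dimensions(R=200, r=800, Wc=150, d1=600,
                             d1s=24, d2s=30, h2s=20)
print(Ws, Ls, Hs)
```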
This embodiment verifies the accuracy of the algorithm provided by the invention.
Claims (4)
1. A method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area, characterized by comprising the following steps:
arranging a camera at mid-span on the underside of the bridge's navigation span to acquire image information of navigating ships, the camera being set up so that the lens plane faces the channel and is tilted downward at the inclination angle given by formula (1):

α = arctan(H / R) (1)

wherein H is the height from the camera lens to the water surface and R is the required identification distance;
arranging two calibration buoys on each side of the channel at a suitable distance from the camera plane according to the required identification distance, the required identification distance being the minimum warning distance determined by the bridge grade and the navigation requirements, navigating ships being identified at that distance;
collecting pictures of various navigating ships with the camera and labelling the ships with a manual annotation tool to produce a data set;
augmenting the ship images in the data set with data augmentation methods and then dividing them into a training set and a test set;
constructing a ship detector based on a deep convolutional neural network, performing network training with the training set, and saving the network weights once testing with the test set meets the test-accuracy requirement;
collecting video with the camera and manually marking the pixel coordinates of the calibration buoys in the first frame, the two buoy pixel coordinates on each side of the channel determining a pixel line, i.e. the pixel position of that channel boundary, the region enclosed by the two boundary pixel lines corresponding to the real channel;
loading the network weights into the ship detector and detecting ships in the channel region captured by the camera to obtain each ship's rough position, the rough position being described as a bounding rectangle;
cropping the detected rough ship position from the channel image, applying morphological operations to fill holes in the ship region, and then using a contour-extraction technique to obtain the precise ship contour;
modelling the space in which the navigating ship is located with a projective transformation equation, formula (2):

Wc / d1 = R / r, i.e. r = R × d1 / Wc (2)

wherein r and R are the equivalent focal length of the camera and the horizontal distance from the calibration buoys to the camera, and Wc and d1 are the actual width of the channel and the pixel width of the channel in the image, respectively;
determining the conversion relation between the three-dimensional geometric dimensions of the ship and the ship contour in the image, the actual ship width Ws being:

Ws = (R/r) × d1s = (Wc/d1) × d1s (3)

wherein d1s is the pixel width of the ship in the image when the bow reaches the required identification distance R;
the actual ship length Ls and height Hs being:

Ls = R − R', Hs = (R'/r) × h2s (4)

wherein R' is the horizontal distance from the bow to the camera when the stern reaches the required identification distance R, and h2s and d2s are the pixel height and pixel width of the ship in the image at that moment, R' being calculated by formula (5):

R' = r × Ws / d2s (5)
thereby calculating the three-dimensional geometric dimensions of the navigating ship.
2. The method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area according to claim 1, wherein the data augmentation of the data set comprises: random brightness transformation, fogging and rain-addition processing of the pictures to reflect the actual illumination changes in the bridge area, the random brightness transformation being performed according to formula (6):
Y=0.299×R×(1+randr%)+0.587×G×(1+randg%)+0.114×B×(1+randb%) (6)
wherein Y is the brightness of a picture pixel, R, G and B are the red, green and blue colour components of the pixel, and randr, randg and randb are random red, green and blue disturbance components, being random integers between −100 and 100;
the fogging and rain-addition processing uses the inverse transformation network of a defogging and rain-removal method based on a deep convolutional neural network; when the transformed images are prepared, five levels are produced according to the degree of fogging and rain addition, namely fogging levels 1-5 and rain levels 1-5, and the inverse transformation network is trained to obtain a fogging model and a rain-addition model; when the data are augmented, the images undergo a joint transformation, i.e. random brightness transformation and random-level fogging and rain addition are applied to an image simultaneously.
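The brightness transformation of formula (6) can be sketched for a single pixel as below. The optional `perturb` argument, an assumption added for reproducibility, fixes the three random percentages instead of drawing them:

```python
import random

def jitter_brightness(rgb, perturb=None):
    """Formula (6): perturb each colour component by a random integer
    percentage in [-100, 100], then apply the standard luma weights.

    rgb:     (R, G, B) components of one pixel
    perturb: optional fixed (randr, randg, randb) for reproducibility
    """
    r, g, b = rgb
    if perturb is None:
        perturb = tuple(random.randint(-100, 100) for _ in range(3))
    randr, randg, randb = perturb
    return (0.299 * r * (1 + randr / 100)
            + 0.587 * g * (1 + randg / 100)
            + 0.114 * b * (1 + randb / 100))

# With zero perturbation, formula (6) reduces to the plain luma value.
print(jitter_brightness((100, 100, 100), (0, 0, 0)))  # -> 100.0
```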
3. The method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area according to claim 1 or 2, wherein the ship detector constructed based on the deep convolutional neural network comprises a ship multi-scale feature generation module, a ship reference-frame generation module, a ship feature fusion module and a ship classification-regression module,
the ship multi-scale feature generation module consisting of six convolution stages, each comprising a convolution operation, an activation operation, a batch normalization operation and a dropout operation, so as to form six ship feature maps of different scales;
the ship reference-frame generation module generating the centre-point coordinates of the ship bounding rectangles and the widths and heights of the rectangles, the centre-point coordinates being generated from the pixel coordinates of the six ship feature maps of different scales, and the rectangle widths and heights being determined proportionally from the sizes of the multi-scale feature maps;
the ship feature fusion module up-sampling the last of the six ship feature maps of different scales generated by the ship multi-scale feature generation module and adding the up-sampled values point-to-point to the fifth-, fourth- and third-level feature maps;
the ship classification-regression module being divided into ship-type classification and bounding-rectangle parameter regression, the ship types being divided, according to the ships common in bridge areas, into four classes: sand dredgers, container ships, bulk carriers and passenger ships; in the rectangle parameter regression, the four parameters, namely the centre-point coordinates and the width and height, are encoded before participating in network training and testing, and then decoded to recover the rectangle.
4. The method for identifying the three-dimensional geometric dimensions of a navigating ship in a bridge area according to claim 3, wherein the morphological operations and contour extraction on the rough ship position comprise: splitting the image cropped at the rough ship position into the three RGB channels and binarizing each channel to obtain three binary images; applying one morphological dilation and one morphological erosion to each of the three binary images to obtain three rough closed ship-region maps; integrating the three maps with the following strategy: a pixel takes the final value 255 when the corresponding pixel is 255 in two or three of the maps, and 0 otherwise, yielding the final closed ship-region map; approximating the final map with polygonal curves to obtain a series of candidate contours, sorting the contours by enclosed area, and taking the largest as the final precise ship contour.
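The two-of-three integration strategy in claim 4 amounts to a per-pixel majority vote over the three per-channel binary maps, which can be sketched in NumPy (the array values below are illustrative):

```python
import numpy as np

def fuse_channel_masks(m1, m2, m3):
    """Claim 4's integration strategy: a pixel becomes 255 when at least
    two of the three per-channel binary maps are 255 there, else 0."""
    votes = ((m1 == 255).astype(int)
             + (m2 == 255).astype(int)
             + (m3 == 255).astype(int))
    return np.where(votes >= 2, 255, 0).astype(np.uint8)

# Illustrative per-channel binarization results for a 2x2 crop:
a = np.array([[255, 0], [255, 255]], dtype=np.uint8)
b = np.array([[255, 0], [0, 255]], dtype=np.uint8)
c = np.array([[0, 0], [0, 255]], dtype=np.uint8)
print(fuse_channel_masks(a, b, c))
```

The largest-contour selection that follows would then run on the fused map.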
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110741113.7A CN113658244A (en) | 2021-07-01 | 2021-07-01 | Method for identifying three-dimensional geometric dimension of navigation ship in bridge area |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113658244A true CN113658244A (en) | 2021-11-16 |
Family
ID=78489808
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110741113.7A Pending CN113658244A (en) | 2021-07-01 | 2021-07-01 | Method for identifying three-dimensional geometric dimension of navigation ship in bridge area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113658244A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106710313A (en) * | 2016-12-28 | 2017-05-24 | 中国交通通信信息中心 | Method and system for ship in bridge area to actively avoid collision based on laser three-dimensional imaging technique |
CN107133973A (en) * | 2017-05-12 | 2017-09-05 | 暨南大学 | A kind of ship detecting method in bridge collision prevention system |
CN107330377A (en) * | 2017-06-08 | 2017-11-07 | 暨南大学 | A kind of virtual navigation channel in bridge collision prevention system is built and DEVIATION detection method |
CN107369337A (en) * | 2017-08-16 | 2017-11-21 | 广州忘平信息科技有限公司 | Actively anti-ship hits monitoring and pre-warning system and method to bridge |
CN112800838A (en) * | 2020-12-28 | 2021-05-14 | 浙江万里学院 | Channel ship detection and identification method based on deep learning |
Non-Patent Citations (1)
Title |
---|
SHUNLONG LI 等: "Real-time geometry identification of moving ships by computer vision techniques in bridge area", 《SMART STRUCTURES AND SYSTEMS》, vol. 23, no. 4, pages 359 - 371 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115238368A (en) * | 2022-09-21 | 2022-10-25 | 中南大学 | Pier drawing identification automatic modeling method and medium based on computer vision |
CN117115274A (en) * | 2023-10-24 | 2023-11-24 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for determining three-dimensional information |
CN117115274B (en) * | 2023-10-24 | 2024-02-09 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for determining three-dimensional information |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107145905B (en) | Image recognition detection method for looseness of elevator fastening nut | |
CN108759973B (en) | Water level measuring method | |
CN107169953B (en) | Bridge concrete surface crack detection method based on HOG characteristics | |
CN105787923B (en) | Vision system and analysis method for plane surface segmentation | |
CN104392212B (en) | The road information detection and front vehicles recognition methods of a kind of view-based access control model | |
CN112651968B (en) | Wood board deformation and pit detection method based on depth information | |
CN109472822A (en) | Dimension of object measurement method based on depth image processing | |
CN102975826A (en) | Portable ship water gauge automatic detection and identification method based on machine vision | |
CN113658244A (en) | Method for identifying three-dimensional geometric dimension of navigation ship in bridge area | |
CN112330593A (en) | Building surface crack detection method based on deep learning network | |
CN109658391B (en) | Circle radius measuring method based on contour merging and convex hull fitting | |
CN109506628A (en) | Object distance measuring method under a kind of truck environment based on deep learning | |
CN115797354B (en) | Method for detecting appearance defects of laser welding seam | |
CN109376740A (en) | A kind of water gauge reading detection method based on video | |
CN107798293A (en) | A kind of crack on road detection means | |
CN110619328A (en) | Intelligent ship water gauge reading identification method based on image processing and deep learning | |
CN111539927B (en) | Detection method of automobile plastic assembly fastening buckle missing detection device | |
CN114331986A (en) | Dam crack identification and measurement method based on unmanned aerial vehicle vision | |
CN113781537A (en) | Track elastic strip fastener defect identification method and device and computer equipment | |
CN113252103A (en) | Method for calculating volume and mass of material pile based on MATLAB image recognition technology | |
CN115953550A (en) | Point cloud outlier rejection system and method for line structured light scanning | |
CN112198170A (en) | Detection method for identifying water drops in three-dimensional detection of outer surface of seamless steel pipe | |
CN115656182A (en) | Sheet material point cloud defect detection method based on tensor voting principal component analysis | |
JPH03204089A (en) | Image processing method | |
CN115984806A (en) | Road marking damage dynamic detection system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |