CN111238478A - Port water area navigation mark navigation aid system and method based on three-dimensional visual scene


Info

Publication number
CN111238478A
Authority
CN
China
Prior art keywords
dimensional
navigation
camera
scene
water area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010008385.1A
Other languages
Chinese (zh)
Inventor
沈金城
郭志富
李文锋
伊富春
方琼林
邵哲平
陈麒龙
洪长华
张志昌
薛晗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Aids To Navigation Department Of Dongguan Navigation Safety Administration Mot
Original Assignee
Xiamen Aids To Navigation Department Of Dongguan Navigation Safety Administration Mot
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Aids To Navigation Department Of Dongguan Navigation Safety Administration Mot filed Critical Xiamen Aids To Navigation Department Of Dongguan Navigation Safety Administration Mot
Priority to CN202010008385.1A priority Critical patent/CN111238478A/en
Publication of CN111238478A publication Critical patent/CN111238478A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/203 - Specially adapted for sailing ships

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to the technical field of water traffic, in particular to a harbor water area navigation-mark navigation-aid system and method based on three-dimensional visual scenes. The method comprises the following steps: using video images to enhance the depth and realism of the scene model; performing enhancement processing on the three-dimensional visual scene; and obtaining the optimal view of the navigation mark based on computer vision technology. In this system and method, a realistic model of the on-water navigation-mark environment is provided on top of the electronic chart, which improves the visual experience and lets the user understand the deployment intention of the navigation marks more intuitively; the network traffic pressure of video monitoring is reduced; and by reading real-time telemetered latitude and longitude data and tracking the navigation mark with an online-learned, spatial-temporal regularized correlation filter, the light flicker in sea waves can be handled.

Description

Port water area navigation mark navigation aid system and method based on three-dimensional visual scene
Technical Field
The invention relates to the technical field of water traffic, in particular to a harbor water area navigation-mark navigation-aid system and method based on three-dimensional visual scenes.
Background
Through augmented reality, the invention aims at three-dimensional monitoring of the navigation support system and the construction of a uniformly allocated large platform for maritime information sharing. It integrates multiple technologies such as communication and computer networks, electronic chart systems, the navigation-mark base database and augmented reality; fully and effectively fuses real-time telemetry and remote-control data with augmented-reality optical image data; converts the original information islands into the navigation-aid cloud required by navigation-aid big data; realizes seamless sharing of information; and deeply mines and reuses the associations among the various types of data in the navigation-aid big data.
Disclosure of Invention
The invention aims to provide a harbor water area navigation-mark navigation-aid system and method based on three-dimensional visual scenes, so as to solve the problems raised in the background art.
In order to solve the above technical problems, a first object of the present invention is to provide a navigation-aid method for harbor water area navigation marks based on three-dimensional visual scenes, comprising the following steps:
S1, using video images to enhance the depth and realism of the scene model;
on the basis of the electronic chart, a realistic model of the on-water navigation-mark environment is provided, improving the visual experience; the user can understand the deployment intention of the navigation marks more intuitively, become familiar with the number, positions and corresponding technical parameters of the navigation aids arranged along the channel (lighthouses, light beacons, light buoys, radio beacons and the like), and, guided by the navigation marks, carry out positioning, steering, danger avoidance and other operations in the channel according to the realistic navigation-mark view and the chart, thereby ensuring safe and convenient navigation of the ship;
S2, performing enhancement processing on the three-dimensional visual scene;
this involves the three core technologies of augmented reality: real-time three-dimensional registration of virtual and real space, high-fidelity fusion of virtual and real scenes, and efficient human-computer interaction. The projection transformation and camera parameters are calibrated over multiple navigation marks by the least-squares method. Calibrating the camera means solving its parameters by computation; these comprise optical parameters and geometric parameters, the optical parameters being the camera's intrinsic parameters and the geometric parameters its extrinsic parameters, namely the rotation matrix and translation vector produced by the camera's motion in space. Real-time three-dimensional reconstruction is realized by reading the telemetered longitude and latitude in real time. In the invention, the purpose of three-dimensional reconstruction is to reduce the network traffic pressure of video monitoring: since the three-dimensional view of the navigation mark can be reconstructed from real-time telemetered latitude and longitude data, the interval between video monitoring frames can be lengthened, reducing network traffic and hard-disk storage.
S3, obtaining the optimal view of the navigation mark based on computer vision technology.
This comprises processing the light flicker in sea waves by tracking the navigation mark with an online-learned, spatial-temporal regularized correlation filter, detecting and registering panoramic images based on corner points, enhancing low-illumination images based on multi-exposure fusion, and defogging based on a convolutional neural network; the precision and quality after processing reach or exceed the commercial peer level.
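As one concrete illustration of the low-illumination enhancement based on multi-exposure fusion, the sketch below fuses several differently exposed frames of the same beacon with OpenCV's Mertens exposure fusion; the file names are hypothetical, and the choice of the Mertens algorithm is an assumption, since the text does not name a specific fusion method.

```python
# Sketch: low-illumination enhancement by multi-exposure fusion.
# Assumptions: OpenCV's Mertens fusion stands in for the (unnamed)
# fusion method; file names are hypothetical placeholders.
import cv2
import numpy as np

paths = ["buoy_short_exp.jpg", "buoy_mid_exp.jpg", "buoy_long_exp.jpg"]
exposures = [cv2.imread(p) for p in paths]   # frames at different exposures

# Mertens fusion needs no exposure times: each pixel is weighted by its
# contrast, saturation and well-exposedness, and the weighted images
# are blended into one well-exposed result.
mertens = cv2.createMergeMertens()
fused = mertens.process(exposures)           # float32 image in [0, 1]
cv2.imwrite("buoy_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```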
Preferably, the method for enhancing the three-dimensional visual scene comprises the following steps:
S1.1, conversion between the three-dimensional scene and the two-dimensional imaging plane of the video image shot by the camera, the conversion relation being:

$$ z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

where $(X_w, Y_w, Z_w)$ are coordinates in the three-dimensional scene, $(x, y)$ are coordinates in the two-dimensional imaging plane, $R$ is the rotation matrix, $t$ is the translation vector, and $\alpha_x, \alpha_y, u_0, v_0, \gamma$ are the intrinsic parameters of the camera, $\gamma$ being the distortion correction term of the transformation;

writing the projection matrix as $M = K[R\ t] = (m_{ij})_{3\times 4}$, the projection equation can be expanded as:

$$ x = \frac{m_{11}X_w + m_{12}Y_w + m_{13}Z_w + m_{14}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}}, \quad y = \frac{m_{21}X_w + m_{22}Y_w + m_{23}Z_w + m_{24}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}} $$

taking the sea-surface specificity into account, $Z_w = 0$, so this simplifies to the plane-to-plane mapping (a homography):

$$ x = \frac{m_{11}X_w + m_{12}Y_w + m_{14}}{m_{31}X_w + m_{32}Y_w + m_{34}}, \quad y = \frac{m_{21}X_w + m_{22}Y_w + m_{24}}{m_{31}X_w + m_{32}Y_w + m_{34}} $$
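To make the simplified model concrete, the sketch below assembles the projection matrix $M = K[R\ t]$, drops the $Z_w$ column to obtain the 3x3 sea-surface homography, and projects one plane point to pixel coordinates; the intrinsics and camera pose are purely illustrative assumptions.

```python
# Sketch: projecting a sea-surface point (Z_w = 0) with the simplified
# model above. All numeric values are illustrative assumptions.
import numpy as np

K = np.array([[1200.0,    0.0, 960.0],   # alpha_x, gamma, u0
              [   0.0, 1200.0, 540.0],   # alpha_y, v0
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                            # camera axes aligned with world
t = np.array([[0.0], [0.0], [50.0]])     # camera 50 m from the plane

M = K @ np.hstack([R, t])                # 3x4 projection matrix K[R t]
H = M[:, [0, 1, 3]]                      # drop the Z_w column: 3x3 homography

Xw, Yw = 10.0, 5.0                       # a point on the sea surface
p = H @ np.array([Xw, Yw, 1.0])
x, y = p[0] / p[2], p[1] / p[2]          # divide by the third coordinate
print(f"pixel: ({x:.1f}, {y:.1f})")      # -> pixel: (1200.0, 660.0)
```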
S1.2, reading the telemetered longitude and latitude in real time;
the visual fusion of virtual and real content is presented by means of information visualization and graphics rendering technologies; real-time visualization of the information and real-time high-fidelity rendering of the virtual scenery are realized, so that the virtual content merges seamlessly with the user's view of the real environment and produces a sense of immersion.
S1.3, projection transformation and camera parameter calibration;
in computer vision, two-dimensional information about an object or scene in space is acquired through the camera, and the three-dimensional information of the object in space, including its size, position and motion state, is restored by combining the camera's intrinsic and extrinsic parameters. Throughout this process the camera must first be calibrated, i.e., all of its parameters solved by computation, including the optical parameters and the geometric parameters. The optical parameters are the camera's intrinsic parameters; the geometric parameters are its extrinsic parameters, namely the rotation matrix and translation vector produced by the camera's motion in space.
S1.4, real-time three-dimensional reconstruction.
In the invention, the purpose of three-dimensional reconstruction is to reduce the network traffic pressure of video monitoring: the three-dimensional view of the navigation mark can be reconstructed from real-time telemetered latitude and longitude data, so the interval between video monitoring frames can be lengthened, reducing network traffic and hard-disk storage.
Preferably, in S1.3, the methods for projection transformation and camera parameter calibration include solving a system of equations from three navigation marks, and least squares over multiple navigation marks; a least-squares sketch is given below.
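A minimal sketch of the multi-navigation-mark least-squares variant, under the $Z_w = 0$ model above: each mark with known plane coordinates (from telemetry, after Gauss projection) and known pixel coordinates contributes two linear equations in the eight homography unknowns (fixing $h_{33} = 1$), and the stacked system is solved by least squares. All coordinate values are illustrative assumptions.

```python
# Sketch: multi-navigation-mark least-squares calibration of the
# sea-surface homography (h33 fixed to 1). Coordinates are assumptions.
import numpy as np

def fit_homography(world_xy, pixel_xy):
    # x = (h11 X + h12 Y + h13) / (h31 X + h32 Y + 1), similarly for y,
    # rearranged into two linear equations per correspondence.
    A, b = [], []
    for (X, Y), (x, y) in zip(world_xy, pixel_xy):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y]); b.append(x)
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y]); b.append(y)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

world = [(0, 0), (120, 0), (120, 80), (0, 80), (60, 40)]       # metres
pixel = [(105, 710), (1580, 695), (1510, 180), (140, 195), (820, 450)]
H = fit_homography(world, pixel)   # five marks -> overdetermined system
```

When the intrinsics are known in advance, three marks already determine the camera pose (the perspective-three-point setup), which presumably corresponds to the three-navigation-mark equation-set method; the least-squares variant uses additional marks to suppress measurement noise.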
Preferably, in S1.4, the method for real-time three-dimensional reconstruction comprises the following steps:
S1.4.1, image acquisition: before any image processing, a camera is used to acquire two-dimensional images of the three-dimensional object; the illumination conditions, the geometric characteristics of the camera and similar factors strongly affect the subsequent image processing;
S1.4.2, camera calibration: an effective imaging model is established through camera calibration, and the intrinsic and extrinsic parameters of the camera are solved, so that the three-dimensional coordinates of points in space can be obtained by combining the matching results of the images, achieving the purpose of three-dimensional reconstruction;
S1.4.3, feature extraction: the features mainly comprise feature points, feature lines and regions. In most cases feature points are used as the matching primitives, and the form in which they are extracted is closely tied to the matching strategy. Feature-point extraction algorithms can be grouped into three classes: methods based on directional derivatives, methods based on image brightness contrast, and methods based on mathematical morphology;
S1.4.4, stereo matching: stereo matching establishes correspondences between the image pair according to the extracted features, i.e., it puts the imaging points of the same physical point in the two different images into one-to-one correspondence. Interference from scene factors must be considered during matching, such as lighting conditions, noise, geometric distortion of the scene, surface physical characteristics and camera characteristics;
S1.4.5, three-dimensional reconstruction: with accurate matching results, the three-dimensional scene information can be recovered by combining the calibrated intrinsic and extrinsic parameters of the camera. Because the reconstruction precision is affected by the matching precision and the errors in the camera's intrinsic and extrinsic parameters, the preceding steps must first be done well, so that each link is precise and its error small, before a relatively precise stereoscopic vision system can be designed.
If there is no object with fixed longitude and latitude, such as a light beacon or lighthouse, near the buoy, the camera can be fixed at a high point on the shore with its height and angle unchanged. It shoots toward the buoy repeatedly; the coordinates of the same light buoy in the plane images at multiple moments, together with the longitude and latitude read from the telemetry database for the same moments, give the system of equations to solve.
In order to improve the precision of the projection transformation and the camera parameter calibration, the following measures can be adopted:
(1) converting the geodetic longitude and latitude into plane rectangular coordinates by Gauss projection forward calculation (see the sketch after this list);
(2) when the sea is calm, photographing from a boat, which reduces the image motion blur caused by pitching and rolling.
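A sketch of measure (1) using pyproj. The choice of EPSG:4548 (CGCS2000 / 3-degree Gauss-Krüger, central meridian 117°E, which covers the Xiamen area) is an assumption; the zone matching the actual survey site should be used, and the sample coordinates are illustrative.

```python
# Sketch: Gauss projection forward calculation, converting telemetered
# geodetic longitude/latitude to plane rectangular coordinates.
# EPSG:4548 is an assumed CGCS2000 3-degree Gauss-Kruger zone (CM 117E).
from pyproj import Transformer

to_plane = Transformer.from_crs("EPSG:4326", "EPSG:4548", always_xy=True)

lon, lat = 118.0894, 24.4798                 # illustrative buoy telemetry
x, y = to_plane.transform(lon, lat)          # easting, northing in metres
print(f"plane coordinates: x = {x:.2f} m, y = {y:.2f} m")
```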
Preferably, the method for obtaining the optimal view of the navigation mark based on computer vision technology comprises the following steps:
S2.1, high-fidelity fusion of the virtual and real scenes represented by light flicker;
S2.2, presenting other deployment intentions.
Preferably, the high-fidelity fusion of virtual and real scenes represented by light flicker covers the light character of the navigation mark, the tracking of light flicker by online learning with a spatial-temporal regularized correlation filter, and the high-fidelity fusion of virtual and real scenery.
Tracking a navigation mark while handling the flicker of its light character is a complex tracking problem, involving scale change, illumination change, occlusion, swaying and pitching in wind, current and waves, deformation, motion blur, fast motion, in-plane rotation, out-of-plane rotation, partial invisibility, background clutter and other conditions.
Temporal regularization is introduced when learning from the historical tracking results of multiple samples, so that the tracker adapts to larger image changes; accuracy, robustness and speed are improved, and the navigation mark is tracked in real time (a minimal sketch follows).
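To make the temporal-regularization idea concrete, here is a minimal single-channel frequency-domain sketch: the filter update minimizes the correlation error plus a term that penalizes deviation from the previous frame's filter, which still admits a closed-form per-frequency solution. The spatial term of a full spatial-temporal regularized filter is omitted to keep the closed form, so this is an illustrative assumption, not the exact algorithm of the invention.

```python
# Sketch: correlation-filter update with temporal regularization.
# Solves, element-wise in the Fourier domain,
#   argmin_W |F.W - G|^2 + lam|W|^2 + mu|W - W_prev|^2
# so larger mu keeps the filter closer to the previous frame's filter.
# lam and mu values are assumptions; the spatial term is omitted.
import numpy as np

def gaussian_label(shape, sigma=2.0):
    # Desired response: a Gaussian peak, shifted so that a centred
    # target yields its maximum at (0, 0).
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    return np.fft.fft2(np.fft.ifftshift(g))

def update_filter(patch, W_prev, G, lam=0.01, mu=0.2):
    F = np.fft.fft2(patch)
    return (np.conj(F) * G + mu * W_prev) / (np.conj(F) * F + lam + mu)

def locate(patch, W):
    # Correlation response; its peak gives the target displacement.
    resp = np.real(np.fft.ifft2(np.fft.fft2(patch) * W))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Per frame, a tracker would call locate on the new patch, re-centre, then call update_filter with W_prev set to the current filter; mu trades adaptation speed against robustness to flicker and wave-induced jitter.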
Preferably, the other deployment intentions include shoals, submarine pipelines, and crossing waters.
A second object of the present invention is to provide a harbor water area navigation-mark navigation-aid system based on three-dimensional visual scenes, comprising:
an enhanced scene model module, which uses video images to enhance the depth and realism of the scene model;
a three-dimensional view enhancement module, which reduces the network traffic pressure of video monitoring by reconstructing the three-dimensional view of the navigation mark from real-time telemetered longitude and latitude data;
and an optimal navigation-mark view module, which processes the light flicker in sea waves by tracking the navigation mark with the online-learned, spatial-temporal regularized correlation filter.
The invention also provides a harbor water area navigation-aid device based on three-dimensional visual scenes, which comprises a processor, a memory, and a computer program stored in the memory and run on the processor; the processor implements the steps of the above harbor water area navigation-aid method when executing the computer program.
The invention also provides a computer-readable storage medium in which at least one program is stored; the at least one program is executed by the processor to implement the steps of the above harbor water area navigation-aid method.
Compared with the prior art, the invention has the following beneficial effects:
1. A video image is used to enhance the depth and realism of the scene model, and a realistic model of the on-water navigation-mark environment is provided on top of the electronic chart. The visual experience is improved; the user can understand the deployment intention of the navigation marks more intuitively, become familiar with the number, positions and corresponding technical parameters of the navigation aids arranged along the channel (lighthouses, light beacons, light buoys, radio beacons and the like), and, guided by the navigation marks, carry out positioning, steering, danger avoidance and other operations in the channel according to the realistic navigation-mark view and the chart, ensuring safe and convenient navigation of the ship.
2. Based on the augmented-reality three-dimensional view, the network traffic pressure of video monitoring is reduced: the three-dimensional view of the navigation mark is reconstructed from real-time telemetered latitude and longitude data, so the interval between video monitoring frames is lengthened, network traffic is reduced, and hard-disk storage is reduced.
3. The optimal view of the navigation mark is obtained by computer vision: the light flicker in sea waves is handled by tracking the navigation mark with the online-learned, spatial-temporal regularized correlation filter, and the precision and quality after processing reach or exceed the commercial peer level.
Drawings
FIG. 1 is an overall process flow diagram of the present invention;
FIG. 2 is a flow chart of a method for enhancing a three-dimensional scene according to the present invention;
FIG. 3 is a flow chart of a method of real-time three-dimensional reconstruction of the present invention;
FIG. 4 is a flowchart of a method for obtaining an optimal view of a navigation mark based on computer vision techniques according to the present invention;
FIG. 5 is a block diagram of the harbor water area navigation aid system based on three-dimensional view of the present invention;
FIG. 6 is a structural diagram of the harbor water area navigation-aid device based on three-dimensional visual scenes.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIGS. 1-6, the present invention provides a technical solution:
the invention provides a three-dimensional view-based navigation assisting method for a harbor water area navigation mark, which comprises the following steps:
s1, adopting the video image to enhance the depth sense and the reality sense of the scene model;
on the basis of providing the electronic chart, an overwater navigation mark environment real scene model is provided, and visual experience is improved; the user can know and understand the arrangement intention of the navigation mark more intuitively, is familiar with the quantity, the position and the corresponding technical parameters of various navigation aids such as a lighthouse, a lightpile, a lightbuoy, a radio navigation mark and the like arranged on the navigation channel, and realizes the operations of positioning, steering, danger avoiding and the like on the navigation channel according to a navigation mark navigation aid guide according to a navigation mark realistic view and a chart diagram, thereby ensuring the safe and convenient navigation of the ship;
S2, performing enhancement processing on the three-dimensional visual scene;
this involves the three core technologies of augmented reality: real-time three-dimensional registration of virtual and real space, high-fidelity fusion of virtual and real scenes, and efficient human-computer interaction. The projection transformation and camera parameters are calibrated over multiple navigation marks by the least-squares method. Calibrating the camera means solving its parameters by computation; these comprise optical parameters and geometric parameters, the optical parameters being the camera's intrinsic parameters and the geometric parameters its extrinsic parameters, namely the rotation matrix and translation vector produced by the camera's motion in space (a calibration sketch is given below). Real-time three-dimensional reconstruction is realized by reading the telemetered longitude and latitude in real time. In the invention, the purpose of three-dimensional reconstruction is to reduce the network traffic pressure of video monitoring: since the three-dimensional view of the navigation mark can be reconstructed from real-time telemetered latitude and longitude data, the interval between video monitoring frames can be lengthened, reducing network traffic and hard-disk storage.
S3, obtaining the optimal view of the navigation mark based on computer vision technology.
This comprises processing the light flicker in sea waves by tracking the navigation mark with an online-learned, spatial-temporal regularized correlation filter, detecting and registering panoramic images based on corner points, enhancing low-illumination images based on multi-exposure fusion, and defogging based on a convolutional neural network; the precision and quality after processing reach or exceed the commercial peer level.
In this embodiment, the method for enhancing the three-dimensional visual scene comprises the following steps:
S1.1, conversion between the three-dimensional scene and the two-dimensional imaging plane of the video image shot by the camera, the conversion relation being:

$$ z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

where $(X_w, Y_w, Z_w)$ are coordinates in the three-dimensional scene, $(x, y)$ are coordinates in the two-dimensional imaging plane, $R$ is the rotation matrix, $t$ is the translation vector, and $\alpha_x, \alpha_y, u_0, v_0, \gamma$ are the intrinsic parameters of the camera, $\gamma$ being the distortion correction term of the transformation;

writing the projection matrix as $M = K[R\ t] = (m_{ij})_{3\times 4}$, the projection equation can be expanded as:

$$ x = \frac{m_{11}X_w + m_{12}Y_w + m_{13}Z_w + m_{14}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}}, \quad y = \frac{m_{21}X_w + m_{22}Y_w + m_{23}Z_w + m_{24}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}} $$

taking the sea-surface specificity into account, $Z_w = 0$, so this simplifies to the plane-to-plane mapping (a homography):

$$ x = \frac{m_{11}X_w + m_{12}Y_w + m_{14}}{m_{31}X_w + m_{32}Y_w + m_{34}}, \quad y = \frac{m_{21}X_w + m_{22}Y_w + m_{24}}{m_{31}X_w + m_{32}Y_w + m_{34}} $$
S1.2, reading the telemetered longitude and latitude in real time;
the visual fusion of virtual and real content is presented by means of information visualization and graphics rendering technologies; real-time visualization of the information and real-time high-fidelity rendering of the virtual scenery are realized, so that the virtual content merges seamlessly with the user's view of the real environment and produces a sense of immersion.
S1.3, projection transformation and camera parameter calibration;
further, computer vision acquires two-dimensional information about an object or scene in space through the camera, and restores the object's three-dimensional information, including its size, position and motion state, by combining the camera's intrinsic and extrinsic parameters. The optical parameters are the camera's intrinsic parameters; the geometric parameters are its extrinsic parameters, namely the rotation matrix and translation vector produced by the camera's motion in space.
S1.4, real-time three-dimensional reconstruction.
In the invention, the purpose of three-dimensional reconstruction is to reduce the network traffic pressure of video monitoring: the three-dimensional view of the navigation mark can be reconstructed from real-time telemetered latitude and longitude data, so the interval between video monitoring frames can be lengthened, reducing network traffic and hard-disk storage.
Specifically, in S1.3, the methods for projection transformation and camera parameter calibration include solving a system of equations from three navigation marks, and least squares over multiple navigation marks.
It should be noted that, in S1.4, the method for real-time three-dimensional reconstruction comprises the following steps:
S1.4.1, image acquisition: before any image processing, a camera is used to acquire two-dimensional images of the three-dimensional object; the illumination conditions, the geometric characteristics of the camera and similar factors strongly affect the subsequent image processing;
S1.4.2, camera calibration: an effective imaging model is established through camera calibration, and the intrinsic and extrinsic parameters of the camera are solved, so that the three-dimensional coordinates of points in space can be obtained by combining the matching results of the images, achieving the purpose of three-dimensional reconstruction;
S1.4.3, feature extraction: the features mainly comprise feature points, feature lines and regions. In most cases feature points are used as the matching primitives, and the form in which they are extracted is closely tied to the matching strategy. Feature-point extraction algorithms can be grouped into three classes: methods based on directional derivatives, methods based on image brightness contrast, and methods based on mathematical morphology;
S1.4.4, stereo matching: stereo matching establishes correspondences between the image pair according to the extracted features, i.e., it puts the imaging points of the same physical point in the two different images into one-to-one correspondence. Interference from scene factors must be considered during matching, such as lighting conditions, noise, geometric distortion of the scene, surface physical characteristics and camera characteristics;
S1.4.5, three-dimensional reconstruction: with accurate matching results, the three-dimensional scene information can be recovered by combining the calibrated intrinsic and extrinsic parameters of the camera. Because the reconstruction precision is affected by the matching precision and the errors in the camera's intrinsic and extrinsic parameters, the preceding steps must first be done well, so that each link is precise and its error small, before a relatively precise stereoscopic vision system can be designed. A condensed sketch of this pipeline is given below.
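The following sketch walks steps S1.4.1-S1.4.5 once: two views are acquired, corner features are extracted with ORB and matched, and the matches are triangulated. The file names are hypothetical, and the projection matrices P1, P2 are placeholders that should come from the camera calibration of S1.4.2.

```python
# Condensed sketch of S1.4.1-S1.4.5: acquisition, feature extraction,
# stereo matching, triangulation. File names and projection matrices
# are placeholder assumptions.
import cv2
import numpy as np

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # S1.4.1 acquisition
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)                            # S1.4.3 features
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True) # S1.4.4 matching
matches = sorted(bf.match(d1, d2), key=lambda m: m.distance)[:200]
pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2xN
pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T

# S1.4.2 would supply P = K[R t] per view; identity-based placeholders:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])

X = cv2.triangulatePoints(P1, P2, pts1, pts2)         # S1.4.5, 4xN
points_3d = (X[:3] / X[3]).T                          # Nx3 scene points
```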
Further, if there is no object with fixed longitude and latitude, such as a light beacon or lighthouse, near the buoy, the camera can be fixed at a high point on the shore with its height and angle unchanged. It shoots toward the buoy repeatedly; the coordinates of the same light buoy in the plane images at multiple moments, together with the longitude and latitude read from the telemetry database for the same moments, give the system of equations to solve.
Specifically, in order to improve the precision of the projection transformation and the camera parameter calibration, the following measures can be adopted:
(1) converting the geodetic longitude and latitude into plane rectangular coordinates by Gauss projection forward calculation;
(2) when the sea is calm, photographing from a boat, which reduces the image motion blur caused by pitching and rolling.
It is worth noting that the method for obtaining the optimal view of the navigation mark based on computer vision technology comprises the following steps:
S2.1, high-fidelity fusion of the virtual and real scenes represented by light flicker;
S2.2, presenting other deployment intentions.
The high-fidelity fusion of virtual and real scenes represented by light flicker covers the light character of the navigation mark, the tracking of light flicker by online learning with a spatial-temporal regularized correlation filter, and the high-fidelity fusion of virtual and real scenery.
Specifically, tracking a navigation mark while handling light flicker is a complex tracking problem, involving scale change, illumination change, occlusion, swaying and pitching in wind, current and waves, deformation, motion blur, fast motion, in-plane rotation, out-of-plane rotation, partial invisibility, background clutter and other conditions.
Furthermore, temporal regularization is introduced when learning from the historical tracking results of multiple samples, so that the tracker adapts to larger image changes, improving accuracy, robustness and speed and tracking the navigation mark in real time.
It is worth noting that the other deployment intentions include shoals, submarine pipelines, and crossing waters.
A second object of the present invention is to provide a harbor water area navigation-mark navigation-aid system based on three-dimensional visual scenes, comprising:
an enhanced scene model module, which uses video images to enhance the depth and realism of the scene model;
a three-dimensional view enhancement module, which reduces the network traffic pressure of video monitoring by reconstructing the three-dimensional view of the navigation mark from real-time telemetered longitude and latitude data;
and an optimal navigation-mark view module, which processes the light flicker in sea waves by tracking the navigation mark with the online-learned, spatial-temporal regularized correlation filter.
It should be noted that the functions of the enhanced scene model module, the three-dimensional view enhancement module and the optimal navigation-mark view module are described in detail in the corresponding method sections above and are not repeated here. A hypothetical skeleton of the three modules is sketched below.
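For orientation only, this skeleton shows how the three modules could be wired together; every class and method name is an assumption for illustration, not taken from the text.

```python
# Hypothetical module skeleton for the navigation-aid system; names and
# interfaces are assumptions for illustration.
class EnhancedSceneModelModule:
    def enhance(self, video_frame, scene_model):
        """Blend the video image into the scene model to add depth
        and realism."""

class ThreeDViewEnhancementModule:
    def reconstruct(self, lat, lon):
        """Rebuild the navigation mark's 3D view from telemetered
        latitude/longitude, so video frames can be fetched at a
        longer interval."""

class OptimalViewModule:
    def track(self, frames):
        """Track the flickering light with a spatial-temporal
        regularized correlation filter."""

class NavigationAidSystem:
    def __init__(self):
        self.scene = EnhancedSceneModelModule()
        self.view3d = ThreeDViewEnhancementModule()
        self.optimal = OptimalViewModule()
```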
Referring to FIG. 6, which shows a schematic structural diagram of the harbor water area navigation-aid device based on three-dimensional visual scenes, the device comprises a processor, a memory and a bus.
The processor contains one or more processing cores and is connected to the memory through the bus; the memory stores program instructions, and the processor executes the program instructions in the memory to implement the above harbor water area navigation-aid method.
Optionally, the memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The present invention further provides a computer-readable storage medium in which at least one program is stored; the at least one program is executed by the processor to implement the steps of the above harbor water area navigation-aid method.
Optionally, the present invention further provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the above harbor water area navigation-aid method.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by hardware instructed by a program, and the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk or an optical disk.
The foregoing shows and describes the general principles, essential features and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate preferred forms of the invention and do not limit it. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. A navigation-aid method for harbor water area navigation marks based on three-dimensional visual scenes, comprising the following steps:
S1, using video images to enhance the depth and realism of the scene model;
S2, performing enhancement processing on the three-dimensional visual scene;
S3, obtaining the optimal view of the navigation mark based on computer vision technology.
2. The harbor water area navigation-aid method based on three-dimensional visual scenes according to claim 1, wherein the method for enhancing the three-dimensional visual scene comprises the following steps:
S1.1, conversion between the three-dimensional scene and the two-dimensional imaging plane of the video image shot by the camera, the conversion relation being:

$$ z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

where $(X_w, Y_w, Z_w)$ are coordinates in the three-dimensional scene, $(x, y)$ are coordinates in the two-dimensional imaging plane, $R$ is the rotation matrix, $t$ is the translation vector, and $\alpha_x, \alpha_y, u_0, v_0, \gamma$ are the intrinsic parameters of the camera, $\gamma$ being the distortion correction term of the transformation;

writing the projection matrix as $M = K[R\ t] = (m_{ij})_{3\times 4}$, the projection equation can be expanded as:

$$ x = \frac{m_{11}X_w + m_{12}Y_w + m_{13}Z_w + m_{14}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}}, \quad y = \frac{m_{21}X_w + m_{22}Y_w + m_{23}Z_w + m_{24}}{m_{31}X_w + m_{32}Y_w + m_{33}Z_w + m_{34}} $$

taking the sea-surface specificity into account, $Z_w = 0$, so this simplifies to:

$$ x = \frac{m_{11}X_w + m_{12}Y_w + m_{14}}{m_{31}X_w + m_{32}Y_w + m_{34}}, \quad y = \frac{m_{21}X_w + m_{22}Y_w + m_{24}}{m_{31}X_w + m_{32}Y_w + m_{34}} $$

S1.2, reading the telemetered longitude and latitude in real time;
S1.3, projection transformation and camera parameter calibration;
S1.4, real-time three-dimensional reconstruction.
3. The harbor water area navigation-aid method based on three-dimensional visual scenes according to claim 2, wherein in S1.3 the methods for projection transformation and camera parameter calibration include solving a system of equations from three navigation marks, and least squares over multiple navigation marks.
4. The harbor water area navigation-aid method based on three-dimensional visual scenes according to claim 2, wherein in S1.4 the real-time three-dimensional reconstruction comprises the following steps:
S1.4.1, image acquisition: acquiring two-dimensional images of the three-dimensional object with a camera;
S1.4.2, camera calibration: establishing an effective imaging model through camera calibration and solving the intrinsic and extrinsic parameters of the camera;
S1.4.3, feature extraction: the features mainly comprise feature points, feature lines and regions;
S1.4.4, stereo matching: putting the imaging points of the same physical point in two different images into one-to-one correspondence;
S1.4.5, three-dimensional reconstruction: recovering the three-dimensional scene information by combining the calibrated intrinsic and extrinsic parameters of the camera.
5. The harbor water area navigation-aid method based on three-dimensional visual scenes according to claim 1, wherein the method for obtaining the optimal view of the navigation mark based on computer vision technology comprises the following steps:
S2.1, high-fidelity fusion of the virtual and real scenes represented by light flicker;
S2.2, presenting other deployment intentions.
6. The harbor water area navigation-aid method based on three-dimensional visual scenes according to claim 5, wherein the high-fidelity fusion of virtual and real scenes represented by light flicker comprises the light character of the navigation mark, the tracking of light flicker by online learning with a spatial-temporal regularized correlation filter, and the high-fidelity fusion of virtual and real scenery.
7. The harbor water area navigation-aid method based on three-dimensional visual scenes according to claim 5, wherein the other deployment intentions include shoals, submarine pipelines, and crossing waters.
8. A harbor water area navigation-mark navigation-aid system based on three-dimensional visual scenes, comprising:
an enhanced scene model module, which uses video images to enhance the depth and realism of the scene model;
a three-dimensional view enhancement module, which reduces the network traffic pressure of video monitoring by reconstructing the three-dimensional view of the navigation mark from real-time telemetered longitude and latitude data;
and an optimal navigation-mark view module, which processes the light flicker in sea waves by tracking the navigation mark with an online-learned, spatial-temporal regularized correlation filter.
9. A harbor water area navigation-mark navigation-aid device based on three-dimensional visual scenes, characterized in that: it comprises a processor, a memory, and a computer program stored in the memory and run on the processor, the processor implementing the steps of the harbor water area navigation-aid method based on three-dimensional visual scenes according to any one of claims 1-7 when executing the computer program.
10. A computer-readable storage medium, wherein at least one program is stored in the storage medium, and the at least one program is executed by a processor to implement the steps of the harbor water area navigation-aid method based on three-dimensional visual scenes according to any one of claims 1-7.
CN202010008385.1A 2020-01-06 2020-01-06 Port water area navigation mark navigation aid system and method based on three-dimensional visual scene Pending CN111238478A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010008385.1A CN111238478A (en) 2020-01-06 2020-01-06 Port water area navigation mark navigation aid system and method based on three-dimensional visual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010008385.1A CN111238478A (en) 2020-01-06 2020-01-06 Port water area navigation mark navigation aid system and method based on three-dimensional visual scene

Publications (1)

Publication Number Publication Date
CN111238478A true CN111238478A (en) 2020-06-05

Family

ID=70872279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010008385.1A Pending CN111238478A (en) 2020-01-06 2020-01-06 Port water area navigation mark navigation aid system and method based on three-dimensional visual scene

Country Status (1)

Country Link
CN (1) CN111238478A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115245A (en) * 2021-11-09 2022-03-01 厦门蓝海天信息技术有限公司 Navigation mark identification device and method
CN115495469A (en) * 2022-11-17 2022-12-20 北京星天科技有限公司 Method and device for updating chart file and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150753A (en) * 2013-03-22 2013-06-12 中国人民解放军63680部队 Wide-range high-precision matched digital channel three-dimensional visualization method
CN103456041A (en) * 2013-08-28 2013-12-18 中国人民解放军海军大连舰艇学院 Three-dimensional terrain and radar terrain generating method based on S-57 electronic chart data
CN105241457A (en) * 2015-08-10 2016-01-13 武汉理工大学 Establishing method of three-dimensional aided navigation system for ship handling
US20180292213A1 (en) * 2017-04-10 2018-10-11 Martha Grabowski Critical system operations and simulations using wearable immersive augmented reality technology

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150753A (en) * 2013-03-22 2013-06-12 中国人民解放军63680部队 Wide-range high-precision matched digital channel three-dimensional visualization method
CN103456041A (en) * 2013-08-28 2013-12-18 中国人民解放军海军大连舰艇学院 Three-dimensional terrain and radar terrain generating method based on S-57 electronic chart data
CN105241457A (en) * 2015-08-10 2016-01-13 武汉理工大学 Establishing method of three-dimensional aided navigation system for ship handling
US20180292213A1 (en) * 2017-04-10 2018-10-11 Martha Grabowski Critical system operations and simulations using wearable immersive augmented reality technology

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
BINSHAN LI et al., "Visual Tracking using Spatial-Temporal Regularized Support Correlation Filters", 2018 3rd International Conference on Communication, Image and Signal Processing *
刘喜作 et al., "Construction Method of a Navigation-Aid Environment with Augmented Reality", Navigation of China *
叶新源 et al., "Navigation Mark Deployment Management and Effectiveness Evaluation Based on 3D GIS", Proceedings of the 2nd Guangdong Maritime High-Level Forum *
宋伟东 et al., Geometric Correction and 3D Reconstruction of Remote Sensing Images, Surveying and Mapping Press, 30 April 2011 *
张志利 et al., "Target Tracking Simulation for Markerless AR Based on Kernelized Correlation Filters", Journal of System Simulation *
王贤隆, "Research on Automatic 3D Modeling of Traffic Scenes", China Master's Theses Full-Text Database, Information Science and Technology series *
邹帆 et al., "Application of Augmented Reality Technology in Ship Navigation Aids", Water Transport Management *
陈凌锋, "Research on the Application of Rhino and Grasshopper Parametric Technology to Terrain in Landscape Architecture Planning and Design", China Master's Theses Full-Text Database, Engineering Science and Technology II series *
陈祎荻 et al., "Prospects and Reflections on Building a Virtual Navigation-Aid System with AR Technology", China Water Transport *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115245A (en) * 2021-11-09 2022-03-01 厦门蓝海天信息技术有限公司 Navigation mark identification device and method
CN114115245B (en) * 2021-11-09 2024-04-16 厦门蓝海天信息技术有限公司 Navigation mark recognition equipment and method
CN115495469A (en) * 2022-11-17 2022-12-20 北京星天科技有限公司 Method and device for updating chart file and electronic equipment
CN115495469B (en) * 2022-11-17 2023-03-10 北京星天科技有限公司 Method and device for updating chart file and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200605)