CN112785619A - Unmanned underwater vehicle autonomous tracking method based on visual perception

Info

Publication number
CN112785619A
Authority
CN
China
Prior art keywords
image
underwater vehicle
underwater
binary image
visual perception
Prior art date
Legal status
Pending
Application number
CN202011635410.5A
Other languages
Chinese (zh)
Inventor
刘彦呈
朱鹏莅
董张伟
姚书翰
刘厶源
庄绪州
姜涛
陈瀚
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202011635410.5A
Publication of CN112785619A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63C LAUNCHING, HAULING-OUT, OR DRY-DOCKING OF VESSELS; LIFE-SAVING IN WATER; EQUIPMENT FOR DWELLING OR WORKING UNDER WATER; MEANS FOR SALVAGING OR SEARCHING FOR UNDERWATER OBJECTS
    • B63C 11/00 Equipment for dwelling or working underwater; Means for searching for underwater objects
    • B63C 11/52 Tools specially adapted for working underwater, not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63G OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G 8/00 Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G 8/001 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63G OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G 8/00 Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G 8/38 Arrangement of visual or electronic watch equipment, e.g. of periscopes, of radar
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63G OFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G 8/00 Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G 8/001 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations
    • B63G 2008/002 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations unmanned
    • B63G 2008/004 Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations unmanned autonomously operating

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Ocean & Marine Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an autonomous tracking method for an unmanned underwater vehicle based on visual perception, which comprises the following steps: converting the multi-dimensional environmental features of a captured underwater image into the HSV color space, performing background removal, and resolving the result into a binary image containing the target track; denoising the binary image by erosion and dilation to obtain an effective image; building a neural network controller that takes the preprocessed effective image as input and label information on the correct moving direction of the underwater vehicle as output, training and optimizing the controller, seeking the optimal planned path, and converting it into turning control instructions for the underwater vehicle, thereby obtaining an autonomous visual-perception tracking motion control method; and carrying out experiments with an underwater vehicle, where experimental verification and simulation analysis demonstrate the effectiveness and stability of the proposed autonomous visual-perception tracking motion control method.

Description

Unmanned underwater vehicle autonomous tracking method based on visual perception
Technical Field
The invention relates to the field of autonomous tracking of underwater vehicles, in particular to an unmanned underwater vehicle autonomous tracking method based on visual perception.
Background
China is one of the earliest countries in the world to develop freshwater fish farming, and in recent years, with the rapid development of the aquaculture industry, the water environment in some regions has deteriorated. Vigorously promoting ecological aquaculture technology has therefore become an urgent task. At present, most aquaculture monitoring takes the form of fixed, systematic monitoring, which cannot trace the source of a problem and resolve it promptly when it arises. Unmanned full-coverage monitoring of aquaculture is therefore of great significance for accurately locating and solving problems and for reducing losses of cultured products.
According to the current state of research, unmanned aquaculture monitoring relies mostly on image monitoring and processing systems; however, the imaging quality of the camera is susceptible to water quality, and the underwater vehicles used in mainstream surface aquaculture still require manual control for tracking.
Disclosure of Invention
In view of the problems in the prior art, the invention discloses an autonomous tracking method for an unmanned underwater vehicle based on visual perception, which specifically comprises the following steps:
converting the multi-dimensional environmental features of the captured underwater image into the HSV color space, performing background removal, and resolving the result into a binary image containing the target track;
denoising the binary image by erosion and dilation to obtain an effective image;
building a neural network controller, taking the preprocessed effective image as input and label information on the correct moving direction of the underwater vehicle as output, training and optimizing the neural network controller, seeking the optimal planned path, and converting it into turning control instructions for the underwater vehicle, thereby obtaining an autonomous visual-perception tracking motion control method;
carrying out experiments with an underwater vehicle, where experimental verification and simulation analysis demonstrate the effectiveness and stability of the proposed autonomous visual-perception tracking motion control method.
Further, the multi-dimensional environmental features in the captured underwater image are converted into the HSV color space for background removal, yielding an HSV image:
let (r, g, b) be the red, green, and blue coordinates of a color in the underwater image, max be the maximum of (r, g, b), and min be the minimum of (r, g, b); the (h, s, v) values in HSV space are then calculated, where h ∈ [0, 360) is the hue angle in degrees and s, v ∈ [0, 1) are the saturation and brightness;
corresponding upper and lower track-color thresholds are set for the HSV image, three HSV single-channel images are obtained through upper- and lower-limit filtering, and a binary image containing the target track is obtained through bitwise AND combination and further calculation.
Further, the connected regions of the image are expanded and contracted by erosion and dilation, and the binary image is optimized as follows:
assume T is an n × n template, with different parameters set according to the shape of the structuring element; for a square structuring element, T(i, j) = 1, i.e. all parameters in the template are 1. I is the binary image on which the morphological operations are performed, and T(i, j) is 0 or 1. The erosion formula is described as follows:
$$\mathrm{Erode}(x, y) = \min_{(i, j):\, T(i, j) = 1} I(x + i,\ y + j)$$
where Erode(x, y) is the result of the erosion operation, I(x, y) is the gray value at position (x, y) of the binary image on which the morphological operation is performed, and T(i, j) is the parameter at position (i, j) in the template;
the dilation operation is then carried out:
$$\mathrm{Dilate}(x, y) = \max_{(i, j):\, T(i, j) = 1} I(x + i,\ y + j)$$
where Dilate(x, y) is the result of the dilation operation, I(x, y) is the gray value at position (x, y) of the binary image on which the dilation is performed, and T(i, j) is the parameter at position (i, j) in the template.
Further, labels are made for the effective images, where the label content is the correct moving direction of the underwater vehicle. The neural network controller comprises a three-layer fully connected network: the input layer comprises 38400 neurons serving as the processing unit for the tracking image, the hidden layer comprises 64 neurons, and the output layer comprises 5 neurons, each with a Sigmoid activation function, outputting 5 control instructions. Assuming the input layer is represented by a vector x, the output of the hidden layer is computed as:
$$y = f\Big(\sum_{k} w_k x_k + b\Big)$$
where $w_k$ are the fully connected network weights, $b$ is a bias coefficient, and the function
$$f(z) = \frac{1}{1 + e^{-z}}$$
is the neural network activation function.
By adopting the above technical scheme, the visual-perception-based autonomous tracking method for an unmanned underwater vehicle provided by the invention meets the tracking and navigation requirement of integrating autonomous perception with motion control of an underwater vehicle through morphological underwater image processing and a data-driven vehicle control method. In aquaculture, the method removes the influence of camera imaging quality and the constraint of manual monitoring, greatly improves the working efficiency of unmanned full-coverage aquaculture monitoring, and reduces product loss and breeding cost while ensuring monitoring accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description cover only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is an overall flow diagram of the process of the present invention;
FIG. 2 is a flow chart of the RGB-to-HSV image conversion according to the present invention;
FIG. 3 is a flow chart of the underwater image morphology (erosion and dilation) process of the present invention;
FIG. 4 is a diagram of the HSV conversion and the erosion and dilation denoising effects on underwater images in the invention;
FIG. 5 is a block diagram of a fully-connected neural network of the present invention;
FIG. 6 is a schematic diagram of the overall hardware configuration of the experiment of the present invention.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
As shown in FIG. 1, the visual-perception-based autonomous tracking method for an unmanned underwater vehicle specifically includes the following steps:
s1, in order to highlight the path information, converting the captured multi-dimensional environment features into HSV color space for background removal, as shown in FIG. 2, which is a flow chart for converting underwater RGB images into HSV format, firstly processing the RGB images shown by the camera to convert the RGB images into HSV models for the look and feel of the user. The specific transformation process is as follows:
let (r, g, b) be the red, green and blue coordinates of a color, respectively, whose values are real numbers between 0 and 1, max be equal to the maximum of (r, g, b), min be equal to the minimum of (r, g, b). To find the (h, s, v) value in HSV space, where h e [0,360) is the hue angle of the angle and s, v e [0,1) is the saturation and lightness, calculated as
$$h = \begin{cases} 0^\circ, & \max = \min \\[4pt] 60^\circ \times \dfrac{g - b}{\max - \min} + 0^\circ, & \max = r,\ g \ge b \\[4pt] 60^\circ \times \dfrac{g - b}{\max - \min} + 360^\circ, & \max = r,\ g < b \\[4pt] 60^\circ \times \dfrac{b - r}{\max - \min} + 120^\circ, & \max = g \\[4pt] 60^\circ \times \dfrac{r - g}{\max - \min} + 240^\circ, & \max = b \end{cases} \tag{1}$$

$$s = \begin{cases} 0, & \max = 0 \\[4pt] \dfrac{\max - \min}{\max}, & \text{otherwise} \end{cases} \tag{2}$$

$$v = \max \tag{3}$$
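For concreteness, a plain-Python transcription of the per-pixel conversion in equations (1)-(3) follows (a sketch only; a real pipeline would use a vectorized library routine):

```python
def rgb_to_hsv(r, g, b):
    """Per-pixel implementation of equations (1)-(3): r, g, b in [0, 1];
    returns h in degrees [0, 360) and s, v in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        h = 0.0                                   # achromatic: hue undefined, set to 0
    elif mx == r and g >= b:
        h = 60.0 * (g - b) / (mx - mn)            # equation (1), max = r, g >= b
    elif mx == r:
        h = 60.0 * (g - b) / (mx - mn) + 360.0    # equation (1), max = r, g < b
    elif mx == g:
        h = 60.0 * (b - r) / (mx - mn) + 120.0    # equation (1), max = g
    else:
        h = 60.0 * (r - g) / (mx - mn) + 240.0    # equation (1), max = b
    s = 0.0 if mx == 0 else (mx - mn) / mx        # equation (2)
    v = mx                                        # equation (3)
    return h, s, v
```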
Through this conversion of the multi-dimensional environment feature space, the RGB image captured by the underwater vehicle can be converted into an HSV image. The background is then filtered on this basis: in the thresholding and image-synthesis stages, corresponding upper and lower track-color thresholds are set for the three HSV channels, upper- and lower-limit filtering yields three HSV single-channel images, the three channel images are combined by a bitwise AND operation, and a binary image containing the target track is then calculated.
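As a minimal OpenCV sketch of this background-removal step (the concrete HSV bounds below are placeholders, not values from the patent, and would be tuned to the actual track color):

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for the track color; the patent sets upper and
# lower thresholds per channel but does not publish concrete values.
LOWER_HSV = np.array([20, 80, 80], dtype=np.uint8)
UPPER_HSV = np.array([40, 255, 255], dtype=np.uint8)

def track_binary_image(frame_bgr):
    """Convert one camera frame to HSV and keep only the track color band."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # inRange applies the upper/lower-limit filtering to all three channels
    # and ANDs the per-channel masks into a single binary image
    # (255 on the track, 0 on the background).
    return cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
```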
S2: the mathematical morphology operation is used for image processing to solve the noise problem in the image, the connected region is expanded and contracted through corrosion and expansion morphology operation, and the HSV image output by the S1 is further processed. The erosion and expansion operations are mainly used in underwater binary images, wherein erosion and expansion refer to highlight track parts in underwater tracking images, the erosion operation reduces the highlight areas from the edges to weaken the protrusions, and the expansion operation expands the highlight areas from the edges to fill up the holes. In the underwater binary image, the track part gray value is 1, and the background part gray value is 0. Let T be a template of n × n, and set different parameters according to the shape of the structural element, where T (i, j) is 1 for a square structural element and all parameters in the template are 1. I is a binary image subjected to morphological operations, and T (I, j) is 0 or 1. The formula for corrosion is described as follows:
$$\mathrm{Erode}(x, y) = \min_{(i, j):\, T(i, j) = 1} I(x + i,\ y + j) \tag{4}$$
where Erode(x, y) is the result of the erosion operation, I(x, y) is the gray value at position (x, y) of the binary image on which the morphological operation is performed, and T(i, j) is the parameter at position (i, j) in the template.
From formula (4), in image erosion, if any pixel covered by the template has value 0, the center-point result is 0; when the image gray values covered by the template are all 1 or all 0, the gray value is unchanged after erosion. As shown in the erosion-processing part of FIG. 4, the erosion operation "erodes away" the edge protrusions in the binary image, but some small regions may be removed excessively, so a further dilation operation is required to make the highlighted track portion smoother and more complete. The dilation operation follows the formula:
$$\mathrm{Dilate}(x, y) = \max_{(i, j):\, T(i, j) = 1} I(x + i,\ y + j) \tag{5}$$
where Dilate(x, y) is the result of the dilation operation, I(x, y) is the gray value at position (x, y) of the binary image on which the dilation is performed, and T(i, j) is the parameter at position (i, j) in the template.
According to this formula, if any pixel covered by the template has value 1, the gray value of the target point is 1; when the pixel values covered by the template are all 0 or all 1, the gray value of the target point is unchanged. FIG. 3 shows the flow chart of the morphological processing (erosion and dilation) of the underwater image: erosion finds a local minimum near each edge pixel of the binary image, while dilation finds a local maximum. The HSV conversion and the erosion and dilation denoising effects on the underwater image are shown in FIG. 4. Path information is extracted through the HSV conversion; erosion and dilation can also separate and connect objects, erosion disconnecting originally connected objects and dilation connecting originally unconnected ones. Through the close coordination of the two operations, a complex, noisy underwater tracking image can be converted into a tracking image with a highlighted path and a completely black background, providing a reliable input for the vehicle's decision making.
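A minimal OpenCV sketch of this erosion-then-dilation sequence (the 5 × 5 square kernel is an assumption; the patent specifies only an n × n template of ones):

```python
import cv2
import numpy as np

def denoise_track(binary_image):
    """Erode to remove edge protrusions, then dilate to restore and smooth
    the highlighted track and fill small holes (a morphological opening)."""
    kernel = np.ones((5, 5), np.uint8)  # square structuring element T, all ones
    eroded = cv2.erode(binary_image, kernel, iterations=1)
    return cv2.dilate(eroded, kernel, iterations=1)
```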
S3: and then, by using the preprocessed images and taking the correct moving direction of the underwater vehicle as label information, training and optimizing a neural network model in a supervision way, seeking an optimal planning path, judging the turning direction of the underwater vehicle and converting the turning direction into a control instruction. The method comprises the following specific steps:
and manually labeling the black bottom and high brightness tracking images processed by the S1 and the S2, wherein the main labeling content is the steering of the underwater vehicle. The neural network of the part consists of three layers of fully-connected networks, wherein an input layer consists of 38400 neurons and is used as a processing unit of a tracking image, an implicit layer consists of 64 neurons, an output layer consists of 5 neurons and respectively corresponds to a Sigmoid activation function to output 5 control instructions (forward and backward, left and right turn and stop).
As shown in fig. 5, the layers of the neural network are all connected, the input layer neurons are mainly responsible for receiving information, and the hidden layer neurons are responsible for processing input information. Assuming the input layer is represented by a vector x, the hidden layer output is calculated in the form:
$$y = f\Big(\sum_{k} w_k x_k + b\Big) \tag{6}$$
where $w_k$ are the fully connected network weights, $b$ is a bias coefficient, and the function
$$f(z) = \frac{1}{1 + e^{-z}}$$
is the neural network activation function.
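A sketch of this controller under stated assumptions (PyTorch is assumed, as the patent names no framework; the training-step helper, loss function, and optimizer are illustrative choices, not from the patent):

```python
import torch
import torch.nn as nn

# Three-layer fully connected controller from S3: 38400 input neurons for
# the flattened tracking image, 64 hidden neurons, 5 outputs corresponding
# to the forward / backward / turn-left / turn-right / stop commands.
controller = nn.Sequential(
    nn.Flatten(),          # tracking image -> vector x of length 38400
    nn.Linear(38400, 64),  # hidden layer: y = f(sum_k w_k * x_k + b)
    nn.Sigmoid(),          # f(z) = 1 / (1 + e^(-z)), as in equation (6)
    nn.Linear(64, 5),
    nn.Sigmoid(),          # one activation per control instruction
)

def train_step(optimizer, loss_fn, images, labels):
    """One supervised update: preprocessed effective images as input,
    manually labelled correct moving directions as targets."""
    optimizer.zero_grad()
    loss = loss_fn(controller(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```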
S4: Experiments are performed with an underwater vehicle adapted through secondary development, and the effectiveness and stability of the proposed autonomous visual-perception tracking motion control technique are verified through simulation analysis and experiments in a real environment.
First, the hardware environment required for the experiment is built; then secondary software development is carried out, and the algorithm program is connected to the underwater vehicle's ground control station through the UDP communication protocol, so that the visually perceived underwater video information is converted in real time into control instructions for each channel of the underwater vehicle, controlling it to execute the desired motion. The autonomous tracking experimental platform comprises software and hardware such as an experimental pool, the underwater vehicle, and a ground control station, combined with auxiliary equipment such as an AI single-board computer for the related experiments.
The overall hardware configuration of the experiment is shown schematically in FIG. 6. The left part shows the hardware configuration and connections of the BlueROV2 underwater vehicle, in which a Raspberry Pi serves as the onboard microcomputer: it receives channel instructions from the ground control station through the tether cable, processes them, and sends them to the navigation controller, which uniformly manages and commands the camera pan-tilt, the thrusters, and the illuminating lamp. The right part mainly extracts the video image from the QGC ground control station through a TX2 microcomputer, runs the depth-prediction and enhanced-control algorithms, and then sends the output linear-velocity and angular-velocity signals back to the QGC ground control station to control the motion of the underwater vehicle. Ground-station software for the underwater vehicle is developed so that it exchanges data with the TX2 microcomputer; finally, OpenCV on the TX2 extracts the vehicle video frames, processes them, and outputs linear-velocity and angular-velocity information in real time, which is converted into control instructions for each channel and transmitted to the ground control station through UDP communication to control the motion of the vehicle.
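A minimal sketch of the UDP link between the TX2 algorithm host and the ground control station (the patent states only that UDP is used; the address, port, and JSON message layout here are assumptions):

```python
import json
import socket

# Hypothetical ground-control-station endpoint; not specified in the patent.
GCS_ADDR = ("192.168.2.1", 9000)

def send_velocity_command(sock, linear, angular):
    """Pack the real-time linear/angular velocity outputs into one datagram."""
    payload = json.dumps({"linear": linear, "angular": angular}).encode("utf-8")
    sock.sendto(payload, GCS_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_velocity_command(sock, linear=0.2, angular=-0.1)  # example command
```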
Underwater light strips of different shapes are arranged in the experimental pool so that the underwater vehicle can recognize the contour of the light strip and output corresponding action information, demonstrating a degree of autonomous recognition and decision-making capability. Experiments prove that the autonomous perception, recognition, and tracking method for underwater vehicles designed here is feasible in actual operation.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or modification that a person skilled in the art could make according to the technical solutions and the inventive concept of the present invention shall fall within the scope of protection of the present invention.

Claims (4)

1. An unmanned underwater vehicle autonomous tracking method based on visual perception is characterized by comprising the following steps:
converting the multi-dimensional environmental features of the captured underwater image into the HSV color space, performing background removal, and resolving the result into a binary image containing the target track;
denoising the binary image by erosion and dilation to obtain an effective image;
building a neural network controller, taking the preprocessed effective image as input and label information on the correct moving direction of the underwater vehicle as output, training and optimizing the neural network controller, seeking the optimal planned path, and converting it into turning control instructions for the underwater vehicle, thereby obtaining an autonomous visual-perception tracking motion control method;
carrying out experiments with an underwater vehicle, where experimental verification and simulation analysis demonstrate the effectiveness and stability of the proposed autonomous visual-perception tracking motion control method.
2. The method of claim 1, further characterized by: converting the multi-dimensional environmental features in the captured underwater image into the HSV color space for background removal to obtain an HSV image:
letting (r, g, b) be the red, green, and blue coordinates of a color in the underwater image, max be the maximum of (r, g, b), and min be the minimum of (r, g, b), and calculating the (h, s, v) values in HSV space, where h ∈ [0, 360) is the hue angle in degrees and s, v ∈ [0, 1) are the saturation and brightness;
setting corresponding upper and lower track-color thresholds for the HSV image, obtaining three HSV single-channel images through upper- and lower-limit filtering, and obtaining a binary image containing the target track through bitwise AND combination and further calculation.
3. The method of claim 1, further characterized by: expanding and contracting the connected regions of the image by erosion and dilation, and optimizing the binary image as follows:
assuming T is an n × n template, with different parameters set according to the shape of the structuring element; for a square structuring element, T(i, j) = 1, i.e. all parameters in the template are 1; I is the binary image on which the morphological operations are performed, and T(i, j) is 0 or 1; the erosion formula is described as follows:
$$\mathrm{Erode}(x, y) = \min_{(i, j):\, T(i, j) = 1} I(x + i,\ y + j)$$
where Erode(x, y) is the result of the erosion operation, I(x, y) is the gray value at position (x, y) of the binary image on which the morphological operation is performed, and T(i, j) is the parameter at position (i, j) in the template;
carrying out the dilation operation:
$$\mathrm{Dilate}(x, y) = \max_{(i, j):\, T(i, j) = 1} I(x + i,\ y + j)$$
where Dilate(x, y) is the result of the dilation operation, I(x, y) is the gray value at position (x, y) of the binary image on which the dilation is performed, and T(i, j) is the parameter at position (i, j) in the template.
4. The method of claim 1, further characterized by: making labels for the effective images, where the label content is the correct moving direction of the underwater vehicle; the neural network controller comprises a three-layer fully connected network, in which the input layer comprises 38400 neurons serving as the processing unit for the tracking image, the hidden layer comprises 64 neurons, and the output layer comprises 5 neurons, each with a Sigmoid activation function, outputting 5 control instructions; assuming the input layer is represented by a vector x, the output of the hidden layer takes the form:
$$y = f\Big(\sum_{k} w_k x_k + b\Big)$$
where $w_k$ are the fully connected network weights, $b$ is a bias coefficient, and the function
$$f(z) = \frac{1}{1 + e^{-z}}$$
is the neural network activation function.
CN202011635410.5A (priority date 2020-12-31, filing date 2020-12-31) Unmanned underwater vehicle autonomous tracking method based on visual perception, status: Pending, published as CN112785619A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011635410.5A CN112785619A (en) 2020-12-31 2020-12-31 Unmanned underwater vehicle autonomous tracking method based on visual perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011635410.5A CN112785619A (en) 2020-12-31 2020-12-31 Unmanned underwater vehicle autonomous tracking method based on visual perception

Publications (1)

Publication Number Publication Date
CN112785619A 2021-05-11

Family

ID=75753360

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011635410.5A Pending CN112785619A (en) 2020-12-31 2020-12-31 Unmanned underwater vehicle autonomous tracking method based on visual perception

Country Status (1)

Country Link
CN (1) CN112785619A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113353216A (en) * 2021-06-15 2021-09-07 陈问淑 Intelligent autonomous navigation underwater detection robot

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139072A (en) * 2015-09-09 2015-12-09 东华大学 Reinforcement learning algorithm applied to non-tracking intelligent trolley barrier-avoiding system
CN107808161A (en) * 2017-10-26 2018-03-16 江苏科技大学 A kind of Underwater targets recognition based on light vision
CN108227491A (en) * 2017-12-28 2018-06-29 重庆邮电大学 A kind of intelligent vehicle Trajectory Tracking Control method based on sliding formwork neural network
CN108549877A (en) * 2018-04-23 2018-09-18 重庆大学 A kind of tracking robot trajectory's recognition methods based on neural network
CN108829130A (en) * 2018-06-11 2018-11-16 重庆大学 A kind of unmanned plane patrol flight control system and method
WO2020165544A1 (en) * 2019-02-13 2020-08-20 Safran Identification of drivable areas with consideration of the uncertainty by a deep learning method
CN111161312A (en) * 2019-12-16 2020-05-15 重庆邮电大学 Object trajectory tracking and identifying device and system based on computer vision

Similar Documents

Publication Publication Date Title
Mehra et al. ReViewNet: A fast and resource optimized network for enabling safe autonomous driving in hazy weather conditions
CN110084234B (en) Sonar image target identification method based on example segmentation
CN110084307B (en) Mobile robot vision following method based on deep reinforcement learning
CN107145889B (en) Target identification method based on double CNN network with RoI pooling
CN111428765B (en) Target detection method based on global convolution and local depth convolution fusion
CN110633632A (en) Weak supervision combined target detection and semantic segmentation method based on loop guidance
CN107273905B (en) Target active contour tracking method combined with motion information
CN112733914B (en) Underwater target visual identification classification method based on support vector machine
CN104298996B (en) A kind of underwater active visual tracking method applied to bionic machine fish
CN104517103A (en) Traffic sign classification method based on deep neural network
CN109377555B (en) Method for extracting and identifying three-dimensional reconstruction target features of foreground visual field of autonomous underwater robot
Kim et al. Image-based monitoring of jellyfish using deep learning architecture
CN109543632A (en) A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features
CN110321937B (en) Motion human body tracking method combining fast-RCNN with Kalman filtering
CN110111351B (en) Pedestrian contour tracking method fusing RGBD multi-modal information
CN113011338B (en) Lane line detection method and system
CN111158491A (en) Gesture recognition man-machine interaction method applied to vehicle-mounted HUD
CN114548256A (en) Small sample rare bird identification method based on comparative learning
Zhou et al. Graph attention guidance network with knowledge distillation for semantic segmentation of remote sensing images
CN114937083A (en) Laser SLAM system and method applied to dynamic environment
CN112785619A (en) Unmanned underwater vehicle autonomous tracking method based on visual perception
CN116797926A (en) Robot multi-mode near-field environment sensing method and system
CN115588237A (en) Three-dimensional hand posture estimation method based on monocular RGB image
WO2021026855A1 (en) Machine vision-based image processing method and device
CN106650814B (en) Outdoor road self-adaptive classifier generation method based on vehicle-mounted monocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination