CN115240069A - Real-time obstacle detection method in full-fog scene - Google Patents

Real-time obstacle detection method in full-fog scene

Info

Publication number
CN115240069A
CN115240069A
Authority
CN
China
Prior art keywords
fog
scene
foggy
day
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210855171.7A
Other languages
Chinese (zh)
Inventor
李琳辉
张鑫亮
连静
付一帆
郭烈
周雅夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN202210855171.7A
Publication of CN115240069A
Legal status: Pending (current)

Classifications

    • G06V 20/10: Scenes; Scene-specific elements; Terrestrial scenes
    • G06N 3/04: Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; Neural networks; Learning methods
    • G06V 10/764: Image or video recognition using pattern recognition or machine learning; using classification, e.g. of video objects
    • G06V 10/774: Processing image or video features in feature spaces; Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82: Image or video recognition using neural networks
    • G06V 2201/07: Indexing scheme relating to image or video recognition or understanding; Target detection
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a real-time obstacle detection method for full-fog scenes. A Transformer backbone network grades the fog level of the scene, and a detection scheme matched to that level is selected to improve the accuracy of the detection network. For a light-fog scene, a visual sensor collects image information and a YOLOv5 model with pre-trained weights detects obstacle targets directly. For a medium-fog scene, a visual sensor collects image information, a multi-scale-fusion CycleGAN network synthesizes a simulated foggy-day dataset to train the YOLOv5 network, and foggy-day obstacle detection is then performed. For a dense-fog scene, an infrared sensor collects image information, an infrared dataset trains the YOLOv5 detection network, and obstacle targets are detected in the dense-fog scene. The invention achieves real-time, self-adaptive obstacle detection across different foggy-day scenes and meets the engineering requirements of actual foggy-day conditions.

Description

Real-time obstacle detection method in full-fog scene
Technical Field
The invention relates to image processing and multi-sensor foggy-day detection, and in particular to a real-time obstacle detection method in a full-fog scene.
Background
In recent years, computer vision has become a research focus in fields such as target recognition and target tracking, and is widely applied in automatic driving, medical image segmentation, human-computer interaction, robot vision, and other areas. In real outdoor work that relies on computer vision, foggy weather is difficult to avoid; it degrades visual recognition so that the corresponding recognition tasks cannot proceed normally.
In a foggy scene, local image information is lost, which hampers human vision and increases the difficulty of computer-vision recognition. Recognition accuracy is determined by image quality, so fog adversely affects image recognition and constrains the development of computer vision. Target detection based on computer vision has developed remarkably and is widely used outdoors, yet a sound computer-vision method for detecting obstacles in foggy scenes is still lacking.
At present, the usual solution for foggy scenes is to apply a defogging algorithm to remove fog-induced image noise and then run a detection algorithm on the defogged image to detect obstacle targets.
Image defogging algorithms for foggy scenes fall mainly into three categories:
1) Defogging based on image contrast enhancement. Contrast enhancement is an early image-enhancement approach that improves foggy-image quality by enlarging the difference between target and environmental information; typical methods include histogram equalization, wavelet transform, homomorphic filtering, logarithmic transformation, power transformation, and gamma correction (a minimal sketch follows this list).
2) Image restoration based on physical-model enhancement. Physical-model defogging simulates the imaging process of a natural foggy scene, determines the environmental parameters, and inverts the model to recover a clear image. The dominant physical model is the atmospheric scattering model, from which a clear image can be recovered given atmospheric illumination, distance, and similar information (the model is reproduced after this list). McCartney proposed the original atmospheric scattering model in 1975, and Narasimhan later derived from it a model specific to foggy scenes, establishing the physical model by which foggy images are formed and laying an important foundation for subsequent defogging work.
3) Defogging based on deep learning. Deep-learning defogging depends on neither contrast enhancement nor a physical model: a defogging network is trained on large numbers of foggy-scene and clear-scene images. Because it is not tied to a specific physical model, it builds a foggy-scene model from data and removes fog from foggy images effectively, achieving a better defogging effect. Such defogging networks usually take a U-Net architecture as the backbone: the encoder extracts effective features from the foggy image and the decoder reconstructs the defogged image, so that the output retains the features of the original image while suppressing the interference of fog (a network sketch follows this list).
Among these three families of methods, deep-learning defogging networks achieve the best results on their own test datasets. However, the randomness and variability of real fog make actual operating conditions complex, which greatly degrades the defogging effect of deep-learning algorithms in practice. Existing computer-vision methods for foggy-scene obstacle detection are mostly built on defogging; because defogging performs poorly in real scenes, model deployment becomes harder and the real-time requirement of obstacle detection cannot be met. Beyond these inherent problems, a more notable issue is that fog comes in multiple levels, and a solution must remain reliable and stable across fog scenes of different severity.
After defogging, a detection algorithm detects the obstacle targets. Current target detection algorithms fall mainly into two categories: detection based on target candidate regions, i.e. two-stage methods, with typical algorithms such as Fast R-CNN, Faster R-CNN, and R-FCN; and regression-based detection, i.e. one-stage methods, with typical algorithms such as the YOLO series, SSD, and RetinaNet.
Disclosure of Invention
The invention aims to provide a real-time obstacle detection method for full-fog scenes, so as to solve the poor detection accuracy and poor real-time performance of prior-art obstacle detection in such scenes and to improve the adaptability and reliability of computer-vision obstacle detection in fog.
In order to achieve the purpose, the technical scheme of the invention is as follows: a real-time obstacle detection method under a full-fog scene comprises the following steps:
A. Establishing a Transformer classification model
A1. Use a foggy-day visibility detection device and a visual sensor to acquire image information at different fog concentrations, and grade the visual foggy scene by horizontal visibility (a minimal grading sketch follows step A1):
1000 m ≤ visibility < 10000 m: light fog;
500 m ≤ visibility < 1000 m: medium fog;
visibility < 500 m: dense fog;
A2. After collecting the light-, medium-, and dense-fog classification datasets, train a Transformer classification network with the fog images of different grades;
B. Detecting obstacle targets in foggy scenes of different concentrations
For a light-fog scene, go to step B1; for a medium-fog scene, go to step B2; for a dense-fog scene, go to step B3;
B1. For a light-fog scene, acquire image information with a visual sensor and detect obstacle targets with a YOLOv5 detection model using ImageNet pre-trained weights; go to step C;
B2. For a medium-fog scene, acquire image information with a visual sensor, generate a foggy-day image dataset with the multi-scale-fusion CycleGAN network (DF-CycleGAN), train the YOLOv5 detection model on a mixed dataset of clear and foggy images, and then detect obstacles in the foggy scene; go to step C;
B3. For a dense-fog scene, acquire infrared image information with an infrared sensor, train the YOLOv5 detection network on infrared images, and detect obstacle targets in the foggy scene;
C. Output the obstacle detection results of step B.
Further, the method by which the multi-scale-fusion CycleGAN network of step B2 generates the foggy-day image dataset comprises the following steps:
B21. Apply scale-direction enhancement to the encoder structure of the original CycleGAN generator U-Net, using the difference between the upper and lower feature maps to compensate for the feature information lost in the lower feature map; the calculation is:
$$\hat{J}_n = J_n + \big(R_n(J_{n-1} - (J_n)\!\uparrow)\big)\!\downarrow, \qquad J_n = D_n(J_{n-1})$$

where $J_n$ denotes the feature information of the encoder's nth layer; $\hat{J}_n$ denotes the feature information after feature fusion; $R_n$ denotes the fusion module of the nth layer, composed of N residual modules; $D_n$ denotes the nth-layer downsampling, composed of 3×3 convolutions with stride 2; $\downarrow$ denotes 2× downsampling; $\uparrow$ denotes 2× upsampling, implemented by transposed convolution;
B22. Apply an enhancement module to the decoder structure of the U-Net, fusing encoder information and decoder information through several residual structures; the calculation is:
$$\hat{P}_n = F_n\big(\hat{J}_i + (\hat{P}_{n-1})\!\uparrow\big)$$

where $\hat{P}_n$ denotes the enhanced feature information of the decoder's nth layer; $\hat{J}_i$ denotes the enhanced feature information of the encoder's ith layer, n and i corresponding so that the nth-layer feature map in the decoder equals the ith-layer feature map in the encoder in size and is twice the size of the (n−1)th-layer decoder feature map; $F_n$ denotes the enhancement module of the nth layer, composed of N residual modules; $\uparrow$ denotes 2× upsampling, implemented by bilinear interpolation;
B23. At the junction of the encoder and the decoder, use a G structure composed of M residual modules to extract information features.
Further, in step B23, N = 10 and M = 20.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention uses a Transformer model to build a classification network for fog scenes of different levels, effectively grading the fog concentration and providing a real-time solution matched to the needs of each fog scene.
2. For medium-fog scenes, the invention generates a realistic foggy-day dataset with the multi-scale-fusion DF-CycleGAN network and trains the target detection network on a mixture of clear and foggy images, effectively improving adaptability to foggy scenes.
3. For dense-fog scenes, the invention exploits the advantages of infrared data in fog to train an infrared detection network and detect obstacles, effectively improving detection in dense fog; this is a necessary step to ensure the feasibility of the full-fog solution.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a schematic flow chart of the multi-scale fusion module of the present invention.
FIG. 3 is a flow chart of the DF-CycleGAN network structure of the present invention.
FIG. 4 is a schematic flow chart of obstacle detection by the infrared sensor of the present invention.
Detailed Description
The embodiments of the present invention are described below with specific examples; those skilled in the art can readily understand the advantages of the invention from this disclosure. The technical solutions are described clearly and completely with reference to the accompanying drawings; the described embodiments are some, but not all, embodiments of the invention. All other embodiments obtained by a person skilled in the art without creative effort based on these embodiments fall within the protection scope of the invention. Numerous specific details are set forth to provide a thorough understanding of the invention, which may nevertheless be practiced without them; some specific details are omitted so as not to obscure its focus.
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail below with reference to the accompanying drawings:
FIG. 1 shows the overall obstacle detection framework designed for foggy scenes. A Transformer classification network, chosen for its strong detection performance, effectively considers the global information of a foggy image so as to predict the fog grade accurately. For fog scenes of different grades, using a different detection scheme per grade effectively improves obstacle detection accuracy. In a light-fog scene the image is clear and visibility exceeds 1 km; a visual sensor can acquire target information effectively, and a YOLOv5 detection network with high real-time performance and good detection results completes the obstacle detection. In a dense-fog scene a visual sensor cannot acquire target information effectively, which greatly hinders obstacle detection and makes the foggy-day detection requirement hard to meet; an infrared sensor, by contrast, is little affected by fog, produces imaging similar to a visual sensor's, and can identify obstacle targets effectively, so a YOLOv5 detection network running on infrared-sensor images meets the foggy-day detection requirement. FIG. 4 shows the overall application flow of the infrared sensor. In a medium-fog scene the visual image is affected to a certain extent, mainly by the white foggy-day noise collected in the image. Because most scenes rely on visual image information as the primary data, the infrared sensor cannot simply replace the visual sensor; operating both in concert would require synchronized multi-sensor use and waste equipment resources. Therefore, medium-fog obstacle detection is based on the visual sensor alone, and the detection network is trained with simulated foggy-scene images, which satisfies the obstacle detection requirement while preserving real-time performance. The simulated images are generated with the DF-CycleGAN network (FIG. 2 shows the multi-scale fusion module and FIG. 3 the DF-CycleGAN network structure). Compared with CycleGAN, DF-CycleGAN performs deeper information interaction in the generator, effectively extracts more image feature information, and generates more realistic simulated foggy images; training YOLOv5 with these images strengthens the foggy-scene detection network and guarantees both detection accuracy and real-time performance.
The foregoing has described the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description only illustrate its principle, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed.

Claims (3)

1. A real-time obstacle detection method in a full-fog scene, characterized by comprising the following steps:
A. Establishing a Transformer classification model
A1. using a foggy-day visibility detection device and a visual sensor to acquire image information at different fog concentrations, and grading the visual foggy scene by horizontal visibility:
1000 m ≤ visibility < 10000 m: light fog;
500 m ≤ visibility < 1000 m: medium fog;
visibility < 500 m: dense fog;
A2. after collecting the light-, medium-, and dense-fog classification datasets, training a Transformer classification network with the fog images of different grades;
B. Detecting obstacle targets in foggy scenes of different concentrations:
for a light-fog scene, going to step B1; for a medium-fog scene, going to step B2; for a dense-fog scene, going to step B3;
B1. for a light-fog scene, acquiring image information with a visual sensor and detecting obstacle targets with a YOLOv5 detection model using ImageNet pre-trained weights; going to step C;
B2. for a medium-fog scene, acquiring image information with a visual sensor, generating a foggy-day image dataset with the multi-scale-fusion CycleGAN network (DF-CycleGAN), training the YOLOv5 detection model on a mixed dataset of clear and foggy images, and then detecting obstacles in the foggy scene; going to step C;
B3. for a dense-fog scene, acquiring infrared image information with an infrared sensor, training the YOLOv5 detection network on infrared images, and detecting obstacle targets in the foggy scene;
C. outputting the obstacle detection results of step B.
2. The real-time obstacle detection method in a full-fog scene according to claim 1, characterized in that the method by which the multi-scale-fusion CycleGAN network generates the foggy-day image dataset comprises the following steps:
B21. applying scale-direction enhancement to the encoder structure of the original CycleGAN generator U-Net, using the difference between the upper and lower feature maps to compensate for the feature information lost in the lower feature map, the calculation being:

$$\hat{J}_n = J_n + \big(R_n(J_{n-1} - (J_n)\!\uparrow)\big)\!\downarrow, \qquad J_n = D_n(J_{n-1})$$

where $J_n$ denotes the feature information of the encoder's nth layer; $\hat{J}_n$ denotes the feature information after feature fusion; $R_n$ denotes the fusion module of the nth layer, composed of N residual modules; $D_n$ denotes the nth-layer downsampling, composed of 3×3 convolutions with stride 2; $\downarrow$ denotes 2× downsampling; $\uparrow$ denotes 2× upsampling, implemented by transposed convolution;
B22. applying an enhancement module to the decoder structure of the U-Net, fusing encoder information and decoder information through several residual structures, the calculation being:

$$\hat{P}_n = F_n\big(\hat{J}_i + (\hat{P}_{n-1})\!\uparrow\big)$$

where $\hat{P}_n$ denotes the enhanced feature information of the decoder's nth layer; $\hat{J}_i$ denotes the enhanced feature information of the encoder's ith layer, n and i corresponding so that the nth-layer feature map in the decoder equals the ith-layer feature map in the encoder in size and is twice the size of the (n−1)th-layer decoder feature map; $F_n$ denotes the enhancement module of the nth layer, composed of N residual modules; $\uparrow$ denotes 2× upsampling, implemented by bilinear interpolation;
B23. at the junction of the encoder and the decoder, using a G structure composed of M residual modules to extract information features.
3. The real-time obstacle detection method in a full-fog scene according to claim 2, characterized in that: in step B23, N = 10 and M = 20.
CN202210855171.7A · Filed 2022-07-19 · Priority 2022-07-19 · Real-time obstacle detection method in full-fog scene · Pending · Published as CN115240069A (en)

Priority Applications (1)

Application Number: CN202210855171.7A · Priority Date: 2022-07-19 · Filing Date: 2022-07-19 · Title: Real-time obstacle detection method in full-fog scene

Applications Claiming Priority (1)

Application Number: CN202210855171.7A · Priority Date: 2022-07-19 · Filing Date: 2022-07-19 · Title: Real-time obstacle detection method in full-fog scene

Publications (1)

Publication Number: CN115240069A (en) · Publication Date: 2022-10-25

Family

ID=83673200

Family Applications (1)

Application Number: CN202210855171.7A · Status: Pending · Publication: CN115240069A (en)

Country Status (1)

Country Link
CN (1) CN115240069A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237859A (en) * 2023-11-14 2023-12-15 南京信息工程大学 Night expressway foggy day visibility detection method based on low illumination enhancement
CN117237859B (en) * 2023-11-14 2024-02-13 南京信息工程大学 Night expressway foggy day visibility detection method based on low illumination enhancement
CN117409193A (en) * 2023-12-14 2024-01-16 南京深业智能化***工程有限公司 Image recognition method, device and storage medium under smoke scene
CN117409193B (en) * 2023-12-14 2024-03-12 南京深业智能化***工程有限公司 Image recognition method, device and storage medium under smoke scene
CN117557477A (en) * 2024-01-09 2024-02-13 浙江华是科技股份有限公司 Defogging recovery method and system for ship
CN117557477B (en) * 2024-01-09 2024-04-05 浙江华是科技股份有限公司 Defogging recovery method and system for ship

Similar Documents

Publication Publication Date Title
CN115240069A (en) Real-time obstacle detection method in full-fog scene
CN110956094A (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-current network
CN110335270A (en) Transmission line of electricity defect inspection method based on the study of hierarchical regions Fusion Features
CN106127204A (en) A kind of multi-direction meter reading Region detection algorithms of full convolutional neural networks
CN111368690A (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN112949633B (en) Improved YOLOv 3-based infrared target detection method
CN112489054A (en) Remote sensing image semantic segmentation method based on deep learning
CN112434723B (en) Day/night image classification and object detection method based on attention network
CN111598098A (en) Water gauge water line detection and effectiveness identification method based on full convolution neural network
CN113870160B (en) Point cloud data processing method based on transformer neural network
CN114972312A (en) Improved insulator defect detection method based on YOLOv4-Tiny
Zheng et al. A review of remote sensing image object detection algorithms based on deep learning
CN116503318A (en) Aerial insulator multi-defect detection method, system and equipment integrating CAT-BiFPN and attention mechanism
CN112700476A (en) Infrared ship video tracking method based on convolutional neural network
CN114241310B (en) Improved YOLO model-based intelligent identification method for piping dangerous case of dike
CN115375639A (en) Cable pipeline internal defect detection method with obvious perception guidance
CN116206112A (en) Remote sensing image semantic segmentation method based on multi-scale feature fusion and SAM
CN113469097B (en) Multi-camera real-time detection method for water surface floaters based on SSD network
CN116778346B (en) Pipeline identification method and system based on improved self-attention mechanism
CN110807372A (en) Rapid optical remote sensing target identification method based on depth feature recombination
CN115100428A (en) Target detection method using context sensing
CN113095181A (en) Traffic sign identification method based on Defense-GAN
Wang Remote sensing image semantic segmentation network based on ENet
CN117876362B (en) Deep learning-based natural disaster damage assessment method and device
CN117557775B (en) Substation power equipment detection method and system based on infrared and visible light fusion

Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination