CN113780524A - Weather-adaptive target detection network model and method - Google Patents


Info

Publication number: CN113780524A
Authority: CN (China)
Prior art keywords: prediction, domain data, data set, weather, loss
Legal status: Granted
Application number: CN202111000940.7A
Other languages: Chinese (zh)
Other versions: CN113780524B (en)
Inventors: 邹斌, 刘洋洋
Current Assignee: Wuhan University of Technology (WUT)
Original Assignee: Wuhan University of Technology (WUT)
Application filed by Wuhan University of Technology (WUT)
Priority to CN202111000940.7A
Publication of CN113780524A; application granted, publication of CN113780524B
Current legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a weather-adaptive target detection network model and method. A vision acquisition device separately collects sunny-day and rainy-day video data, and a script converts the videos into pictures. Only the sunny-day image data are labeled: the labeled sunny-day data set serves as the source domain data set and the unlabeled rainy-day data set as the target domain data set. Both data sets are fed into the weather-adaptive target detection network model for training to obtain the model's weight parameter file, which is then loaded to obtain the detection network model. Pictures are extracted from the video stream at a set frame interval, input into the detection network model for prediction, and the prediction results are displayed. The method maintains or even improves detection accuracy under adverse weather while also solving the problem of heavy manual data labeling workload.

Description

Weather-adaptive target detection network model and method
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a domain-adaptation-based target detection network model and detection method.
Background
Current target detection networks such as YOLOv5, Faster R-CNN, and FCOS face two important problems. First, these detection networks rely on large amounts of labeled training data; accurately labeling all training data is usually costly, and training data for some scenes is hard to obtain. Second, these networks suffer from insufficient generalization: when the weather changes, for example from sunny to rainy, the detection accuracy of conventional target detection algorithms drops.
Disclosure of Invention
The invention mainly addresses the problems that target detection models in practical applications are strongly affected by weather, and that a model trained on a normal-weather data set adapts poorly to rainy and foggy weather. It provides a weather-adaptive target detection network model and detection method that maintain or even improve detection accuracy under adverse weather while also solving the problem of heavy manual data labeling workload.
To solve the above technical problems, the invention adopts the following technical solution:
a weather-adaptive target detection network model, characterized by:
the range of a five-layer characteristic diagram Fi, i containing source domain data characteristics Fi _ s and target domain data characteristics Fi _ t is [3, 4, 5, 6, 7 ];
firstly connecting two feature extraction structures G1 and G2 behind the feature map Fi in parallel, wherein the feature output after the feature extraction structure G1 is a boundary frame prediction map F _ reg, and the feature output after the feature extraction structure G2 is a central point prediction map F _ cnt; the boundary frame prediction graph F _ reg is connected with the regression loss of the boundary frame, and the central point prediction characteristic graph F _ cnt is connected with the central point prediction loss;
the central point prediction characteristic diagram F _ cnt is activated through a Sigmoid activation function, multiplied by the corresponding characteristic diagram Fi element by element, and multiplied by a method coefficient a to obtain an enhanced characteristic diagram Fe;
adding a feature extraction structure G3 behind the enhanced feature map Fe to obtain a class probability prediction feature map F _ cls; adding a class loss function after the class probability prediction graph F _ cls;
and adding an LMMD loss in a domain self-adaptive classification algorithm, and respectively inputting an enhanced feature map Fe and a class probability prediction feature map F _ cls corresponding to the source domain data feature Fi _ s and the target domain data feature Fi _ t of each layer into the LMMD loss connected with the LMMD loss.
In the above technical solution, the feature map Fi containing the source domain data features Fi_s and the target domain data features Fi_t is derived from the five levels of features output by the feature extraction network before the FCOS network head.
In the above technical solution, the three feature extraction structures G1, G2, and G3 have the same convolution layer structure.
In the above technical solution, each of the three feature extraction structures G1, G2, and G3 consists of four 3×3 convolution layers.
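As a concrete illustration, such a feature extraction structure can be sketched in PyTorch as a stack of four 3×3, stride-1 convolutions with padding 1, which preserves the spatial resolution of the feature map. The channel width (256) and the ReLU activations between convolutions are assumptions for illustration; the patent only specifies four 3×3 convolution layers.

```python
import torch
import torch.nn as nn

def make_feature_extraction_structure(channels: int = 256, num_convs: int = 4) -> nn.Sequential:
    """A G1/G2/G3-style structure: a stack of 3x3 convolutions with padding 1,
    so the spatial size of the feature map is preserved."""
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(channels, channels, kernel_size=3, padding=1))
        layers.append(nn.ReLU(inplace=True))  # activation choice is an assumption, not stated in the patent
    return nn.Sequential(*layers)

g1 = make_feature_extraction_structure()
fi = torch.randn(1, 256, 32, 32)  # a toy feature map Fi
out = g1(fi)                      # same shape as the input
```

Because padding equals 1 for a 3×3 kernel, each convolution keeps height and width unchanged, so the prediction maps stay aligned with the input feature map Fi.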
In the above technical solution, the model training process is as follows:
pictures from the source domain data and the target domain data are processed in parallel and input into the feature extraction network; after FPN processing, a five-layer feature map Fi containing the source domain data features Fi_s and the target domain data features Fi_t is output; the source domain data set corresponds to the labeled sunny-day image data set, and the target domain data set corresponds to the unlabeled rainy-day image data set, both derived from sunny-day and rainy-day video data;
the feature maps are then input into the three feature extraction structures G1, G2, and G3; after feature extraction, the regression loss, center point loss, and category loss are computed from the source domain results; the enhanced feature map Fe and the class probability prediction feature map F_cls corresponding to the source domain and target domain of each layer are then input into the connected LMMD loss; finally the losses of all layers are accumulated, gradients are backpropagated, and the parameters are updated to obtain the model parameter file.
In the above technical solution, in the model training process, each batch of input data includes the same amount of source domain and target domain data.
In the above technical solution, during model training the total loss equals the sum of the regression loss, the center point loss, and the category loss.
In the above technical solution, the model performs target prediction as follows:
an input picture is fed directly into the network; features are first extracted, then passed through the FPN layer, and the extracted features are input into the feature extraction structures G1, G2, and G3 to obtain prediction results containing the category prediction, center point prediction, and bounding box prediction; the prediction results are then mapped back to the original image, redundant bounding boxes are removed by non-maximum suppression (NMS), and the target detection image is output.
A weather-adaptive target detection method, characterized by comprising the following steps:
Step one: collect data and build the data sets:
first, use a vision acquisition device to separately collect sunny-day video data and rainy-day video data, then write a script to convert the videos into pictures; then label only the sunny-day image data with the labelme labeling tool, taking the labeled sunny-day data set as the source domain data set and the unlabeled rainy-day data set as the target domain data set;
Step two: feed the source domain data set and the target domain data set into the weather-adaptive target detection network model for training to obtain the model's weight parameter file;
Step three: load the weight parameter file to obtain the detection network model;
Step four: extract a picture from the video stream captured by the vision acquisition device at a set frame interval, input it into the detection network model for prediction, and display the prediction result.
In the above technical solution, the vision acquisition device is a camera on an unmanned vehicle.
Compared with the prior art, the invention has the following beneficial effects:
The method provided by the invention adapts to the changes brought by various weather conditions and has stronger generalization capability; the designed scheme also saves a large amount of manual labeling cost.
The invention can detect both static and dynamic targets. For example, when used on an unmanned vehicle to detect pedestrians, vehicles, and other objects, its detection accuracy is unaffected by weather changes.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic diagram of the feature extraction network architecture of the present invention, which is identical to the part of the FCOS network before its detection head.
FIG. 2 is a structural diagram of the weather-adaptive target detection network model of the present invention.
FIG. 3 is a flow chart of prediction by the weather-adaptive target detection network model of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A weather-adaptive target detection method implemented according to the present invention is described below.
1. Description of network architecture
The weather-adaptive target detection network model implemented according to the present invention introduces network architecture and algorithmic innovations on top of the FCOS detection network. The feature extraction network architecture of the present invention is shown in FIG. 1; it retains only the part of the FCOS network before the detection head. The structure of the weather-adaptive target detection network model is shown in FIG. 2.
(1) The feature extraction network, identical to the part of the FCOS network before its head, outputs five levels of features, denoted Fi with i in [3, 4, 5, 6, 7]; Fi comprises the source domain data features Fi_s and the target domain data features Fi_t;
(2) In the weather-adaptive target detection network model, two feature extraction structures G1 and G2 (each composed of four 3×3 convolution layers) are connected in parallel after the feature map Fi, outputting the bounding box prediction feature map F_reg and the center point prediction feature map F_cnt respectively. Only the F_reg output for the source domain data features is connected to the bounding box regression loss (IoU loss), and only the F_cnt output for the source domain data features is connected to the center point prediction loss (BCE loss);
(3) The center point prediction feature map F_cnt is activated by a Sigmoid function, multiplied element-wise with the corresponding Fi, and scaled by a coefficient a to obtain the enhanced feature map Fe;
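This enhancement step can be sketched numerically as follows. NumPy is used for illustration, and the value a = 2.0 is an assumed placeholder, since the patent does not fix the coefficient:

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def enhance(fi: np.ndarray, f_cnt: np.ndarray, a: float = 2.0) -> np.ndarray:
    """Fe = a * sigmoid(F_cnt) * Fi: regions that the center point branch
    scores highly are amplified in the enhanced feature map Fe."""
    return a * sigmoid(f_cnt) * fi  # broadcasting expands the 1-channel F_cnt over Fi

fi = np.ones((1, 256, 4, 4), dtype=np.float32)    # toy feature map Fi
f_cnt = np.zeros((1, 1, 4, 4), dtype=np.float32)  # toy center point prediction map
fe = enhance(fi, f_cnt)                           # sigmoid(0) = 0.5, so every element is 2.0 * 0.5 = 1.0
```

Because the Sigmoid output lies in (0, 1), the gating attenuates background regions and the coefficient a restores (or boosts) the magnitude of foreground features.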
(4) A feature extraction structure G3 (composed of four 3×3 convolution layers) is added after the enhanced feature map Fe to obtain the class probability prediction feature map F_cls. A category loss function (cross-entropy loss) is added after the F_cls output only for the source domain data features;
(5) An LMMD loss from the domain-adaptive classification algorithm is added; the enhanced feature map Fe and the class probability prediction feature map F_cls corresponding to the source domain data features Fi_s and the target domain data features Fi_t of each layer are respectively input into the connected LMMD loss.
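For orientation, a simplified NumPy-only sketch of an LMMD-style loss is given below. It computes a kernel MMD weighted per class, using one-hot labels on the source side and predicted class probabilities on the target side. A single Gaussian kernel is assumed here, whereas practical LMMD implementations typically use a multi-kernel variant, so this is an illustration rather than the patent's exact loss:

```python
import numpy as np

def gaussian_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Pairwise Gaussian kernel matrix between the row vectors of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lmmd(xs, ys, xt, pt, gamma: float = 1.0) -> float:
    """Class-conditionally weighted MMD (a simplified LMMD).
    xs: (Ns, D) source features, ys: (Ns, C) one-hot source labels,
    xt: (Nt, D) target features, pt: (Nt, C) predicted target class probabilities."""
    k_ss = gaussian_kernel(xs, xs, gamma)
    k_tt = gaussian_kernel(xt, xt, gamma)
    k_st = gaussian_kernel(xs, xt, gamma)
    loss = 0.0
    num_classes = ys.shape[1]
    for c in range(num_classes):
        ws = ys[:, c] / max(ys[:, c].sum(), 1e-8)  # per-class source weights
        wt = pt[:, c] / max(pt[:, c].sum(), 1e-8)  # per-class target weights
        loss += ws @ k_ss @ ws + wt @ k_tt @ wt - 2.0 * ws @ k_st @ wt
    return loss / num_classes

xs = np.array([[0.0, 0.0], [1.0, 1.0]])
ys = np.array([[1.0, 0.0], [0.0, 1.0]])
aligned = lmmd(xs, ys, xs, ys)        # identical domains: essentially zero
shifted = lmmd(xs, ys, xs + 5.0, ys)  # shifted target domain: clearly positive
```

Minimizing such a loss pulls the class-conditional feature distributions of the source domain (sunny) and target domain (rainy) together, which is what allows training without rainy-day labels.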
The training and prediction of the weather-adaptive target detection network model according to the invention proceed as follows:
Model training: pictures from the source domain and the target domain are processed in parallel (each batch of input data contains equal amounts of source domain and target domain data) and input into the feature extraction network; after FPN processing, five levels of feature maps Fi (source domain data features Fi_s and target domain data features Fi_t) are output, with i in [3, 4, 5, 6, 7] corresponding to F3 through F7, as shown in FIG. 3. The feature maps are then input into the three feature extraction structures G1, G2, and G3. After feature extraction, the regression loss, center point loss, and category loss are computed from the source domain results; the enhanced feature map Fe and class probability prediction feature map F_cls corresponding to the source domain and target domain of each layer are input into the connected LMMD loss. Finally, the losses of all layers (classification loss, center point prediction loss, IoU loss, and LMMD loss) are accumulated, gradients are backpropagated, and the parameters are updated, yielding the model parameter file.
Model prediction: during inference, the input picture is fed directly into the network; features are first extracted, then passed through the FPN layer, and input into the feature extraction structures G1, G2, and G3 to obtain the category prediction, center point prediction, and bounding box prediction. The prediction results are then mapped back to the original image, redundant bounding boxes are removed by non-maximum suppression (NMS), and the detection image is output. The prediction flow is shown in FIG. 3.
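The redundant-box removal step is standard greedy non-maximum suppression. A minimal NumPy sketch is shown below, with boxes as (x1, y1, x2, y2) corners and an assumed IoU threshold of 0.5:

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """IoU of one box against an array of boxes; boxes are (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it
    above iou_thr, and repeat with the remainder."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thr]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # the second box overlaps the first (IoU 0.81) and is suppressed
```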
The invention can detect both static and dynamic targets. Used on an unmanned vehicle to detect pedestrians, vehicles, and other objects, its detection accuracy is unaffected by weather changes. Specifically, the weather-adaptive target detection method is introduced below through unmanned vehicle target detection, comprising the following steps:
the method comprises the following steps: collecting data and making a data set:
firstly, respectively acquiring clear-weather video data and rainy-weather video data by using a camera on the unmanned vehicle; then compiling a script to process the video into pictures; then, only image data in sunny days are labeled by using a labelme labeling tool, a labeled data set in sunny days is used as a source domain data set S, and a non-labeled data set in rainy days is used as a target domain data set T;
step two: putting the source domain data set S and the target domain data set T into a weather self-adaptive target detection network model for training to obtain a weight parameter file of the weather self-adaptive target detection network model;
step three: loading a weight file of the target detection network model to obtain a weather self-adaptive target detection network model M;
step four: extracting a picture from a video stream acquired by a camera according to a set frame number (extracting a picture from every 3 frames), inputting the picture into the weather adaptive target detection network model M for prediction, and displaying a prediction result.
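The "one picture every 3 frames" sampling in step four reduces to a simple index stride. The sketch below shows only that sampling logic; in practice the frames would be read with a video library such as OpenCV, which is omitted here:

```python
def sampled_frame_indices(total_frames: int, step: int = 3) -> list:
    """Indices of the frames to extract when taking one picture every `step` frames."""
    return list(range(0, total_frames, step))

indices = sampled_frame_indices(10, step=3)  # [0, 3, 6, 9]
```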
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (10)

1. A weather-adaptive target detection network model, characterized in that:
a five-layer feature map Fi, with i in [3, 4, 5, 6, 7], contains the source domain data features Fi_s and the target domain data features Fi_t;
two feature extraction structures G1 and G2 are first connected in parallel after the feature map Fi, the feature output by G1 being the bounding box prediction map F_reg and the feature output by G2 being the center point prediction map F_cnt; the bounding box prediction map F_reg is connected to the bounding box regression loss, and the center point prediction feature map F_cnt is connected to the center point prediction loss;
the center point prediction feature map F_cnt is activated by a Sigmoid activation function, multiplied element-wise with the corresponding feature map Fi, and scaled by a coefficient a to obtain the enhanced feature map Fe;
a feature extraction structure G3 is added after the enhanced feature map Fe to obtain the class probability prediction feature map F_cls; a class loss function is added after the class probability prediction map F_cls;
and an LMMD loss from a domain-adaptive classification algorithm is added, the enhanced feature map Fe and the class probability prediction feature map F_cls corresponding to the source domain data features Fi_s and the target domain data features Fi_t of each layer being respectively input into the connected LMMD loss.
2. The weather-adaptive target detection network model of claim 1, wherein:
the feature map Fi containing the source domain data features Fi _ s and the target domain data features Fi _ t is derived from five-level features of the feature extraction network output before the FCOS network head.
3. The weather-adaptive target detection network model of claim 1, wherein: the three feature extraction structures G1, G2, and G3 have the same convolution layer structure.
4. The weather-adaptive target detection network model of claim 1, wherein: each of the three feature extraction structures G1, G2, and G3 consists of four 3×3 convolution layers.
5. The weather-adaptive target detection network model of claim 1, wherein: the model training process is as follows:
pictures from the source domain data set and the target domain data set are processed in parallel and input into the feature extraction network; after FPN processing, a five-layer feature map Fi containing the source domain data features Fi_s and the target domain data features Fi_t is output; the source domain data set corresponds to the labeled sunny-day image data set, and the target domain data set corresponds to the unlabeled rainy-day image data set, both derived from sunny-day and rainy-day video data;
the feature maps are then input into the three feature extraction structures G1, G2, and G3; after feature extraction, the regression loss, center point loss, and category loss are computed from the source domain results; the enhanced feature map Fe and the class probability prediction feature map F_cls corresponding to the source domain and target domain of each layer are then input into the connected LMMD loss; finally the losses of all layers are accumulated, gradients are backpropagated, and the parameters are updated to obtain the model parameter file.
6. The weather-adaptive target detection network model of claim 1, wherein each batch of input data during the model training process contains the same amount of source domain and target domain data.
7. The weather-adaptive target detection network model of claim 1, wherein during model training the total loss equals the sum of the regression loss, the center point loss, and the class loss.
8. The weather-adaptive target detection network model of claim 1, wherein: the model performs target prediction as follows:
an input picture is fed directly into the network; features are first extracted, then passed through the FPN layer, and the extracted features are input into the feature extraction structures G1, G2, and G3 to obtain prediction results containing the category prediction, center point prediction, and bounding box prediction; the prediction results are then mapped back to the original image, redundant bounding boxes are removed by non-maximum suppression (NMS), and the target detection image is output.
9. A weather-adaptive target detection method, characterized by comprising the following steps:
Step one: collect data and build the data sets:
first, use a vision acquisition device to separately collect sunny-day video data and rainy-day video data, then write a script to convert the videos into pictures; then label only the sunny-day image data with the labelme labeling tool, taking the labeled sunny-day image data set as the source domain data set and the unlabeled rainy-day image data set as the target domain data set;
Step two: feed the source domain data set and the target domain data set into the weather-adaptive target detection network model for training to obtain the model's weight parameter file;
Step three: load the weight parameter file to obtain the detection network model;
Step four: extract a picture from the video stream captured by the vision acquisition device at a set frame interval, input it into the detection network model for prediction, and display the prediction result.
10. The weather-adaptive target detection method of claim 9, wherein the visual capture device is a camera on an unmanned vehicle.
CN202111000940.7A 2021-08-30 2021-08-30 Weather self-adaptive target detection system and method Active CN113780524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111000940.7A CN113780524B (en) 2021-08-30 2021-08-30 Weather self-adaptive target detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111000940.7A CN113780524B (en) 2021-08-30 2021-08-30 Weather self-adaptive target detection system and method

Publications (2)

Publication Number Publication Date
CN113780524A true CN113780524A (en) 2021-12-10
CN113780524B CN113780524B (en) 2024-02-13

Family

ID=78839815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111000940.7A Active CN113780524B (en) 2021-08-30 2021-08-30 Weather self-adaptive target detection system and method

Country Status (1)

Country Link
CN (1) CN113780524B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576658A (en) * 2023-11-16 2024-02-20 南京大学 Airport runway foreign matter detection intelligent early warning implementation method based on vision

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978036A (en) * 2019-03-11 2019-07-05 华瑞新智科技(北京)有限公司 Target detection deep learning model training method and object detection method
CN109977918A (en) * 2019-04-09 2019-07-05 华南理工大学 A kind of target detection and localization optimization method adapted to based on unsupervised domain
CN110210561A (en) * 2019-05-31 2019-09-06 北京市商汤科技开发有限公司 Training method, object detection method and device, the storage medium of neural network
CN112183788A (en) * 2020-11-30 2021-01-05 华南理工大学 Domain adaptive equipment operation detection system and method
US20210174149A1 (en) * 2018-11-20 2021-06-10 Xidian University Feature fusion and dense connection-based method for infrared plane object detection
CN113177549A (en) * 2021-05-11 2021-07-27 中国科学技术大学 Few-sample target detection method and system based on dynamic prototype feature fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHIJIAN QU et al.: "Scale Self-Adaption Tracking Method of Defog-PSA-Kcf Defogging and Dimensionality Reduction of Foreign Matter Intrusion Along Railway Lines", IEEE Access, pages 126720-126733 *


Also Published As

Publication number Publication date
CN113780524B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN114241282B (en) Knowledge distillation-based edge equipment scene recognition method and device
CN109815903B (en) Video emotion classification method based on self-adaptive fusion network
CN110458077B (en) Vehicle color identification method and system
CN112434723B (en) Day/night image classification and object detection method based on attention network
CN112529931B (en) Method and system for foreground segmentation
CN114005085A (en) Dense crowd distribution detection and counting method in video
CN112163508A (en) Character recognition method and system based on real scene and OCR terminal
CN114170570A (en) Pedestrian detection method and system suitable for crowded scene
CN113780524B (en) Weather self-adaptive target detection system and method
CN113378830B (en) Autonomous learning data tag generation method based on domain adaptation
CN111242870A (en) Low-light image enhancement method based on deep learning knowledge distillation technology
CN117115614B (en) Object identification method, device, equipment and storage medium for outdoor image
CN114359167A (en) Insulator defect detection method based on lightweight YOLOv4 in complex scene
CN117115616A (en) Real-time low-illumination image target detection method based on convolutional neural network
CN116912673A (en) Target detection method based on underwater optical image
CN110929632A (en) Complex scene-oriented vehicle target detection method and device
CN116597424A (en) Fatigue driving detection system based on face recognition
CN115588130A (en) Cross-domain YOLO detection method based on domain self-adaptation
CN113255514B (en) Behavior identification method based on local scene perception graph convolutional network
CN114882469A (en) Traffic sign detection method and system based on DL-SSD model
CN112949424B (en) Neuromorphic visual sampling method and device
CN114092746A (en) Multi-attribute identification method and device, storage medium and electronic equipment
CN112348823A (en) Object-oriented high-resolution remote sensing image segmentation algorithm
CN112070048A (en) Vehicle attribute identification method based on RDSNet
CN116189132B (en) Training method for target detection model of road information, target detection method and device

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant