CN112766092A - Method for quickly identifying background category based on brain-like neural network and application thereof - Google Patents

Method for quickly identifying background category based on brain-like neural network and application thereof

Info

Publication number
CN112766092A
CN112766092A (application CN202110007074.8A)
Authority
CN
China
Prior art keywords
target
neural network
background
brain
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110007074.8A
Other languages
Chinese (zh)
Inventor
张弘
张恺
陈浩
杨一帆
袁丁
李亚伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202110007074.8A priority Critical patent/CN112766092A/en
Publication of CN112766092A publication Critical patent/CN112766092A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/13 — Satellite images (Scenes; scene-specific elements; terrestrial scenes)
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 — Combinations of networks
    • G06N 3/048 — Activation functions
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components


Abstract

The invention belongs to the field of image processing and discloses a method for quickly identifying background categories based on a brain-like neural network, together with an application thereof. By introducing the brain-like neural network, the method improves the model's accuracy in distinguishing targets from background, omits gradient back-propagation, and accelerates processing. The method is suited to fast target-background identification in images, and in particular to target-background identification over the surveillance area of an unmanned aerial vehicle.

Description

Method for quickly identifying background category based on brain-like neural network and application thereof
Technical Field
The invention belongs to the field of image processing, and relates to a background category rapid identification method and application thereof, in particular to a background category rapid identification method based on a brain-like neural network and application thereof.
Background
Object detection is a widely applied and mature technology in the field of pattern recognition: it identifies, within an image, the positions of objects belonging to certain categories and the corresponding categories. Object detection extracts target features and background information, through learning, from a data set consistent with the real scene, and builds a model to recognize the real scene. The technology is widely applied in intelligent monitoring, human-computer interaction, intelligent transportation, unmanned aerial vehicles and other fields.
In recent years object detection has developed very rapidly and many excellent detection algorithms have emerged, but these techniques can only detect the object classes of a closed-set model; they cannot recognize classes unseen by the closed set, so the background region cannot be accurately identified.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a method for quickly identifying the background category based on a brain-like neural network, so as to identify the background region of an image accurately and quickly.
The invention further provides an application of this method for quickly identifying the background category based on a brain-like neural network.
To achieve this purpose, the technical scheme adopted by the invention is as follows:
a background category rapid identification method based on a brain-like neural network comprises a training stage and an identification stage;
the training phase mainly comprises the following steps:
a1, inputting the training picture into a convolutional neural network for feature extraction, and establishing a target feature database;
a2, inputting live-action pictures to the convolutional neural network periodically, comparing the live-action pictures with a target characteristic database, and updating characteristic elements of the target characteristic database in the convolutional neural network by using a loss formula;
the identification phase mainly comprises the following steps:
b1, inputting the real-time picture shot by the unmanned aerial vehicle into the trained convolutional neural network for feature recognition;
b2, comparing the identified features with the established target feature database, and screening out a target part and a background part;
b3, performing secondary classification on the target part by using a classification function to screen out a primary target and a secondary target;
b4, labeling and outputting the primary target, the secondary target and the background;
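The two-stage pipeline above (a1-a2 training, b1-b4 identification) can be sketched in a few lines. Everything concrete here — cosine matching against the feature database, the threshold values, and the class names — is an illustrative assumption, not the patent's actual implementation:

```python
# Hypothetical sketch of the training/identification pipeline.
# Matching rule (cosine similarity) and thresholds are assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BackgroundIdentifier:
    def __init__(self):
        self.feature_db = {}                       # target feature database (a1)

    def train(self, label, feature_vec):
        # a1: store the class feature extracted by the CNN backbone
        self.feature_db[label] = np.asarray(feature_vec, dtype=float)

    def identify(self, feature_vec, match_thresh=0.5, primary_thresh=0.5):
        # b2: compare against the database; anything unmatched is background
        feature_vec = np.asarray(feature_vec, dtype=float)
        best_sim = -1.0
        for ref in self.feature_db.values():
            sim = float(ref @ feature_vec /
                        (np.linalg.norm(ref) * np.linalg.norm(feature_vec) + 1e-9))
            best_sim = max(best_sim, sim)
        if best_sim < match_thresh:
            return "background"
        # b3: secondary (binary) classification via a sigmoid score
        return "primary" if sigmoid(best_sim) > primary_thresh else "secondary"
```

A query whose feature matches nothing in the database falls through to the background class, which is the open-set behavior the Disclosure describes.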
as a limitation of the invention, the pictures for training comprise a main target element labeling picture and a generalized target labeling picture;
as another limitation of the present invention, the loss formula is a cross-entropy loss formula
H = -Σ_i [y_i·log(p_i) + (1 - y_i)·log(q_i)]
Wherein, H is the calculated value of the cross entropy, namely the calculated loss value in the invention; p is the probability value predicted by the classification function; q is the complement of the predicted probability value (q = 1 - p); y_i is the true label of category i; and i is the subscript of each category;
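A minimal numeric sketch of the cross-entropy loss formula, under the assumption that y_i is the ground-truth label of category i and q_i = 1 − p_i is the complement of the predicted probability; the clipping constant `eps` is an added guard against log(0), not part of the patent:

```python
import math

def cross_entropy(y_true, p_pred, eps=1e-12):
    """H = -sum_i [y_i*log(p_i) + (1 - y_i)*log(q_i)], q_i = 1 - p_i."""
    h = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)   # keep log() finite
        h -= y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return h
```

For a two-category case with labels [1, 0] and predictions [0.9, 0.1], both terms contribute −log(0.9), so the loss is about 0.21.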
as a third limitation of the present invention, the classification function is a sigmoid function of
S(x) = 1 / (1 + e^(-x))
Wherein S represents a probability value predicted by a sigmoid classification function, and x is each element of an input one-dimensional classification characteristic vector;
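The sigmoid classification function can be checked in a few lines; applying it element-wise to the one-dimensional classification feature vector follows from the "each element" wording above:

```python
import math

def sigmoid(x):
    # S = 1 / (1 + e^(-x)), applied element-wise to the
    # one-dimensional classification feature vector
    return [1.0 / (1.0 + math.exp(-xi)) for xi in x]
```

S(0) = 0.5, and the output saturates toward 1 and 0 for large positive and negative inputs, which is what makes it usable as a two-class screening score in step b3.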
the invention also provides an application of the background category rapid identification method based on the brain-like neural network, wherein the method is applied to target-background identification of the unmanned aerial vehicle on the monitoring area;
due to the adoption of the technical scheme, compared with the prior art, the invention has the following beneficial effects:
(1) in the method for quickly identifying the background category based on the brain-like neural network, the brain-like neural network is introduced to learn generalized target categories, giving the model the two-class detection capability of locating an object and judging whether it is a target; the background region is thereby delimited, interference from objects of other categories is avoided, and the prediction of the background category is more accurate;
(2) the method replaces the back-propagation neurons used in traditional recognition methods with spiking neurons, omitting the gradient back-propagation step and improving the speed of target detection;
In summary, the method for rapidly identifying the background category based on the brain-like neural network provided by the invention enhances the accuracy of the model in distinguishing the target from the background by introducing the brain-like neural network, omits gradient back-propagation, and accelerates the processing speed of the model.
The method is suitable for quickly identifying the target-background of the image, and is particularly suitable for identifying the target-background of the monitoring area of the unmanned aerial vehicle.
Drawings
The invention is described in further detail below with reference to the figures and the embodiments.
FIG. 1 is a logic block diagram of the method for rapidly identifying a background category based on a brain-like neural network;
FIG. 2 is a labeled diagram of the main target elements for training in accordance with the present invention;
FIG. 3 is a labeled diagram of all essential target elements of the monitored area;
FIG. 4 is a live-action view taken by the unmanned aerial vehicle according to the embodiment of the present invention;
FIG. 5 is a target-background labeling diagram output by an embodiment of the present invention.
In fig. 5: 1. a primary target; 2. a secondary target; 3. a background region.
Detailed Description
Preferred embodiments of the present invention will be described below with reference to the accompanying drawings. It should be understood that the description of the preferred embodiment is only for purposes of illustration and understanding, and is not intended to limit the invention.
Embodiment of the invention relates to a method for quickly identifying background categories based on a brain-like neural network
This embodiment implements the provided method for quickly identifying the background category based on a brain-like neural network using an unmanned aerial vehicle, a monitoring camera mounted below it, and an image processing module; the logic block diagram of the method is shown in FIG. 1.
Before the image processing module is carried on the unmanned aerial vehicle, firstly, the image processing module is trained according to the following steps:
a1, inputting, into the convolutional neural network of the image processing module, both the labeled training pictures carrying the visual information of overall shape, outline and partial regional background (FIG. 2) and the labeled pictures of all essential target elements of the monitoring region (FIG. 3), so as to learn and extract target category features, and establishing a target feature database. Here a category feature is the one-dimensional feature obtained by passing the feature map — produced by convolving the input picture with the model learned by the convolutional neural network — through the fully connected classification layers;
a2, inputting live-action pictures of the monitoring area, collected over one week, into the convolutional neural network of the image processing module, comparing them with the target category features extracted in a1 through the cross-entropy loss function, and updating the feature elements in the database;
the cross entropy loss function calculation process is as follows:
L = -[ŷ·log(y) + (1 - ŷ)·log(1 - y)]
wherein L is the loss value, y is the class prediction, ŷ is the category true value, and log is the logarithm with base 2;
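The base-2 binary cross-entropy of this embodiment can be sketched directly from its definition; the clipping constant `eps` is an added assumption to keep the logarithm finite:

```python
import math

def loss(y_hat, y, eps=1e-12):
    # L = -[ y_hat*log2(y) + (1 - y_hat)*log2(1 - y) ]
    # y_hat: category true value (0 or 1); y: class prediction in (0, 1)
    y = min(max(y, eps), 1.0 - eps)
    return -(y_hat * math.log2(y) + (1.0 - y_hat) * math.log2(1.0 - y))
```

With base-2 logarithms the loss is measured in bits: a completely uncertain prediction of 0.5 costs exactly 1 bit regardless of the true label.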
Before the image processing module is first mounted on the unmanned aerial vehicle, it is trained with step a1; thereafter, before each monitoring flight, only step a2 needs to be carried out to update the target element features of the monitoring area.
The unmanned aerial vehicle is controlled to fly steadily at an altitude below 1 kilometer above the ground, free of strong-light and haze interference. During flight, the camera mounted below the vehicle photographs the area and uploads the obtained images to the image processing module, which performs target-background discrimination as follows:
b1, inputting the uploaded real-time picture (figure 4) into a trained convolutional neural network, and extracting characteristic elements in the real-time picture;
b2, comparing the extracted characteristic elements with an established target element database in the convolutional neural network, and screening out a target part and a background part;
b3, performing secondary classification on the screened target part through the sigmoid function, dividing it into a primary target and a secondary target;
the sigmoid function is calculated as follows:
y = 1 / (1 + e^(-x))
wherein x is the input vector, y is the output vector, and e is the base of the natural logarithm;
b4, marking the background part, the primary target part and the secondary target part with bounding boxes of different colors, uploading them to the display screen of the monitoring terminal for display; the output result is shown in FIG. 5;
wherein step b3 is repeated until the screening result is the same as that of the previous pass, after which the labeled output of b4 is produced.
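The stop rule just described — repeat the secondary classification until two consecutive screenings agree — can be sketched as a simple fixed-point loop; `classify_pass` and `max_iters` are illustrative names, not from the patent:

```python
# Hypothetical sketch of the b3 stabilisation rule: repeat the secondary
# classification until two consecutive passes agree, then hand off to b4.
def classify_until_stable(targets, classify_pass, max_iters=100):
    prev = None
    for _ in range(max_iters):
        current = classify_pass(targets)
        if current == prev:          # same result as the last screening
            return current
        prev = current
    return prev                      # give up after max_iters passes
```

The `max_iters` cap is a safety assumption so that a classifier that never stabilises cannot loop forever.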
Although the present invention has been described in detail with reference to the above embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described above, or equivalents may be substituted for elements thereof. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A method for quickly identifying a background category based on a brain-like neural network, the method being carried in an image processing module and implemented, through a convolutional neural network therein, by a training stage and a recognition stage, characterized in that:
the training phase mainly comprises the following steps:
a1, inputting the training picture into a convolutional neural network for feature extraction, and establishing a target feature database;
a2, inputting live-action pictures to the convolutional neural network periodically, comparing the live-action pictures with a target characteristic database, and updating characteristic elements of the target characteristic database in the convolutional neural network by using a loss formula;
the identification phase mainly comprises the following steps:
b1, inputting the real-time picture shot by the unmanned aerial vehicle into the trained convolutional neural network for feature recognition;
b2, comparing the identified features with the established target feature database, and screening out a target part and a background part;
b3, performing secondary classification on the target part by using a classification function to screen out a primary target and a secondary target;
b4, labeling and outputting the primary target, the secondary target and the background.
2. The method for rapidly identifying the background category based on the brain-like neural network as claimed in claim 1, wherein: the loss formula is a cross entropy loss formula
H = -Σ_i [y_i·log(p_i) + (1 - y_i)·log(q_i)]   (formula I)
Wherein, H is the calculated value of the cross entropy, namely the calculated loss value in the invention; p is the probability value predicted by the classification function; q is the complement of the predicted probability value (q = 1 - p); y_i is the true label of category i; and i is the subscript of each category.
3. The method for rapidly identifying the background category based on the brain-like neural network according to claim 1 or 2, wherein: the classification function is a sigmoid function as follows
S = 1 / (1 + e^(-x))   (formula II)
Wherein S represents the probability value predicted by the sigmoid classification function, and x is each element of the input one-dimensional classification feature vector.
4. An application of the method for rapidly identifying the background category based on the brain-like neural network according to any one of claims 1 to 3, characterized in that: the method is applied to target-background recognition of a monitoring area by an unmanned aerial vehicle.
CN202110007074.8A 2021-01-05 2021-01-05 Method for quickly identifying background category based on brain-like neural network and application thereof Pending CN112766092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110007074.8A CN112766092A (en) 2021-01-05 2021-01-05 Method for quickly identifying background category based on brain-like neural network and application thereof


Publications (1)

Publication Number Publication Date
CN112766092A true CN112766092A (en) 2021-05-07

Family

ID=75699194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110007074.8A Pending CN112766092A (en) 2021-01-05 2021-01-05 Method for quickly identifying background category based on brain-like neural network and application thereof

Country Status (1)

Country Link
CN (1) CN112766092A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100272366A1 (en) * 2009-04-24 2010-10-28 Sony Corporation Method and device of detecting object in image and system including the device
US20160342837A1 (en) * 2015-05-19 2016-11-24 Toyota Motor Engineering & Manufacturing North America, Inc. Apparatus and method for object tracking
JP6407467B1 (en) * 2018-05-21 2018-10-17 株式会社Gauss Image processing apparatus, image processing method, and program
CN108875803A (en) * 2018-05-30 2018-11-23 长安大学 A kind of detection of harmful influence haulage vehicle and recognition methods based on video image
CN109993082A (en) * 2019-03-20 2019-07-09 上海理工大学 The classification of convolutional neural networks road scene and lane segmentation method
CN110826612A (en) * 2019-10-31 2020-02-21 上海法路源医疗器械有限公司 Training and identifying method for deep learning
CN110929774A (en) * 2019-11-18 2020-03-27 腾讯科技(深圳)有限公司 Method for classifying target objects in image, method and device for training model


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569683A (en) * 2021-07-20 2021-10-29 上海明略人工智能(集团)有限公司 Scene classification method, system, device and medium combining salient region detection
CN113569683B (en) * 2021-07-20 2024-04-02 上海明略人工智能(集团)有限公司 Scene classification method, system, equipment and medium combined with salient region detection

Similar Documents

Publication Publication Date Title
CN110070008B (en) Bridge disease identification method adopting unmanned aerial vehicle image
CN108388927B (en) Small sample polarization SAR terrain classification method based on deep convolution twin network
CN110070530B (en) Transmission line icing detection method based on deep neural network
CN111640101B (en) Ghost convolution characteristic fusion neural network-based real-time traffic flow detection system and method
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
CN108039044B (en) Vehicle intelligent queuing system and method based on multi-scale convolutional neural network
CN109598268A (en) A kind of RGB-D well-marked target detection method based on single flow depth degree network
CN106096602A (en) Chinese license plate recognition method based on convolutional neural network
CN104517103A (en) Traffic sign classification method based on deep neural network
CN104598924A (en) Target matching detection method
He et al. A robust method for wheatear detection using UAV in natural scenes
CN110569779A (en) Pedestrian attribute identification method based on pedestrian local and overall attribute joint learning
CN109255286A (en) A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
US11449707B2 (en) Method for processing automobile image data, apparatus, and readable storage medium
CN108345900B (en) Pedestrian re-identification method and system based on color texture distribution characteristics
CN114627447A (en) Road vehicle tracking method and system based on attention mechanism and multi-target tracking
CN113435407B (en) Small target identification method and device for power transmission system
CN103336971A (en) Target matching method among multiple cameras based on multi-feature fusion and incremental learning
CN111680705A (en) MB-SSD method and MB-SSD feature extraction network suitable for target detection
CN113255634A (en) Vehicle-mounted mobile terminal target detection method based on improved Yolov5
CN116563410A (en) Electrical equipment electric spark image generation method based on two-stage generation countermeasure network
CN114359167A (en) Insulator defect detection method based on lightweight YOLOv4 in complex scene
CN110618129A (en) Automatic power grid wire clamp detection and defect identification method and device
CN112766092A (en) Method for quickly identifying background category based on brain-like neural network and application thereof
CN110334703B (en) Ship detection and identification method in day and night image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507