CN113592031A - Image classification system, violation tool identification method and device - Google Patents

Image classification system, violation tool identification method and device Download PDF

Info

Publication number
CN113592031A
CN113592031A (application CN202110945015.5A)
Authority
CN
China
Prior art keywords
layer
image
features
module
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110945015.5A
Other languages
Chinese (zh)
Other versions
CN113592031B (en)
Inventor
张屹
张国梁
杜泽旭
卢卫疆
赵婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Global Energy Interconnection Research Institute
Zaozhuang Power Supply Co of State Grid Shandong Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Global Energy Interconnection Research Institute
Zaozhuang Power Supply Co of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Global Energy Interconnection Research Institute, Zaozhuang Power Supply Co of State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202110945015.5A priority Critical patent/CN113592031B/en
Publication of CN113592031A publication Critical patent/CN113592031A/en
Application granted granted Critical
Publication of CN113592031B publication Critical patent/CN113592031B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an image classification system, a violation tool identification method and a violation tool identification device. The image classification system comprises a hierarchical feature extraction module, a hierarchical generation model and a hierarchical classification module, wherein the hierarchical generation model comprises at least one generation module. The hierarchical feature extraction module extracts image features from a target image. If a generation module is not the first-layer generation module in the hierarchical generation model, it determines the current-layer generated features from the image features and forms the current-layer artificial features by taking the upper-layer artificial features output by the upper-layer generation module as a base value and the current-layer generated features as an offset. The hierarchical classification module outputs an image classification result according to the current-layer artificial features. Using the upper-layer artificial features output by the generation module of the layer above as the base value reduces the domain drift of the current-layer artificial features computed by the current generation module, so that the final classification result is more accurate.

Description

Image classification system, violation tool identification method and device
Technical Field
The invention relates to the technical field of image classification, in particular to an image classification system, a violation tool identification method and a violation tool identification device.
Background
Training a conventional image classification algorithm, such as a deep convolutional neural network, requires a large amount of labeled image data, and recognition accuracy drops when a new class appears. To address this, researchers at home and abroad have proposed a number of zero-sample (zero-shot) classification methods. In the zero-shot classification scenario, the classes in the training set are called known (seen) classes, the classes in the test set are unknown (unseen) classes, and the two sets of classes do not overlap; the training set contains image data of known classes only. Current zero-shot classification methods fall mainly into three categories: embedding-based methods, generative-model-based methods and knowledge-graph-based methods. Embedding-based methods learn a mapping function that maps auxiliary information (such as attribute vectors and word vectors) and visual features into a common space; at test time, the test image features are mapped into this space and the nearest label is taken as the classification result. This approach is prone to the hubness ('pivot point') problem. To address it, generative models were introduced into the zero-shot classification field, using auxiliary information to generate artificial features for the unknown classes and thereby alleviate the data imbalance problem.
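As an illustration of the embedding-based baseline described above (this is a generic sketch of that prior-art approach, not the scheme proposed by this invention; the projection matrix W and the attribute dictionary are assumed inputs):

```python
import numpy as np

def embedding_zero_shot_classify(image_feature, class_attributes, W):
    """Map a test image feature into the attribute space and return the nearest class label.

    image_feature:    (d,) visual feature of the test image
    class_attributes: dict {class_name: (k,) attribute or word vector} for the unseen classes
    W:                (k, d) projection learned on the seen classes
    """
    projected = W @ image_feature  # visual feature mapped into the common (attribute) space
    best_label, best_score = None, float("-inf")
    for label, attr in class_attributes.items():
        # cosine similarity as the nearest-label criterion
        score = float(attr @ projected) / (np.linalg.norm(attr) * np.linalg.norm(projected) + 1e-12)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```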
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defect of inaccurate classification result obtained by calculation based on a generated model in the prior art, so that an image classification system, a violation tool identification method and a violation tool identification device are provided.
A first aspect of the present invention provides an image classification system, comprising: a hierarchical feature extraction module, a hierarchical generation model and a hierarchical classification module, wherein the hierarchical generation model comprises at least one generation module; the hierarchical feature extraction module is used for extracting image features from a target image and transmitting the image features to the generation module; if the generation module is not the first-layer generation module in the hierarchical generation model, the generation module is used for determining the current-layer generated features according to the image features, forming the current-layer artificial features by taking the upper-layer artificial features output by the upper-layer generation module as a base value and the current-layer generated features as an offset, and transmitting the current-layer artificial features to the hierarchical classification module; and the hierarchical classification module is used for outputting an image classification result according to the current-layer artificial features.
Optionally, in the image classification system provided by the present invention, the hierarchical classification module includes at least one layer of classifier, the classifiers correspond to the generation module one to one, and the generation module transmits the current-layer artificial features to the classifier corresponding to the generation module in the hierarchical classification module; if the classifier is not the first-layer classifier in the hierarchical classification module, the classifier is used for obtaining a current-layer classification result according to the current-layer artificial features and obtaining an image classification result according to the current-layer classification result and an upper-layer classification result obtained by the previous-layer classifier.
Optionally, in the image classification system provided by the present invention, the classifier includes a plurality of node sets, each node set includes at least one node, and each node represents different image classification options; the node set of the classifier corresponds to the nodes of the classifier at the upper layer one by one, and the nodes in the node set are child nodes of the classifier at the upper layer corresponding to the node set.
Optionally, in the image classification system provided by the present invention, the hierarchical feature extraction module includes a basic feature extraction network and at least one layer of branch network, the branch networks correspond to the generation module one to one, and the basic feature extraction network is configured to extract a basic feature set according to the target image and transmit the basic feature set to the branch networks; if the branch network is not the first layer branch network in the layer feature extraction module, the branch network is used for obtaining an upper layer classification result obtained by an upper layer classifier, calculating the weight of each feature in the basic feature set according to the upper layer classification result, calculating the current layer feature according to the weight of the basic feature set and each feature, taking the current layer feature as an image feature, and transmitting the image feature to a generation module corresponding to the branch network.
Optionally, in the image classification system provided by the present invention, the number of levels of the generation module in the hierarchical generation model is determined by the number of levels of the data set taxonomy structure corresponding to the target image.
Optionally, in the image classification system provided by the present invention, the branch network includes: the upper-layer classification result conversion submodule is used for expanding the upper-layer classification result into a visual characteristic dimension parameter; the abstract conversion submodule is used for converting the features in the basic feature set into an abstract space to obtain abstract features; the weight calculation submodule inputs the visual characteristic dimension parameters and the abstract characteristics into an attention network to obtain the weight of each characteristic in the basic characteristic set; the visual characteristic calculation submodule is used for executing dot product operation on the weight and the abstract characteristic of each characteristic to obtain local characteristics; the global feature calculation module is used for calculating global features according to the basic features; and the current layer feature calculation module is used for superposing the local features and the global features to obtain current layer features.
Optionally, in the image classification system provided by the present invention, the generating module generates the current-layer artificial feature according to the following formula: f'_{g,l} = (1-α) × f_{g,l} + α × f'_{g,(l-1)}, wherein f'_{g,(l-1)} represents the upper-layer artificial features, f_{g,l} represents the current-layer generated features, and α is a hyperparameter.
The second aspect of the invention provides a violation tool identification method, which comprises the following steps: acquiring an image to be classified; inputting the image to be classified into the image classification system provided by the first aspect of the invention to obtain the category of the tool in the image to be classified; and if the category of the tool in the image to be classified belongs to the preset violation tool category, judging that the tool is a violation tool.
A third aspect of the present invention provides a violation tool identification apparatus comprising: the image acquisition module is used for acquiring an image to be classified; the classification module is used for inputting the image to be classified into the image classification system provided by the first aspect of the invention to obtain the category of the tool in the image to be classified; and the violation tool identification module is used for judging that the tool is a violation tool if the category of the tool in the image to be classified belongs to the preset violation tool category.
A fourth aspect of the present invention provides a computer device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to perform the violation tool identification method as provided by the second aspect of the present invention.
A fifth aspect of the present invention provides a computer readable storage medium having stored thereon computer instructions for causing a computer to execute the violation tool identification method as provided by the second aspect of the present invention.
The technical scheme of the invention has the following advantages:
The invention provides an image classification system, a violation tool identification method and a device, comprising a hierarchical feature extraction module, a hierarchical generation model and a hierarchical classification module. The hierarchical feature extraction module acquires the image features of a target image, the hierarchical generation model generates artificial features according to the image features, and the hierarchical classification module outputs a classification result according to the current-layer artificial features. The hierarchical generation model comprises a plurality of layers of generation modules; when the generation module receiving the image features is not the first-layer generation module in the hierarchical generation model, the upper-layer artificial features output by the generation module of the layer above are used as a base value and the current-layer generated features produced by the generation module itself are used as an offset, so as to compute the current-layer artificial features. Using the upper-layer artificial features as the base value reduces the domain drift of the current-layer artificial features computed by the current generation module, thereby making the final classification result more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic block diagram of a specific example of an image classification system in an embodiment of the invention;
FIG. 2 is a functional block diagram of a specific example of a generation module in an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a specific example of an image classification system in an embodiment of the invention;
FIG. 4 is a diagram illustrating a data set classification structure according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a specific example of a hierarchical feature extraction module in an embodiment of the present invention;
FIG. 6 is a flowchart of computing image features in branch networks other than the first layer of branch network in the hierarchical feature extraction module according to an embodiment of the present invention;
FIG. 7 is a flow chart of one particular example of a violation tool identification methodology in an embodiment of the present invention;
FIG. 8 is a functional block diagram of one particular example of a violation tool identification device in an embodiment of the present invention;
fig. 9 is a functional block diagram of a computer device provided in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the technical features related to the different embodiments of the present invention described below can be combined with each other as long as they do not conflict with each other.
An embodiment of the present invention provides an image classification system, as shown in fig. 1, including a hierarchical feature extraction module, a hierarchical generation model and a hierarchical classification module, wherein the hierarchical generation model comprises at least one generation module.
The hierarchical feature extraction module is used for extracting image features according to the target image and transmitting the image features to the generation module.
In an optional embodiment, the target image includes an object to be classified, and the image classification system is configured to classify the object to be classified in the target image, so as to identify the object to be classified in the target image. In an alternative embodiment, the object to be classified may be an animal, a plant, a tool, or the like.
In an optional embodiment, the hierarchical feature extraction module extracts features of objects to be classified in the target image according to image features extracted from the target image, and the extracted features are different for different objects to be classified.
In an optional embodiment, after the image features are obtained by the hierarchical feature extraction module, the image features are transmitted to a generation module in the hierarchical generation model.
If the generation module receiving the image features is not the first layer generation module in the hierarchical generation model, the generation module is used for determining the current layer generation features according to the image features, taking the upper layer artificial features output by the upper layer generation module as basic values, taking the current layer generation features as offsets to form the current layer artificial features, and transmitting the current layer artificial features to the hierarchical classification model.
And the layer classification module is used for outputting an image classification result according to the artificial features of the current layer. In an optional embodiment, the hierarchical classification module includes a classifier, and the image classification result can be obtained through the classifier.
The image classification system provided by the embodiment of the invention comprises a hierarchical feature extraction module, a hierarchical generation model and a hierarchical classification module. The hierarchical feature extraction module acquires the image features of a target image, the hierarchical generation model generates artificial features according to the image features, and the hierarchical classification module outputs a classification result according to the current-layer artificial features. The hierarchical generation model comprises a plurality of layers of generation modules; when the generation module receiving the image features is not the first-layer generation module in the hierarchical generation model, the upper-layer artificial features output by the generation module of the layer above are used as a base value and the current-layer generated features produced by the generation module itself are used as an offset, so as to compute the current-layer artificial features. Using the upper-layer artificial features as the base value reduces the domain drift of the current-layer artificial features computed by the current generation module, thereby making the final classification result more accurate.
In an optional embodiment, in the hierarchical generation model, each generation module other than the first-layer generation module includes an adversarial generation network and a feature synthesis submodule. The adversarial generation network is used to generate the current-layer generated features, and the feature synthesis submodule is used to take the upper-layer artificial features output by the upper-layer generation module as a base value and the current-layer generated features as an offset to form the current-layer artificial features, which are then input into the hierarchical classification module.
In an optional embodiment, in the hierarchical generation model, the first-layer generation module comprises an adversarial generation network for generating the current-layer generated features.
In an optional embodiment, if the generation module that receives the image features is a first layer generation module in the hierarchical generation model, the current layer generation features determined by the generation module according to the image features are determined as current layer artificial features.
In an alternative embodiment, as shown in FIG. 2, the adversarial generation network includes a generator and a discriminator. The generator computes synthesized features from the class attribute vector (a_y) and noise randomly sampled from a normal distribution (N(0, 1)); the discriminator is used to judge whether an input feature is a real feature or a synthesized feature, and it guides the training of the generator when the adversarial generation network is trained. The current-layer generated features in the above embodiments are computed by the generator.
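For illustration, a conditional generator and discriminator of this kind could be sketched as follows; the layer sizes, dimensions and activations are assumptions, not values specified in the patent:

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Synthesizes a visual feature from a class attribute vector a_y and Gaussian noise."""
    def __init__(self, attr_dim, noise_dim, feat_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + noise_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, feat_dim), nn.ReLU(),
        )

    def forward(self, attr, noise):
        return self.net(torch.cat([attr, noise], dim=1))

class FeatureDiscriminator(nn.Module):
    """Judges whether a feature is real or synthesized, conditioned on the attribute vector."""
    def __init__(self, attr_dim, feat_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + feat_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, attr, feat):
        return self.net(torch.cat([attr, feat], dim=1))

# Usage: synthesize a batch of current-layer generated features for attribute vectors a_y.
attr_dim, noise_dim, feat_dim, batch = 85, 128, 2048, 16
G = FeatureGenerator(attr_dim, noise_dim, feat_dim)
a_y = torch.rand(batch, attr_dim)      # class attribute vectors
z = torch.randn(batch, noise_dim)      # noise sampled from N(0, 1)
f_gl = G(a_y, z)                       # current-layer generated features
```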
In an alternative embodiment, the current layer artificial features are calculated by the following formula:
f'_{g,l} = (1-α) × f_{g,l} + α × f'_{g,(l-1)}
wherein f'_{g,(l-1)} represents the upper-layer artificial features, f_{g,l} represents the current-layer generated features, and α is a hyperparameter.
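A minimal sketch of this synthesis step (the default value of the hyperparameter α below is an arbitrary assumption for illustration):

```python
def synthesize_current_layer(f_gl, f_prev, alpha=0.3):
    """Form the current-layer artificial feature: the upper-layer artificial feature f_prev
    acts as the base value and the current-layer generated feature f_gl as the offset."""
    return (1 - alpha) * f_gl + alpha * f_prev
```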
In an alternative embodiment, as shown in FIG. 3, the hierarchical classification module includes at least one layer of classifiers (Classifier_l), and the classifiers correspond to the generation modules one to one.
In an alternative embodiment, the number of levels of the generation module in the hierarchical generation model is determined by the number of levels of the taxonomic structure of the data set corresponding to the target image.
Illustratively, if the target image includes an animal, the data set taxonomy structure corresponding to the target image is constructed according to animal taxonomy. As shown in fig. 4, if the taxonomy structure of the data set includes three levels, where the first level includes the tiger shark family, the sperm whale family and the cat family, the second level includes tiger shark, sperm whale, cat and leopard, and the third level includes tiger shark, sperm whale, cat, tiger and leopard, then the number of levels of the classifiers and of the generation modules is 3.
In an optional embodiment, the classifier includes a plurality of node sets, each node set includes at least one node, and each node represents a different image classification option. The node set of the classifier corresponds to the nodes of the classifier at the upper layer one by one, and the nodes in the node set are child nodes of the classifier at the upper layer corresponding to the node set. The number of nodes corresponding to each classifier and the relationship between each node and the upper node are determined by the taxonomic structure of the data set corresponding to the target image.
If the taxonomy structure of the data set corresponding to the target image is as shown in fig. 4, then in the image classification system the first-layer classifier includes three nodes corresponding to the tiger shark family, the sperm whale family and the cat family, the second-layer classifier includes four nodes corresponding to tiger shark, sperm whale, cat and leopard, and the third-layer classifier includes five nodes corresponding to tiger shark, sperm whale, cat, tiger and leopard. Moreover, the tiger shark node in the second-layer classifier is a child node of the tiger shark family in the first-layer classifier, the sperm whale node in the second-layer classifier is a child node of the sperm whale family in the first-layer classifier, and the cat and leopard nodes in the second-layer classifier are child nodes of the cat family in the first-layer classifier; the tiger shark node in the third-layer classifier is a child node of the tiger shark node in the second-layer classifier, the sperm whale node in the third-layer classifier is a child node of the sperm whale node in the second-layer classifier, the cat and tiger nodes in the third-layer classifier are child nodes of the cat node in the second-layer classifier, and the leopard node in the third-layer classifier is a child node of the leopard node in the second-layer classifier.
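For illustration only, the FIG. 4 example can be represented as per-level node sets keyed by their parent node in the layer above; the structure below is a sketch, not a data format prescribed by the patent:

```python
# Node sets per classifier level, keyed by the parent node in the layer above ("root" for level 1).
taxonomy = {
    1: {"root": ["tiger shark family", "sperm whale family", "cat family"]},
    2: {"tiger shark family": ["tiger shark"],
        "sperm whale family": ["sperm whale"],
        "cat family": ["cat", "leopard"]},
    3: {"tiger shark": ["tiger shark"],
        "sperm whale": ["sperm whale"],
        "cat": ["cat", "tiger"],
        "leopard": ["leopard"]},
}

def level_labels(taxonomy, level):
    """Flat, ordered label list for the classifier at a given level."""
    return [node for nodes in taxonomy[level].values() for node in nodes]

assert level_labels(taxonomy, 3) == ["tiger shark", "sperm whale", "cat", "tiger", "leopard"]
```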
In the image classification system provided by the embodiment of the invention, the hierarchical structure of the hierarchical generation model and of the hierarchical classification module is constructed according to the data set taxonomy structure, so that when classifying the target image the system mines and utilizes the information shared between known classes and unknown classes in the knowledge base.
In an optional embodiment, in the image classification system provided in the embodiment of the present invention, after the generation module generates the current-layer artificial feature, a mode of determining an image classification result is as follows:
and the generation module transmits the artificial features of the current layer to a classifier corresponding to the generation module in the hierarchical classification module.
In an optional embodiment, the artificial features generated by each layer generation module are different, and the classification options of each layer classifier are also different when classifying the target object, so that the generation module needs to input the current layer artificial features into the classifier corresponding to the current layer artificial features.
If the classifier is not the first-layer classifier in the hierarchical classification module, the classifier is used for obtaining a current-layer classification result according to the current-layer artificial features and obtaining an image classification result according to the current-layer classification result and an upper-layer classification result obtained by the previous-layer classifier.
In an optional embodiment, each classifier corresponds to different classification options, and the output image classification result is a probability value for each classification option. If the current classifier is not the first-layer classifier, then when computing the probability value of each of its classification options it combines its own classification result with the probability values of the classification options of the upper-layer classifier; the resulting probability values of the current classifier's classification options constitute the image classification result obtained by the current classifier.
Illustratively, if the classification options of the upper-layer classifier are tiger shark, sperm whale, cat and leopard, and the classification result obtained by the lower-layer classifier gives probabilities for tiger shark, sperm whale, cat, tiger and leopard, then the lower-layer tiger shark probability is combined with the upper-layer tiger shark probability to obtain the final probability of tiger shark, the lower-layer sperm whale probability is combined with the upper-layer sperm whale probability to obtain the final probability of sperm whale, the lower-layer cat probability is combined with the upper-layer cat probability to obtain the final probability of cat, the lower-layer tiger probability is combined with the upper-layer cat probability (its parent node) to obtain the final probability of tiger, and the lower-layer leopard probability is combined with the upper-layer leopard probability to obtain the final probability of leopard.
In an optional embodiment, when the probability value of each classification option in the classifier's own result is combined with the probability value of that option's parent node, the combination may be a summation or an average.
In an optional embodiment, if the classifier is a first-layer classifier in the hierarchical classification module, the probability value of each classification option in the self classification result is determined as the image classification result.
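A small sketch of this parent-child combination step, using the FIG. 4 example (the probability values and the default averaging mode are illustrative assumptions):

```python
def combine_with_parent(current_probs, parent_probs, parent_of, mode="mean"):
    """Combine each current-layer option's probability with the probability of its parent
    node in the upper-layer classifier, by summation or averaging as described above."""
    combined = {}
    for label, p in current_probs.items():
        parent_p = parent_probs[parent_of[label]]
        combined[label] = p + parent_p if mode == "sum" else (p + parent_p) / 2.0
    return combined

# Usage with the FIG. 4 example: "tiger" has the upper-layer node "cat" as its parent.
parent_probs  = {"tiger shark": 0.1, "sperm whale": 0.1, "cat": 0.6, "leopard": 0.2}
current_probs = {"tiger shark": 0.05, "sperm whale": 0.05, "cat": 0.3, "tiger": 0.4, "leopard": 0.2}
parent_of = {"tiger shark": "tiger shark", "sperm whale": "sperm whale",
             "cat": "cat", "tiger": "cat", "leopard": "leopard"}
print(combine_with_parent(current_probs, parent_probs, parent_of))
```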
In the technical solution provided in the embodiment of the present invention, the classifier includes a plurality of node sets, each node set includes a plurality of nodes, and different nodes represent different image classification options; each node of a lower-layer classifier is a child node of a node of the adjacent upper-layer classifier. In other words, the dimensionality of the classification options is extended according to the data set taxonomy structure, and the larger the probability value of the classification option corresponding to a node in the previous-layer classifier, the more likely it is that the classification option with the highest probability value obtained by the current-layer classifier is one of its child nodes. In the embodiment of the invention, the target image is classified step by step, so the classification process is more refined and the obtained classification result is more accurate.
In an alternative embodiment, as shown in fig. 3, the hierarchical feature extraction module includes a basic feature extraction network and at least one layer of branch networks, and the branch networks correspond to the generation modules one to one. The method for determining the number of levels of the branch network is the same as the method for determining the number of levels of the generation module, and for details, reference is made to the above description of the embodiments, and details are not repeated here.
In an alternative embodiment, the base feature extraction network may employ the first 41 layers of ResNet-50.
The basic feature extraction network is used for extracting a basic feature set according to the target image and transmitting the basic feature set to the branch network. In an alternative embodiment, the set of underlying features input into all the branched networks is the same.
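As a sketch of one possible backbone: keeping conv1 through layer3 of torchvision's ResNet-50 is only an assumption about how the "first 41 layers" are counted, and it yields a feature map that the branch networks can treat as the basic feature set.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Truncated ResNet-50 as the basic feature extraction network (load pretrained weights in practice).
resnet = models.resnet50(weights=None)
backbone = nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2, resnet.layer3,
)

with torch.no_grad():
    basic_feature_map = backbone(torch.randn(1, 3, 224, 224))  # -> (1, 1024, 14, 14)
```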
If the branch network is not the first layer branch network in the layer feature extraction module, the branch network is used for obtaining an upper layer classification result obtained by an upper layer classifier, calculating the weight of each feature in the basic feature set according to the upper layer classification result, calculating the current layer feature according to the weight of the basic feature set and each feature, taking the current layer feature as an image feature, and transmitting the image feature to a generation module corresponding to the branch network.
As described in the above embodiments, different classifiers are used for different levels of classification, and the features they rely on emphasize different things at different levels; therefore, in the embodiment of the invention, different branch networks generate different image features, which are then input into the corresponding generation modules.
In an optional embodiment, because the upper-layer classification options and the current-layer classification options have a parent-child relationship, with the current-layer options being refinements of the upper-layer options, the weights of the features in the basic feature set can be calculated from the upper-layer classification result as follows: first determine the relevance of each feature to each upper-layer classification option, then give a higher weight to the features corresponding to the options with higher probability values in the upper-layer result, and a lower weight to the features corresponding to the options with lower probability values.
The image features obtained by the image classification system provided by the embodiment of the invention pay more attention to local features, and the classification results obtained by the classifiers of all layers have smaller deviation.
In an optional embodiment, except for the first layer of branch network in the hierarchical feature extraction module, the branch networks of other layers respectively include:
an upper-layer classification result conversion submodule for expanding the upper-layer classification result into visual characteristic dimension parameters
Figure BDA0003216452550000121
In an optional embodiment, the generated classification result may be input into a pre-trained neural network model, so as to obtain the visual feature dimension parameter.
An abstract conversion submodule for converting the features in the basic feature set into an abstract space to obtain abstract features
Figure BDA0003216452550000122
In an optional embodiment, the features in the basic feature set may be input into a neural network model trained in advance, so as to obtain the abstract features.
And the weight calculation submodule inputs the visual characteristic dimension parameters and the abstract characteristics into the attention network to obtain the weight of each characteristic in the basic characteristic set:
Figure BDA0003216452550000123
a visual feature calculation submodule for performing weight and abstract feature of each featurePerforming dot product operation to obtain local feature fl
A global feature calculation module for calculating global features based on the base features
Figure BDA0003216452550000131
It is shown that, in an alternative embodiment, the basic features may be input into a neural network model trained in advance, so as to obtain global features.
The current layer feature calculation module is used for superposing the local features and the global features to obtain current layer features: f. ofrl. In an alternative embodiment, the superposition of the local features and the global features may be a calculation of a mean of the local features and the global features.
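The following sketch illustrates this branch-network computation under assumed tensor shapes and layer sizes; the concrete attention architecture and dimensions are illustrative, not specified by the patent:

```python
import torch
import torch.nn as nn

class BranchNetwork(nn.Module):
    """Non-first-layer branch network sketch: expand the upper-layer classification result to
    the visual feature dimension, compute attention weights over the basic feature set, and
    average the local and global features into the current-layer feature."""
    def __init__(self, num_upper_classes, feat_dim):
        super().__init__()
        self.expand = nn.Linear(num_upper_classes, feat_dim)   # upper-layer result -> visual feature dimension
        self.abstract = nn.Linear(feat_dim, feat_dim)          # basic features -> abstract space
        self.attention = nn.Linear(2 * feat_dim, 1)            # attention network over regions
        self.global_proj = nn.Linear(feat_dim, feat_dim)       # global feature from basic features

    def forward(self, basic_feats, upper_result):
        # basic_feats: (B, R, D) basic feature set of R regional features; upper_result: (B, C_upper)
        q = self.expand(upper_result).unsqueeze(1).expand_as(basic_feats)    # (B, R, D)
        a = self.abstract(basic_feats)                                       # abstract features
        w = torch.softmax(self.attention(torch.cat([q, a], dim=-1)), dim=1)  # (B, R, 1) weights
        local_feat = (w * a).sum(dim=1)                                      # weighted (dot-product) aggregation
        global_feat = self.global_proj(basic_feats.mean(dim=1))              # global features
        return 0.5 * (local_feat + global_feat)                              # current-layer features
```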
In an alternative embodiment, the flow of calculating the current-layer feature by the branch networks of other layers except the first-layer branch network in the hierarchical feature extraction module is shown in fig. 5.
In an optional embodiment, the first level of the branched network in the hierarchical feature extraction module comprises:
an abstract conversion submodule for converting the features in the basic feature set into an abstract space to obtain abstract features
Figure BDA0003216452550000132
Wherein, it is shown.
A global feature calculation module for calculating global features based on the base features
Figure BDA0003216452550000133
Wherein, it is shown.
The current layer feature calculation module is used for superposing the abstract features and the global features to obtain current layer features: f. ofrl. In an contemplated alternative embodiment, superimposing the abstract features and the global features may be computing a mean of the abstract features and the global features.
In an alternative embodiment, the hierarchical feature extraction module may be trained separately. As shown in fig. 6, during this training the global features obtained by a branch network are classified to obtain a classification result; if the current branch network is not the last branch network, the classification result is input into the next branch network, and the step of classifying the global features obtained by the branch network is repeated in the next branch network, until the current branch network is the last branch network, at which point the current classification result is taken as the final classification result. If the accuracy of the final classification result is less than a preset value, the parameters of each branch network are modified according to the final classification result; if the accuracy of the final classification result is greater than or equal to the preset value, the hierarchical feature extraction module is used as a part of the image classification system for obtaining image features.
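A schematic pre-training loop for this stage might look as follows; the backbone, branch, auxiliary classifier and data-loader interfaces, and the accuracy threshold, are all assumptions made for illustration:

```python
import torch
import torch.nn as nn

def pretrain_feature_extractor(backbone, branches, aux_classifiers, loader,
                               epochs=10, lr=1e-4, target_acc=0.9):
    """Train the hierarchical feature extraction module separately: each branch classifies its
    feature, the result is fed to the next branch, and training stops once the last-branch
    accuracy reaches the preset value."""
    params = list(backbone.parameters())
    for b, c in zip(branches, aux_classifiers):
        params += list(b.parameters()) + list(c.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    ce = nn.CrossEntropyLoss()

    for _ in range(epochs):
        correct, total = 0, 0
        for images, labels_per_level in loader:        # labels_per_level: one (B,) label tensor per level
            basic = backbone(images)
            prev_result, losses = None, []
            for branch, clf, labels in zip(branches, aux_classifiers, labels_per_level):
                feat = branch(basic) if prev_result is None else branch(basic, prev_result)
                logits = clf(feat)                      # classify the branch's feature
                losses.append(ce(logits, labels))
                prev_result = logits.softmax(dim=-1)    # pass the classification result downward
            opt.zero_grad(); sum(losses).backward(); opt.step()
            correct += (logits.argmax(dim=-1) == labels).sum().item()
            total += labels.numel()
        if correct / max(total, 1) >= target_acc:       # last-branch accuracy reached the preset value
            break
    return backbone, branches
```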
The embodiment of the invention also provides a violation tool identification method, as shown in fig. 7, comprising the following steps:
step S21: and acquiring an image to be classified. In an optional embodiment, if the violation tool of the power grid operation field needs to be identified, the picture of the power grid operation field can be obtained through an image acquisition device arranged on the power grid operation field, and the picture of the power grid operation field comprises the tool.
Step S22: inputting the image to be classified into the image classification system provided in any of the above embodiments to obtain the category of the tool in the image to be classified.
If the category of the tool in the image to be classified belongs to the preset violation tool category, judging that the tool in the image to be classified is a violation tool; and if the category of the tool in the image to be classified does not belong to the preset violation tool category, judging that the tool in the image to be classified is not a violation tool.
In the embodiment of the invention, a variety of violation tool categories can be preset; since the tools that must be used and the tools that are prohibited differ between application scenarios, the tool types contained in the preset violation tool categories also differ between application scenarios.
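A trivial sketch of this final check; the category names below are placeholders for whatever violation categories a given scenario presets, not categories defined by the patent:

```python
FORBIDDEN_TOOL_CATEGORIES = {"uninsulated pliers", "non-standard ladder"}   # placeholder preset

def is_violation_tool(tool_category: str) -> bool:
    """Return True if the predicted tool category belongs to the preset violation categories."""
    return tool_category in FORBIDDEN_TOOL_CATEGORIES
```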
In an optional embodiment, when the image classification system is used for identifying the category of the violation tool, the number of levels of the branch network in the level feature extraction module, the number of levels of the generation module in the level generation model, the number of levels of the classifiers in the level classification module, and the classification options of the classifiers in the level classification model in the image classification system are determined according to the data set taxonomic structure of the tool.
In an optional embodiment, the process of classifying the violation tool in the image to be classified by the image classification system provided in the above embodiment is as follows:
firstly, extracting the image characteristics of the image to be classified through a hierarchical characteristic extraction module, wherein the image characteristics are the characteristics of tools in the image to be classified.
Then, inputting the image characteristics into a hierarchical generating model, if the generating module receiving the image characteristics is not the first layer generating module in the hierarchical generating model, forming the current layer artificial characteristics by taking the upper layer artificial characteristics output by the upper layer generating module as a basic value and taking the current layer generating characteristics as an offset, and transmitting the current layer artificial characteristics to the hierarchical classification model.
And finally, outputting the type of the violation tool by the level classification module according to the artificial features of the current layer.
In an optional embodiment, the process of classifying the violation tool in the image to be classified by the image classification system provided in the above embodiment may further include:
step one, extracting a basic feature set according to a target image through a basic feature extraction network in a hierarchical feature extraction module, and respectively inputting the basic feature set into each branch network. The first layer of branch network respectively adopts different functions to calculate the basic feature set to obtain abstract features and global features, the abstract features and the global features are superposed to obtain current layer features corresponding to the first layer of branch network, and the current layer features are input to the first layer generating module.
And step two, the first layer generation module obtains the current layer artificial features according to the current layer features and inputs the current layer artificial features into the first layer classifier.
Step three, the first-layer classifier obtains the probability value of each classification option corresponding to it according to the current-layer artificial features.
Step four, the next-layer branch network receives the probability values of the classification options obtained by the upper-layer classifier and expands them into a visual-feature-dimension parameter through a linear conversion function; converts the features in the basic feature set into an abstract space to obtain abstract features; inputs the visual-feature-dimension parameter and the abstract features into an attention network to obtain the weight of each feature in the basic feature set; performs a dot-product operation on the weight of each feature and the abstract features to obtain local features; calculates global features from the basic features; superposes the local features and the global features to obtain the current-layer features; and takes the current-layer features as the image features and inputs them into the corresponding generation module.
And step five, the generation module receiving the image features determines the generation features of the current layer according to the image features, the upper artificial features output by the upper generation module are used as basic values, the generation features of the current layer are used as offsets to form the artificial features of the current layer, and the artificial features of the current layer are transmitted to the hierarchical classification model.
And step six, the classifier which receives the artificial features of the current layer obtains the classification result of the current layer according to the artificial features of the current layer, and obtains the probability value of each classification option according to the classification result of the current layer and the classification result of the upper layer obtained by the classifier of the upper layer.
Then, it is judged whether a next-layer branch network, generation module and classifier exist; if so, steps four, five and six are repeated until no next layer exists, at which point the current classifier outputs the probability values of its classification options, and the tool category corresponding to the classification option with the maximum probability value is determined as the category of the violation tool in the image to be classified.
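Putting the steps above together, an end-to-end inference loop could be sketched as follows; every module interface, and the alpha attribute assumed on the generation modules, are illustrative assumptions:

```python
def classify_tool(image, backbone, branches, generators, classifiers, level_labels):
    """Inference sketch following steps one to six above (module interfaces are assumed).
    level_labels[l] is the ordered list of class names for the classifier at level l (0-based)."""
    basic = backbone(image)                        # basic feature set
    upper_probs, upper_artificial = None, None
    for level, (branch, gen, clf) in enumerate(zip(branches, generators, classifiers)):
        feat = branch(basic) if level == 0 else branch(basic, upper_probs)
        generated = gen(feat)                      # current-layer generated features
        if level == 0:
            artificial = generated
        else:
            # upper-layer artificial features as base value, generated features as offset
            artificial = (1 - gen.alpha) * generated + gen.alpha * upper_artificial
        probs = clf(artificial) if level == 0 else clf(artificial, upper_probs)
        upper_probs, upper_artificial = probs, artificial
    labels = level_labels[len(classifiers) - 1]    # options of the deepest classifier
    best = max(range(len(labels)), key=lambda i: float(upper_probs[i]))
    return labels[best]                            # category of the tool in the image
```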
The embodiment of the invention also provides a violation tool identification device, as shown in fig. 8, including:
the image obtaining module 31 is configured to obtain an image to be classified, and details of the image to be classified refer to the description of step S21 in the foregoing embodiment, which is not described herein again.
The classification module 32 is configured to input the image to be classified into the image classification system, so as to obtain the category of the tool in the image to be classified, for details, refer to the description of step S22 in the foregoing embodiment, which is not described herein again.
If the type of the tool in the image to be classified belongs to the preset type of the violation tool, the violation tool identification module 33 is configured to determine that the tool is the violation tool, and the detailed contents refer to the description in the above method embodiment and are not described herein again.
An embodiment of the present invention provides a computer device, as shown in fig. 9, the computer device mainly includes one or more processors 41 and a memory 42, and one processor 41 is taken as an example in fig. 9.
The computer device may further include: an input device 43 and an output device 44.
The processor 41, the memory 42, the input device 43 and the output device 44 may be connected by a bus or other means, and the bus connection is exemplified in fig. 9.
The processor 41 may be a Central Processing Unit (CPU). The Processor 41 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or combinations thereof. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The memory 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the stored data area may store data created from use of the violation tool identification device, and the like. Further, the memory 42 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 42 optionally includes memory located remotely from the processor 41, and these remote memories may be connected to the violation tool identification device via a network. The input device 43 may receive a calculation request (or other numeric or character information) entered by the user and generate a key signal input associated with the violation tool identification device. The output device 44 may include a display device such as a display screen for outputting the calculation result.
Embodiments of the present invention provide a computer-readable storage medium, where the computer-readable storage medium stores computer instructions, and the computer-readable storage medium stores computer-executable instructions, where the computer-executable instructions may execute the violation tool identification method in any of the above method embodiments. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD) or a Solid State Drive (SSD), etc.; the storage medium may also comprise a combination of memories of the kind described above.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. And obvious variations or modifications therefrom are within the scope of the invention.

Claims (11)

1. An image classification system, comprising: a layer characteristic extraction module, a layer generation model and a layer classification module, wherein the layer generation model comprises at least one layer generation module,
the hierarchical feature extraction module is used for extracting image features according to a target image and transmitting the image features to the generation module;
if the generation module is not the first layer generation module in the hierarchical generation model, the generation module is used for determining the current layer generation characteristics according to the image characteristics, taking the upper layer artificial characteristics output by the upper layer generation module as a basic value and the current layer generation characteristics as an offset to form the current layer artificial characteristics, and transmitting the current layer artificial characteristics to the hierarchical classification model;
and the layer classification module is used for outputting an image classification result according to the artificial features of the current layer.
2. The image classification system according to claim 1, wherein the hierarchical classification module includes at least one layer of classifiers, the classifiers correspond to the generation module one to one,
the generation module transmits the current-layer artificial features to a classifier corresponding to the generation module in the hierarchical classification module;
if the classifier is not the first-layer classifier in the hierarchical classification module, the classifier is used for obtaining a current-layer classification result according to the current-layer artificial features and obtaining the image classification result according to the current-layer classification result and an upper-layer classification result obtained by a previous-layer classifier.
3. The image classification system of claim 2,
the classifier comprises a plurality of node sets, each node set comprises at least one node, and each node represents different image classification options;
the node set of the classifier corresponds to the nodes of the classifier at the upper layer one by one, and the nodes in the node set are child nodes of the classifier at the upper layer corresponding to the node set.
4. The image classification system according to claim 2, wherein the hierarchical feature extraction module includes a basic feature extraction network and at least one layer of branch networks, the branch networks are in one-to-one correspondence with the generation module,
the basic feature extraction network is used for extracting a basic feature set according to the target image and transmitting the basic feature set to the branch network;
if the branch network is not the first layer branch network in the layer feature extraction module, the branch network is used for obtaining an upper layer classification result obtained by an upper layer classifier, calculating the weight of each feature in the basic feature set according to the upper layer classification result, calculating the current layer feature according to the weight of each feature in the basic feature set, taking the current layer feature as the image feature, and transmitting the image feature to a generation module corresponding to the branch network.
5. The image classification system according to any one of claims 1 to 4,
the number of levels of the generation modules in the hierarchical generative model is determined by the number of levels of the data set taxonomic structure corresponding to the target image.
6. The image classification system of claim 4, wherein the branch network comprises:
the upper-layer classification result conversion submodule is used for expanding the upper-layer classification result into a visual characteristic dimension parameter;
the abstract conversion submodule is used for converting the features in the basic feature set into an abstract space to obtain abstract features;
the weight calculation submodule inputs the visual characteristic dimension parameters and the abstract characteristics into an attention network to obtain the weight of each characteristic in the basic characteristic set;
the visual feature calculation submodule is used for executing dot product operation on the weight of each feature and the abstract feature to obtain local features;
the global feature calculation module is used for calculating global features according to the basic features;
and the current layer feature calculation module is used for superposing the local features and the global features to obtain the current layer features.
7. The image classification system of claim 1, wherein the generation module generates the current-layer artificial features by:
f'_{g,l} = (1-α) × f_{g,l} + α × f'_{g,(l-1)},
wherein f'_{g,(l-1)} represents the upper-layer artificial features, f_{g,l} represents the current-layer generated features, and α is a hyperparameter.
8. A violation tool identification method, comprising:
acquiring an image to be classified;
inputting the image to be classified into the image classification system according to any one of claims 1 to 7, and obtaining the category of the tool in the image to be classified;
and if the category of the tool in the image to be classified belongs to the preset violation tool category, judging that the tool is a violation tool.
9. A violation tool identification device comprising:
the image acquisition module is used for acquiring an image to be classified;
a classification module, configured to input the image to be classified into the image classification system according to any one of claims 1 to 7, to obtain a category of a tool in the image to be classified;
and the violation tool identification module is used for judging that the tool is a violation tool if the category of the tool in the image to be classified belongs to the preset violation tool category.
10. A computer device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to perform the violation tool identification method of claim 8.
11. A computer readable storage medium having stored thereon computer instructions for causing the computer to execute the violation tool identification method of claim 8.
CN202110945015.5A 2021-08-17 2021-08-17 Image classification system, and method and device for identifying violation tool Active CN113592031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110945015.5A CN113592031B (en) 2021-08-17 2021-08-17 Image classification system, and method and device for identifying violation tool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110945015.5A CN113592031B (en) 2021-08-17 2021-08-17 Image classification system, and method and device for identifying violation tool

Publications (2)

Publication Number Publication Date
CN113592031A (en) 2021-11-02
CN113592031B CN113592031B (en) 2023-11-28

Family

ID=78258389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110945015.5A Active CN113592031B (en) 2021-08-17 2021-08-17 Image classification system, and method and device for identifying violation tool

Country Status (1)

Country Link
CN (1) CN113592031B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765512A (en) * 2018-05-30 2018-11-06 清华大学深圳研究生院 A kind of confrontation image generating method based on multi-layer feature
CN110309861A (en) * 2019-06-10 2019-10-08 浙江大学 A kind of multi-modal mankind's activity recognition methods based on generation confrontation network
CN110543563A (en) * 2019-08-20 2019-12-06 暨南大学 Hierarchical text classification method and system
CN111737521A (en) * 2020-08-04 2020-10-02 北京微播易科技股份有限公司 Video classification method and device
CN112183672A (en) * 2020-11-05 2021-01-05 北京金山云网络技术有限公司 Image classification method, and training method and device of feature extraction network

Also Published As

Publication number Publication date
CN113592031B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
US11853903B2 (en) SGCNN: structural graph convolutional neural network
CN110163258B (en) Zero sample learning method and system based on semantic attribute attention redistribution mechanism
Lucchi et al. Are spatial and global constraints really necessary for segmentation?
CN102902821B (en) The image high-level semantics mark of much-talked-about topic Network Based, search method and device
CN111506722A (en) Knowledge graph question-answering method, device and equipment based on deep learning technology
Sun et al. Dagc: Employing dual attention and graph convolution for point cloud based place recognition
Wu et al. Automatic road extraction from high-resolution remote sensing images using a method based on densely connected spatial feature-enhanced pyramid
CN109783666A (en) A kind of image scene map generation method based on iteration fining
US11687716B2 (en) Machine-learning techniques for augmenting electronic documents with data-verification indicators
KR101939209B1 (en) Apparatus for classifying category of a text based on neural network, method thereof and computer recordable medium storing program to perform the method
CN108985133B (en) Age prediction method and device for face image
JP2022078310A (en) Image classification model generation method, device, electronic apparatus, storage medium, computer program, roadside device and cloud control platform
CN112434718B (en) New coronary pneumonia multi-modal feature extraction fusion method and system based on depth map
CN114048468A (en) Intrusion detection method, intrusion detection model training method, device and medium
CN114064928A (en) Knowledge inference method, knowledge inference device, knowledge inference equipment and storage medium
CN114168795B (en) Building three-dimensional model mapping and storing method and device, electronic equipment and medium
CN113094533A (en) Mixed granularity matching-based image-text cross-modal retrieval method
Du et al. Convolutional neural network-based data anomaly detection considering class imbalance with limited data
CN108805280B (en) Image retrieval method and device
CN110111365B (en) Training method and device based on deep learning and target tracking method and device
US20220139069A1 (en) Information processing system, information processing method, and recording medium
CN113592031B (en) Image classification system, and method and device for identifying violation tool
JPH0944518A (en) Method for structuring image data base, and method and device for retrieval from image data base
CN115018215B (en) Population residence prediction method, system and medium based on multi-modal cognitive atlas
CN114708462A (en) Method, system, device and storage medium for generating detection model for multi-data training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant