CN116883955A - Vehicle identification method, device and medium - Google Patents
- Publication number
- CN116883955A (application number CN202310651701.0A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- vehicle
- feature map
- network model
- fuzzy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
An embodiment of the application discloses a vehicle identification method, device, and medium. The vehicle identification method comprises the following steps. S20: perform feature extraction on a blurred-scene vehicle image to be identified with the trained backbone neural network to obtain a sample feature map. S40: extract blur features from the sample feature map with the trained spiking neural network model to obtain a blur feature map for the blurred scene. S60: process the outputs of the backbone neural network model and the spiking neural network model with an attention mechanism, then obtain a vehicle re-identification result through a smoothing operation and a classifier. In this embodiment, the backbone neural network, the spiking neural network, and the attention mechanism are combined to identify vehicle images; the spiking neural network handles the blurred-scene vehicle image, giving high identification accuracy, while the attention mechanism jointly optimizes the backbone and spiking networks.
Description
Technical Field
The application relates to the technical field of computer vision, and more particularly to a vehicle identification method, device, and medium.
Background
With the continued development of Internet-of-Things technology, vehicles are increasingly integrated with the IoT in daily life. The number of vehicles in cities and suburbs keeps growing, and functions such as real-time vehicle positioning and cross-domain tracking have become an important component of public-security management.
Currently, vehicle identification in most areas relies mainly on reading license plates through cameras. In practice, however, and particularly in remote areas, many cameras offer low resolution and slow processing. Power constraints prevent computation on the terminal itself, so images must be uploaded to the cloud for processing; under extreme weather, edge devices have weak interference resistance, transmission is slow, and image quality is poor, which reduces the accuracy of cloud-side computation and slows the system's response.
Disclosure of Invention
The present application is directed to a vehicle identification method, device, and medium that address at least one of the problems in the related art.
To achieve the above purpose, the application adopts the following technical solution:
A first aspect of the present application provides a vehicle identification method comprising:
S20: performing feature extraction on a blurred-scene vehicle image to be identified through the trained backbone neural network to obtain a sample feature map;
S40: extracting blur features from the sample feature map through the trained spiking neural network model to obtain a blur feature map for the blurred scene;
S60: processing the outputs of the backbone neural network model and the spiking neural network model with an attention mechanism, and obtaining a vehicle re-identification result through a smoothing operation and a classifier.
Optionally, the vehicle identification method further includes:
S10: training the backbone neural network model with known blurred-scene vehicle images to obtain a trained backbone neural network model and preliminary sample feature maps, and constructing a training set.
Optionally, the vehicle identification method further includes:
S30: training the spiking neural network model with the preliminary sample feature maps to obtain a trained spiking neural network model, and computing and classifying feature values of the blur feature map with a classifier;
wherein the feature values include vehicle color, vehicle appearance, vehicle contour, lamp position, windshield position, degree of vehicle motion blur, foreground color, and background color.
Optionally, the backbone neural network model employs a standard residual convolutional neural network with the ReLU function as its activation function.
Optionally, the backbone neural network model is optimized by a back-propagation algorithm, a conjugate gradient method, or a Gauss-Newton method.
Optionally, the spiking neural network model employs standard IFNode spiking neurons.
Optionally, step S60 further includes:
inputting the sample feature map and the blur feature map into a channel attention module and a spatial attention module to obtain the corresponding channel attention feature map and spatial attention feature map;
smoothing the spatial attention feature map and the channel attention feature map, where the smoothing is a convolution or pooling operation; and
inputting the processed spatial attention feature map and channel attention feature map into a classifier and classifying the vehicle ID to obtain a vehicle re-identification result.
Optionally, the firing function of the spiking neural network model is a binary step function.
A second aspect of the application provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method provided in the first aspect of the application when executing the program.
A third aspect of the present application provides a computer storage medium having stored thereon a computer program which when executed by a processor implements the method provided by the first aspect of the present application.
The beneficial effects of the application are as follows:
The vehicle identification method of this embodiment combines a backbone neural network, a spiking neural network, and an attention mechanism to identify vehicle images; the spiking neural network handles blurred-scene vehicle images, giving high identification accuracy, while the attention mechanism jointly optimizes the backbone and spiking networks.
Drawings
The following describes the embodiments of the present application in further detail with reference to the drawings.
Fig. 1 shows a flow chart of a vehicle identification method according to an embodiment of the present application.
Fig. 2 shows a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To illustrate the present application more clearly, it is further described below with reference to examples and drawings. Like parts in the drawings are denoted by the same reference numerals. Persons skilled in the art will understand that the following detailed description is illustrative rather than restrictive and does not limit the scope of the application.
To solve at least one of the above problems, the application provides a vehicle identification method, device, and medium.
The vehicle identification method of the present application will be described below by way of several specific examples.
An embodiment of the present application provides a vehicle identification method, as shown in fig. 1, including:
S20: performing feature extraction on a blurred-scene vehicle image to be identified through the trained backbone neural network to obtain a sample feature map;
S40: extracting blur features from the sample feature map through the trained spiking neural network model to obtain a blur feature map for the blurred scene;
S60: processing the outputs of the backbone neural network model and the spiking neural network model with an attention mechanism, and obtaining a vehicle re-identification result through a smoothing operation and a classifier.
Specifically, the backbone neural network performs overall feature extraction on the blurred-scene vehicle image to be identified, producing the sample feature map.
In this embodiment, the backbone neural network, the spiking neural network, and the attention mechanism are combined to identify vehicle images; the spiking neural network handles the blurred-scene vehicle image, giving high identification accuracy. At the same time, optimizing both the backbone and the spiking network through the attention mechanism further improves the accuracy of blurred-scene vehicle identification and vehicle-ID classification, at a low detection cost.
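The three steps S20–S60 can be sketched end to end. The following NumPy sketch is illustrative only and not part of the patent: the random projection, spike thresholding, and classifier weights are placeholder stand-ins for the trained backbone network, spiking network, and trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_features(image):
    """Stand-in for the trained backbone (S20): a random projection of the
    flattened image into a (C, H, W) sample feature map."""
    C, H, W = 4, 8, 8
    proj = rng.standard_normal((C * H * W, image.size))
    return (proj @ image.ravel()).reshape(C, H, W)

def snn_blur_features(fmap):
    """Stand-in for the spiking network (S40): threshold the feature map
    into binary spike activity, a crude blur-feature map."""
    return (fmap > fmap.mean()).astype(float)

def attention_fuse(sample_fmap, blur_fmap):
    """S60, first part: weight each channel by its mean absolute
    activation, then sum the two attended maps."""
    def attend(f):
        w = np.abs(f).mean(axis=(1, 2))
        return f * (w / w.sum())[:, None, None]
    return attend(sample_fmap) + attend(blur_fmap)

def smooth_and_classify(fused, n_ids=10):
    """S60, second part: 2x2 average-pool smoothing, then a softmax
    classifier over vehicle IDs (random placeholder weights)."""
    C, H, W = fused.shape
    pooled = fused.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))
    logits = (rng.standard_normal((n_ids, pooled.size)) * 0.01) @ pooled.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

image = rng.random((32, 32))                 # blurred-scene vehicle image
fmap = backbone_features(image)              # S20
probs = smooth_and_classify(attention_fuse(fmap, snn_blur_features(fmap)))
```

The output `probs` is a probability distribution over hypothetical vehicle IDs; in the patented method each stage would instead be a trained model.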
The main characteristic of a blurred-scene vehicle image is that the scene induces motion blur and the image contains noise: only coarse vehicle features such as color and appearance can be extracted, while the license plate and exact vehicle model cannot, so vehicle information in a blurred scene differs markedly from that in a clear scene.
As an embodiment of the present application, the feature values include vehicle color, vehicle appearance, vehicle contour, lamp position, windshield position, degree of vehicle motion blur, foreground color, and background color. The feature values may also include other indicators relevant to the vehicle in the blurred image, without particular limitation here.
In a specific embodiment, the vehicle identification method further includes:
S10: training the backbone neural network model with known blurred-scene vehicle images to obtain a trained backbone neural network model and preliminary sample feature maps, and constructing a training set.
In a specific example, the sample feature map output by the trained backbone neural network model is processed to obtain the preliminary sample feature map.
The sample feature map is 384×136 and the preliminary feature map is 192×68, a reduction scale of 2. The pooling size ranges from 2 to 5; in this embodiment of the application, 2 may be chosen, although other values are possible and the application is not particularly limited in this respect.
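The 384×136 → 192×68 reduction corresponds to non-overlapping pooling with pool size 2. A minimal sketch, assuming average pooling (the patent leaves the pooling type open):

```python
import numpy as np

def avg_pool2d(fmap, pool=2):
    """Non-overlapping average pooling; each spatial dimension must be
    divisible by the pool size."""
    h, w = fmap.shape
    return fmap.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

sample = np.random.rand(384, 136)       # sample feature map
preliminary = avg_pool2d(sample, 2)     # preliminary feature map, 192 x 68
```

With pool size 3, 4, or 5 the same function applies as long as the dimensions divide evenly.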
In a specific example, blurred-scene vehicle images must be collected at each monitoring point and features extracted from them, the extracted features corresponding to the sample feature values. The acquisition rate for known blurred-scene vehicles is one frame per second; other rates are possible and are not particularly limited here.
In a specific example, the sample feature map output by the trained backbone neural network model undergoes standardized preprocessing; correspondingly, the feature map obtained in real time for a blurred-scene vehicle is standardized in the same way.
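The standardization step can be sketched as zero-mean, unit-variance scaling, applied identically to training-time and real-time feature maps (the exact statistics used by the patent are not specified, so global per-map statistics are assumed here):

```python
import numpy as np

def standardize(fmap, eps=1e-8):
    """Zero-mean, unit-variance standardization of a feature map; eps
    guards against division by zero on constant maps."""
    return (fmap - fmap.mean()) / (fmap.std() + eps)

x = np.random.rand(192, 68) * 255.0   # raw-valued preliminary feature map
z = standardize(x)                    # standardized map fed downstream
```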
In a specific example, the preliminary sample feature maps in the training set are optimized through an attention mechanism, improving the accuracy and speed of the algorithm.
The attention mechanism adopts CBAM (the Convolutional Block Attention Module), a convolutional attention network.
In a specific embodiment, the vehicle identification method further includes:
creating a spiking neural network whose neurons are IFNodes and whose firing function is a binary step function; other neuron types may also be used and are not particularly limited here.
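A minimal integrate-and-fire neuron with a binary step firing function, in the spirit of an IFNode (simplified sketch; a hard reset to the resting potential is assumed, and real SNN libraries operate on tensors over time steps):

```python
import numpy as np

class IFNeuron:
    """Integrate-and-fire neuron: the membrane potential accumulates the
    input current; when it reaches the threshold, the neuron emits a
    binary spike (step firing function) and the potential is hard-reset."""

    def __init__(self, v_threshold=1.0, v_reset=0.0):
        self.v_threshold = v_threshold
        self.v_reset = v_reset
        self.v = v_reset

    def step(self, x):
        self.v = self.v + x                        # integrate input current
        spike = float(self.v >= self.v_threshold)  # binary step firing
        if spike:
            self.v = self.v_reset                  # hard reset after firing
        return spike

neuron = IFNeuron()
spikes = [neuron.step(0.4) for _ in range(10)]  # constant input of 0.4
```

With a constant input of 0.4 and threshold 1.0, the neuron fires on every third step, so the spike train encodes input magnitude as firing rate.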
In a specific embodiment, the backbone neural network model employs a standard residual neural network (ResNet-50), with the ReLU function as the activation function.
Note that the backbone neural network model may also be trained with other algorithms, such as an adaptive-learning-rate BP algorithm, a momentum-accelerated BP algorithm, a conjugate gradient method, or a Gauss-Newton method, without limitation here.
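The core idea of a residual network, the skip connection combined with ReLU activation, can be sketched in NumPy. This is an illustration only: ResNet-50 uses convolutional bottleneck blocks, while the dense layers and random weights below merely show the residual structure.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Minimal residual unit: out = ReLU(x + W2 @ ReLU(W1 @ x)).
    The identity shortcut lets gradients flow past the weighted path."""
    return relu(x + W2 @ relu(W1 @ x))

rng = np.random.default_rng(1)
x = rng.standard_normal(16)                 # input activation vector
W1 = rng.standard_normal((16, 16)) * 0.1    # illustrative weights
W2 = rng.standard_normal((16, 16)) * 0.1
y = residual_block(x, W1, W2)
```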
The basic idea of a neural network is to abstract the human brain's network of neurons from an information-processing perspective, build a simple model, and form different networks through different connection patterns; in engineering and academia such models are commonly called neural networks or neural-like networks. A neural network is a computational model composed of many interconnected nodes, or neurons. Each node implements a particular output function, called the excitation (activation) function. Each connection between two nodes carries a weight for the signal passing through it, and these weights correspond to the memory of an artificial neural network. The output of the network depends on its connection pattern, its weight values, and its excitation function.
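The single-node computation described above, a weighted sum plus a threshold term passed through an excitation function, reduces to one line (tanh is used here purely as an example excitation function):

```python
import numpy as np

def neuron(inputs, weights, threshold, f=np.tanh):
    """Single node: multiply inputs by their weights, add the threshold
    (bias) term, and pass the sum through the excitation function f."""
    return f(np.dot(inputs, weights) + threshold)

out = neuron(np.array([0.5, -1.0, 2.0]), np.array([0.2, 0.4, 0.1]), 0.05)
```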
In a specific embodiment, the vehicle identification method further includes:
S30: training the spiking neural network model with the preliminary sample feature maps to obtain a trained spiking neural network model, and computing and classifying feature values of the blur feature map with a classifier;
wherein the feature values include vehicle color, vehicle appearance, vehicle contour, lamp position, windshield position, degree of vehicle motion blur, foreground color, and background color.
In a specific example, the preliminary sample feature map is further processed to reduce the size of the input fed into the spiking neural network and speed up computation.
The vehicle re-identification method provided by the application includes optimized computation for blurred-scene vehicles and can compute their ID classification probabilities more accurately.
The neural network module is a multi-input, single-output nonlinear function: each input component is multiplied by its corresponding weight component, the products are summed with a threshold term, and the result is passed through the activation function; the network model is obtained by adjusting the weight components and thresholds so as to minimize the error between the output value and the target value.
For the neural network regressor, the input signal is the sample feature map, the target value is the vehicle ID, and the output value is the vehicle-ID classification probability; the model is obtained by adjusting the weights and thresholds until the classification probability of the vehicle re-identification model is closest to the ground truth.
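The training objective described above, adjusting weights so the predicted ID probabilities approach the true vehicle IDs, can be sketched as softmax cross-entropy minimized by gradient descent. The data below are toy stand-ins for flattened feature maps, not real vehicle features:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Toy data: 60 flattened "feature maps" from 3 vehicle IDs, each ID
# forming a loose cluster around its own random mean vector.
n, d, k = 60, 8, 3
labels = np.repeat(np.arange(k), n // k)
means = 2.0 * rng.standard_normal((k, d))
X = means[labels] + rng.standard_normal((n, d))

# Linear classifier trained by gradient descent on the cross-entropy
# between predicted ID probabilities and the true vehicle IDs.
W = np.zeros((d, k))
onehot = np.eye(k)[labels]
loss_before = cross_entropy(softmax(X @ W), labels)
for _ in range(400):
    probs = softmax(X @ W)
    W -= 0.02 * (X.T @ (probs - onehot)) / n
loss_after = cross_entropy(softmax(X @ W), labels)
```

At initialization the predictions are uniform, so the starting loss equals ln 3; training drives it down as the probabilities sharpen toward the true IDs.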
In a specific embodiment, step S60 further includes:
inputting the sample feature map and the blur feature map into a channel attention module and a spatial attention module to obtain the corresponding channel attention feature map and spatial attention feature map;
smoothing the spatial attention feature map and the channel attention feature map, where the smoothing is a convolution or pooling operation; and
inputting the processed spatial attention feature map and channel attention feature map into a classifier and classifying the vehicle ID to obtain a vehicle re-identification result.
In a specific example, inputting the sample feature map and the blur feature map into the channel attention module to obtain the channel attention feature map includes:
splitting the input sample feature map and blur feature map along the channel dimension into individual channel feature maps;
for each channel feature map, computing its global average pooling value as the importance weight of that channel;
normalizing the channel weights so that they sum to 1; and
multiplying each channel feature map by its corresponding weight to obtain weighted channel feature maps, which are combined along the channel dimension into the channel attention feature map.
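The channel-attention steps above can be sketched directly: one global-average-pooling value per channel, normalized to sum to 1, then applied back to the channels (non-negative activations, e.g. post-ReLU, are assumed so the normalization is well defined):

```python
import numpy as np

def channel_attention(fmap):
    """Per-channel global average pooling as importance weights,
    normalized to sum to 1, then broadcast back over (H, W)."""
    weights = fmap.mean(axis=(1, 2))       # (C,) importance per channel
    weights = weights / weights.sum()      # normalize so weights sum to 1
    return fmap * weights[:, None, None], weights

fmap = np.random.rand(8, 16, 16)           # (C, H, W) feature map
attended, w = channel_attention(fmap)
```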
the step of inputting the sample feature map and the fuzzy feature map to a spatial attention module to obtain a corresponding spatial attention feature map comprises the following steps:
for each channel feature graph, calculating the maximum pooling value and the average pooling value of the channel feature graph respectively to obtain a vector with the size of 2;
taking the vector of each channel as input, and obtaining the space attention weight of each channel through an MLP model;
multiplying the spatial attention weight of each channel with the channel feature map to obtain a weighted channel feature map, and combining the weighted channel feature maps according to the channels to obtain a spatial attention feature map;
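These steps can be sketched as follows. Note this follows the text's per-channel formulation (a size-2 [max, mean] descriptor per channel fed through a shared MLP) rather than CBAM's usual per-pixel spatial map; the MLP weights here are random placeholders, not trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(fmap, W1, b1, W2, b2):
    """Per channel: [max-pool, avg-pool] descriptor -> shared two-layer
    MLP -> attention weight in (0, 1) -> channel-wise re-weighting."""
    desc = np.stack([fmap.max(axis=(1, 2)),
                     fmap.mean(axis=(1, 2))], axis=1)   # (C, 2) descriptors
    hidden = np.maximum(desc @ W1 + b1, 0.0)            # ReLU hidden layer
    weights = sigmoid(hidden @ W2 + b2).ravel()         # (C,) weights
    return fmap * weights[:, None, None]

rng = np.random.default_rng(3)
fmap = rng.random((8, 16, 16))                  # (C, H, W) feature map
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)   # placeholder MLP
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)
out = spatial_attention(fmap, W1, b1, W2, b2)
```

Because the sigmoid keeps every weight below 1 and the map is non-negative, the attended map never exceeds the original, i.e. attention only suppresses.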
The method further includes smoothing the spatial attention feature map and the channel attention feature map through an operation such as convolution or pooling, then inputting the processed feature maps into the classifier, classifying the vehicle ID, and outputting the corresponding probabilities.
As shown in fig. 2, a second embodiment of the present application provides the structure of a computer device suitable for implementing the vehicle identification method of the above embodiment. The device includes a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in read-only memory (ROM) or loaded from a storage section into random-access memory (RAM). The RAM also stores the programs and data required for operating the computer device. The CPU, ROM, and RAM are interconnected by a bus, to which an input/output (I/O) interface is also connected.
The following components are connected to the I/O interface: an input section including a keyboard and a mouse; an output section including a display such as a liquid-crystal display (LCD) and a speaker; a storage section including a hard disk or the like; and a communication section including a network interface card such as a LAN card or a modem. The communication section performs communication processing over a network such as the Internet. Drives are also connected to the I/O interface as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, and semiconductor memories are mounted on the drives as needed, so that computer programs read from them can be installed into the storage section as required.
In particular, according to this embodiment, the procedure described in the above flowchart may be implemented as a computer software program. For example, this embodiment includes a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the program containing code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section and/or installed from a removable medium.
A third embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
s20: feature extraction is carried out on the fuzzy situation vehicle image to be identified through the trained backbone neural network, and a sample feature map is obtained;
s40: extracting fuzzy features of the sample feature map through the trained impulse neural network model to obtain a fuzzy feature map aiming at a fuzzy situation;
s60: and processing the outputs of the backbone neural network model and the impulse neural network model by using an attention mechanism, and obtaining a vehicle re-identification result through smoothing operation and a classifier.
In practical applications, the computer-readable storage medium may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
It should be noted that the flowcharts and diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to the present embodiments. In this regard, each block in the flowchart or schematic diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the diagrams and/or flowchart illustration, and combinations of blocks in the diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be understood that the foregoing examples of the present application are provided merely for clearly illustrating the present application and are not intended to limit the embodiments of the present application, and that various other changes and modifications may be made therein by one skilled in the art without departing from the spirit and scope of the present application as defined by the appended claims.
Claims (10)
1. A vehicle identification method, characterized by comprising:
s20: feature extraction is carried out on the fuzzy situation vehicle image to be identified through the trained backbone neural network, and a sample feature map is obtained;
s40: extracting fuzzy features of the sample feature map through the trained impulse neural network model to obtain a fuzzy feature map aiming at a fuzzy situation;
s60: and processing the outputs of the backbone neural network model and the impulse neural network model by using an attention mechanism, and obtaining a vehicle re-identification result through smoothing operation and a classifier.
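For illustration only, the three-stage pipeline of steps S20-S60 (backbone feature extraction, spiking-network fuzzy-feature extraction, attention fusion with smoothing and classification) can be sketched in NumPy. Every shape, weight, and operation below is an assumed stand-in, not the claimed trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_features(image):
    """Stand-in for the trained backbone CNN (S20): random projection + ReLU."""
    c, h, w = 8, 4, 4  # assumed feature-map shape
    weights = rng.standard_normal((c * h * w, image.size))
    return np.maximum(0.0, weights @ image.ravel()).reshape(c, h, w)

def spiking_fuzzy_features(feature_map, threshold=1.0, steps=4):
    """Stand-in for the impulse network (S40): IF neurons, firing rate as output."""
    v = np.zeros_like(feature_map)
    spikes = np.zeros_like(feature_map)
    for _ in range(steps):
        v = v + feature_map            # integrate the input current
        fired = v >= threshold         # binary step discharge
        spikes += fired
        v[fired] = 0.0                 # reset fired neurons
    return spikes / steps

def fuse_and_classify(f_backbone, f_spike):
    """Stand-in for S60: channel attention, smoothing by pooling, argmax classifier."""
    stacked = f_backbone + f_spike
    gate = 1.0 / (1.0 + np.exp(-stacked.mean(axis=(1, 2))))  # channel attention
    attended = stacked * gate[:, None, None]
    logits = attended.mean(axis=(1, 2))                      # smoothing (avg pooling)
    return logits, int(np.argmax(logits))                    # "vehicle ID"

image = rng.random((16, 16))                 # blurred vehicle image stand-in
fb = backbone_features(image)
fs = spiking_fuzzy_features(fb / fb.max())
logits, vehicle_id = fuse_and_classify(fb, fs)
```

In a real instance of the method, `backbone_features` would be the trained residual CNN and `spiking_fuzzy_features` the trained impulse network; only the data flow between the stages is meant to match the claim.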
2. The vehicle identification method according to claim 1, characterized in that,
the vehicle identification method further comprises:
S10: training the backbone neural network model by using known fuzzy-situation vehicle images to obtain a trained backbone neural network model and a preliminary sample feature map, and constructing a training set.
3. The vehicle identification method according to claim 2, characterized in that,
the vehicle identification method further comprises:
S30: training the impulse neural network model by using the preliminary sample feature map to obtain a trained impulse neural network model, and calculating and classifying, by a classifier, feature values of the fuzzy feature map;
wherein the feature values include a vehicle color, a vehicle appearance, a vehicle contour, a lamp position, a windshield position, a degree of vehicle motion blur, a foreground color, and a background color.
4. The vehicle identification method according to claim 3, wherein,
the backbone neural network model adopts a standard residual convolutional neural network, and the activation function adopts the ReLU function.
5. The vehicle identification method according to claim 4, wherein,
the backbone neural network model is optimized by a back-propagation algorithm, a conjugate gradient method, and a Gauss-Newton method.
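Of the three optimizers this claim names, back-propagation is the one that produces the gradients themselves; conjugate-gradient and Gauss-Newton updates then consume those gradients (and, for Gauss-Newton, a Jacobian). A minimal back-propagation sketch with a finite-difference check — toy shapes and values, all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                      # toy input vector
y_true = 1.0                           # toy target
W1 = rng.standard_normal((3, 4))       # hidden-layer weights
W2 = rng.standard_normal(3)            # output-layer weights

def forward(W1, W2, x):
    h = np.maximum(0.0, W1 @ x)        # ReLU hidden activations
    y = W2 @ h                         # scalar output
    return h, y, 0.5 * (y - y_true) ** 2

# Back-propagation: apply the chain rule from the loss back to each weight.
h, y, loss = forward(W1, W2, x)
dy = y - y_true                        # dL/dy
gW2 = dy * h                           # dL/dW2
dh = dy * W2 * (h > 0)                 # gradient through the ReLU gate
gW1 = np.outer(dh, x)                  # dL/dW1

# Finite-difference check on one weight confirms the analytic gradient.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (forward(W1p, W2, x)[2] - loss) / eps   # should approximate gW1[0, 0]
```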
6. The vehicle identification method according to claim 5, characterized in that,
the impulse neural network model adopts standard integrate-and-fire (IFNode) impulse neurons.
7. The vehicle identification method according to claim 6, characterized in that,
step S60 further comprises:
inputting the sample feature map and the fuzzy feature map into a channel attention module and a spatial attention module to obtain a corresponding channel attention feature map and spatial attention feature map;
smoothing the spatial attention feature map and the channel attention feature map, wherein the smoothing is a convolution or pooling operation; and
inputting the processed spatial attention feature map and channel attention feature map into a classifier, and classifying the vehicle ID to obtain a vehicle re-identification result.
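The channel-attention, spatial-attention, and smoothing sequence of this claim resembles CBAM-style modules. The NumPy sketch below is an assumed approximation: real modules would use learned MLP and convolutional gates rather than the parameter-free sigmoid gates shown here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap):
    """Gate each channel using its average- and max-pooled descriptors."""
    gate = sigmoid(fmap.mean(axis=(1, 2)) + fmap.max(axis=(1, 2)))
    return fmap * gate[:, None, None]

def spatial_attention(fmap):
    """Gate each spatial position using cross-channel average and max maps."""
    gate = sigmoid(fmap.mean(axis=0) + fmap.max(axis=0))
    return fmap * gate[None, :, :]

def smooth(fmap, k=2):
    """Smoothing as k x k average pooling (a convolution would also qualify)."""
    c, h, w = fmap.shape
    return fmap.reshape(c, h // k, k, w // k, k).mean(axis=(2, 4))

rng = np.random.default_rng(1)
feature_map = rng.random((8, 4, 4))    # assumed fused feature map
out = smooth(spatial_attention(channel_attention(feature_map)))
# `out` (shape (8, 2, 2)) would then be flattened and fed to the ID classifier.
```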
8. The vehicle identification method according to claim 7, characterized in that,
the firing (discharging) function of the impulse neural network model is a binary step function.
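Claims 6 and 8 together describe an integrate-and-fire unit whose spike emission is a binary step of the membrane potential. A minimal scalar sketch, with threshold and input currents chosen arbitrarily for demonstration:

```python
def if_node(inputs, threshold=1.0):
    """IF neuron: integrate the input each step; emit a binary spike (step
    function) when the membrane potential reaches the threshold, then reset."""
    v = 0.0
    spikes = []
    for current in inputs:
        v += current
        fired = 1 if v >= threshold else 0   # binary step discharge function
        spikes.append(fired)
        if fired:
            v = 0.0                          # hard reset after firing
    return spikes

print(if_node([0.4, 0.4, 0.4, 1.2, 0.1]))  # → [0, 0, 1, 1, 0]
```

The binary step makes the firing decision non-differentiable, which is why spiking-network training typically substitutes a surrogate gradient during back-propagation.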
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1-8.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310651701.0A | 2023-06-02 | 2023-06-02 | Vehicle identification method, device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116883955A | 2023-10-13 |
Family
ID=88257502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310651701.0A | Vehicle identification method, device and medium | 2023-06-02 | 2023-06-02 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116883955A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||