CN112069958A - Material identification method, device, equipment and storage medium


Info

Publication number: CN112069958A
Application number: CN202010878343.3A
Authority: CN (China)
Prior art keywords: neural network, image, identified, information, local
Other languages: Chinese (zh)
Inventors: 杨思雨, 蔡登胜, 孙金泉
Original/Current Assignee: Guangxi Liugong Machinery Co Ltd
Priority/filing date: 2020-08-27
Publication date: 2020-12-11
Legal status: Pending
Classifications

    • G06V20/10 — Scenes; scene-specific elements: terrestrial scenes
    • G06F18/253 — Pattern recognition; analysing; fusion techniques of extracted features
    • G06N3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06V10/44 — Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components


Abstract

The embodiment of the invention discloses a material identification method, device, equipment and storage medium. The method comprises the following steps: acquiring an image of the material to be identified; processing the material image to be identified with a set edge identification algorithm to obtain local feature information; inputting the material image to be identified into the convolutional layer of a set neural network to obtain depth feature information, wherein the set neural network comprises a convolutional layer and a full connection layer; weighting and fusing the local feature information and the depth feature information to obtain an intermediate feature; and inputting the intermediate feature into the full connection layer of the set neural network to obtain the material type of the material to be identified. The technical scheme of the embodiment of the invention solves the identification failures that occur when material identification relies on the set neural network alone and some of its neurons die and cannot be reactivated, improves the accuracy of material identification, and at the same time raises the fault tolerance of the set neural network during material identification.

Description

Material identification method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of engineering machinery, in particular to a material identification method, a device, equipment and a storage medium.
Background
With the continuous progress of science and technology, the pursuit of comfort, safety and efficiency in work and life has become ever more pronounced, and intelligent and even artificially intelligent systems have become an important means of reaching that goal. Because earthmoving work takes place in harsh environments, with problems such as high temperature, dust, moisture and remoteness from living areas, the demand for unmanned construction is growing ever stronger.
To identify the type of material being collected by an earthmoving machine, key characteristic information about the material is currently obtained through sound, light and other media. Deep-learning methods based on machine vision are most commonly adopted: the acquired material image is passed through sliding-window processing and then classified by a classifier, and the identification is usually realized with an Alexnet network model.
However, when traditional machine vision is used to identify materials, the identification process is slow because the material image must undergo multiple processing steps, the same processing can produce different results for different materials, and the identification accuracy is low. When the material is identified with an Alexnet network model, some neurons of the network die and cannot be reactivated while processing parts of the acquired material image, which reduces the accuracy of material identification.
Disclosure of Invention
The invention provides a material identification method, device, equipment and storage medium, which improve the accuracy of material identification and the fault tolerance of the model when an Alexnet network model is applied to material identification.
In a first aspect, an embodiment of the present invention provides a material identification method, including:
acquiring an image of a material to be identified;
processing the material image to be identified by adopting a set edge identification algorithm to obtain local characteristic information;
inputting a material image to be identified into a convolutional layer in a set neural network to obtain depth characteristic information; wherein, the set neural network comprises a convolution layer and a full connection layer;
weighting and fusing the local feature information and the depth feature information to obtain an intermediate feature;
and inputting the intermediate characteristics into a full connection layer of a set neural network to obtain the material type of the material to be identified.
Further, before inputting the material image to be identified into the convolutional layer in the set neural network, the method further comprises the following steps:
performing sliding window segmentation on a material image to be identified to obtain a target material image; the size of the target material image meets the requirement of a set neural network;
correspondingly, inputting the target material image into a convolutional layer in a set neural network, comprising:
and inputting the target material image into a convolutional layer of a set neural network.
Further, the weighting and fusing the local feature information and the depth feature information to obtain an intermediate feature includes:
adjusting the data characteristics of the local characteristic information to ensure that the adjusted data characteristics of the local characteristic information are the same as the data characteristics of the depth characteristic information;
and performing weighted fusion on the adjusted local feature information and the depth feature information according to a preset weight, and determining a fusion result as an intermediate feature.
Further, after obtaining the material image to be identified, the method further comprises the following steps:
preprocessing the material image to be identified according to at least one of the following modes:
binarization, graying, noise suppression, image segmentation and edge extraction;
correspondingly, the method for processing the material image to be recognized by adopting the set edge recognition algorithm comprises the following steps:
and processing the preprocessed material image to be recognized by adopting a set edge recognition algorithm.
Further, the set edge recognition algorithm is a Hough transform algorithm, and the set neural network is an Alexnet network model.
In a second aspect, an embodiment of the present invention further provides a material identification device, where the material identification device includes:
the image acquisition module is used for acquiring an image of the material to be identified;
the local information acquisition module is used for processing the material image to be identified by adopting a set edge identification algorithm to acquire local characteristic information;
the depth information acquisition module is used for inputting the material image to be identified into a convolutional layer in a set neural network to obtain depth characteristic information; wherein, the set neural network comprises a convolution layer and a full connection layer;
the intermediate feature acquisition module is used for weighting and fusing the local feature information and the depth feature information to acquire intermediate features;
and the type identification module is used for inputting the intermediate features into a full connection layer of the set neural network to obtain the material type of the material image to be identified.
Further, the intermediate feature obtaining module includes:
the data characteristic adjusting unit is used for adjusting the data characteristics of the local characteristic information to enable the adjusted data characteristics of the local characteristic information to be the same as the data characteristics of the depth characteristic information;
and the weighted fusion unit is used for carrying out weighted fusion on the adjusted local feature information and the depth feature information according to the preset weight and determining a fusion result as an intermediate feature.
In a third aspect, an embodiment of the present invention further provides an apparatus, where the apparatus includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement a method of material identification as provided in any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a method for identifying materials as provided in any of the embodiments of the present invention.
The embodiment of the invention acquires an image of the material to be identified; processes the material image to be identified with a set edge identification algorithm to obtain local feature information; inputs the material image to be identified into the convolutional layer of a set neural network to obtain depth feature information, the set neural network comprising a convolutional layer and a full connection layer; weights and fuses the local feature information and the depth feature information to obtain an intermediate feature; and inputs the intermediate feature into the full connection layer of the set neural network to obtain the material type of the material to be identified. By obtaining the local feature information of the material image to be identified, fusing it with the depth feature information determined by the convolutional layer of the set neural network, and inputting the fused feature into the full connection layer of the set neural network in place of the original depth feature information to determine the material type, the scheme solves the identification failures that occur when material identification relies on the set neural network alone and some of its neurons die and cannot be reactivated, improves the accuracy of material identification, and at the same time raises the fault tolerance of the set neural network during material identification.
Drawings
Fig. 1 is a flowchart of a material identification method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a material identification method in the second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a material identification apparatus in a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures. In addition, the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
Example one
Fig. 1 is a flowchart of a material identification method according to the first embodiment of the present invention. The embodiment is applicable to detecting and identifying earthmoving materials. The method may be executed by a material identification device, which may be implemented in software and/or hardware and configured on a computing device. The method specifically includes the following steps:
s101, obtaining an image of the material to be identified.
The image of the material to be identified can be understood as an image of the target material captured before the earthmoving machine shovels it.
Specifically, taking into account the characteristics of the material to be identified, the earthmoving machine's need to identify it, and the characteristics of earthmoving materials in general, and in order to make the acquired image close to what a human eye would see, an image acquisition device mounted at the upper middle of the earthmoving machine's cab captures a complete image of the material to be shoveled before shoveling, and this image is used as the material image to be identified.
And S102, processing the material image to be identified by adopting a set edge identification algorithm to obtain local characteristic information.
Here, local feature information may be understood as a local expression of image features that reflects local characteristics of the image, in contrast to global features. In the embodiment of the present invention, it can be understood as feature information characterizing material boundaries, surface areas, aspect ratios and the like in the image, as opposed to the depth features obtained with the neural network model.
Specifically, the set edge recognition algorithm processes the material image acquired by the image acquisition device to obtain information such as material boundaries and surface areas, from which the boundary of the material and hence its position in the image are determined. The position, boundary, surface area, aspect ratio and similar information are used as the local feature information of the material image to be identified. Optionally, the set edge recognition algorithm may be a Hough transform algorithm, which is not limited in this embodiment of the present invention.
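The patent gives no code for this step; the following is a minimal illustrative sketch, not the patented implementation, of how boundary, area and aspect-ratio style local features could be pulled from a material image with OpenCV, using a probabilistic Hough transform for boundary segments. All thresholds, kernel sizes and the exact feature set are assumptions.

```python
import cv2
import numpy as np

def extract_local_features(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Straight boundary segments via the probabilistic Hough transform.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=10)
    n_lines = 0 if lines is None else len(lines)

    # Largest connected contour approximates the material region.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(5, dtype=np.float32)
    largest = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(largest)
    x, y, w, h = cv2.boundingRect(largest)
    aspect_ratio = w / h if h > 0 else 0.0

    # Position, boundary count, surface area and aspect ratio as a local feature vector.
    return np.array([x, y, n_lines, area, aspect_ratio], dtype=np.float32)
```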
S103, inputting the material image to be identified into a convolutional layer in a set neural network to obtain depth characteristic information.
Wherein, the set neural network comprises a convolution layer and a full connection layer.
The set neural network can be understood as a trained artificial neural network (ANN): a computational model that abstracts the network of neurons in the human brain from the perspective of information processing, builds a simple model, and forms different networks according to different connection modes. A convolutional layer is a data processing layer composed of several convolution units whose parameters are optimized by the back-propagation algorithm to extract different input features; as the number of convolutional layers increases, higher-level convolutional layers can iteratively extract more complex features from the lower-level features produced by earlier layers. A full connection layer is a data processing layer used to synthesize features: each of its nodes is connected to all nodes of the previous layer, so that the complex features of each layer in the network are combined to obtain the desired result.
Specifically, the material image to be identified, collected by the image acquisition device, is input into the convolutional layers of the set neural network, its features are extracted layer by layer, and the feature matrix output by the last convolutional layer is taken as the depth feature information of the material image to be identified; the depth feature information may include texture features, entropy values and the like of the image. Optionally, the set neural network may be an Alexnet neural network model, which includes 5 convolutional layers and 3 full connection layers.
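As an illustration only, the depth feature information described above can be reproduced by taking the output of the convolutional part of a standard torchvision AlexNet; note that torchvision's variant uses 64 rather than 96 filters in its first layer, and the patent's own trained network may differ from this sketch.

```python
import torch
from torchvision import models

alexnet = models.alexnet(weights=None)   # a trained model would be loaded in practice
alexnet.eval()

image = torch.rand(1, 3, 224, 224)       # placeholder for the material image tensor
with torch.no_grad():
    depth_features = alexnet.features(image)   # convolutional part only
print(depth_features.shape)              # torch.Size([1, 256, 6, 6])
```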
And S104, weighting and fusing the local feature information and the depth feature information to obtain intermediate features.
Specifically, when the feature data matrix output by the convolutional layers is fed directly into the full connection layer, any values smaller than 0 in the matrix can cause some neurons in the network to die and never be reactivated. To solve this problem, the local feature information is adjusted into a feature data matrix with the same data characteristics as the depth feature information, the two matrices are weighted and fused according to a predetermined weight, and the fused feature data matrix is taken as the intermediate feature.
And S105, inputting the intermediate characteristics into a full connection layer of a set neural network to obtain the material type of the material to be identified.
Specifically, the intermediate feature is a fused feature data matrix that contains no values smaller than 0, so feeding it into the trained full connection layer of the set neural network does not cause neurons to die. The full connection layer integrates the feature information in the fused matrix and determines the material type of the material in the image according to the pre-training result.
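A minimal sketch of steps S104 and S105, assuming the local features have already been adjusted to the shape of the conv-layer output; the 0.384 weight is the optional value mentioned later in the description, while giving the depth features the complementary weight and reusing the torchvision classifier are assumptions, not the patented implementation.

```python
import torch
from torchvision import models

alexnet = models.alexnet(weights=None)   # a trained model would be loaded in practice
alexnet.eval()

depth_feat = torch.rand(1, 256, 6, 6)    # stand-in for the conv-layer output
local_feat = torch.rand(1, 256, 6, 6)    # local features already adjusted to the same shape

w = 0.384                                # preset fusion weight for the local features
intermediate = w * local_feat + (1.0 - w) * depth_feat   # weighted fusion (S104)

with torch.no_grad():
    pooled = alexnet.avgpool(intermediate)                # pooling expected by the classifier
    logits = alexnet.classifier(torch.flatten(pooled, 1)) # full connection layers (S105)
material_type = logits.argmax(dim=1)
print(material_type)
```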
According to the technical scheme of this embodiment, an image of the material to be identified is acquired; the image is processed with a set edge identification algorithm to obtain local feature information; the image is input into the convolutional layers of a set neural network to obtain depth feature information, the set neural network comprising a convolutional layer and a full connection layer; the local feature information and the depth feature information are weighted and fused into an intermediate feature; and the intermediate feature is input into the full connection layer of the set neural network to obtain the material type. By fusing the local feature information with the depth feature information determined by the convolutional layers and inputting the fused feature into the full connection layer in place of the original depth feature information, the scheme solves the identification failures that occur when material identification relies on the set neural network alone and some of its neurons die and cannot be reactivated, improves the accuracy of material identification, and raises the fault tolerance of the set neural network during material identification.
Example two
Fig. 2 is a flowchart of a material identification method according to the second embodiment of the present invention. This embodiment further refines the technical scheme described above and specifically comprises the following steps:
s201, obtaining an image of the material to be identified.
S202, preprocessing the material image to be identified.
Specifically, the acquired material image to be identified is preprocessed in at least one of the following ways: binarization, graying, noise suppression, image segmentation and edge extraction. Image binarization sets the gray value of each pixel to 0 or 255 so that the whole image shows a clear black-and-white effect; graying adjusts the RGB values of each pixel so that R = G = B; noise suppression suppresses unnecessary or excessive interference information in the image data; image segmentation divides the image into several specific regions with unique properties and extracts the target of interest; edge extraction processes the image contours in digital image processing, defining places where the gray value changes sharply as edges.
For example, before the material image to be identified is processed with the set edge recognition algorithm, it can first be binarized with a threshold function and grayed with rgb2gray; the image is then median-filtered to suppress noise, a segmentation threshold is determined with the histogram bimodal (dual-peak) method, edge extraction is performed with the Canny operator, and the processed image is taken as the preprocessed material image to be identified.
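A minimal sketch of this preprocessing chain, written with OpenCV rather than the MATLAB-style functions named above (threshold, rgb2gray, median, canny); substituting Otsu's method for the histogram bimodal threshold, and the kernel size, are assumptions.

```python
import cv2

def preprocess(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # graying
    denoised = cv2.medianBlur(gray, 5)                # median filtering for noise suppression
    thresh_val, binary = cv2.threshold(denoised, 0, 255,
                                       cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segmentation threshold
    edges = cv2.Canny(denoised, 0.5 * thresh_val, thresh_val)                # edge extraction
    return binary, edges
```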
And S203, processing the preprocessed material image to be recognized by adopting a set edge recognition algorithm.
Specifically, local feature information such as the boundary, area and aspect ratio is extracted from the preprocessed material image by Hough transformation, and the boundary of the material is determined so as to locate the region of the material within the image.
And S204, performing sliding window segmentation on the material image to be identified to obtain a target material image.
Sliding-window segmentation can be understood as segmenting the image with a sliding window before it is fed into the neural network for recognition, so as to divide it into pieces whose size suits the network's input. In other words, the size of the target material image meets the input requirement of the set neural network.
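A minimal sketch of sliding-window segmentation; the 224 × 224 window and the stride are assumptions, since the patent only requires that the crop size meet the set neural network's input requirement.

```python
import numpy as np

def sliding_window(image, window=224, stride=224):
    """Yield (x, y, crop) tuples covering the image with fixed-size windows."""
    h, w = image.shape[:2]
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            yield x, y, image[y:y + window, x:x + window]

# Example: split a 720x1280 frame into network-sized target material images.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
crops = [crop for _, _, crop in sliding_window(frame)]
print(len(crops), crops[0].shape)   # 15 crops of shape (224, 224, 3)
```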
And S205, inputting the target material image into a convolutional layer of a set neural network.
Illustratively, when the set neural network is an Alexnet neural network model, the first layer takes the preprocessed target material image as input, extracts information with 96 filters, and applies the ReLU function as the activation function so that the extracted feature values stay within a reasonable range; the data after ReLU has a size of 55 × 55 × 96. The second layer uses its filters to further extract features from the feature maps produced by the first layer, applying weights and biases to the corresponding regions of the fused feature maps before the convolution operation. The third layer outputs feature data after the same convolution and ReLU operations, without down-sampling. The fourth layer performs an all-zero-padded convolution on the output of the third layer. The fifth layer applies a down-sampling operation and outputs the depth feature data matrix processed by all the convolutional layers of the Alexnet model, which is taken as the depth feature information output by the convolutional layers.
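For illustration, the convolutional stack of the original AlexNet (96/256/384/384/256 filters) can be written out to trace the sizes referred to above; local response normalization is omitted for brevity and the 227 × 227 input size is an assumption commonly used with this architecture.

```python
import torch
import torch.nn as nn

conv_stack = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),     # -> 96 x 55 x 55
    nn.MaxPool2d(kernel_size=3, stride=2),                     # -> 96 x 27 x 27
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),   # -> 256 x 27 x 27
    nn.MaxPool2d(kernel_size=3, stride=2),                     # -> 256 x 13 x 13
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),  # -> 384 x 13 x 13
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),  # -> 384 x 13 x 13
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),  # -> 256 x 13 x 13
    nn.MaxPool2d(kernel_size=3, stride=2),                     # -> 256 x 6 x 6
)

x = torch.rand(1, 3, 227, 227)
for layer in conv_stack:
    x = layer(x)
print(x.shape)   # torch.Size([1, 256, 6, 6])
```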
S206, adjusting the data characteristics of the local characteristic information to enable the adjusted data characteristics of the local characteristic information to be the same as the data characteristics of the depth characteristic information.
Specifically, because the data characteristics of the local feature information differ from those of the depth feature information, and the fused feature information must be input into the full connection layer of the set neural network to obtain the material type, the data characteristics of the local feature information need to be adjusted to match those of the depth feature information. For example, in the Alexnet neural network model the depth feature information output by the convolutional layers is a 6 × 6 × 256 feature data matrix, so the local feature information of the material image determined by the Hough transform needs to be converted into a 6 × 6 × 256 feature data matrix before the two can be fused.
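A minimal sketch of one way to perform this dimension adjustment, tiling a short local-feature vector into a 6 × 6 × 256 array; the tiling strategy and the example values are assumptions, since the patent only requires that the adjusted data have the same data characteristics as the depth features.

```python
import numpy as np

local_vector = np.array([120.0, 85.0, 3450.0, 1.4], dtype=np.float32)  # hypothetical local features

target_shape = (256, 6, 6)                 # shape of the conv-layer output
n = int(np.prod(target_shape))
tiled = np.resize(local_vector, n)         # repeat the vector until it fills the target size
local_feat = tiled.reshape(target_shape)
print(local_feat.shape)                    # (256, 6, 6)
```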
And S207, performing weighted fusion on the adjusted local feature information and the adjusted depth feature information according to a preset weight, and determining a fusion result as an intermediate feature.
The fused feature values avoid the phenomenon of some neurons dying and keep zero values from being fed into the network.
Specifically, according to preset weights determined in advance during model training, the adjusted feature data matrix of the local feature information and the feature data matrix of the depth feature information are each weighted, the two weighted matrices are fused, and the result is taken as the feature data matrix of the intermediate feature. Optionally, the weight of the local feature information may be 0.384; different weight distributions may be adopted for different earthmoving machines and materials, and the weighted fusion may also use other data fusion methods such as weighted averaging or Kalman filtering, which is not limited in this embodiment of the present invention.
And S208, inputting the intermediate characteristics into a full connection layer of the set neural network to obtain the material type of the material to be identified.
According to this technical scheme, local feature information and depth feature information are extracted from the material image at the same time, fused, and the fused feature information is input into the full connection layer of the trained Alexnet neural network model to obtain the material type. This avoids the identification failures caused by dying neurons when the Alexnet model is used alone, and combining the local feature information improves the accuracy of identifying the material type. Fusing the local and depth feature information by weighting eliminates the neuron-death phenomenon in the depth features, keeps zero values out of the fused input, and increases the fault tolerance of the neural network during material identification.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a material identification device according to a third embodiment of the present invention, where the material identification device includes: an image acquisition module 31, a local information acquisition module 32, a depth information acquisition module 33, an intermediate feature acquisition module 34 and a type identification module 35.
The image acquisition module 31 is used for acquiring an image of the material to be identified; the local information acquisition module 32 is configured to process the material image to be identified by using a set edge identification algorithm to obtain local feature information; the depth information acquisition module 33 is used for inputting the material image to be identified into the convolutional layer in the set neural network to obtain depth characteristic information; wherein, the set neural network comprises a convolution layer and a full connection layer; the intermediate feature obtaining module 34 is configured to perform weighted fusion on the local feature information and the depth feature information to obtain an intermediate feature; and the type identification module 35 is used for inputting the intermediate features into a full connection layer of the set neural network to obtain the material type of the material image to be identified.
This technical scheme solves the identification failures that occur when material identification relies on the set neural network alone and some of its neurons die and cannot be reactivated, improves the accuracy of material identification, and at the same time raises the fault tolerance of the set neural network during material identification.
Optionally, the material identification device further includes:
the preprocessing module is used for preprocessing the material image to be identified according to at least one of the following modes: binarization, graying, noise suppression, image segmentation and edge extraction.
Further, the local information obtaining module 32 is specifically configured to: and processing the preprocessed material image to be recognized by adopting a set edge recognition algorithm to obtain local characteristic information.
Optionally, the material identification device further includes:
the target image acquisition module is used for performing sliding window segmentation on the material image to be identified to obtain a target material image; the size of the target material image meets the input requirement of the set neural network.
Further, the depth information obtaining module 33 is specifically configured to: and inputting the target material image into a convolutional layer of a set neural network to obtain depth characteristic information.
Optionally, the intermediate feature obtaining module 34 includes:
and the data characteristic adjusting unit is used for adjusting the data characteristic of the local characteristic information so that the adjusted data characteristic of the local characteristic information is the same as the data characteristic of the depth characteristic information.
And the weighted fusion unit is used for carrying out weighted fusion on the adjusted local feature information and the depth feature information according to the preset weight and determining a fusion result as an intermediate feature.
Optionally, the set edge recognition algorithm may be a Hough transform algorithm, and the set neural network may be an Alexnet network model.
The material identification device provided by the embodiment of the invention can execute the material identification method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of an apparatus according to a fourth embodiment of the present invention, as shown in fig. 4, the apparatus includes a processor 41, a storage device 42, an input device 43, and an output device 44; the number of the processors 41 in the device may be one or more, and one processor 41 is taken as an example in fig. 4; the processor 41, the storage means 42, the input means 43 and the output means 44 in the device may be connected by a bus or other means, as exemplified by the bus connection in fig. 4.
The storage device 42 serves as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the material identification method in the embodiment of the present invention (for example, the image acquisition module 31, the local information acquisition module 32, the depth information acquisition module 33, the intermediate feature acquisition module 34, and the type identification module 35). The processor 41 executes various functional applications of the device and data processing by executing software programs, instructions and modules stored in the storage device 42, so as to realize the material identification method.
The storage device 42 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the storage 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 42 may further include memory located remotely from processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 43 may be used to receive input numeric or character information and to generate key signal inputs relating to user settings and function control of the device, and may include a touch screen, a keyboard, a mouse, and the like. The output device 44 may include a display device such as a display screen.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for identifying materials, the method including:
acquiring an image of a material to be identified;
processing the material image to be identified by adopting a set edge identification algorithm to obtain local characteristic information;
inputting a material image to be identified into a convolutional layer in a set neural network to obtain depth characteristic information; wherein, the set neural network comprises a convolution layer and a full connection layer;
weighting and fusing the local feature information and the depth feature information to obtain an intermediate feature;
and inputting the intermediate characteristics into a full connection layer of a set neural network to obtain the material type of the material to be identified.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the material identification method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the above apparatus embodiment, the included units and modules are merely divided according to functional logic, but the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for ease of distinguishing them from one another and do not limit the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for identifying a material, comprising:
acquiring an image of a material to be identified;
processing the material image to be identified by adopting a set edge identification algorithm to obtain local characteristic information;
inputting the material image to be identified into a convolutional layer in a set neural network to obtain depth characteristic information; wherein the set neural network comprises a convolutional layer and a full connection layer;
weighting and fusing the local feature information and the depth feature information to obtain an intermediate feature;
and inputting the intermediate features into a full connection layer of the set neural network to obtain the material type of the material to be identified.
2. The method of claim 1, prior to inputting the material image to be identified into a convolutional layer in a set neural network, further comprising:
performing sliding window segmentation on the material image to be identified to obtain a target material image; the size of the target material image meets the requirement of the set neural network;
correspondingly, inputting the target material image into a convolutional layer in a set neural network, comprising:
and inputting the target material image into the convolutional layer of the set neural network.
3. The method of claim 2, wherein the weighted fusion of the local feature information and the depth feature information to obtain an intermediate feature comprises:
adjusting the data characteristics of the local characteristic information to ensure that the adjusted data characteristics of the local characteristic information are the same as the data characteristics of the depth characteristic information;
and performing weighted fusion on the adjusted local feature information and the depth feature information according to a preset weight, and determining a fusion result as an intermediate feature.
4. The method of claim 1, after acquiring the image of the material to be identified, further comprising:
preprocessing the material image to be identified according to at least one of the following modes:
binarization, graying, noise suppression, image segmentation and edge extraction;
correspondingly, the method for processing the material image to be identified by adopting a set edge identification algorithm comprises the following steps:
and processing the preprocessed material image to be recognized by adopting a set edge recognition algorithm.
5. The method according to any one of claims 1 to 4, wherein the set edge identification algorithm is a Hough transform algorithm.
6. The method according to any one of claims 1 to 4, wherein the set neural network is an Alexnet network model.
7. A material identification device, comprising:
the image acquisition module is used for acquiring an image of the material to be identified;
the local information acquisition module is used for processing the material image to be identified by adopting a set edge identification algorithm to acquire local characteristic information;
the depth information acquisition module is used for inputting the material image to be identified into a convolutional layer in a set neural network to obtain depth characteristic information; wherein the set neural network comprises a convolutional layer and a full connection layer;
the intermediate feature acquisition module is used for weighting and fusing the local feature information and the depth feature information to acquire intermediate features;
and the type identification module is used for inputting the intermediate features into the full connection layer of the set neural network to obtain the material type of the material image to be identified.
8. The apparatus of claim 7, wherein the intermediate feature obtaining module comprises:
a data feature adjusting unit, configured to adjust a data feature of the local feature information so that the adjusted data feature of the local feature information is the same as the data feature of the depth feature information;
the weighted fusion unit is used for carrying out weighted fusion on the adjusted local feature information and the depth feature information according to preset weight;
and the intermediate characteristic determining unit is used for determining the fusion result as the intermediate characteristic.
9. An apparatus, characterized in that the apparatus comprises:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the material identification method according to any one of claims 1-6.
10. A storage medium containing computer-executable instructions for performing the method of identifying a material as recited in any one of claims 1-6 when executed by a computer processor.
Priority Applications (1)

  • Application number: CN202010878343.3A — priority date: 2020-08-27 — filing date: 2020-08-27 — title: Material identification method, device, equipment and storage medium — status: Pending

Publications (1)

  • Publication number: CN112069958A — publication date: 2020-12-11

Family

  • ID=73660477

Country Status (1)

  • Country: CN — publication: CN112069958A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372648A (en) * 2016-10-20 2017-02-01 中国海洋大学 Multi-feature-fusion-convolutional-neural-network-based plankton image classification method
CN107330463A (en) * 2017-06-29 2017-11-07 南京信息工程大学 Model recognizing method based on CNN multiple features combinings and many nuclear sparse expressions
CN108710916A (en) * 2018-05-22 2018-10-26 重庆完美空间科技有限公司 The method and device of picture classification
CN109190752A (en) * 2018-07-27 2019-01-11 国家新闻出版广电总局广播科学研究院 The image, semantic dividing method of global characteristics and local feature based on deep learning
KR20200027428A (en) * 2018-09-04 2020-03-12 주식회사 스트라드비젼 Learning method, learning device for detecting object using edge image and testing method, testing device using the same
CN109815967A (en) * 2019-02-28 2019-05-28 北京环境特性研究所 CNN ship seakeeping system and method based on Fusion Features
CN110969171A (en) * 2019-12-12 2020-04-07 河北科技大学 Image classification model, method and application based on improved convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
徐克虎 (Xu Kehu) et al.: "智能计算方法及其应用" [Intelligent Computing Methods and Their Applications], National Defense Industry Press, 31 July 2019 *


Legal Events

  • PB01 — Publication
  • SE01 — Entry into force of request for substantive examination
  • RJ01 — Rejection of invention patent application after publication (application publication date: 2020-12-11)