CN117523465A - Automatic identification method for material types of material yard


Info

Publication number
CN117523465A
CN117523465A (Application No. CN202410003990.8A)
Authority
CN
China
Prior art keywords
image
analysis
knocking
identification
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410003990.8A
Other languages
Chinese (zh)
Other versions
CN117523465B (en)
Inventor
纪辉
尹可晖
房文静
张�杰
赵伟丽
董怡
薛松
赵伟
Current Assignee
Shandong Chaohui Automation Technology Co ltd
Original Assignee
Shandong Chaohui Automation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Chaohui Automation Technology Co ltd filed Critical Shandong Chaohui Automation Technology Co ltd
Priority to CN202410003990.8A (granted as CN117523465B)
Publication of CN117523465A
Application granted
Publication of CN117523465B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 29/00: Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N 29/04: Analysing solids
    • G01N 29/045: Analysing solids by imparting shocks to the workpiece and detecting the vibrations or the acoustic waves caused by the shocks
    • G01N 29/12: Analysing solids by measuring frequency or resonance of acoustic waves
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/98: Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993: Evaluation of the quality of the acquired pattern
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The invention relates to the technical field of material identification, and in particular to an automatic identification method for material types in a material yard, comprising the following steps: S1, placing the yard material below a camera module and photographing it; S2, uploading the captured image to a computer and performing recognition analysis with an image recognition module in the computer; S3, then knocking the material with knocking equipment and collecting the knocking sound through a sound collecting module; S4, matching the collected knocking sound wave against the sound waveform corresponding to the recognition analysis result using a comparison analysis model; S5, checking and verifying the accuracy of the image recognition module's identification. The method abandons manual identification, is quick and convenient to operate, and detects whether the image recognition module's judgment is accurate by sound-wave matching, thereby reducing errors and offering good application prospects.

Description

Automatic identification method for material types of material yard
Technical Field
The invention belongs to the technical field of material identification, and particularly relates to an automatic identification method for material types in a material yard.
Background
The stock yards of a modern large-scale plant include ore yards, coal yards, auxiliary yards and blending yards. With the rapid development of the construction industry, the number of construction stock yards and the variety of materials have grown quickly, and accurately and rapidly distinguishing different types of materials is one of the problems a construction stock yard must solve. At present, the types of materials in construction stock yards are determined mainly by human visual observation, which has low measurement precision, high labor intensity and high labor cost, is easily disturbed by human factors, is inconvenient, and produces judgment errors.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an automatic identification method for the material types of a material yard, so as to solve the technical problems.
In order to achieve the above purpose, the present invention provides the following technical solutions: an automatic identification method for the material types of a material yard comprises the following steps:
s1, placing a material yard material below a camera module, and shooting the material;
s2, uploading the shot image into a computer, and performing recognition analysis through an image recognition module in the computer;
s3, knocking the materials by using knocking equipment again, and collecting knocked sound through a sound collecting module;
s4, matching analysis is carried out on the collected knocking sound waves and sound wave waveforms corresponding to the recognition analysis results through a comparison analysis model;
s5, checking the identification accuracy of the image identification module, and verifying the accuracy;
and S6, if the comparison result is inconsistent, continuously repeating the steps S1-S4 on the materials to identify the materials.
Preferably, in step S1 the camera module is a camera connected to the computer; in step S2 the image recognition module is built into the computer, and the module is constructed in the following steps:
constructing the model framework: determining the required convolution layers, the activation function of each layer, and the hidden units of each layer;
model training: once the model framework is built, training images are used; this process also requires defining the number of iterations and evaluating the model's performance after each iteration;
evaluating the model's performance on the test data.
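As an illustration of the model-framework step, the convolution and activation operations it names can be sketched in plain Python; the 4x4 image patch and the vertical-edge kernel below are illustrative placeholders, not values from the patent:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as used in CNN layers)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise ReLU activation, a common per-layer activation function."""
    return [[max(0, v) for v in row] for row in feature_map]

# Toy 4x4 grayscale patch with a vertical edge, and a 3x3 edge-sensitive kernel.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

feature = relu(conv2d(image, kernel))  # strong response along the edge
```

In a real recognition module this convolution-plus-activation pass would be stacked for each required layer and the kernel values learned during the training iterations described above.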
Preferably, the image recognition module performs recognition in the following steps:
preprocessing the image: performing the necessary preprocessing on the image file, namely scaling, normalization and denoising;
extracting features of the image, and extracting distinguishing features including colors, edges and textures from the image;
classifying the extracted features by adopting a deep learning algorithm, and outputting the classified results;
the feature extraction steps are as follows:
graying, converting the color image into a gray image, reducing the data amount in the image, and highlighting the brightness information of the image;
filtering, namely smoothing the image through filtering, removing noise and details, and enabling the image to be smoother;
edge detection, namely detecting edges in the image by using an edge detection algorithm, and extracting the outline and the boundary of an object;
corner detection, namely detecting corners in the image, namely, areas with sharp changes of pixel values, and highlighting important feature points in the image;
the feature description describes the extracted features and is used for subsequent matching and recognition;
feature filtering, namely filtering the extracted features, eliminating repeated and redundant features, and reserving important feature points;
feature encoding, namely encoding the features to carry out subsequent matching and identification;
the formula involved in the edge detection algorithm is as follows:
G(x, y) = abs(f(x+1, y) - f(x, y)) + abs(f(x, y+1) - f(x, y))
where G is the gradient value at the pixel point (x, y), f is the gray value of the pixel point, and abs(·) denotes the absolute value of the difference.
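As a minimal sketch of the gradient computation described above (using forward differences in x and y; the sample gray values are illustrative):

```python
def gradient_magnitude(f, x, y):
    """Approximate gradient at (x, y): sum of absolute forward differences."""
    gx = abs(f[y][x + 1] - f[y][x])  # horizontal gray-value difference
    gy = abs(f[y + 1][x] - f[y][x])  # vertical gray-value difference
    return gx + gy

# Toy grayscale image with a vertical edge between columns 1 and 2.
gray = [[10, 10, 200, 200],
        [10, 10, 200, 200],
        [10, 10, 200, 200]]

g = gradient_magnitude(gray, 1, 1)  # large gradient on the edge
```

Pixels whose gradient exceeds a chosen threshold would be marked as edge points when extracting object outlines and boundaries.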
Preferably, the image scaling step is: open the image file to be scaled, open a scaling tool, enter a custom width and height or adjust by relative units such as percentage, keep the aspect ratio of the image to avoid deformation, execute the scaling operation, and save the scaled image;
the image denoising step is: open the scaled image, read and process it with a programming language, observe and analyze the noise in the image to determine the noise type, remove the noise with the mean filtering method, evaluate the denoised image against the image before denoising to compare the effect before and after, and save the image again;
the calculation formula of the mean filtering method is as follows:
y(n) = (1/m) · [x(n) + x(n-1) + … + x(n-m+1)]
where x(n) is a pixel value in the image, y(n) is the filtered pixel value, and m is the size of the filter; the formula computes the average of the pixel values within the filter window.
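The mean (moving-average) filter described above can be sketched as follows; the pixel row and window size are illustrative:

```python
def mean_filter(x, m):
    """Moving-average filter: each output is the mean of the last m samples."""
    y = []
    for n in range(len(x)):
        window = x[max(0, n - m + 1):n + 1]  # window shrinks at the boundary
        y.append(sum(window) / len(window))
    return y

# A noisy pixel row (the spike of 50 is the "noise") smoothed with m = 3.
pixels = [10, 12, 50, 11, 13]
smoothed = mean_filter(pixels, 3)
```

The spike is spread across its neighbors rather than removed outright, which is the characteristic smoothing behavior of mean filtering.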
Preferably, in step S3 the knocking equipment is driven by a cylinder and produces the knock with a hammer head, and the cylinder can control the knocking force. The sound collecting module collects the sound with a recording device, and the collected audio file is stored in the computer. In the computer, the sound wave is analyzed by spectrum analysis, which reveals how the wave is distributed over different frequencies: the time-domain signal is decomposed into components of different frequencies, and the amplitude and phase of each component are calculated. By analyzing the spectrum, the frequency components in the signal and their intensity and phase information can be known, and the corresponding waveform diagram is obtained.
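The spectrum-analysis step can be sketched with a plain discrete Fourier transform; the 64-sample test tone below is an illustrative stand-in for a recorded knock, not data from the patent:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Magnitude spectrum of a real signal via the discrete Fourier transform."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Illustrative "knock": a pure tone completing 5 cycles over 64 samples.
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]

mags = dft_magnitudes(signal)
dominant_bin = max(range(n // 2), key=lambda k: mags[k])  # strongest component
```

A production system would use an FFT over the recorded audio, but the decomposition into frequency components with per-component amplitude is the same.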
Preferably, in step S4 the comparison analysis model uses the following calculation formulas:
Y = (a1 + a2 + a3 + … + an) / n
where Y is the mean, a1 is data 1, a2 is data 2, a3 is data 3, an is data n, and n is the total number of data;
Q = |x1 - x2|
where x1 and x2 are the values of the two sets of data and Q is the absolute difference between them.
Preferably, the comparison analysis model in step S4 operates in the following steps:
collecting the recorded knocking waveform and the waveform corresponding to the recognition analysis result;
overlaying the two waveform images, checking whether the waveforms coincide completely, and comparing the differences and similarities between the two waveform images.
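The overlay check above can be sketched as a point-by-point comparison with a tolerance; the tolerance value and sample waveforms are assumptions for illustration, not values from the patent:

```python
def waveforms_match(wave_a, wave_b, tolerance=0.05):
    """True if the two waveforms coincide within the tolerance at every point."""
    if len(wave_a) != len(wave_b):
        return False
    return all(abs(a - b) <= tolerance for a, b in zip(wave_a, wave_b))

# Recorded knock vs. the database waveform for the identified material type.
knock = [0.0, 0.51, 0.99, 0.50, 0.01]
reference = [0.0, 0.50, 1.00, 0.50, 0.00]

consistent = waveforms_match(knock, reference)  # within tolerance everywhere
```

If the result is False, the identification would be repeated as described in step S6.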
Preferably, in step S4 the matching analysis step is: determine the covariates, selecting them under the conditional independence assumption; the covariates should be confounders that influence both the intervention variable and the outcome variable, should be variables that occur before the intervention variable, and variables that do not influence the intervention variable but have an important influence on the outcome variable also need to be introduced;
define similarity and calculate distances over the multidimensional covariates; since the scales on the coordinate axes are assumed to be the same, the covariates must be expressed in the same dimensions.
Preferably, in step S5 the accuracy of the identification result of the image recognition module is judged through the comparison analysis of step S4.
Preferably, if the judgment in step S5 is inconsistent, identification of the material is repeated until the material's acoustic waveform is consistent with the waveform of the recognition analysis result, whereupon identification ends.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the shot image is subjected to characteristic extraction and identification through the image identification module, the type of the material in the stock ground is analyzed and judged, the material is knocked again through a knocking mode, the knocked sound waves are collected, the frequency of the sound waves is analyzed to prepare a sound wave diagram, the sound waves of the type of the material judged by the image identification module are extracted from the database, the sound waves are matched with the collected sound waves to judge whether the two are consistent, if the two are consistent, the material identification is accurate, if the two are inconsistent, the material identification is error, the repeated judgment is continued until the two are accurate, the identification result is obtained, the manual identification method is abandoned, the operation is rapid and convenient, whether the judgment of the image identification module is accurate or not is detected through the sound wave matching mode, the error reduction condition is reduced, and a better application prospect is brought.
Drawings
FIG. 1 is a block diagram of an identification procedure according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a technical scheme that: an automatic identification method for the material types of a material yard comprises the following steps:
s1, placing a material yard material below a camera module, and shooting the material;
s2, uploading the shot image into a computer, and performing recognition analysis through an image recognition module in the computer;
s3, knocking the materials by using knocking equipment again, and collecting knocked sound through a sound collecting module;
s4, matching analysis is carried out on the collected knocking sound waves and sound wave waveforms corresponding to the recognition analysis results through a comparison analysis model;
s5, checking the identification accuracy of the image identification module, and verifying the accuracy;
s6, if the comparison result is inconsistent, continuously repeating the steps S1-S4 to identify the materials;
wherein the material types include iron ore, iron ore concentrate, pellets, manganese ore, limestone, dolomite, serpentine, silica, coking coal, thermal coal, sinter and the like, stored in ore yards and coal yards; different types of material differ in appearance and in knocking sound, so the appearance is used for a preliminary judgment of the approximate material type, and the knocking sound is used to check whether the preliminary judgment is consistent.
Further, in step S1 the camera is connected to the computer, and in step S2 the image recognition module is built into the computer; the module is constructed in the following steps:
constructing the model framework: determining the required convolution layers, the activation function of each layer, and the hidden units of each layer;
model training: once the model framework is built, training images are used; this process also requires defining the number of iterations and evaluating the model's performance after each iteration;
evaluating the model's performance on the test data.
Further, the image recognition module performs recognition in the following steps:
preprocessing the image: performing the necessary preprocessing on the image file, namely scaling, normalization and denoising;
extracting features of the image, and extracting distinguishing features including colors, edges and textures from the image;
classifying the extracted features by adopting a deep learning algorithm, and outputting the classified results;
the feature extraction steps are as follows:
graying, converting the color image into a gray image, reducing the data amount in the image, and highlighting the brightness information of the image;
filtering, namely smoothing the image through filtering, removing noise and details, and enabling the image to be smoother;
edge detection, namely detecting edges in the image by using an edge detection algorithm, and extracting the outline and the boundary of an object;
corner detection, namely detecting corners in the image, namely, areas with sharp changes of pixel values, and highlighting important feature points in the image;
the feature description describes the extracted features and is used for subsequent matching and recognition;
feature filtering, namely filtering the extracted features, eliminating repeated and redundant features, and reserving important feature points;
feature encoding, namely encoding the features to carry out subsequent matching and identification;
the formula involved in the edge detection algorithm is as follows:
G(x, y) = abs(f(x+1, y) - f(x, y)) + abs(f(x, y+1) - f(x, y))
where G is the gradient value at the pixel point (x, y), f is the gray value of the pixel point, and abs(·) denotes the absolute value of the difference.
Further, the image scaling step is: open the image file to be scaled, open a scaling tool, enter a custom width and height or adjust by relative units such as percentage, keep the aspect ratio of the image to avoid deformation, execute the scaling operation, and save the scaled image;
the image denoising step is: open the scaled image, read and process it with a programming language, observe and analyze the noise in the image to determine the noise type, remove the noise with the mean filtering method, evaluate the denoised image against the image before denoising to compare the effect before and after, and save the image again;
the calculation formula of the mean filtering method is as follows:
y(n) = (1/m) · [x(n) + x(n-1) + … + x(n-m+1)]
where x(n) is a pixel value in the image, y(n) is the filtered pixel value, and m is the size of the filter; the formula computes the average of the pixel values within the filter window;
the image normalization step subtracts the mean of all pixels in the image from each pixel, so that the mean of all pixels in the transformed image becomes 0; this process is also called zero-mean centering. In essence, centering shifts all pixel values once along the coordinate axis: subtracting the image's mean from each pixel converts each pixel value into a value relative to that mean.
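The zero-mean centering described above can be sketched as follows; the 2x2 patch is an illustrative placeholder:

```python
def zero_mean_center(image):
    """Subtract the image-wide mean from every pixel (zero-mean centering)."""
    pixels = [v for row in image for v in row]
    mu = sum(pixels) / len(pixels)
    return [[v - mu for v in row] for row in image]

# Illustrative 2x2 grayscale patch; its mean is 25.
patch = [[10, 20],
         [30, 40]]

centered = zero_mean_center(patch)  # the mean of the result is 0
```

Each output pixel is now a value relative to the image mean, which is the shift along the coordinate axis the text describes.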
Further, in step S3 the knocking equipment is driven by a cylinder and produces the knock with a hammer, and the cylinder can control the knocking force. The sound collecting module collects the sound with a recording device, and the collected audio file is stored in the computer. In the computer, the sound wave analysis step uses spectrum analysis, which reveals how the wave is distributed over different frequencies: the time-domain signal is decomposed into components of different frequencies, and the amplitude and phase of each component are calculated. By analyzing the spectrum, the frequency components in the signal and the intensity and phase information of each component can be known, and the corresponding waveform diagram is obtained;
the cylinder is a cylindrical metal part which guides the piston to do linear reciprocating motion in the cylinder, and air converts heat energy into mechanical energy through expansion in the engine cylinder; the gas is compressed by a piston in a compressor cylinder to increase the pressure;
the structure of the air cylinder consists of an air cylinder, an end cover, a piston rod and a sealing element;
the principle of the air cylinder is that the pressure energy of compressed air is converted into mechanical energy, and the driving mechanism carries out linear reciprocating, swinging and rotating motions;
the structure of the cylinder is as follows:
the inner diameter of the cylinder barrel represents the output force of the cylinder, the piston slides in the cylinder in a smooth and reciprocating manner, and the roughness of the inner surface of the cylinder is Ra0.8mu;
the end cover is provided with an inlet and an outlet, a buffer mechanism is arranged in the end cover, a sealing ring and a dust-proof ring are arranged on the end cover at the rod side, so that the piston rod is prevented from leaking air, and external dust is prevented from entering the cylinder;
the piston is a pressure-bearing component in the cylinder, and in order to prevent gas leakage between the left chamber and the right chamber of the piston, a piston sealing ring is arranged, so that the position of the cylinder can be improved by a wear-resistant ring on the piston, the abrasion of the piston sealing ring is reduced, and the friction resistance is reduced;
the piston rod is the most important supporting piece in the cylinder, and high-carbon steel, hard chromium or stainless steel plated on the surface is generally used for preventing corrosion, so that the wear resistance of the sealing ring is improved;
sealing rings, the seals of rotating or reciprocating parts are called dynamic seals, the seals of stationary parts are also called static seals;
the output force of the air cylinder is controlled by controlling the air source pressure, the output force of the air cylinder is in direct proportion to the air source pressure, so that the output force of the air cylinder can be controlled by controlling the air source pressure, and common air source pressure control modes comprise a manual regulating valve, a proportional pressure regulating valve and an air pressure sensor and a controller.
Further, in step S4 the comparison analysis model uses the following calculation formulas:
Y = (a1 + a2 + a3 + … + an) / n
where Y is the mean, a1 is data 1, a2 is data 2, a3 is data 3, an is data n, and n is the total number of data;
Q = |x1 - x2|
where x1 and x2 are the values of the two sets of data and Q is the absolute difference between them.
Further, in step S4 the comparison analysis model operates in the following steps:
collecting the recorded knocking waveform and the waveform corresponding to the recognition analysis result;
overlaying the two waveform images, checking whether the waveforms coincide completely, and comparing the differences and similarities between the two waveform images.
Furthermore, in step S4 the matching analysis step is: determine the covariates, selecting them under the conditional independence assumption; the covariates should be confounders that influence both the intervention variable and the outcome variable, should be variables that occur before the intervention variable, and variables that do not influence the intervention variable but have an important influence on the outcome variable also need to be introduced;
covariates are quantities used in statistical analysis to represent relationships between two or more variables; a covariate is not the main object of investigation, but it may be related to the dependent and independent variables, and by controlling covariates the effect of the parameters on the outcome variables can be assessed more accurately;
covariates play an important role in regression analysis: adding covariates eliminates potential confounding factors, so that the relationship between parameters and factors is estimated more accurately;
selecting and controlling covariates is a complex process that requires consideration of the following points:
correlation analysis: selecting covariates related to the parameters and the cause variables;
influence evaluation: evaluating the influence of each covariate on the parameters and the cause variables;
data acquisition: ensuring the accuracy and integrity of the covariate data;
similarity is defined, and distances are calculated according to multidimensional covariates, assuming that dimensions on coordinate axes are the same, indicating that dimensions of covariates must be the same.
Further, in the step S5, the accuracy of the recognition result of the image recognition module is determined through the comparison analysis in the step S4.
Further, if the judgment in step S5 is inconsistent, identification of the material is repeated until the material's acoustic waveform is consistent with the waveform of the recognition analysis result, whereupon identification ends.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. The automatic identification method for the material types of the material yard is characterized by comprising the following steps of:
s1, placing a material yard material below a camera module, and shooting the material;
s2, uploading the shot image into a computer, and performing recognition analysis through an image recognition module in the computer;
s3, knocking the materials by using knocking equipment again, and collecting knocked sound through a sound collecting module;
s4, matching analysis is carried out on the collected knocking sound waves and sound wave waveforms corresponding to the recognition analysis results through a comparison analysis model;
s5, checking the identification accuracy of the image identification module, and verifying the accuracy;
and S6, if the comparison result is inconsistent, continuously repeating the steps S1-S4 on the materials to identify the materials.
2. The automatic stock yard material type identification method according to claim 1, wherein: in the step S1, the camera is connected with the computer, and in the step S2, the image recognition module is built in the computer, and the image recognition module building step is as follows:
constructing a model frame, determining required convolution layers, activating functions of each layer, and hiding units of each layer;
model training: once the model framework is built, training images are used; this process also requires defining the number of iterations and evaluating the model's performance after each iteration;
the performance of the model is evaluated by the test data.
3. The automatic stock ground material type identification method according to claim 2, wherein the image recognition module performs recognition in the following steps:
preprocessing the image: performing the necessary preprocessing on the image file, namely scaling, normalization and denoising;
extracting features of the image, and extracting distinguishing features including colors, edges and textures from the image;
classifying the extracted features by adopting a deep learning algorithm, and outputting the classified results;
the feature extraction steps are as follows:
graying, converting the color image into a gray image, reducing the data amount in the image, and highlighting the brightness information of the image;
filtering, namely smoothing the image through filtering, removing noise and details, and enabling the image to be smoother;
edge detection, namely detecting edges in the image by using an edge detection algorithm, and extracting the outline and the boundary of an object;
corner detection, namely detecting corners in the image, namely, areas with sharp changes of pixel values, and highlighting important feature points in the image;
the feature description describes the extracted features and is used for subsequent matching and recognition;
feature filtering, namely filtering the extracted features, eliminating repeated and redundant features, and reserving important feature points;
feature encoding, namely encoding the features to carry out subsequent matching and identification;
the edge detection algorithm involves the following formula:
G(x, y) = abs(f(x+1, y) - f(x, y)) + abs(f(x, y+1) - f(x, y))
where G(x, y) is the gradient value at the pixel point (x, y), f is the gray value of the pixel point, and abs(·) denotes the absolute value of the difference.
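As a hedged illustration of the graying and edge detection steps, the numpy sketch below converts a color image to gray with the common luminance weights (an assumption; the claim only says the color image is converted) and computes a gradient map from absolute differences of neighboring gray values, consistent with the definitions of G, f and abs given for the edge detection formula:

```python
import numpy as np

def to_gray(rgb):
    """Grayscale conversion with standard luminance weights (illustrative;
    the claim does not fix the conversion coefficients)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def gradient_edges(f):
    """G(x, y) = |f(x+1, y) - f(x, y)| + |f(x, y+1) - f(x, y)|,
    evaluated for every pixel that has both a right and a down neighbor."""
    f = f.astype(float)
    gx = np.abs(np.diff(f, axis=0))[:, :-1]   # |f(x+1, y) - f(x, y)|
    gy = np.abs(np.diff(f, axis=1))[:-1, :]   # |f(x, y+1) - f(x, y)|
    return gx + gy
```

A vertical intensity step in a small test image produces a high gradient exactly at the step and zero elsewhere, which is the contour-extraction behavior the claim describes.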
4. A method for automatically identifying the types of materials in a stock ground according to claim 3, wherein: the image scaling step is as follows: open the image file to be scaled, adjust it with a scaling tool by entering a custom width and height, or adjust it with relative units such as percentages; keep the aspect ratio of the image to avoid deformation; execute the scaling operation and save the scaled image;
the image denoising step is as follows: open the scaled image, read and process it with a programming language, observe and analyze the noise in the image to determine the noise type, remove the noise with the mean filtering method, evaluate the denoised image against the image before denoising to compare the effect before and after denoising, and save the image again;
the calculation formula of the mean filtering method is as follows:
y(n) = (1/m) × [x(n) + x(n-1) + … + x(n-m+1)]
where x(n) is a pixel value in the image, y(n) is the filtered pixel value, and m is the size of the filter; the formula takes the average of the pixel values within the filter window.
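The mean filtering formula above is one-dimensional; as a hedged sketch, the function below applies it along a 1-D sequence of pixel values with a trailing window (the claim does not fix the window alignment, and the window is clamped at the left border, both assumptions):

```python
import numpy as np

def mean_filter(x, m=3):
    """y(n) = (1/m) * sum of the m most recent pixel values: each output
    sample is the average of the pixel values inside the filter window."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    for n in range(len(x)):
        lo = max(0, n - m + 1)      # clamp the window at the left border
        y[n] = x[lo:n + 1].mean()   # average over the available window
    return y
```

Averaging spreads an isolated noise spike over the window, which is the smoothing/denoising effect the claim relies on.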
5. The automatic stock yard material type identification method according to claim 1, wherein: in step S3, the knocking equipment is driven by a cylinder and the knocking effect is produced by a hammer head, the cylinder controlling the knocking force of the knocking equipment; the sound is collected by the recording equipment and the collected audio file is stored in the computer; in the computer, the sound wave is analyzed by spectrum analysis, which reveals how the sound wave is distributed over different frequencies: the time-domain signal is decomposed into components of different frequencies, and the amplitude and phase of each component are calculated; by analyzing the spectrum, the frequency components in the signal and the intensity and phase information of each frequency component are obtained, yielding the corresponding waveform diagram.
6. The automatic stock ground material type identification method according to claim 1, wherein the calculation formulas of the comparison analysis model in step S4 are:
Y = (a1 + a2 + a3 + … + an) / n
where Y is the mean, a1 is data 1, a2 is data 2, a3 is data 3, an is data n, and n is the total number of data;
Q = |x1 - x2|
where x1 and x2 are the values of the two sets of data and Q is the absolute difference between them.
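The comparison analysis model's two quantities — the mean Y of n data values and the absolute difference Q between two values — are direct to compute; the function names below are illustrative:

```python
def mean_value(data):
    """Y = (a1 + a2 + ... + an) / n, the mean of the collected data values."""
    return sum(data) / len(data)

def absolute_difference(x1, x2):
    """Q = |x1 - x2|, the absolute difference between the two data values."""
    return abs(x1 - x2)
```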
7. The automatic stock ground material type identification method according to claim 1, wherein the comparison analysis model analysis step in the step S4 is as follows:
collect the waveform data of the recorded knocking sound wave and the waveform corresponding to the recognition analysis result;
overlay the two groups of waveform images, check whether the waveforms coincide completely, and compare the differences and similarities between the two groups of waveform images.
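The overlay comparison above — checking whether two waveforms coincide completely and quantifying their difference and similarity — can be sketched numerically; the choice of maximum pointwise gap as the "difference" and Pearson correlation as the "similarity" is an assumption, since the claim does not name specific measures:

```python
import numpy as np

def compare_waveforms(wave_a, wave_b, tol=1e-6):
    """Overlay two equal-length waveforms: report whether they coincide
    completely (within tol), the largest pointwise gap, and their
    Pearson correlation as a similarity score."""
    a = np.asarray(wave_a, dtype=float)
    b = np.asarray(wave_b, dtype=float)
    max_gap = float(np.max(np.abs(a - b)))       # largest pointwise difference
    identical = max_gap <= tol                   # "waveforms coincide completely"
    similarity = float(np.corrcoef(a, b)[0, 1])  # shape similarity in [-1, 1]
    return identical, max_gap, similarity
```

Two identical waveforms report full coincidence; a vertically shifted copy is no longer coincident yet keeps correlation 1, showing why both a difference and a similarity measure are useful.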
8. The automatic stock yard material type identification method according to claim 1, wherein: in the step S4, the matching analysis step is to determine the covariates, select the covariates according to the condition independence assumption, the variables should influence confounding factors of the intervention variables and the result variables at the same time, select the variables which occur before the intervention variables, and simultaneously, the variables which do not influence the intervention variables but have important influence on the result variables need to be introduced;
similarity is defined, and distances are calculated according to multidimensional covariates, assuming that dimensions on coordinate axes are the same, indicating that dimensions of covariates must be the same.
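The distance calculation over multidimensional covariates can be sketched as a Euclidean distance after each covariate is divided by a per-dimension scale so that all axes are comparable, reflecting the claim's requirement that the covariate dimensions be the same; the scaling scheme itself is an assumption:

```python
import numpy as np

def covariate_distance(u, v, scale):
    """Euclidean distance between two samples' covariate vectors after
    dividing each covariate by its scale, so every axis carries the
    same dimension before distances are compared."""
    u = np.asarray(u, dtype=float) / scale   # rescale sample 1
    v = np.asarray(v, dtype=float) / scale   # rescale sample 2
    return float(np.sqrt(np.sum((u - v) ** 2)))
```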
9. The automatic stock yard material type identification method according to claim 1, wherein: in step S5, the accuracy of the identification result of the image identification module is judged through the comparison analysis of step S4.
10. The automatic stock yard material type identification method according to claim 1, wherein: if the judgment in step S5 is inconsistent, the material is repeatedly identified until the acoustic waveform of the material is consistent with the waveform of the identification analysis result, whereupon the identification ends.
CN202410003990.8A 2024-01-03 2024-01-03 Automatic identification method for material types of material yard Active CN117523465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410003990.8A CN117523465B (en) 2024-01-03 2024-01-03 Automatic identification method for material types of material yard


Publications (2)

Publication Number Publication Date
CN117523465A true CN117523465A (en) 2024-02-06
CN117523465B CN117523465B (en) 2024-04-19

Family

ID=89762997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410003990.8A Active CN117523465B (en) 2024-01-03 2024-01-03 Automatic identification method for material types of material yard

Country Status (1)

Country Link
CN (1) CN117523465B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110243450A1 (en) * 2010-04-01 2011-10-06 Microsoft Corporation Material recognition from an image
CN109635776A (en) * 2018-12-23 2019-04-16 广东腾晟信息科技有限公司 Pass through the method for procedure identification human action
CN114821735A (en) * 2022-05-12 2022-07-29 国网河南省电力公司信息通信公司 Intelligent storage cabinet based on face recognition and voice recognition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUO, R et al.: "Perception for coupler rod of freight train based on image and point clouds", 2022 China Automation Congress (CAC), 31 December 2022 (2022-12-31) *
PENG Hongyuan; WU Lian; ZHENG Xu; ZHANG Wenwen; LAN Tengteng: "Development of plant recognition technology based on deep learning", Computer Knowledge and Technology, no. 19, 5 July 2018 (2018-07-05) *

Also Published As

Publication number Publication date
CN117523465B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
Luo et al. Automated visual defect detection for flat steel surface: A survey
Peng et al. Study of tool wear monitoring using machine vision
Guler et al. Measurement of particle movement in granular soils using image analysis
CN111982916A (en) Welding seam surface defect detection method and system based on machine vision
CN115965624B (en) Method for detecting wear-resistant hydraulic oil pollution particles
CN110264445A (en) The screen printing of battery quality determining method of piecemeal template matching combining form processing
CN106870957A (en) A kind of feature extracting method of pipeline defect and magnetic leakage signal
CN109118471A (en) A kind of polishing workpiece, defect detection method suitable under complex environment
CN112989481B (en) Method for processing stable visual image data of complex geological tunnel construction surrounding rock
CN112258444A (en) Elevator steel wire rope detection method
CN113155839A (en) Steel plate outer surface defect online detection method based on machine vision
CN117523465B (en) Automatic identification method for material types of material yard
CN116452944A (en) Surface crack identification method and device
Fu et al. Research on image-based detection and recognition technologies for cracks on rail surface
CN117115075A (en) Metal surface rust spot detection method integrating multidirectional multi-element universe local segmentation
CN106897723B (en) Target real-time identification method based on characteristic matching
CN117689662B (en) Visual detection method and system for welding quality of heat exchanger tube head
Gu et al. A detection and identification method based on machine vision for bearing surface defects
CN112597923B (en) Pulse pile-up correction method based on morphology and optimized gray model
Zeng et al. MFAM-Net: A Surface Defect Detection Network for Strip Steel via Multiscale Feature Fusion and Attention Mechanism
CN112651341B (en) Processing method of welded pipe weld joint real-time detection video
CN113870328A (en) Liquid foreign matter visual detection method and system
CN116740052B (en) Method for measuring torch discharge flow in real time based on torch video
Gao et al. A fast surface-defect detection method based on Dense-Yolo network
CN117474910B (en) Visual detection method for motor quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant