CN117788871A - Vehicle-mounted weighing management method and platform based on artificial intelligence

Info

Publication number
CN117788871A
Authority
CN
China
Prior art keywords
data
article
image
carrying
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311815795.7A
Other languages
Chinese (zh)
Inventor
Zhang Kailiang (张凯亮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Yanfa High Tech Co ltd
Original Assignee
Hainan Yanfa High Tech Co ltd
Application filed by Hainan Yanfa High Tech Co ltd filed Critical Hainan Yanfa High Tech Co ltd
Priority to CN202311815795.7A priority Critical patent/CN117788871A/en
Publication of CN117788871A publication Critical patent/CN117788871A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, in particular to a vehicle-mounted weighing management method and platform based on artificial intelligence. The method comprises the following steps: carrying out real-time image acquisition of the load-carrying articles by using a vehicle-mounted camera to obtain image data of the load-carrying articles; carrying out article data detection on the image data of the loaded articles to obtain article detection data, wherein the article detection data comprises article position data and article boundary data; carrying out item volume weight estimation on the load item image data according to the item position data and the item boundary data to obtain item first weight data; and carrying out article material identification on the loaded article image data according to the article detection data to obtain article material data, and carrying out article material weight estimation according to the article detection data and the article material data to obtain article second weight data. The invention realizes the automatic detection and weighing process of the load-carrying articles, does not need manual intervention, and improves the working efficiency.

Description

Vehicle-mounted weighing management method and platform based on artificial intelligence
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a vehicle-mounted weighing management method and platform based on artificial intelligence.
Background
The vehicle-mounted weighing management method refers to a system or program used in the transportation industry to weigh and manage a cargo vehicle in real time with weighing equipment. This method aims at ensuring accurate loading of goods to avoid overload or under-load situations, thereby ensuring road safety and transport efficiency. Conventionally, the entire vehicle is weighed in a stationary state at a fixed site: the driver must drive the vehicle to a designated location and then weigh it. This provides accurate weight information, but it requires additional time and labor, and, particularly on busy transport routes, the detour to a fixed weighing location may cause delays.
Disclosure of Invention
The invention provides a vehicle-mounted weighing management method and platform based on artificial intelligence, aiming to solve at least one of the above technical problems.
The application provides a vehicle-mounted weighing management method based on artificial intelligence, which comprises the following steps:
step S1: carrying out real-time image acquisition of the load-carrying articles by using a vehicle-mounted camera so as to obtain image data of the load-carrying articles;
Step S2: carrying out item data detection on the load item image data so as to obtain item detection data, wherein the item detection data comprises item position data and item boundary data;
step S3: calculating volume data according to the article position data and the article boundary data to obtain article volume data, constructing a three-dimensional model of the loaded article image data by utilizing the article volume data, and estimating the volume and the weight of the article to obtain article first weight data;
step S4: carrying out article material identification on the loaded article image data according to the article detection data so as to obtain article material data, and carrying out article material weight estimation according to the article detection data and the article material data so as to obtain article second weight data;
step S5: and carrying out vehicle-mounted weighing abnormality detection generation according to the first weight data of the object and the second weight data of the object, thereby obtaining vehicle-mounted weighing abnormality detection data and carrying out vehicle-mounted weighing abnormality warning operation.
According to the method, the vehicle-mounted camera is used for image acquisition, and the steps of object data detection and material identification are adopted, so that the automatic detection and weighing processes of the loaded objects are realized, manual intervention is not needed, and the working efficiency is improved. By extracting the position data and the boundary data of the article and combining the first volume weight estimation of the article and the material weight estimation of the article, the weight of the article can be estimated more accurately, personal errors are reduced, and the weighing accuracy is improved. The method not only considers the position and boundary information of the article, but also combines the identification of the material of the article, thereby comprehensively utilizing various information sources and improving the weighing precision. By detecting and generating the vehicle-mounted weighing abnormality, abnormal conditions such as overload and abnormal shapes in the weighing process can be found and warned in time, and the conditions that accidents or damages are possibly caused are avoided. Through automatic and intelligent weighing process, manual participation is reduced, danger and potential safety hazard in the working process are reduced, and working safety is improved.
Preferably, step S1 is specifically:
step S11: acquiring real-time load article images through a vehicle-mounted camera, so as to obtain real-time load article image data;
step S12: carrying out dynamic denoising processing on real-time load article image data by using a preset dynamic adaptive noise detection model so as to obtain load article image denoising data;
step S13: carrying out image quality evaluation on the denoising data of the image of the load article, thereby evaluating the data of the image quality;
step S14: and marking the denoising data of the image of the load article according to the image quality evaluation data, thereby obtaining load article image data.
According to the invention, through the step S11, the image of the load article is acquired in real time by utilizing the vehicle-mounted camera, so that the weighing process can be monitored in real time, and the timeliness and the accuracy of data are ensured. In step S12, a preset dynamic adaptive noise detection model is adopted, which means that the system can automatically identify and filter noise collected by the camera, thereby effectively improving the image quality. Step S12 helps to reduce noise interference in the image by dynamic denoising processing, so that subsequent image analysis and processing are more accurate and reliable. In step S13, quality evaluation is performed on the denoised image, so that a reliable basis can be provided for subsequent data processing, and accuracy of a final result is ensured. Through image quality evaluation, the step S14 marks the denoised image, and can provide data support with more dimensions and more richness for the links such as subsequent data analysis, model training and the like, thereby being beneficial to improving the automatic processing capability of the system. The invention ensures that the obtained image data of the load article has high quality and low noise from image acquisition to denoising processing to image quality evaluation, thereby ensuring the accuracy and reliability of the subsequent weighing process.
Preferably, the step of constructing the dynamic adaptive noise detection model in step S12 specifically includes:
step S121: acquiring standard load article image data and corresponding noise tag data, wherein the noise tag data comprises sensor noise tag data and environment noise tag data;
step S122: performing similar pixel image region division on the standard load article image data so as to obtain image region division data;
step S123: performing spectrum conversion on the image region division data to obtain image region spectrum data;
step S124: frequency domain feature extraction is carried out on the image area spectrum data so as to obtain frequency domain feature data, and pixel statistical feature extraction is carried out on the image area division data so as to obtain pixel statistical feature data, wherein the pixel statistical feature data comprises pixel average feature data, pixel variance feature data and pixel characteristic feature data;
step S125: carrying out noise characteristic detection on the pixel statistical characteristic data and the frequency domain characteristic data by using a preset noise detection engine so as to obtain noise characteristic data;
step S126: generating a primary noise extractor from the noise characteristic data;
Step S127: carrying out noise data extraction on the image data of the standard load article by using a primary noise extractor so as to obtain image noise data;
step S128: performing self-attention clustering calculation on the image noise data so as to obtain image noise clustering data;
step S129: and carrying out parameter optimization on the primary noise extractor by using the image noise cluster data and carrying out model construction by using the noise label data so as to obtain a dynamic adaptive noise detection model.
According to the invention, through processing the image area spectrum data and the pixel statistical characteristic data and combining with a preset noise detection engine, the fine detection of the noise characteristic is realized, so that an accurate data basis is provided for the construction of a subsequent noise extractor. The primary noise extractor is generated according to the noise characteristic data, so that the noise processing process has the characteristic of individuation, and effective noise suppression can be performed according to specific noise conditions. And the noise extractor is used for extracting noise data from the image data of the standard load article, so that noise from a sensor and the environment is effectively removed, and the quality of data in subsequent processing is ensured. The self-attention clustering calculation is carried out on the noise clustering data, so that finer clustering of noise is realized, and the efficiency and accuracy of noise extraction are further improved. By optimizing parameters of the primary noise extractor and constructing a model, a dynamic adaptive noise detection model is formed, so that the noise detection process can be adaptively adjusted and optimized according to specific conditions, and the maximization of the noise processing effect is ensured.
Preferably, the step of dividing the image area of the similar pixels in step S122 is specifically:
step S1221: image segmentation is carried out on the standard load article image data by utilizing preset image division parameter data, so that regional image data are obtained;
step S1222: extracting texture features and color features of the regional image data to obtain regional image texture feature data and regional image color feature data;
step S1223: carrying out similarity matrix construction on the regional image data according to the regional image texture feature data and the regional image color feature data, so as to obtain similarity matrix data;
step S1224: carrying out regional image merging on the regional image data according to the similarity matrix data so as to obtain regional image merging data;
step S1225: generating an image region descriptor for the region image merging data, thereby obtaining image region descriptor data;
step S1226: performing region association calculation on adjacent image data in the region image merging data according to the image region descriptor data, so as to obtain region association data;
step S1227: and carrying out data marking on the region image merging data according to the region association data so as to obtain image region division data.
According to the method, the standard load article image is segmented by utilizing the preset image division parameter data, so that the regional image data are obtained. This allows the image to be segmented at a finer level, thereby obtaining more accurate region information. And extracting texture features and color features from the regional image data so as to obtain rich regional image feature data, and facilitating subsequent similarity calculation and regional association analysis. Based on the texture and color characteristics of the region images, a similarity matrix is constructed to quantify the similarity between different regions, and region merging can be performed on a more accurate and reliable basis. And according to the similarity matrix data, the region images are combined, so that further refined combination of similar regions is realized, and the combination accuracy is improved. Image region descriptors are generated so that each region can be characterized by a unique descriptor, providing a basis for subsequent region association calculations. Based on the image region descriptor data, the association calculation of the regions is performed, so that the association relation between different regions is accurately determined. Accurate image region division data is obtained through the data marking, and a reliable basis is provided for subsequent processing.
Preferably, in step S124, the pixel characteristic feature data is calculated by a pixel characteristic calculation formula, where the pixel characteristic calculation formula specifically includes:
F is pixel characteristic feature data, n is pixel number data of the image area division data, i is pixel order item data of the image area division data, f(x, y) is pixel data at (x, y) in the image area division data, x is pixel abscissa data of the image area division data, and y is pixel ordinate data of the image area division data.
The invention constructs a pixel characteristic calculation formula which covers the image gradients ∂f/∂x and ∂f/∂y, and integrates the amplitude and phase information of the image through sine and cosine components of the Fourier transform. By combining the sine function and the cosine function, the formula is relatively sensitive to high-frequency image information and can extract characteristics such as image details and textures. Gradient information is used in the formula to calculate image characteristics, which enables the formula to capture the rate of change of the image and thus extract features related to image edges. Here F represents the pixel characteristic feature data, i.e. the characteristic value of a specific pixel position calculated by this formula. n represents the number of pixels of the image area division data, i.e. the number of pixels sampled when calculating the characteristic feature. i represents the pixel sequence term of the image area division data, i.e. the pixel sequence number during sampling. f(x, y) represents the pixel value, i.e. the gray value, at position (x, y) in the image area division data. ∂f/∂x and ∂f/∂y represent the gray value change rate, i.e. the gradient, of the image in the x and y directions, respectively. By carrying out a series of mathematical operations on the pixel value and its gradient at a specific position, the invention obtains a characteristic value of that position which describes image characteristics of the pixel position, such as local contrast and texture information, so that the formula can capture the complex characteristics of the image at the specific position and convert them into a single characteristic value F.
Preferably, step S2 is specifically:
step S21: carrying out high-dimensional feature extraction on the image data of the load article so as to obtain high-dimensional feature data;
step S22: positioning the region of interest on the high-dimensional characteristic data, so as to obtain region of interest data;
step S23: feature fusion is carried out on the data of the region of interest and the high-dimensional feature data, so that feature fusion data are obtained;
step S24: performing target detection on the feature fusion data and performing minimum error non-maximum suppression so as to obtain target detection data;
step S25: and generating article position data and article boundary data according to the target detection data, thereby obtaining the article position data and the article boundary data.
According to the invention, through carrying out high-dimensional feature extraction on the image data of the load picture, the image can be abstracted from the pixel level to the feature space with higher dimension, so that the image data can be understood and processed at a more abstract level, and the abstraction capability on the object features is improved. By analyzing and processing the high-dimensional feature data, the region of interest is located, i.e. important regions related to the task are identified in the image, so that the calculation amount of subsequent processing can be reduced, and the processing efficiency is improved. The high-dimensional characteristic data and the interested region data are fused, and the local region and the whole characteristic can be comprehensively considered, so that the characteristic data with more representativeness and global property is obtained, and the accuracy of article identification is improved. By carrying out target detection on the feature fusion data, articles in the image can be identified, and repeated detection and redundant detection are effectively reduced if the non-maximum value is restrained, so that the detection precision and efficiency are improved. Based on the target detection data, position and boundary information of the article are generated, so that the position of the article in the image can be accurately positioned, information such as the size and the shape of the article is provided, and an accurate data basis is provided for subsequent weighing.
Preferably, the minimum error non-maximum suppression in step S24 is non-maximum suppression processing performed by a minimum error attenuation calculation formula, where the minimum error attenuation calculation formula is specifically:
in the formula, the output term is the minimum error non-maximum suppression data, k is the minimum error attenuation coefficient, z is the target frame characteristic parameter data, r is the gamma shape parameter data, Γ(r) is the gamma function data, u is the color high-dimensional feature weight data in the feature fusion data, v is the texture high-dimensional feature weight data in the feature fusion data, NMS is the primary non-maximum suppression data, a decay rate control term governs how quickly scores are attenuated, IoU is the intersection-over-union data, and θ is a minimum error non-maximum suppression severity adjustment term.
The invention constructs a minimum error attenuation calculation formula, which carries out minimum error non-maximum suppression processing on the target frames by comprehensively considering factors such as the target frame characteristics, the gamma function, the feature weights and the intersection over union, retains the most representative target frames and eliminates redundant frames, thereby improving the accuracy of target detection. The primary non-maximum suppression data NMS and the output of the formula represent the data before and after the minimum error non-maximum suppression processing, respectively; after the processing, the most representative target frames are retained and redundant frames are removed. The minimum error attenuation coefficient k controls the attenuation speed of the error: the larger the value of k, the faster the error decays and the stricter the corresponding non-maximum suppression. The target frame characteristic parameter data z measures the feature information of the target frame. The gamma shape parameter data r affects the shape of the gamma function Γ(r), a mathematical function whose shape is determined by the parameter r. u and v represent the color high-dimensional feature weight and the texture high-dimensional feature weight in the feature fusion data, respectively; these two parameters influence the weight distribution of the features during fusion. The primary non-maximum suppression data NMS contains the initial target frame information. IoU is the intersection over union, which measures the degree of overlap of two target frames. The minimum error non-maximum suppression severity adjustment term θ adjusts the severity of suppression, affecting how tight the non-maximum suppression is.
Preferably, step S3 is specifically:
step S31: calculating volume data according to the article position data and the article boundary data, so as to obtain article volume data;
step S32: carrying out three-dimensional construction on the image data of the load-carrying article according to the article volume data so as to obtain a three-dimensional model of the load-carrying article;
step S33: extracting deformation characteristics of the three-dimensional model of the load-carrying article, thereby obtaining deformation characteristic data;
step S34: and estimating the volume and the weight of the object by utilizing a preset linear regression weight detection model to the three-dimensional model of the load object and deformation characteristic data, so as to obtain first weight data of the object.
By utilizing the position data and the boundary data of the article, the volume data of the article can be accurately calculated, which is an important basis for accurately estimating the weight of the article. Based on the volume data of the object, a three-dimensional model of the heavy-duty object can be constructed, so that the understanding of the shape of the object is more visual, and an accurate data basis is provided for the subsequent deformation characteristic extraction. The deformation characteristic extraction is carried out on the three-dimensional model of the load-carrying article, so that the deformation information of the article in the weighing process can be captured, which is the key for accurately estimating the weight of the article. And the pre-set linear regression weight detection model is utilized to combine the three-dimensional model and deformation characteristic data to estimate the weight of the object, and the linear regression model is used to enable the estimation of the weight of the object to be more accurate and reliable.
Preferably, step S4 is specifically:
step S41: carrying out load article area division on the load article image data according to the article detection data so as to obtain load article area data;
step S42: clustering calculation is carried out on the load article area data, so that load article area clustering data are obtained;
step S43: carrying out image division on the load article area data according to the load article area clustering data so as to obtain load article division image data, wherein the load article division image data comprises load article division sub-image data, load article area clustering data, and load article area clustering description data generated from the load article area clustering data through a preset mapping rule;
step S44: carrying out material identification on the load article division image data so as to obtain article material data;
step S45: carrying out material tension and stress evaluation optimization on the article material data according to the article detection data so as to obtain article material optimization data;
step S46: calculating the material density according to the material optimization data of the article, so as to obtain second weight data of the article;
the step of optimizing the material tension and the stress evaluation in the step S45 specifically includes:
Step S451: carrying out material tension distribution calculation on the article detection data and the article material data so as to obtain article material tension distribution data;
step S452: carrying out the internal stress distribution evaluation of the article according to the tension distribution data of the article material, thereby obtaining the internal stress distribution data of the article;
step S453: performing object deformation simulation according to the internal stress distribution data of the object and the material data of the object, so as to obtain object deformation simulation data;
step S454: performing similarity calculation according to the object deformation simulation data and the object detection data, so as to obtain similarity data;
step S455: optimizing the material data of the article according to the similarity data, so as to obtain article material optimization data;
in step S454, the similarity calculation is performed by using an object deformation similarity calculation formula, where the object deformation similarity calculation formula specifically includes:
S is object deformation similarity data, Δa is object deformation small displacement data, a0 is initial position data of the object deformation, b is position data before deformation of the object in the object detection data, a is position data after deformation of the object in the object detection data, Q is position data before deformation of the object in the object deformation simulation data, A is position data after deformation of the object in the object deformation simulation data, G is external stress data of the object, g is internal stress data of the object, and m is deformation index data of the object.
The invention constructs an object deformation similarity calculation formula, which calculates the similarity of the object deformation by carrying out a series of mathematical operations on the position and stress conditions of the object during the deformation process; the similarity calculation involves the combined influence of factors such as the position change before and after deformation, the external stress and the internal resistance. S represents the object deformation similarity data, that is, the deformation similarity value calculated by this formula. Δa represents the minute displacement of the object deformation, i.e. the amount of change in the deformation. a0 represents the initial position of the object deformation, the position at which the deformation starts. b represents the position data before deformation of the object in the object detection data. a represents the position data after deformation of the object in the object detection data. Q represents the position data before deformation of the object in the object deformation simulation data. A represents the position data after deformation of the object in the object deformation simulation data. G represents the external stress data of the object. g represents the internal stress data of the object. m represents the deformation index data of the object. After considering factors such as the position change, external stress and internal resistance of the object before and after deformation, a deformation similarity value is calculated and used to evaluate the similarity of the object during the deformation process.
According to the invention, the object area in the image is accurately divided by carrying out area division on the image of the loaded object according to the object detection data, so that accurate area data is provided for subsequent processing. And clustering calculation is carried out on the divided areas of the load articles, the similar areas can be classified into one type, and the characteristic information of the load articles is further extracted. Based on the clustering result, further image division is carried out on the load article area to obtain subdivision image data, wherein the subdivision image data comprise component data and position data of the load articles. The material identification is carried out on the subdivision image data, so that the material information of the load-carrying article can be accurately distinguished, and the material information is the basis of subsequent material mechanics evaluation. By carrying out the steps of tension distribution calculation, internal stress distribution evaluation, article deformation simulation and the like on the article detection data and the material data, the material mechanical properties of the articles can be comprehensively evaluated and optimized, and an accurate physical basis is provided for subsequent weight estimation. Based on the optimized material data, the density information of the object can be accurately calculated, so that the weight data of the second object can be obtained.
Preferably, the present application further provides an artificial intelligence based vehicle-mounted weighing management platform for executing the artificial intelligence based vehicle-mounted weighing management method as described above, the artificial intelligence based vehicle-mounted weighing management platform comprising:
The load article image acquisition module is used for acquiring the image of the load article in real time through the vehicle-mounted camera, so as to obtain image data of the load article;
the article data detection module is used for detecting article data of the image data of the loaded articles so as to obtain article detection data, wherein the article detection data comprises article position data and article boundary data;
the article volume weight estimating module is used for calculating volume data according to the article position data and the article boundary data so as to obtain article volume data, constructing a three-dimensional model of the loaded article image data by utilizing the article volume data and estimating the article volume weight so as to obtain article first weight data;
the article material identification module is used for carrying out article material identification on the load article image data according to the article detection data so as to obtain article material data, and carrying out article material weight estimation according to the article detection data and the article material data so as to obtain article second weight data;
the vehicle-mounted weighing abnormality detection generation module is used for carrying out vehicle-mounted weighing abnormality detection generation according to the first weight data of the object and the second weight data of the object, so as to obtain vehicle-mounted weighing abnormality detection data for carrying out the vehicle-mounted weighing abnormality warning operation.
The invention has the beneficial effects that: by comprehensively utilizing the image data and the article characteristic data, the accurate identification and weighing precision of the loaded articles can be improved, and the delay problem caused by fixed-point weighing brought by the traditional weighing method is reduced. By adopting an automatic article detection and weighing method, compared with a traditional manual weighing mode, the weighing speed can be greatly improved, the labor cost is reduced, and the working efficiency is improved. By utilizing the vehicle-mounted camera to collect images, compared with a traditional sensor device, the vehicle-mounted camera is lower in cost and easy to implement and maintain. Multiple articles can be identified and weighed simultaneously, so that the treatment efficiency is improved, and the method has important application value in logistics and transportation industries. Through on-vehicle weighing anomaly detection, can real-time supervision load condition, once the abnormal condition is found, can in time send out the warning, help preventing potential safety problem. Through the identification of the material of the article, the material information of the article can be acquired, and the subsequent logistics processing and management are facilitated.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting implementations made with reference to the following drawings in which:
FIG. 1 shows a flow chart of steps of an artificial intelligence based vehicle weight management method according to an embodiment;
FIG. 2 shows a step flow diagram of step S1 of an embodiment;
FIG. 3 shows a step flow diagram of step S12 of an embodiment;
FIG. 4 shows a step flow diagram of step S2 of an embodiment;
fig. 5 shows a step flow diagram of step S3 of an embodiment.
Detailed Description
The following is a clear and complete description of the technical method of the present patent in conjunction with the accompanying drawings, and it is evident that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. The functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1 to 5, the application provides a vehicle-mounted weighing management method based on artificial intelligence, which comprises the following steps:
step S1: carrying out real-time image acquisition of the load-carrying articles by using a vehicle-mounted camera so as to obtain image data of the load-carrying articles;
specifically, for example, a high resolution camera, such as a Sony IMX586 sensor, is used to mount the camera on the vehicle device to acquire image data of the cargo item in real time.
Step S2: carrying out item data detection on the load item image data so as to obtain item detection data, wherein the item detection data comprises item position data and item boundary data;
Specifically, the image of the loaded items is detected, for example, using a deep learning technique such as a convolutional neural network (CNN) based object detection model, for example YOLO (You Only Look Once) or Faster R-CNN, to obtain the position and boundary data of the items.
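As one possible, non-limiting sketch of this step, a COCO-pretrained Faster R-CNN from torchvision can stand in for the detector; the patent does not prescribe this specific model, and the helper function below is hypothetical.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# COCO-pretrained Faster R-CNN used purely as an example detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_items(image_rgb):
    # image_rgb: HxWx3 uint8 numpy array (RGB order).
    img = to_tensor(image_rgb)                     # float tensor in [0, 1]
    with torch.no_grad():
        out = model([img])[0]
    boxes = out["boxes"]                           # article boundary data (x1, y1, x2, y2)
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2    # article position data
    return boxes, centers, out["scores"]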
Step S3: calculating volume data according to the article position data and the article boundary data to obtain article volume data, constructing a three-dimensional model of the loaded article image data by utilizing the article volume data, and estimating the volume and the weight of the article to obtain article first weight data;
specifically, for example, based on the position and boundary data of the object, volume estimation is performed using a volume integration method. For example, for a regular shaped object, the calculation may be performed using a geometric formula.
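A minimal sketch of this estimation, assuming a box-shaped article and an illustrative linear regression weight model in the spirit of step S34 (the training numbers below are invented placeholders, not data from the patent):

import numpy as np
from sklearn.linear_model import LinearRegression

def box_volume(length_m, width_m, height_m):
    # Geometric volume for a regular, box-shaped article.
    return length_m * width_m * height_m

# Hypothetical training pairs: [volume (m^3), deformation feature] -> weight (kg).
X = np.array([[0.5, 0.01], [1.0, 0.02], [1.5, 0.02], [2.0, 0.03]])
y = np.array([250.0, 500.0, 760.0, 1010.0])
weight_model = LinearRegression().fit(X, y)

volume = box_volume(1.0, 1.0, 1.0)                         # 1 cubic meter
first_weight = weight_model.predict([[volume, 0.02]])[0]   # article first weight estimate
print("article first weight (kg):", round(first_weight, 1))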
Step S4: carrying out article material identification on the loaded article image data according to the article detection data so as to obtain article material data, and carrying out article material weight estimation according to the article detection data and the article material data so as to obtain article second weight data;
specifically, features such as texture, color, etc. of the object are analyzed, for example, using image processing and machine learning techniques, to identify the material of the article. Corresponding density models are established for different materials, and weight estimation is carried out through the first volume of the object and the second material density of the object.
Step S5: and carrying out vehicle-mounted weighing abnormality detection generation according to the first weight data of the object and the second weight data of the object, thereby obtaining vehicle-mounted weighing abnormality detection data and carrying out vehicle-mounted weighing abnormality warning operation.
Specifically, for example, whether there is an abnormality is determined using a preset weight threshold value or ratio; if the difference between the article first weight and the article second weight exceeds the set threshold, abnormality detection data is generated and a corresponding warning operation is triggered.
Specifically, for example, consider a transport vehicle loaded with square wooden boxes. The platform photographs the loaded articles with the vehicle-mounted camera and obtains article position data and article boundary data through image processing. The exact dimensions of these wooden boxes are 1 m x 1 m x 1 m. The platform may use a volume integration method for volume estimation: volume = length x width x height = 1 m x 1 m x 1 m = 1 cubic meter. Assuming an initial material density of 500 kg/cubic meter for the wooden box, the article first weight is 1 cubic meter x 500 kg/cubic meter = 500 kg. The wooden boxes are then identified by material through image processing and machine learning techniques, confirming that they are wooden. The platform therefore uses a density model for wood, assumed to be 600 kg/cubic meter, and estimates the weight from the article volume and the identified material density: article second weight = 1 cubic meter x 600 kg/cubic meter = 600 kg. A weight threshold of 100 kg is set. In this case, the difference between the article first weight and the article second weight reaches the set threshold (|500 kg - 600 kg| = 100 kg), so abnormality detection data is generated and a corresponding warning operation is triggered. The planned cargo transport is thus evaluated on the basis of the images, providing a dynamic and efficient vehicle-mounted weighing management method.
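The threshold comparison of step S5 for this example can be sketched as follows (the function name and return format are illustrative assumptions):

def weighing_anomaly(first_weight_kg, second_weight_kg, threshold_kg=100.0):
    # Flag a vehicle-mounted weighing anomaly when the two estimates disagree by
    # at least the preset threshold, as in the wooden-box example above.
    difference = abs(first_weight_kg - second_weight_kg)
    return difference >= threshold_kg, difference

is_abnormal, diff = weighing_anomaly(500.0, 600.0)
print("anomaly:", is_abnormal, "difference (kg):", diff)   # anomaly: True, difference: 100.0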
According to the method, the vehicle-mounted camera is used for image acquisition, and the steps of object data detection and material identification are adopted, so that the automatic detection and weighing processes of the loaded objects are realized, manual intervention is not needed, and the working efficiency is improved. By extracting the position data and the boundary data of the article and combining the first volume weight estimation of the article and the material weight estimation of the article, the weight of the article can be estimated more accurately, personal errors are reduced, and the weighing accuracy is improved. The method not only considers the position and boundary information of the article, but also combines the identification of the material of the article, thereby comprehensively utilizing various information sources and improving the weighing precision. By detecting and generating the vehicle-mounted weighing abnormality, abnormal conditions such as overload and abnormal shapes in the weighing process can be found and warned in time, and the conditions that accidents or damages are possibly caused are avoided. Through automatic and intelligent weighing process, manual participation is reduced, danger and potential safety hazard in the working process are reduced, and working safety is improved.
Preferably, step S1 is specifically:
step S11: acquiring real-time load article images through a vehicle-mounted camera, so as to obtain real-time load article image data;
Specifically, for example, a high-resolution, wide-angle vehicle-mounted camera supporting real-time image transmission is used; for instance, a high-definition camera of model XYZ-5000 may be selected.
Step S12: carrying out dynamic denoising processing on real-time load article image data by using a preset dynamic adaptive noise detection model so as to obtain load article image denoising data;
specifically, for example, a real-time noise detection model based on a Convolutional Neural Network (CNN) is used, and the model can dynamically adapt to noise conditions in different environments. The real-time noise detection model is trained through historical load article image data and corresponding noise labels, and the convolution layer is processed by the denoising convolution layer in the training process.
Specifically, for example, in a real-time image processing process of performing noise detection and denoising operation on real-time load article image data by using a preset dynamic adaptive noise detection model, a feedback mechanism is adopted to obtain an estimate of noise intensity from the noise detection model. And dynamically adjusting parameters of a denoising algorithm according to the noise estimation output by the model.
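A minimal sketch of such a noise-aware denoising network, assuming a small PyTorch CNN whose correction is scaled by the externally estimated noise level (the architecture and layer sizes are illustrative, not the patented model):

import torch
import torch.nn as nn

class NoiseAwareDenoiser(nn.Module):
    # Small residual CNN whose correction strength is scaled by an external noise estimate.
    def __init__(self, channels=1):
        super().__init__()
        self.estimate = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, image, noise_level):
        residual = self.estimate(image)         # predicted noise component
        return image - noise_level * residual   # stronger correction for noisier frames

denoiser = NoiseAwareDenoiser()
frame = torch.rand(1, 1, 64, 64)                # placeholder grayscale frame batch
clean = denoiser(frame, noise_level=0.5)        # 0.5 = noise intensity from the detector
print(clean.shape)                              # torch.Size([1, 1, 64, 64])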
Step S13: carrying out image quality evaluation on the denoising data of the image of the load article, thereby evaluating the data of the image quality;
Specifically, the denoised image is quality evaluated, for example, using an image quality evaluation algorithm such as peak signal to noise ratio (PSNR) or Structural Similarity Index (SSIM).
Step S14: and marking the denoising data of the image of the load article according to the image quality evaluation data, thereby obtaining image data of the load picture.
Specifically, the quality evaluation data is classified, for example, based on a preset threshold value, and image data satisfying the quality criterion is marked. Such as marking an image that meets quality criteria as a "premium image".
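A short sketch of the PSNR/SSIM evaluation and marking using scikit-image metrics; the 30 dB threshold and the label strings are assumptions for illustration.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mark_image_quality(reference, denoised, psnr_threshold=30.0):
    # Evaluate the denoised load article image against a reference frame and mark it.
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=255)
    ssim = structural_similarity(reference, denoised, data_range=255)
    label = "premium image" if psnr >= psnr_threshold else "low quality image"
    return {"psnr": psnr, "ssim": ssim, "label": label}

reference = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(reference + np.random.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)
print(mark_image_quality(reference, noisy))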
According to the invention, through the step S11, the image of the load article is acquired in real time by utilizing the vehicle-mounted camera, so that the weighing process can be monitored in real time, and the timeliness and the accuracy of data are ensured. In step S12, a preset dynamic adaptive noise detection model is adopted, which means that the system can automatically identify and filter noise collected by the camera, thereby effectively improving the image quality. Step S12 helps to reduce noise interference in the image by dynamic denoising processing, so that subsequent image analysis and processing are more accurate and reliable. In step S13, quality evaluation is performed on the denoised image, so that a reliable basis can be provided for subsequent data processing, and accuracy of a final result is ensured. Through image quality evaluation, the step S14 marks the denoised image, and can provide data support with more dimensions and more richness for the links such as subsequent data analysis, model training and the like, thereby being beneficial to improving the automatic processing capability of the system. The invention ensures that the obtained image data of the load article has high quality and low noise from image acquisition to denoising processing to image quality evaluation, thereby ensuring the accuracy and reliability of the subsequent weighing process.
Preferably, the step of constructing the dynamic adaptive noise detection model in step S12 specifically includes:
step S121: acquiring standard load article image data and corresponding noise tag data, wherein the noise tag data comprises sensor noise tag data and environment noise tag data;
specifically, a set of standard load item image data is acquired, for example, using a HighVision 2000 camera, and tag data of sensor noise and environmental noise are recorded simultaneously. Standard load item image data is acquired at a speed of 30 frames/second, using a HighVision 2000 model onboard camera, and tag data for both sensor noise and ambient noise is recorded. For example, for a certain frame of image, the sensor noise is 0.5 and the ambient noise is 0.2.
Step S122: performing similar pixel image region division on the standard load article image data so as to obtain image region division data;
specifically, for example, an image is segmented into similar regions by using an image segmentation algorithm, such as a segmentation method based on region growing, so as to obtain image region segmentation data, wherein the image region segmentation data comprises a plurality of similar regions, and each region comprises a plurality of pixels.
Step S123: performing spectrum conversion on the image region division data to obtain image region spectrum data;
Specifically, for example, a Fast Fourier Transform (FFT) algorithm is applied to perform spectrum conversion on each image region, resulting in frequency domain information.
Step S124: frequency domain feature extraction is carried out on the image area spectrum data so as to obtain frequency domain feature data, and pixel statistical feature extraction is carried out on the image area division data so as to obtain pixel statistical feature data, wherein the pixel statistical feature data comprises pixel average feature data, pixel variance feature data and pixel characteristic feature data;
specifically, for example, statistical feature extraction such as average, variance, and the like is performed on the frequency domain data. Meanwhile, the pixel statistical characteristics are calculated, including pixel average values, pixel variances and the like.
Step S125: carrying out noise characteristic detection on the pixel statistical characteristic data and the frequency domain characteristic data by using a preset noise detection engine so as to obtain noise characteristic data;
specifically, for example, a preset noise detection algorithm is used to detect the noise characteristics of the extracted feature data, so as to obtain noise characteristic data.
Step S126: generating a primary noise extractor from the noise characteristic data;
specifically, the primary noise extractor is constructed, for example, using the obtained noise characteristic data, and is a linear model or other noise model.
Step S127: carrying out noise data extraction on the image data of the standard load article by using a primary noise extractor so as to obtain image noise data;
specifically, for example, the primary noise extractor is applied to the standard load article image data to obtain noise data.
Step S128: performing self-attention clustering calculation on the image noise data so as to obtain image noise clustering data;
specifically, the noise data is clustered, for example, using a self-attention mechanism and a clustering algorithm, resulting in clustered data.
Step S129: and carrying out parameter optimization on the primary noise extractor by using the image noise cluster data and carrying out model construction by using the noise label data so as to obtain a dynamic adaptive noise detection model.
Specifically, a dynamic adaptive noise detection model is constructed, for example, by optimizing parameters of the primary noise extractor while model training is performed using data with noise tags.
According to the invention, through processing the image area spectrum data and the pixel statistical characteristic data and combining with a preset noise detection engine, the fine detection of the noise characteristic is realized, so that an accurate data basis is provided for the construction of a subsequent noise extractor. The primary noise extractor is generated according to the noise characteristic data, so that the noise processing process has the characteristic of individuation, and effective noise suppression can be performed according to specific noise conditions. And the noise extractor is used for extracting noise data from the image data of the standard load article, so that noise from a sensor and the environment is effectively removed, and the quality of data in subsequent processing is ensured. The self-attention clustering calculation is carried out on the noise clustering data, so that finer clustering of noise is realized, and the efficiency and accuracy of noise extraction are further improved. By optimizing parameters of the primary noise extractor and constructing a model, a dynamic adaptive noise detection model is formed, so that the noise detection process can be adaptively adjusted and optimized according to specific conditions, and the maximization of the noise processing effect is ensured.
Preferably, the step of dividing the image area of the similar pixels in step S122 is specifically:
step S1221: image segmentation is carried out on the standard load article image data by utilizing preset image division parameter data, so that regional image data are obtained;
specifically, for example, the standard load article image is segmented using a region-based growth algorithm, and an appropriate threshold and parameters are set to segment the image into a plurality of regions.
Step S1222: extracting texture features and color features of the regional image data to obtain regional image texture feature data and regional image color feature data;
specifically, for example, a texture feature extraction is performed on pixels of each region, and a gray level co-occurrence matrix (GLCM) or the like method may be used. At the same time, color features are extracted, for example, by using a color histogram or other method, and texture and color feature data of the region image are obtained.
Step S1223: carrying out similarity matrix construction on the regional image data according to the regional image texture feature data and the regional image color feature data, so as to obtain similarity matrix data;
specifically, for example, the similarity between the regional images is calculated by using the texture features and the color features, and a euclidean distance or other similarity measurement method can be adopted to construct a similarity matrix.
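A minimal sketch of the similarity matrix construction, assuming each region has already been reduced to a small feature vector of texture and color statistics (the example vectors are invented):

import numpy as np

def similarity_matrix(feature_vectors):
    # feature_vectors: one row per region, e.g. concatenated texture and color statistics.
    f = np.asarray(feature_vectors, dtype=float)
    diff = f[:, None, :] - f[None, :, :]
    distances = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise Euclidean distances
    return 1.0 / (1.0 + distances)                  # similarity in (0, 1], 1 on the diagonal

regions = [[0.2, 0.8, 0.5], [0.21, 0.79, 0.52], [0.9, 0.1, 0.3]]
print(similarity_matrix(regions).round(2))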
Step S1224: carrying out regional image merging on the regional image data according to the similarity matrix data so as to obtain regional image merging data;
specifically, for example, region merging is performed by using a similarity matrix, and regions with similarity higher than a set threshold are merged to obtain merged region image data.
Step S1225: generating an image region descriptor for the region image merging data, thereby obtaining image region descriptor data;
specifically, for example, according to the combined regional image data, the feature descriptors of the region may include features such as shapes and textures, and the feature descriptors are generated by feature extraction, and feature vectors are generated and compared with a preset vector description threshold value, so as to obtain corresponding feature descriptors.
Step S1226: performing region association calculation on adjacent image data in the region image merging data according to the image region descriptor data, so as to obtain region association data;
specifically, for example, the image region descriptor is used to perform association calculation of adjacent regions, and methods such as similarity matching may be used to obtain association information between regions.
Step S1227: and carrying out data marking on the region image merging data according to the region association data so as to obtain image region division data.
Specifically, for example, the combined region is marked according to the region association information, so as to obtain final image region division data.
According to the method, the standard load article image is segmented by utilizing the preset image division parameter data, so that the regional image data are obtained. This allows the image to be segmented at a finer level, thereby obtaining more accurate region information. And extracting texture features and color features from the regional image data so as to obtain rich regional image feature data, and facilitating subsequent similarity calculation and regional association analysis. Based on the texture and color characteristics of the region images, a similarity matrix is constructed to quantify the similarity between different regions, and region merging can be performed on a more accurate and reliable basis. And according to the similarity matrix data, the region images are combined, so that further refined combination of similar regions is realized, and the combination accuracy is improved. Image region descriptors are generated so that each region can be characterized by a unique descriptor, providing a basis for subsequent region association calculations. Based on the image region descriptor data, the association calculation of the regions is performed, so that the association relation between different regions is accurately determined. Accurate image region division data is obtained through the data marking, and a reliable basis is provided for subsequent processing.
Preferably, in step S124, the pixel characteristic feature data is calculated by a pixel characteristic calculation formula, where the pixel characteristic calculation formula specifically includes:
F is pixel characteristic feature data, n is pixel number data of the image area division data, i is pixel order item data of the image area division data, f(x, y) is pixel data at (x, y) in the image area division data, x is pixel abscissa data of the image area division data, and y is pixel ordinate data of the image area division data.
Specifically, for example, the pixel characteristic feature can be computed with the following Python routine; the exact combination of the sine, cosine and logarithm terms shown here is an illustrative reading rather than a definitive implementation of the formula.

import numpy as np

def pixel_characteristics(image):
    # image is assumed to be a two-dimensional grayscale matrix
    dx = np.gradient(image, axis=0)   # partial derivative in the x direction
    dy = np.gradient(image, axis=1)   # partial derivative in the y direction
    # combine sine and cosine terms built from the gradients (illustrative reading)
    sin_term = np.sqrt(np.sin(np.pi / 2 + np.arctan(dx ** 2)))
    cos_term = np.cos(np.pi / 2 + np.arctan(dy ** 2))
    feature = np.log2(1 + sin_term * cos_term)
    return np.mean(feature)

# example image data
image = np.array([[100, 150, 200],
                  [50, 75, 100],
                  [25, 50, 75]], dtype=float)
print("pixel characteristic feature data:", pixel_characteristics(image))
The invention constructs a pixel characteristic calculation formula which covers the image gradients ∂f/∂x and ∂f/∂y, and integrates the amplitude and phase information of the image through sine and cosine components of the Fourier transform. By combining the sine function and the cosine function, the formula is relatively sensitive to high-frequency image information and can extract characteristics such as image details and textures. Gradient information is used in the formula to calculate image characteristics, which enables the formula to capture the rate of change of the image and thus extract features related to image edges. Here F represents the pixel characteristic feature data, i.e. the characteristic value of a specific pixel position calculated by this formula. n represents the number of pixels of the image area division data, i.e. the number of pixels sampled when calculating the characteristic feature. i represents the pixel sequence term of the image area division data, i.e. the pixel sequence number during sampling. f(x, y) represents the pixel value, i.e. the gray value, at position (x, y) in the image area division data. ∂f/∂x and ∂f/∂y represent the gray value change rate, i.e. the gradient, of the image in the x and y directions, respectively. By carrying out a series of mathematical operations on the pixel value and its gradient at a specific position, the invention obtains a characteristic value of that position which describes image characteristics of the pixel position, such as local contrast and texture information, so that the formula can capture the complex characteristics of the image at the specific position and convert them into a single characteristic value F.
Preferably, step S2 is specifically:
step S21: carrying out high-dimensional feature extraction on the image data of the load article so as to obtain high-dimensional feature data;
specifically, feature extraction is performed on the load article image data, for example, using a pre-trained CNN model (e.g., ResNet, VGG, etc.). Multi-level feature extraction is performed on the image by utilizing convolution layers and pooling layers to obtain high-dimensional feature data.
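A minimal sketch of step S21 under the assumption that PyTorch and torchvision (version 0.13 or later) are available; the ResNet-50 backbone, the layer slicing and the 224×224 placeholder input are illustrative choices, not the patent's prescribed network.

import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-2])  # keep conv/pool stages, drop avgpool and fc
feature_extractor.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed load-article image
    feats = feature_extractor(image)      # high-dimensional feature map
print(feats.shape)                        # e.g. torch.Size([1, 2048, 7, 7])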
Step S22: positioning the region of interest on the high-dimensional characteristic data, so as to obtain region of interest data;
specifically, the high-dimensional feature data is located for a region of interest, for example, using a target detection algorithm (e.g., faster R-CNN, YOLO, etc.).
Step S23: feature fusion is carried out on the data of the region of interest and the high-dimensional feature data, so that feature fusion data are obtained;
specifically, for example, a fusion network or a feature stitching method is used to combine the high-dimensional feature data with the region-of-interest data to obtain feature fusion data.
Step S24: performing target detection on the feature fusion data and performing minimum error non-maximum suppression so as to obtain target detection data;
specifically, for example, the feature fusion data is processed using a target detection algorithm to obtain target detection data. And removing redundant detection results by applying a minimum error non-maximum suppression algorithm, and reserving an optimal target frame.
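The minimum error attenuation formula itself is only described symbolically later in the text; as a hedged sketch of the general mechanism (score attenuation driven by box overlap), the following applies an exponential, IoU-driven decay to overlapping detection scores. The coefficient k, the score threshold and the toy boxes are assumptions, and this is a soft-NMS-style stand-in rather than the patented calculation.

import numpy as np

def iou(box, boxes):
    # boxes are [x1, y1, x2, y2]
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def decayed_nms(boxes, scores, k=4.0, score_thr=0.05):
    # exponential score decay instead of hard removal; k controls how fast overlapping boxes are attenuated
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    while len(boxes):
        i = int(np.argmax(scores))
        keep.append(boxes[i])
        scores = scores * np.exp(-k * iou(boxes[i], boxes))  # heavily overlapping boxes are attenuated
        scores[i] = -1                                       # never pick the same box twice
        mask = scores > score_thr
        boxes, scores = boxes[mask], scores[mask]
    return np.array(keep)

boxes = np.array([[10, 10, 110, 110], [12, 12, 112, 112], [200, 200, 260, 260]])
scores = np.array([0.9, 0.85, 0.7])
print(decayed_nms(boxes, scores))   # the near-duplicate box is suppressed, the distant box is kept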
Step S25: and generating article position data and article boundary data according to the target detection data, thereby obtaining the article position data and the article boundary data.
Specifically, for example, positional information of the article is extracted from the target detection result. The position information of the article can be further processed as required, such as calculating the center coordinates, length and width of the article, and the like, to generate article boundary data.
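A small sketch of step S25, assuming each detection is an axis-aligned box given as [x1, y1, x2, y2]; the returned field names are illustrative rather than the patent's data format.

def to_position_and_boundary(box):
    x1, y1, x2, y2 = box
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)   # article position data: box centre
    length, width = x2 - x1, y2 - y1              # article boundary data: extent in x and y
    return {"center": center, "length": length, "width": width}

print(to_position_and_boundary([40, 60, 140, 180]))
# {'center': (90.0, 120.0), 'length': 100, 'width': 120}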
According to the invention, by carrying out high-dimensional feature extraction on the load article image data, the image can be abstracted from the pixel level to a higher-dimensional feature space, so that the image data can be understood and processed at a more abstract level, improving the abstraction capability for object features. By analyzing and processing the high-dimensional feature data, the region of interest is located, i.e. important regions related to the task are identified in the image, which reduces the calculation amount of subsequent processing and improves processing efficiency. Fusing the high-dimensional feature data with the region-of-interest data allows local regions and global characteristics to be considered together, yielding more representative and global feature data and improving the accuracy of article identification. By carrying out target detection on the feature fusion data, articles in the image can be identified, while non-maximum suppression effectively reduces repeated and redundant detections, improving detection precision and efficiency. Based on the target detection data, position and boundary information of the article is generated, so that the position of the article in the image can be accurately located and information such as the size and shape of the article is provided, giving an accurate data basis for subsequent weighing.
Preferably, the minimum error non-maximum suppression in step S24 is non-maximum suppression processing performed by a minimum error attenuation calculation formula, where the minimum error attenuation calculation formula is specifically:
In this formula, the minimum error non-maximum suppression data is the output of the calculation; k is the minimum error attenuation coefficient; z is the target frame characteristic parameter data; r is the gamma shape parameter data and Γ(r) the corresponding gamma function data; u is the color high-dimensional feature weight data in the feature fusion data and v the texture high-dimensional feature weight data in the feature fusion data; NMS is the primary non-maximum suppression data; a decay rate control term governs how quickly suppression takes effect; IoU is the intersection-over-union data; and θ is the minimum error non-maximum suppression severity adjustment term.
Specifically, for example, an image area consists of the 3×3 pixel matrix |10, 20, 30| |15, 25, 35| |5, 15, 10|. For this image region division data, the platform uses the pixel feature calculation formula to calculate the pixel characteristic feature data F. First, the partial derivatives of each pixel are calculated, assuming a central difference method, yielding partial derivative values such as 5, 10, 10, 20, 0, 5 and 10 across the region. Then arctan, sin and cos are computed for each pixel point: the arctan values are |1.3734, 1.3734, 1.1071| |1.1071, 1.1071, 0.8761| |1.5708, 1.3734, 1.1071|, the sine values are |0.9837, 0.9837, 0.8912| |0.8912, 0.8912, 0.7673| |1, 0.9837, 0.8912|, and the cosine values are |0.1799, 0.1799, 0.4534| |0.4534, 0.4534, 0.641| |0, 0.1799, 0.4534|. The sin and cos results are multiplied, 1 is added and the logarithm is taken: |0.2015, 0.2015, 0.3687| |0.3687, 0.3687, 0.5251| |0.301, 0.2015, 0.3687|. Finally the results of all pixels are averaged: F = (0.2015 + 0.2015 + 0.3687 + 0.3687 + 0.5251 + 0.301 + 0.2015 + 0.3687) / 8 ≈ 0.305. For this image region division data, the pixel characteristic feature data F is therefore about 0.305, which can be used as a quantization index for the pixel characteristics of the region.
The invention constructs a minimum error attenuation calculation formula, which performs minimum error non-maximum suppression on the target frames by comprehensively considering factors such as the target frame characteristics, the gamma function, the feature weights and the intersection-over-union, retaining the most representative target frames and eliminating redundant frames, thereby improving the accuracy of target detection. The suppression data before and after processing correspond to the primary non-maximum suppression data NMS and the minimum error non-maximum suppression data, respectively: after processing, the most representative target frames are retained and redundant frames are removed. The minimum error attenuation coefficient k controls the attenuation speed of the error; the larger the k value, the faster the error attenuates and the stricter the corresponding non-maximum suppression. The target frame characteristic parameter data z measures the characteristic information of the target frame. The gamma shape parameter data r affects the shape of the gamma function Γ(r), a mathematical function whose shape is determined by the parameter r. u and v represent the color and texture high-dimensional feature weights in the feature fusion data, respectively; these two parameters influence the weight distribution of the features during fusion. The primary non-maximum suppression data NMS contains the initial target frame information. IoU is the intersection-over-union, measuring the degree of overlap of two target frames. The minimum error non-maximum suppression severity adjustment term θ adjusts the severity of suppression, affecting the degree of tightness of the non-maximum suppression.
Preferably, step S3 is specifically:
step S31: calculating volume data according to the article position data and the article boundary data, so as to obtain article volume data;
specifically, for example, assume that the item position data is (x, y, z), and the item boundary data is (l, w, h), where l represents a length, w represents a width, and h represents a height. The volume of the item can be calculated using the following formula: volume v=l×w×h.
Step S32: carrying out three-dimensional construction on the image data of the load-carrying article according to the article volume data so as to obtain a three-dimensional model of the load-carrying article;
specifically, for example, using modeling software in computer graphics, such as Blender, Maya, etc., a corresponding three-dimensional model is generated by inputting the volumetric data of the item.
Step S33: extracting deformation characteristics of the three-dimensional model of the load-carrying article, thereby obtaining deformation characteristic data;
specifically, for example, a load article is applied with a certain external force, and the load article is simulated by using finite element analysis software to obtain deformation field information, so as to extract deformation characteristic data.
Step S34: and estimating the volume and the weight of the object by utilizing a preset linear regression weight detection model to the three-dimensional model of the load object and deformation characteristic data, so as to obtain first weight data of the object.
Specifically, for example, assume that the linear regression model is: weight w=a×v+b×d, where a and b are model parameters, V is volume data, and D is deformation characteristic data. Specific values of parameters a and b are determined by training a model. For example, by collecting a series of volumetric data and deformation characteristic data for an article of known weight, a linear regression algorithm is used to fit the model to obtain the appropriate parameter values.
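A minimal sketch of the linear regression weight model W = a·V + b·D described above, using a fabricated calibration set and a least-squares fit; the sample values, units and "true" coefficients are assumptions used only for illustration.

import numpy as np

# fabricated calibration samples: columns are article volume V and deformation feature D
X = np.array([[0.5, 0.10],
              [1.0, 0.18],
              [1.5, 0.31],
              [2.0, 0.39]])
w_true = np.array([12.0, 5.0])                   # hypothetical "true" a and b used only to fabricate targets
rng = np.random.default_rng(0)
y = X @ w_true + rng.normal(0.0, 0.05, size=4)   # observed weights with a little measurement noise

params, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit of W = a*V + b*D (no intercept)
a, b = params
print(f"a ≈ {a:.2f}, b ≈ {b:.2f}")

# estimating the first weight of a new article from its volume and deformation feature
V_new, D_new = 1.2, 0.22
print("estimated weight:", a * V_new + b * D_new)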
By utilizing the position data and the boundary data of the article, the volume data of the article can be accurately calculated, which is an important basis for accurately estimating the weight of the article. Based on the volume data of the object, a three-dimensional model of the heavy-duty object can be constructed, so that the understanding of the shape of the object is more visual, and an accurate data basis is provided for the subsequent deformation characteristic extraction. The deformation characteristic extraction is carried out on the three-dimensional model of the load-carrying article, so that the deformation information of the article in the weighing process can be captured, which is the key for accurately estimating the weight of the article. And the pre-set linear regression weight detection model is utilized to combine the three-dimensional model and deformation characteristic data to estimate the weight of the object, and the linear regression model is used to enable the estimation of the weight of the object to be more accurate and reliable.
Preferably, step S4 is specifically:
step S41: carrying out load article area division on the load article image data according to the article detection data so as to obtain load article area data;
specifically, for example, taking YOLO as an example, by loading pre-trained weights and configuration files, an image is detected by using a YOLO algorithm, and position and bounding box information of an object are obtained.
Step S42: clustering calculation is carried out on the load article area data, so that load article area clustering data are obtained;
specifically, for example, features (such as position, size, etc.) of the article regions are used as input, and similar article regions are classified into one type by using a clustering algorithm, so as to obtain a clustering result.
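An illustrative sketch of step S42, assuming scikit-learn is available and that each load article region has already been summarised as a small feature vector (centre, area, mean gray level); the feature values and the number of clusters are assumptions.

import numpy as np
from sklearn.cluster import KMeans

# one row per detected load article region: [center_x, center_y, area, mean_gray]
region_features = np.array([
    [120, 80, 900, 200],
    [130, 85, 950, 205],
    [400, 300, 4000, 60],
    [410, 310, 4200, 58],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(region_features)
print(labels)   # e.g. [0 0 1 1]: similar regions fall into the same cluster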
Step S43: carrying out image division on the load article area data according to the load article area clustering data, so as to obtain load article division image data, wherein the load article division image data comprises load article division sub-image data, load article area clustering data, and load article area clustering description data generated from the load article area clustering data through a preset mapping rule;
specifically, for example, different areas in the original image are cut or segmented according to the clustering result, so as to obtain subdivided load article component image data. The load article division image data consists of pixel region data with high similarity, which can be understood as image data considered to be of the same material. The clustered data are then converted into a group of descriptive information through a preset mapping rule. This descriptive information may include: the shape of the object, such as rectangular, square or circular; the size of the object, including length, width and height; the material of the object, such as carton or plastic box; and the color of the box, where color features are extracted and converted into text descriptors. The mapping rule for the material of the object specifically maps an object area with a particular color and texture to "carton", and an object area with another set of colors and textures to "plastic box".
Specifically, similar regions are classified into one type, for example, using a clustering algorithm. In this way, articles of similar material or similar characteristics may be grouped together to form load carrying article division image data, such as articles of similar shape or color, both in proximity and in shape, divided in the same region.
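A toy sketch of the preset mapping rule mentioned above, turning cluster-level color and texture statistics into textual material descriptors such as "carton" or "plastic box"; the thresholds and category names are assumptions used purely for illustration.

def describe_cluster(mean_hue, texture_roughness):
    # hypothetical mapping rule: brownish, rough regions -> carton; smooth regions -> plastic box
    if 20 <= mean_hue <= 45 and texture_roughness > 0.5:
        material = "carton"
    elif texture_roughness <= 0.5:
        material = "plastic box"
    else:
        material = "unknown"
    return {"material": material, "mean_hue": mean_hue, "roughness": texture_roughness}

print(describe_cluster(mean_hue=30, texture_roughness=0.8))   # -> carton
print(describe_cluster(mean_hue=210, texture_roughness=0.2))  # -> plastic box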
Step S44: carrying out material identification on the load article division image data so as to obtain article material data;
specifically, for example, the finely divided image data is classified by using a trained material recognition model, and material information of the article is recognized. The material identification model is constructed by a machine learning algorithm.
Step S45: carrying out material tension and stress evaluation optimization on the article material data according to the article detection data so as to obtain article material optimization data;
specifically, the tensile and stress forces of the material are calculated, for example, using a material mechanics model, based on known material properties and article shapes, for optimal evaluation.
Step S46: calculating the material density according to the material optimization data of the article, so as to obtain second weight data of the article;
specifically, for example, the second weight data of the object is obtained by calculating according to the material density data and the geometric volume of the object by using a density formula.
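A minimal sketch of step S46 under the assumption of a small lookup table of typical bulk densities (the figures are rough illustrative values, not the patent's data), using weight = density × volume.

# rough illustrative densities in kg/m^3 (assumed values)
DENSITY = {"carton": 120.0, "plastic box": 950.0, "wood crate": 600.0}

def second_weight(material, volume_m3):
    rho = DENSITY.get(material)
    if rho is None:
        raise ValueError(f"no density entry for material: {material}")
    return rho * volume_m3   # estimated mass in kg

print(second_weight("carton", 0.75))       # 90.0
print(second_weight("plastic box", 0.10))  # 95.0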
The step of optimizing the material tension and the stress evaluation in the step S45 specifically includes:
step S451: carrying out material tension distribution calculation on the article detection data and the article material data so as to obtain article material tension distribution data;
specifically, for example, a corresponding material model is built in finite element analysis software, and a corresponding load is applied, so that tension distribution data of the load-carrying article is obtained.
Step S452: carrying out the internal stress distribution evaluation of the article according to the tension distribution data of the article material, thereby obtaining the internal stress distribution data of the article;
in particular, the tension profile is converted into stress profile data inside the article, for example, according to the geometry and stress conditions of the article.
Step S453: performing object deformation simulation according to the internal stress distribution data of the object and the material data of the object, so as to obtain object deformation simulation data;
specifically, deformation simulation is performed on the load-carrying article by setting corresponding material parameters and boundary conditions in finite element analysis software, for example, so as to obtain deformation simulation data.
Step S454: performing similarity calculation according to the object deformation simulation data and the object detection data, so as to obtain similarity data;
Specifically, for example, the deformation simulation data and the article detection data are substituted into the provided similarity calculation formula to perform calculation, thereby obtaining the similarity data.
Step S455: optimizing the material data of the article according to the similarity data, so as to obtain article material optimization data;
specifically, for example, optimization algorithms such as genetic algorithm, gradient descent and the like are used to perform iterative optimization according to the similarity data and the material data, thereby obtaining material optimization data.
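Since the similarity formula is given only symbolically, the sketch below treats it as a black-box callable and runs a simple random search over an assumed material parameter (density); this stands in for the genetic-algorithm or gradient-descent optimisation mentioned above and is not the patented procedure.

import random

def optimise_density(similarity_fn, rho0, iters=200, step=20.0, seed=0):
    # similarity_fn(rho) should return a similarity score; higher means the simulated
    # deformation matches the detected deformation better
    rng = random.Random(seed)
    best_rho, best_s = rho0, similarity_fn(rho0)
    for _ in range(iters):
        cand = best_rho + rng.uniform(-step, step)
        s = similarity_fn(cand)
        if s > best_s:
            best_rho, best_s = cand, s
    return best_rho, best_s

# toy similarity: peaks when density is near an (assumed) true value of 600 kg/m^3
toy_similarity = lambda rho: 1.0 / (1.0 + ((rho - 600.0) / 100.0) ** 2)
print(optimise_density(toy_similarity, rho0=400.0))   # converges towards roughly 600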
In step S454, the similarity calculation is performed by using an object deformation similarity calculation formula, where the object deformation similarity calculation formula specifically includes:
In this formula, S is the object deformation similarity data, Δa is the object deformation tiny displacement data, a₀ is the initial position data of the object deformation, b is the position data before deformation of the object in the object detection data, a is the position data after deformation of the object in the object detection data, Q is the position data before deformation of the object in the object deformation simulation data, A is the position data after deformation of the object in the object deformation simulation data, G is the external stress data of the object, g is the internal stress data of the object, and m is the object deformation index data.
Specifically, for example, with the following data: initial position data a₀ = 10 cm, tiny displacement data Δa = 0.1 cm, position data before deformation of the object b = 8 cm, position data after deformation of the object a = 12 cm, position data before deformation in the simulation Q = 7 cm, position data after deformation in the simulation A = 13 cm, external stress data of the object G = 50 N, internal stress data of the object g = 30 N and object deformation index data m = 0.5, the formula gives S ≈ 0.82. The object deformation similarity data S is therefore about 0.82, indicating that the shape of the deformed object is highly similar to the original shape, approaching 1.
The invention constructs an object deformation similarity calculation formula, which calculates the similarity of the object deformation by performing a series of mathematical operations on the position and stress conditions of the object during the deformation process; the similarity calculation involves the combined influence of factors such as the position change before and after deformation, the external stress and the internal resistance. Here S represents the object deformation similarity data, that is, the deformation similarity value calculated by this formula. Δa represents the tiny displacement of the object deformation, i.e. the amount of change in the deformation. a₀ represents the initial position of the object deformation, the position at which the deformation starts. b represents the position data before deformation of the object in the object detection data, and a the position data after deformation in the object detection data. Q represents the position data before deformation of the object in the object deformation simulation data, and A the position data after deformation in the simulation data. G represents the external stress data of the object, g the internal stress data of the object, and m the object deformation index data. After considering factors such as the position change of the object before and after deformation, the external stress and the internal resistance, the method calculates a deformation similarity value used to evaluate the similarity of the object during the deformation process.
According to the invention, the object areas in the image are accurately divided by carrying out area division on the load article image according to the article detection data, providing accurate area data for subsequent processing. Clustering calculation on the divided load article areas classifies similar areas into one type and further extracts the characteristic information of the load articles. Based on the clustering result, further image division is carried out on the load article areas to obtain subdivided image data, including component data and position data of the load articles. Material identification on the subdivided image data accurately distinguishes the material information of the load articles, which is the basis for the subsequent material mechanics evaluation. By carrying out tension distribution calculation, internal stress distribution evaluation and article deformation simulation on the article detection data and material data, the material mechanical properties of the articles can be comprehensively evaluated and optimized, providing an accurate physical basis for the subsequent weight estimation. Based on the optimized material data, the density information of the article can be accurately calculated, so that the second weight data of the article is obtained.
Preferably, the present application further provides an artificial intelligence based vehicle-mounted weighing management platform for executing the artificial intelligence based vehicle-mounted weighing management method as described above, the artificial intelligence based vehicle-mounted weighing management platform comprising:
The load article image acquisition module is used for acquiring the image of the load article in real time through the vehicle-mounted camera, so as to obtain image data of the load article;
the article data detection module is used for detecting article data of the image data of the loaded articles so as to obtain article detection data, wherein the article detection data comprises article position data and article boundary data;
the article volume weight estimating module is used for calculating volume data according to the article position data and the article boundary data so as to obtain article volume data, constructing a three-dimensional model of the loaded article image data by utilizing the article volume data and estimating the article volume weight so as to obtain article first weight data;
the article material identification module is used for carrying out article material identification on the load article image data according to the article detection data so as to obtain article material data, and carrying out article material weight estimation according to the article detection data and the article material data so as to obtain article second weight data;
the vehicle-mounted weighing abnormality detection generation model is used for carrying out vehicle-mounted weighing abnormality detection generation according to the first weight data of the object and the second weight data of the object, so as to obtain vehicle-mounted weighing abnormality detection data for carrying out vehicle-mounted weighing abnormality warning operation.
By comprehensively utilizing the image data and the article characteristic data, the identification accuracy and weighing precision for load articles can be improved, and the delay caused by the fixed-point weighing of traditional methods is reduced. Compared with the traditional manual weighing mode, the automatic article detection and weighing method greatly increases the weighing speed, reduces labor cost and improves working efficiency. Collecting images with a vehicle-mounted camera is lower in cost than traditional sensor devices and is easy to implement and maintain. Multiple articles can be identified and weighed simultaneously, improving processing efficiency, which has important application value in the logistics and transportation industries. Through vehicle-mounted weighing anomaly detection, the load condition can be monitored in real time; once an abnormal condition is found, a warning can be issued in time, helping to prevent potential safety problems. Through identification of the article material, the material information of the article can be acquired, facilitating subsequent logistics processing and management.
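As a closing sketch of the vehicle-mounted weighing anomaly detection described above, the rule below simply compares the two weight estimates and flags a disagreement beyond an assumed relative tolerance; the threshold and the returned fields are illustrative assumptions rather than the patent's detection logic.

def weighing_anomaly(first_weight, second_weight, rel_tol=0.15):
    # flag an anomaly when the volume-based and material-based estimates disagree too much
    reference = max(first_weight, second_weight, 1e-6)
    deviation = abs(first_weight - second_weight) / reference
    return {"anomaly": deviation > rel_tol, "deviation": round(deviation, 3)}

print(weighing_anomaly(480.0, 455.0))   # {'anomaly': False, 'deviation': 0.052}
print(weighing_anomaly(480.0, 360.0))   # {'anomaly': True, 'deviation': 0.25}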
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. The vehicle-mounted weighing management method based on the artificial intelligence is characterized by comprising the following steps of:
step S1: carrying out real-time image acquisition of the load-carrying articles by using a vehicle-mounted camera so as to obtain image data of the load-carrying articles;
step S2: carrying out item data detection on the load item image data so as to obtain item detection data, wherein the item detection data comprises item position data and item boundary data;
step S3: calculating volume data according to the article position data and the article boundary data to obtain article volume data, constructing a three-dimensional model of the loaded article image data by utilizing the article volume data, and estimating the volume and the weight of the article to obtain article first weight data;
Step S4: carrying out article material identification on the loaded article image data according to the article detection data so as to obtain article material data, and carrying out article material weight estimation according to the article detection data and the article material data so as to obtain article second weight data;
step S5: and carrying out vehicle-mounted weighing abnormality detection generation according to the first weight data of the object and the second weight data of the object, thereby obtaining vehicle-mounted weighing abnormality detection data and carrying out vehicle-mounted weighing abnormality warning operation.
2. The method according to claim 1, wherein step S1 is specifically:
step S11: acquiring real-time load article images through a vehicle-mounted camera, so as to obtain real-time load article image data;
step S12: carrying out dynamic denoising processing on real-time load article image data by using a preset dynamic adaptive noise detection model so as to obtain load article image denoising data;
step S13: carrying out image quality evaluation on the denoising data of the image of the load article, thereby evaluating the data of the image quality;
step S14: and marking the denoising data of the image of the load article according to the image quality evaluation data, thereby obtaining image data of the load picture.
3. The method according to claim 2, wherein the step of constructing the dynamic adaptive noise detection model in step S12 is specifically:
step S121: acquiring standard load article image data and corresponding noise tag data, wherein the noise tag data comprises sensor noise tag data and environment noise tag data;
step S122: performing similar pixel image region division on the standard load article image data so as to obtain image region division data;
step S123: performing spectrum conversion on the image region division data to obtain image region spectrum data;
step S124: frequency domain feature extraction is carried out on the image area spectrum data so as to obtain frequency domain feature data, and pixel statistical feature extraction is carried out on the image area division data so as to obtain pixel statistical feature data, wherein the pixel statistical feature data comprises pixel average feature data, pixel variance feature data and pixel characteristic feature data;
step S125: carrying out noise characteristic detection on the pixel statistical characteristic data and the frequency domain characteristic data by using a preset noise detection engine so as to obtain noise characteristic data;
step S126: generating a primary noise extractor from the noise characteristic data;
Step S127: carrying out noise data extraction on the image data of the standard load article by using a primary noise extractor so as to obtain image noise data;
step S128: performing self-attention clustering calculation on the image noise data so as to obtain image noise clustering data;
step S129: and carrying out parameter optimization on the primary noise extractor by using the image noise cluster data and carrying out model construction by using the noise label data so as to obtain a dynamic adaptive noise detection model.
4. A method according to claim 3, wherein the step of dividing the image area of the similar pixels in step S122 is specifically:
step S1221: image segmentation is carried out on the standard load article image data by utilizing preset image division parameter data, so that regional image data are obtained;
step S1222: extracting texture features and color features of the regional image data to obtain regional image texture feature data and regional image color feature data;
step S1223: carrying out similarity matrix construction on the regional image data according to the regional image texture feature data and the regional image color feature data, so as to obtain similarity matrix data;
Step S1224: carrying out regional image merging on the regional image data according to the similarity matrix data so as to obtain regional image merging data;
step S1225: generating an image region descriptor for the region image merging data, thereby obtaining image region descriptor data;
step S1226: performing region association calculation on adjacent image data in the region image merging data according to the image region descriptor data, so as to obtain region association data;
step S1227: and carrying out data marking on the region image merging data according to the region association data so as to obtain image region division data.
5. A method according to claim 3, wherein the pixel characteristic feature data is calculated in step S124 by a pixel characteristic calculation formula, wherein the pixel characteristic calculation formula is specifically:
In this formula, F is the pixel characteristic feature data, n is the pixel number data of the image area division data, i is the pixel order item data of the image area division data, f(x, y) is the pixel data at (x, y) in the image area division data, x is the pixel abscissa data of the image area division data, and y is the pixel ordinate data of the image area division data.
6. The method according to claim 1, wherein step S2 is specifically:
step S21: carrying out high-dimensional feature extraction on the image data of the load article so as to obtain high-dimensional feature data;
step S22: positioning the region of interest on the high-dimensional characteristic data, so as to obtain region of interest data;
step S23: feature fusion is carried out on the data of the region of interest and the high-dimensional feature data, so that feature fusion data are obtained;
step S24: performing target detection on the feature fusion data and performing minimum error non-maximum suppression so as to obtain target detection data;
step S25: and generating article position data and article boundary data according to the target detection data, thereby obtaining the article position data and the article boundary data.
7. The method according to claim 6, wherein the minimum error non-maximum suppression in step S24 is non-maximum suppression processing by a minimum error attenuation calculation formula, wherein the minimum error attenuation calculation formula is specifically:
In this formula, the minimum error non-maximum suppression data is the result of the calculation, k is the minimum error attenuation coefficient, z is the target frame characteristic parameter data, r is the gamma shape parameter data, Γ(r) is the gamma function data, u is the color high-dimensional feature weight data in the feature fusion data, v is the texture high-dimensional feature weight data in the feature fusion data, NMS is the primary non-maximum suppression data, a decay rate control term governs the suppression speed, IoU is the intersection-over-union data, and θ is the minimum error non-maximum suppression severity adjustment term.
8. The method according to claim 1, wherein step S3 is specifically:
step S31: calculating volume data according to the article position data and the article boundary data, so as to obtain article volume data;
step S32: carrying out three-dimensional construction on the image data of the load-carrying article according to the article volume data so as to obtain a three-dimensional model of the load-carrying article;
step S33: extracting deformation characteristics of the three-dimensional model of the load-carrying article, thereby obtaining deformation characteristic data;
step S34: and estimating the volume and the weight of the object by utilizing a preset linear regression weight detection model to the three-dimensional model of the load object and deformation characteristic data, so as to obtain first weight data of the object.
9. The method according to claim 1, wherein step S4 is specifically:
step S41: carrying out load article area division on the load article image data according to the article detection data so as to obtain load article area data;
step S42: clustering calculation is carried out on the load article area data, so that load article area clustering data are obtained;
Step S43: carrying out image division on the load article area data according to the load article area clustering data, so as to obtain load article division image data, wherein the load article division image data comprises load article division sub-image data, load article area clustering data, and load article area clustering description data generated from the load article area clustering data through a preset mapping rule;
step S44: carrying out material identification on the load article division image data so as to obtain article material data;
step S45: carrying out material tension and stress evaluation optimization on the article material data according to the article detection data so as to obtain article material optimization data;
step S46: calculating the material density according to the material optimization data of the article, so as to obtain second weight data of the article;
the step of optimizing the material tension and the stress evaluation in the step S45 specifically includes:
step S451: carrying out material tension distribution calculation on the article detection data and the article material data so as to obtain article material tension distribution data;
step S452: carrying out the internal stress distribution evaluation of the article according to the tension distribution data of the article material, thereby obtaining the internal stress distribution data of the article;
Step S453: performing object deformation simulation according to the internal stress distribution data of the object and the material data of the object, so as to obtain object deformation simulation data;
step S454: performing similarity calculation according to the object deformation simulation data and the object detection data, so as to obtain similarity data;
step S455: optimizing the material data of the article according to the similarity data, so as to obtain article material optimization data;
in step S454, the similarity calculation is performed by using an object deformation similarity calculation formula, where the object deformation similarity calculation formula specifically includes:
In this formula, S is the object deformation similarity data, Δa is the object deformation tiny displacement data, a₀ is the initial position data of the object deformation, b is the position data before deformation of the object in the object detection data, a is the position data after deformation of the object in the object detection data, Q is the position data before deformation of the object in the object deformation simulation data, A is the position data after deformation of the object in the object deformation simulation data, G is the external stress data of the object, g is the internal stress data of the object, and m is the object deformation index data.
10. An artificial intelligence based vehicle weighing management platform for performing the artificial intelligence based vehicle weighing management method of claim 1, comprising:
The load article image acquisition module is used for acquiring the image of the load article in real time through the vehicle-mounted camera, so as to obtain image data of the load article;
the article data detection module is used for detecting article data of the image data of the loaded articles so as to obtain article detection data, wherein the article detection data comprises article position data and article boundary data;
the article volume weight estimating module is used for calculating volume data according to the article position data and the article boundary data so as to obtain article volume data, constructing a three-dimensional model of the loaded article image data by utilizing the article volume data and estimating the article volume weight so as to obtain article first weight data;
the article material identification module is used for carrying out article material identification on the load article image data according to the article detection data so as to obtain article material data, and carrying out article material weight estimation according to the article detection data and the article material data so as to obtain article second weight data;
the vehicle-mounted weighing abnormality detection generation model is used for carrying out vehicle-mounted weighing abnormality detection generation according to the first weight data of the object and the second weight data of the object, so as to obtain vehicle-mounted weighing abnormality detection data for carrying out vehicle-mounted weighing abnormality warning operation.
CN202311815795.7A 2023-12-26 2023-12-26 Vehicle-mounted weighing management method and platform based on artificial intelligence Pending CN117788871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311815795.7A CN117788871A (en) 2023-12-26 2023-12-26 Vehicle-mounted weighing management method and platform based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311815795.7A CN117788871A (en) 2023-12-26 2023-12-26 Vehicle-mounted weighing management method and platform based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN117788871A true CN117788871A (en) 2024-03-29

Family

ID=90388713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311815795.7A Pending CN117788871A (en) 2023-12-26 2023-12-26 Vehicle-mounted weighing management method and platform based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN117788871A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196131A1 (en) * 2018-04-12 2019-10-17 广州飒特红外股份有限公司 Method and apparatus for filtering regions of interest for vehicle-mounted thermal imaging pedestrian detection
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
AU2020102091A4 (en) * 2019-10-17 2020-10-08 Wuhan University Of Science And Technology Intelligent steel slag detection method and system based on convolutional neural network
CN111426368A (en) * 2020-04-23 2020-07-17 广州市甬利格宝信息科技有限责任公司 Mobile weighing measurement method for automobile load
CN112964345A (en) * 2021-02-07 2021-06-15 广东电子工业研究院有限公司 Freight car weighing system and weighing method thereof
FR3122941A3 (en) * 2021-05-11 2022-11-18 Anhui University Of Science & Technology System and method for calculating the volume and mass of a pile of materials based on image recognition technology
KR102425437B1 (en) * 2021-10-13 2022-07-27 (주)유디엔에스 Weigh-In-Motion having function of automatic loads identification
US20230130765A1 (en) * 2021-10-21 2023-04-27 Zhejiang Feidi Motors Co., Ltd Method for Detecting the Load Mass of Commercial Vehicle
CN114022537A (en) * 2021-10-29 2022-02-08 浙江东鼎电子股份有限公司 Vehicle loading rate and unbalance loading rate analysis method for dynamic weighing area
CN116718255A (en) * 2023-06-06 2023-09-08 新疆成业建设集团有限公司 Intelligent earth and stone side traffic weighing system
CN117006953A (en) * 2023-07-06 2023-11-07 智慧互通科技股份有限公司 Vehicle overload detection early warning method and system
CN117011477A (en) * 2023-10-07 2023-11-07 南通杰蕾机械有限公司 BIM-based steel structure deformation monitoring and processing method and system

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
SIVARAMALINGAM KIRUSHANTH: "Design and Development of Weigh-In-Motion Using Vehicular Telematics", Journal of Sensors, 4 April 2020 (2020-04-04) *
吴培良; 刘海东; 孔令富: "A 3D scene object annotation algorithm based on rich visual information learning", Journal of Chinese Computer Systems (小型微型计算机系统), no. 01, 15 January 2017 (2017-01-15) *
周晓萍: "Dynamic vehicle load measurement method with separate filtering of high- and low-frequency noise", Machinery Design & Manufacture (机械设计与制造), no. 09, 8 September 2020 (2020-09-08) *
潘文辉: "Design and implementation of a WIM vehicle dynamic weighing system", Engineering Science and Technology II (工程科技Ⅱ辑), 15 March 2016 (2016-03-15) *
袁娜; 宋伟刚; 姜涛: "Preliminary study on an image processing method for weighing bulk material on conveyors", Coal Mine Machinery (煤矿机械), no. 12, 15 December 2007 (2007-12-15) *
谭松; 李唯一; 韩强: "Development of a computer-vision-based vehicle-mounted rail light band anomaly detection system", Railway Engineering (铁道建筑), no. 02, 20 February 2016 (2016-02-20) *
陈亮杰; 王飞; 王梨; 王林: "Research on an SSD-based warehouse object detection algorithm", Software Guide (软件导刊), no. 04, 25 March 2019 (2019-03-25) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination