CN110245663B - Method for identifying steel coil information - Google Patents
Method for identifying steel coil information
- Publication number
- CN110245663B (application CN201910559952.XA)
- Authority
- CN
- China
- Prior art keywords
- layer
- steel coil
- image
- coil
- node
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5846—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
- G06Q10/087—Inventory or stock management, e.g. order filling, procurement or balancing against orders
- G06Q10/0875—Itemisation or classification of parts, supplies or services, e.g. bill of materials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0006—Industrial image inspection using a design-rule based approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/64—Analysis of geometric attributes of convexity or concavity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/09—Recognition of logos
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
Abstract
The invention relates to a method for identifying steel coil information, comprising the following steps: the position coordinates of the coil number in the acquired steel coil image are determined by a minimum-bounding-rectangle method, and the image state features of the steel coil at those coordinates are then converted into character values by a digital-character classification method based on a convolutional neural network; the character values form the coil number of the current steel coil. A temperature detection module detects the temperature of the steel coil while a camera captures a side image of the coil; an upper computer uses the side image to judge whether the roundness of the steel coil meets the standard and transmits the judgment result to the background. The steel coil information identification method provided by the invention not only detects steel coil information accurately but also offers high detection speed and real-time performance.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a method for identifying steel coil information.
Background
Automatic identification and positioning technology is one of the effective routes to an intelligent, unmanned finished-product warehouse: intelligent detection and positioning, spreader retrofitting, and automatic control greatly reduce manual assistance and, in particular, avoid high-risk manual intervention. The finished-product warehouse is an important logistics and storage department of a steel company, and the loading and unloading of steel coils is a key link affecting logistics efficiency and safety. At present, most steel warehouses rely on manual operation and monitoring during coil transport. Under this working mode, workers face potential safety hazards; manual operation depends mainly on the crane driver's naked-eye observation, whose inherent randomness causes unnecessary starts and stops of the crane and therefore low working efficiency. Highly automated steel warehouses use lasers as sensors to assist in automatically identifying and grasping steel coils, but production efficiency remains low because laser scanning of a coil takes a long cycle. A method that can accurately identify steel coil information while shortening the scanning cycle is therefore urgently needed to better accomplish identification and positioning of steel coils.
Utility model patent 201621247572.0 discloses an automatic steel coil information identification system comprising a two-dimensional-code generating device, an image acquisition device, and an information processing device. The code generating device, installed in the working area of the leveling machine, generates paper two-dimensional codes from the coil-information codes and pastes them onto the steel coils; the image acquisition device, installed in the working area of the recoiling unit, acquires the paper two-dimensional code pasted on the coil; the information processing device receives the acquired code information and restores the steel coil information. The acquisition device uses at least two cameras arranged at vertical intervals. With this system, operators no longer need to check and input data beside the steel coil, coil information is no longer transmitted manually, errors in coil information are avoided through the system's automatic identification and transmission, the number of times operators enter the operation area is reduced, and safety risk is greatly lowered.
Patent application 201811297283.5 relates to an intelligent image recognition system and method for hot-metal ladle numbers, in the technical field of metallurgical automation. The system comprises an image acquisition module, a motion detection module, an object detection module, a module for representing and describing the dynamic and static characteristics of colour hot-metal ladle images, a digital-character classification module based on a convolutional neural network, and an upper-computer edge-computing module. It addresses the problems that a ladle transporting molten iron sits in a high-temperature environment where identification devices such as RFID cannot be used, and that traditional digital image processing cannot adapt well to varied complex environments. By analysing and extracting the digital character images printed on the ladle surface and building a recognition model on those features, the ladle number is recognised online in real time, the ladle's logistics information is tracked automatically, production efficiency is improved, and labour cost is saved.
Patent application 201811112046.7 discloses a steel coil identification and positioning method based on stereoscopic vision, mainly comprising: (1) establishing a binocular stereoscopic-vision model to acquire an image pair; (2) calibrating the cameras with Zhang Zhengyou's calibration method; (3) obtaining a disparity map with a stereo-matching algorithm; (4) segmenting the target steel coil with a depth histogram and computing the world coordinates of the coil on the X and Y axes; (5) computing the coil's three-dimensional point cloud data from the reprojection matrix and smoothing and denoising the point cloud; (6) fitting a cylindrical feature to the denoised point cloud to obtain the coil's world coordinate on the Z axis. The method identifies and positions steel coils accurately, shortens the laser-scanning cycle, and improves the logistics efficiency of the finished-product warehouse.
Disclosure of Invention
The purpose of the invention is to realise rapid identification of steel coils in the warehouse.
To achieve the above purpose, the technical scheme of the invention provides a method for identifying steel coil information, characterised by comprising the following steps:
step 1, after a positioning sensor detects that a truck transporting steel coils has parked at the designated position, it feeds the coordinate information back to an upper computer; the upper computer moves a camera according to these coordinates so that the camera's main optical axis is perpendicular to the current truck and the camera's height matches the centre of the steel coils carried on the truck;
step 2, acquiring an image of the steel coil with the camera;
step 3, judging whether the steel coil in the obtained image is complete; if so, keeping the current image and entering step 4; if not, discarding the current image and returning to step 2;
step 4, the upper computer determines the position coordinates of the coil number in the image obtained in the previous step by a minimum-bounding-rectangle method, then converts the image state features of the steel coil at those position coordinates into character values using a digital-character classification method based on a convolutional neural network; the character values form the coil number of the current steel coil;
step 5, comparing the coil number of the current steel coil with the coil numbers stored in the MES database; if matching succeeds, the coil number of the current steel coil is taken as the final identification result and transmitted to the background, which manages the inventory accordingly, and the method enters step 6; if matching fails, returning to step 2;
step 6, a crane lifts a steel coil from the truck onto a moving track, which transports the coil to the target position for storage. During transport, the start-stop intervals of the moving track are exploited: while the track is temporarily stopped, a temperature detection module detects the temperature of the steel coil and transmits the measured value to the background, and at the same time a camera captures a side image of the steel coil; the upper computer uses the side image to judge whether the roundness of the steel coil meets the standard and transmits the judgment result to the background, comprising the following steps:
step 601, obtaining the outer-circle edge points and the inner-circle edge points of the steel coil in the side image;
step 602, fitting an ideal outer circle from the outer-circle edge points and an ideal inner circle from the inner-circle edge points;
step 603, obtaining the centre of the ideal outer circle and the centre of the ideal inner circle; if the two centres coincide, defining the coincident centre as the calculated centre and entering step 604; if they do not coincide, returning to step 601;
step 604, calculating the distance from each outer-circle edge point to the calculated centre and taking the maximum value Rmax, calculating the distance from each inner-circle edge point to the calculated centre and taking the minimum value Rmin, and computing the difference delta = Rmax - Rmin; if delta is smaller than a preset threshold, the roundness of the current steel coil is considered up to standard, otherwise it is not.
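A minimal sketch of the roundness check in step 604 (the function and variable names are my own; the circle fitting of steps 602 and 603 is assumed to have already produced the calculated centre):

```python
import math

def roundness_ok(outer_pts, inner_pts, center, threshold):
    """Step 604 as described above: Rmax is the largest distance from an
    outer-circle edge point to the calculated centre, Rmin the smallest
    distance from an inner-circle edge point; delta = Rmax - Rmin must
    stay below a preset threshold for the coil to pass."""
    cx, cy = center
    r_max = max(math.hypot(x - cx, y - cy) for x, y in outer_pts)
    r_min = min(math.hypot(x - cx, y - cy) for x, y in inner_pts)
    delta = r_max - r_min
    return delta < threshold, delta
```

The threshold would be chosen from the coil geometry and image scale; the sketch only mirrors the comparison described in the text.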
Preferably, the camera is calibrated so that the distortion introduced into the obtained images by the camera lens is removed, ensuring that the steel coil measurements are not affected by distortion.
Preferably, in step 4, the character region is extracted using the position coordinates of the coil number. To identify the coil number of the steel coil more accurately, the extracted image state features of the steel coil include brightness features, chromaticity features and texture features, wherein:
the brightness feature is obtained as follows:
let I(i, j) be a pixel of the segmented character region; then the brightness feature B = (ΣI(i, j))/Count, where Count is the number of pixels in the character region;
the chromaticity feature is obtained as follows:
the third-order colour moment of each component in RGB space is taken to effectively reflect the character information; the third-order moment of the i-th colour channel component of the character region is $s_i = \bigl(\sum_{j}(j-\mu_i)^3\,p_{i,j}\bigr)^{1/3}$, where $p_{i,j}$ is the probability that a pixel of grey level j occurs in the i-th colour channel component and $\mu_i = \sum_j j\,p_{i,j}$ is the mean of that component;
the texture feature is obtained as follows:
static texture complexity is described with the grey-level difference statistical method. Let (x, y) and (x+Δx, y+Δy) be two pixels in the character region and g(x, y) the grey level at (x, y); the grey-level difference is Δg(x, y) = g(x, y) - g(x+Δx, y+Δy). Suppose the difference can take m possible grey values. Moving (x, y) over the given character region and accumulating how often Δg(x, y) takes each value yields a histogram of Δg(x, y), from which the probability Δp(i) of each difference value i is known; the distribution of Δp(i) characterises the texture, a larger spread of grey-level differences indicating a coarser texture.
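The three feature computations above can be sketched on plain Python lists as follows (a hedged illustration with invented names; `third_moment` uses the channel mean as the μ of the colour-moment formula, and `gray_diff_probs` histograms absolute grey-level differences):

```python
import math

def brightness(pixels):
    # B = (sum of I(i, j)) / Count over the segmented character region
    return sum(pixels) / len(pixels)

def third_moment(channel):
    # Third-order colour moment of one channel: signed cube root of the
    # mean cubed deviation from the channel mean.
    mu = sum(channel) / len(channel)
    m3 = sum((p - mu) ** 3 for p in channel) / len(channel)
    return math.copysign(abs(m3) ** (1.0 / 3.0), m3)

def gray_diff_probs(img, dx, dy, levels):
    # Histogram of |Delta g(x, y)| = |g(x, y) - g(x+dx, y+dy)| over the
    # region, normalised to the probabilities Delta p(i) described above.
    counts = [0] * levels
    total = 0
    for y in range(len(img) - dy):
        for x in range(len(img[0]) - dx):
            counts[abs(img[y][x] - img[y + dy][x + dx])] += 1
            total += 1
    return [c / total for c in counts]
```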
Preferably, in step 4, digit recognition based on a network architecture improved from LeNet-5 converts the image state features of the steel coil at the position coordinates into character values. The improved LeNet-5 architecture comprises: an input layer for the digit image; convolution layer C1, which convolves the digit image through a convolution window to extract intrinsic features of the input; sampling layer S2, which downsamples the feature images of C1 by the max-pooling method to obtain feature images; convolution layer C3, whose convolution filters connect to sampling layer S2; sampling layer S4, obtained by downsampling the feature images of C3; and convolution layer C5, obtained by convolving the feature images of S4. C5 is fully connected to S4, that is, every convolution filter of C5 convolves every feature image of S4. After these steps the image has been reduced to single-pixel feature images ready for classification. C5 is fully connected to the final output layer; the output is a one-dimensional vector, and the position of its largest component is the final classification result of the network model.
Preferably, during training of the improved LeNet-5 network architecture, the network is trained by back-propagation with an adaptive gradient descent algorithm using sparse cross-entropy as the loss function; for a single sample the cost function is as follows:
$$J(W,b;x,y) = \tfrac{1}{2}\,\lVert h_{W,b}(x) - y \rVert^2 \qquad (1)$$
Equation (1) is a variance cost function, where W represents the weight parameters, b the intercept terms, (x, y) a single sample, and $h_{W,b}(x)$ the complex nonlinear hypothesis model.
For a data set containing m samples, the overall cost function is defined as:
$$J(W,b) = \frac{1}{m}\sum_{i=1}^{m} J\bigl(W,b;x^{(i)},y^{(i)}\bigr) + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \bigl(W_{ji}^{(l)}\bigr)^2 \qquad (2)$$
In equation (2), the former term is the mean square error term of the cost function J(W, b) and the latter term is a weight decay term that reduces the weight magnitudes and prevents overfitting. Here $(x^{(i)}, y^{(i)})$ denotes the sample set, $n_l$ the total number of layers of the neural network, l a neural network layer, λ the weight-decay coefficient, $W_{ji}^{(l)}$ the connection weight between the j-th node of layer l and the i-th node of layer l+1, and $s_l$ the number of nodes in layer l;
In each iteration of the network training, the gradient descent method minimises the cost function to fine-tune the parameters W and b:
$$W_{ij}^{(l)} := W_{ij}^{(l)} - \alpha \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) \qquad (3)$$
$$b_{i}^{(l)} := b_{i}^{(l)} - \alpha \frac{\partial}{\partial b_{i}^{(l)}} J(W,b) \qquad (4)$$
In equations (3) and (4), α is the learning rate and $b_i^{(l)}$ the intercept term of the i-th node of layer l+1;
For each node i of the output layer $n_l$, the node residual is solved using:
$$\delta_i^{(n_l)} = -\bigl(y_i - a_i^{(n_l)}\bigr)\, f'\bigl(z_i^{(n_l)}\bigr) \qquad (5)$$
In equation (5), $\delta_i^{(n_l)}$ is the residual of node i in layer $n_l$, $a_i^{(n_l)}$ the activation value of node i in layer $n_l$, $y_i$ the i-th target output, $z_i^{(n_l)}$ the weighted input of node i in layer $n_l$, and $f'$ the derivative of the activation function;
The residual of node i in layer l is computed recursively as:
$$\delta_i^{(l)} = \Bigl(\sum_{j=1}^{s_{l+1}} W_{ji}^{(l)}\, \delta_j^{(l+1)}\Bigr) f'\bigl(z_i^{(l)}\bigr) \qquad (6)$$
In equation (6), $\delta_i^{(l)}$ is the residual of node i in layer l, $s_{l+1}$ the number of nodes in layer l+1, $\delta_j^{(l+1)}$ the residual of node j in layer l+1, and $f'$ the derivative of the activation function;
The residuals of the layers can be expressed in vector form as:
$$\delta^{(l)} = \bigl((W^{(l)})^{T}\delta^{(l+1)}\bigr) \cdot f'\bigl(z^{(l)}\bigr) \qquad (7)$$
The partial derivatives for a single sample are then expressed as:
$$\frac{\partial}{\partial W_{ij}^{(l)}} J(W,b;x,y) = a_j^{(l)}\, \delta_i^{(l+1)} \qquad (8)$$
$$\frac{\partial}{\partial b_{i}^{(l)}} J(W,b;x,y) = \delta_i^{(l+1)} \qquad (9)$$
In equations (7), (8) and (9), $a_j^{(l)}$ denotes the output (activation) value of the j-th node of layer l;
The partial derivatives of the overall sample cost function then follow:
$$\frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) = \frac{1}{m}\sum_{k=1}^{m} \frac{\partial}{\partial W_{ij}^{(l)}} J\bigl(W,b;x^{(k)},y^{(k)}\bigr) + \lambda W_{ij}^{(l)} \qquad (10)$$
$$\frac{\partial}{\partial b_{i}^{(l)}} J(W,b) = \frac{1}{m}\sum_{k=1}^{m} \frac{\partial}{\partial b_{i}^{(l)}} J\bigl(W,b;x^{(k)},y^{(k)}\bigr) \qquad (11)$$
In equations (10) and (11), λ is the weight-decay coefficient;
All parameters $W_{ij}^{(l)}$ and $b_i^{(l)}$ are initialised to values close to zero, where $W_{ij}^{(l)}$ is the connection weight between the j-th node of layer l and the i-th node of layer l+1 and $b_i^{(l)}$ is the intercept term of the i-th node of layer l+1. Training yields the parameters W and b that minimise the cost function, which reduces the error introduced by the weights during training.
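As a small numeric illustration of equations (1), (3) and (4), here is gradient descent on the variance cost for a one-parameter linear hypothesis $h_{W,b}(x) = Wx + b$ (the data, learning rate and function names are invented for the example, and the weight-decay term is dropped, i.e. λ = 0):

```python
def cost(W, b, samples):
    # Equation (1) averaged over the samples: J = (1/m) * sum of 0.5*(h(x)-y)^2
    return sum(0.5 * (W * x + b - y) ** 2 for x, y in samples) / len(samples)

def gd_step(W, b, samples, alpha):
    # Equations (3) and (4): move W and b against the cost gradient
    m = len(samples)
    dW = sum((W * x + b - y) * x for x, y in samples) / m
    db = sum((W * x + b - y) for x, y in samples) / m
    return W - alpha * dW, b - alpha * db

W, b = 0.0, 0.0
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # generated from y = 2x
for _ in range(1000):
    W, b = gd_step(W, b, data, 0.1)
```

After the iterations, W approaches 2 and b approaches 0, the parameters that minimise the cost on this data.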
The steel coil information identification method provided by the invention not only detects steel coil information accurately but also offers high detection speed and real-time performance.
Drawings
Fig. 1 is a flow chart of steel coil information identification according to the invention;
FIG. 2 is a flow chart of recognition model establishment in the invention;
FIG. 3 is a flow chart of coil number identification in the invention;
FIG. 4 is a roundness detection flow chart of the invention;
FIG. 5 is a diagram of the overall architecture of the convolutional neural network employed in the invention;
Fig. 6 is a diagram of the digit recognition model architecture based on the improved LeNet-5 network of the invention.
Detailed Description
The invention will be further illustrated with reference to specific examples. It should be understood that these examples are only illustrative of the invention and are not intended to limit its scope. Further, various changes and modifications may be made by those skilled in the art after reading the teachings of the invention, and such equivalents likewise fall within the scope of the appended claims.
As shown in fig. 1 and 2, the invention provides a method for identifying information of a steel coil, comprising the following steps:
step 1, after a positioning sensor detects that a truck transporting steel coils has parked at the designated position, it feeds the coordinate information back to an upper computer; the upper computer moves a calibrated camera according to these coordinates so that the camera's main optical axis is perpendicular to the current truck and the camera's height matches the centre of the steel coils carried on the truck. The purpose of calibrating the camera is to remove the image distortion caused by the camera lens and ensure that no result in the whole calculation process is affected by distortion;
step 2, acquiring an image of the steel coil by using a camera;
step 3, judging whether the steel coil in the obtained image is complete; if so, keeping the current image and entering step 4; if not, discarding the current image and returning to step 2;
In step 4, the upper computer determines the position coordinates of the coil number in the image obtained in the previous step by the minimum-bounding-rectangle method, extracts the character region using those coordinates, and then converts the image state features of the steel coil in the character region into character values using a digital-character classification method based on a convolutional neural network; the character values form the coil number of the current steel coil.
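A minimal sketch of the bounding-rectangle step (hedged: the patent may intend a rotated minimum-area rectangle, e.g. as computed by OpenCV's `minAreaRect`; shown here is the simpler axis-aligned variant over the detected character pixels, with invented names):

```python
def min_bounding_rect(points):
    # Axis-aligned bounding rectangle (x_min, y_min, x_max, y_max) of the
    # pixel coordinates belonging to the detected coil-number region.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

def crop(img, rect):
    # Extract the character region from a row-major image using the rectangle.
    x0, y0, x1, y1 = rect
    return [row[x0:x1 + 1] for row in img[y0:y1 + 1]]
```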
The invention uses a deep-learning algorithm based on a convolutional neural network to build a recognition model that identifies the coil number in the image. As shown in fig. 5, the overall architecture of the recognition model consists of convolution layers alternating with max-pooling layers. The higher levels are the fully connected hidden layers and classifier corresponding to a traditional multilayer perceptron. The input of the first fully connected layer is the feature image obtained by feature extraction in the convolution and subsampling layers. The last output layer is a classifier, which can be a logistic regression.
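The max-pool sampling layers that alternate with the convolution layers can be illustrated as follows (a pure-Python sketch, assuming the common 2x2 non-overlapping window):

```python
def max_pool_2x2(feature_map):
    # Downsample a feature map by taking the maximum of each 2x2 block,
    # halving the height and width ("maximum pool sampling").
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[y][x],     feature_map[y][x + 1],
                 feature_map[y + 1][x], feature_map[y + 1][x + 1])
             for x in range(0, w - 1, 2)]
            for y in range(0, h - 1, 2)]
```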
Specifically, as shown in fig. 6, the deep-learning recognition model performs digit recognition with a network architecture improved from LeNet-5. The architecture comprises: an input layer for the digit image; convolution layer C1, which convolves the digit image through a convolution window to extract intrinsic features of the input (in this embodiment C1 produces 6 convolution feature images); sampling layer S2, which downsamples the 6 feature images of C1 by the max-pooling method to obtain feature images; convolution layer C3, whose convolution filters connect to sampling layer S2; sampling layer S4, obtained by downsampling the feature images of C3; and convolution layer C5, obtained by convolving the feature images of S4. C5 is fully connected to S4, that is, every convolution filter of C5 convolves every feature image of S4; in this embodiment C5 produces 120 single-pixel feature images. After these steps the image has been reduced to single-pixel feature images for the classification operation. C5 is fully connected to the final output layer, whose ten nodes represent the ten classification possibilities of the handwritten digit image.
The output is a one-dimensional vector of length 10, and the position of its largest component is the final classification result of the network model. The same coding scheme is used for the labels of the training sample set. During training, the network is trained by back-propagation with an adaptive gradient descent algorithm using sparse cross-entropy as the loss function.
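Decoding the length-10 output vector into a digit class, and encoding a label in the same scheme, are then simple argmax and one-hot operations (illustrative helpers, names my own):

```python
def decode_digit(output_vector):
    # The predicted digit is the index of the largest component of the
    # one-dimensional length-10 output vector.
    return max(range(len(output_vector)), key=output_vector.__getitem__)

def one_hot(digit, length=10):
    # Training labels use the same coding: a 1 at the digit's position.
    return [1.0 if i == digit else 0.0 for i in range(length)]
```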
For a single sample, its cost function is as follows:

J(W, b; x, y) = (1/2) ||h_{W,b}(x) - y||^2    (1)

Equation (1) is a variance (squared-error) cost function. In equation (1), W represents the weight parameters, b represents the intercept terms, (x, y) represents a single sample, and h_{W,b}(x) represents the complex nonlinear hypothesis model. For a data set containing m samples, the overall cost function can be defined as:
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ||h_{W,b}(x^{(i)}) - y^{(i)}||^2 + (λ/2) Σ_{l=1}^{n_l-1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (W_{ji}^{(l)})^2    (2)

In equation (2), the former term is the mean square error term of the cost function J(W, b), and the latter term is a weight-decay term used to reduce the weight magnitudes and prevent overfitting. In equation (2), (x^{(i)}, y^{(i)}) represents the i-th sample, n_l represents the total number of layers of the neural network, l indexes a layer, λ is the weight-decay coefficient, W_{ji}^{(l)} represents the connection weight between the j-th node of layer l and the i-th node of layer l+1, and s_l represents the number of nodes of layer l;
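The overall cost of equation (2) can be sketched numerically; the weight matrix, sample values, and λ below are invented for illustration:

```python
import numpy as np

def overall_cost(W_list, preds, targets, lam):
    """Equation (2) as a sketch: mean squared error over m samples plus
    a weight-decay term lam/2 * (sum of all squared weights).
    W_list holds the per-layer weight matrices."""
    m = len(preds)
    mse = sum(0.5 * float(np.sum((p - t) ** 2)) for p, t in zip(preds, targets)) / m
    decay = (lam / 2.0) * sum(float(np.sum(W ** 2)) for W in W_list)
    return mse + decay

# One sample, one 1x2 weight matrix, lam = 0.1 (all values illustrative).
J = overall_cost(
    W_list=[np.array([[1.0, 1.0]])],
    preds=[np.array([1.0, 0.0])],
    targets=[np.array([0.0, 0.0])],
    lam=0.1,
)  # 0.5 (error term) + 0.1 (decay term) = 0.6
```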
In each iteration of the network training, the gradient descent method is used to minimize the cost function and fine-tune the parameters W and b:

W_{ij}^{(l)} := W_{ij}^{(l)} - α ∂J(W, b)/∂W_{ij}^{(l)}    (3)

b_i^{(l)} := b_i^{(l)} - α ∂J(W, b)/∂b_i^{(l)}    (4)

In equations (3) and (4), α is the learning rate and b_i^{(l)} represents the intercept term of the i-th node of layer l+1. The key operation in applying these formulas is computing the partial derivatives of the cost function.
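A minimal numpy sketch of the update rules in equations (3) and (4); the parameter values, gradients, and learning rate are illustrative:

```python
import numpy as np

def gradient_step(W, b, grad_W, grad_b, alpha):
    """Equations (3) and (4): one gradient-descent update of the
    weights W and intercepts b with learning rate alpha."""
    return W - alpha * grad_W, b - alpha * grad_b

W, b = np.array([1.0, -2.0]), np.array([0.5])
W, b = gradient_step(W, b, np.array([10.0, -10.0]), np.array([5.0]), alpha=0.1)
# W -> [0.0, -1.0], b -> [0.0]
```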
For each node i of the output layer n_l, the node residual is solved using the following formula:

δ_i^{(n_l)} = -(y_i - a_i^{(n_l)}) · f'(z_i^{(n_l)})    (5)

In equation (5), δ_i^{(n_l)} represents the residual of node i of output layer n_l, a_i^{(n_l)} represents the activation value of node i of layer n_l, y_i represents the desired output of node i, z_i^{(n_l)} represents the weighted input of node i of layer n_l, and f'(·) represents the derivative of the activation function. Equation (5) is derived as follows:

δ_i^{(n_l)} = ∂/∂z_i^{(n_l)} [(1/2) Σ_{j=1}^{s_{n_l}} (y_j - a_j^{(n_l)})^2] = -(y_i - a_i^{(n_l)}) f'(z_i^{(n_l)})

where s_{n_l} is the number of nodes of layer n_l, y_j is the desired output of node j, and a_j^{(n_l)} is the activation value of node j of layer n_l; only the j = i term depends on z_i^{(n_l)}.
The residual of node i of a hidden layer l is computed from the residuals of layer l+1. For the layer n_l - 1 directly below the output layer:

δ_i^{(n_l-1)} = (Σ_{j=1}^{s_{n_l}} W_{ji}^{(n_l-1)} δ_j^{(n_l)}) f'(z_i^{(n_l-1)})    (6)

and for a general layer l:

δ_i^{(l)} = (Σ_{j=1}^{s_{l+1}} W_{ji}^{(l)} δ_j^{(l+1)}) f'(z_i^{(l)})    (7)

In equations (6) and (7), δ_i^{(l)} represents the residual of node i of layer l, s_{l+1} represents the number of nodes of layer l+1, δ_j^{(l+1)} represents the residual of node j of layer l+1, W_{ji}^{(l)} represents the connection weight between node i of layer l and node j of layer l+1, and f'(z_i^{(l)}) represents the derivative of the activation function at the weighted input of node i of layer l.
In vector form, the layer residual formulas can be expressed as:

δ^{(l)} = ((W^{(l)})^T δ^{(l+1)}) ⊙ f'(z^{(l)})    (8)

The partial derivative calculation formulas for a single sample can then be expressed as:

∂J(W, b; x, y)/∂W_{ij}^{(l)} = a_j^{(l)} δ_i^{(l+1)}    (9)

∂J(W, b; x, y)/∂b_i^{(l)} = δ_i^{(l+1)}    (10)

where a_j^{(l)} is the activation value of node j of layer l.
the partial derivative of the whole sample cost function can be solved according to the method:
in the equations (11) and (12), λ represents the maximum gradient that the objective function can achieve under constraint.
All parameters W_{ji}^{(l)} and b_i^{(l)} are initialized to values close to zero, where W_{ji}^{(l)} represents the connection weight between the j-th node of layer l and the i-th node of layer l+1 and b_i^{(l)} represents the intercept term of the i-th node of layer l+1. Iterating the gradient-descent updates then yields the parameters W and b that minimize the cost function; the weight-decay term keeps the weight-induced error small during training.
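The residual back-propagation of equations (5) to (10) can be sketched for a tiny fully connected network; the 2-3-2 layer sizes, the sigmoid activation, and the random near-zero initialization are illustrative assumptions, not the patent's actual network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-3-2 network with parameters initialised near zero, as the text
# prescribes; shapes and sample values are illustrative only.
rng = np.random.default_rng(0)
W1 = 0.01 * rng.standard_normal((3, 2)); b1 = np.zeros(3)
W2 = 0.01 * rng.standard_normal((2, 3)); b2 = np.zeros(2)
x = np.array([0.5, -0.2]); y = np.array([1.0, 0.0])

# Forward pass: weighted inputs z and activations a.
z2 = W1 @ x + b1; a2 = sigmoid(z2)
z3 = W2 @ a2 + b2; a3 = sigmoid(z3)

# Output-layer residual, equation (5): delta = -(y - a) * f'(z),
# with f'(z) = a * (1 - a) for the sigmoid.
delta3 = -(y - a3) * a3 * (1 - a3)
# Hidden-layer residual, equations (6)-(8): delta_l = (W^T delta_{l+1}) * f'(z_l).
delta2 = (W2.T @ delta3) * a2 * (1 - a2)

# Partial derivatives, equations (9) and (10).
grad_W2 = np.outer(delta3, a2); grad_b2 = delta3
grad_W1 = np.outer(delta2, x);  grad_b1 = delta2
```

A finite-difference check of any single weight against the analytic gradient confirms the residual recursion is consistent with the cost of equation (1).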
Through comparison of a large number of experimental results, the invention designs 3 convolution layers, each followed by a ReLU activation function for nonlinear conversion, which increases the feature-expression capability of the network. Feature data is batch-normalized before entering each convolution layer, which speeds up network convergence. A MaxPool pooling layer is used after the third and fifth layers of the network. The last two layers are fully connected layers, and a Dropout layer is applied after the first fully connected layer to avoid overfitting during training. During training, sparse cross entropy is used as the loss function and the network is trained by back-propagation with an adaptive gradient descent algorithm.
The image state features of the steel coil in this step comprise brightness features, chromaticity features and texture features; the third-order color moments in RGB space are extracted as color information, and gray-level difference statistics are calculated to represent the texture features:
(1) Brightness characteristics
Let I(i, j) be the pixels of the segmented character region; then the brightness feature B = (ΣI(i, j))/Count, where Count is the number of pixels of the character region;
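The brightness feature is a plain mean intensity and can be sketched directly; the 2x2 region below is an invented example:

```python
import numpy as np

def brightness_feature(region):
    """B = (sum of I(i, j)) / Count over the segmented character region."""
    region = np.asarray(region, dtype=float)
    return float(region.sum() / region.size)

B = brightness_feature([[100, 120], [140, 160]])  # -> 130.0
```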
(2) Chromaticity characteristics
Multiple experiments verified that character information can be effectively reflected by the third-order moments of the colors of the components in RGB space. The third-order moment under the i-th color channel component of the character region is s_i = (Σ_j (j - μ_i)^3 p_{i,j})^{1/3}, where p_{i,j} is the probability of occurrence of a pixel of gray level j in the i-th color channel component and μ_i = Σ_j j · p_{i,j} is the mean of that component.
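A hedged numpy sketch of a third-order colour moment: since the patent's exact expression is an equation image that did not survive extraction, this uses the common signed-cube-root-of-the-third-central-moment form, which is an assumption:

```python
import numpy as np

def third_order_moment(channel):
    """Signed cube root of the third central moment of one colour channel,
    a common form of the third-order colour-moment feature (assumed here;
    the patent's exact expression is not reproduced)."""
    c = np.asarray(channel, dtype=float).ravel()
    m3 = float(np.mean((c - c.mean()) ** 3))
    # Signed cube root, so negatively skewed channels give a negative moment.
    return float(np.sign(m3) * abs(m3) ** (1.0 / 3.0))

s = third_order_moment([1, 2, 3])  # symmetric values -> 0.0
```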
(3) Texture features
A gray-level difference statistical method is used to describe static texture complexity. Let (x, y) and (x+Δx, y+Δy) be two pixel points in the character region, and let the gray-level difference between them be Δg(x, y) = g(x, y) - g(x+Δx, y+Δy), where g(x, y) is the gray level of pixel (x, y) and g(x+Δx, y+Δy) is the gray level of pixel (x+Δx, y+Δy). Let all possible values of the gray-level difference span m levels. Pixel (x, y) is moved over the given character region, the number of times Δg(x, y) takes each value is accumulated, and a histogram of Δg(x, y) is made, from which the probability Δp(i) of each difference value i is obtained. The invention extracts the angular second moment in the given direction, ASM = Σ_i [Δp(i)]^2, to reflect the degree of uniformity of the gray distribution of the image: the greater the differences between the gray values of neighboring pixels, the coarser the texture, which the ASM value reflects.
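The gray-level difference statistics and the ASM can be sketched as follows; the offset (Δx, Δy) = (1, 0) and the number of difference levels m = 16 are assumed values, not taken from the patent:

```python
import numpy as np

def gray_diff_asm(img, dx=1, dy=0, levels=16):
    """Angular second moment of the gray-level difference histogram:
    ASM = sum_i [delta_p(i)]^2.  The offset (dx, dy) and the number of
    difference levels are illustrative choices."""
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    a = img[:h - dy, :w - dx]          # g(x, y)
    b = img[dy:, dx:]                  # g(x + dx, y + dy)
    diff = np.clip(np.abs(a - b), 0, levels - 1)
    hist = np.bincount(diff.ravel(), minlength=levels)
    p = hist / hist.sum()              # delta_p(i): probability of each difference
    return float(np.sum(p ** 2))

# A perfectly uniform region: every difference is 0, so the histogram is
# concentrated in one bin and ASM = 1.
asm_flat = gray_diff_asm(np.full((8, 8), 128))
```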
Step 5, comparing the coil number of the current steel coil with the coil numbers stored in the MES database; if matching succeeds, taking the coil number of the current steel coil as the final identification result, transmitting the final identification result to the background, which manages the inventory according to it, and entering step 6; if matching fails, returning to step 2;
step 6, a crane lifts the steel coil on the truck onto a moving track, and the steel coil is transported along the moving track to a target position and stored; during transportation, using the start-stop intervals of the moving track, the temperature of the steel coil is detected by a temperature detection module while the moving track is temporarily stopped, and the detected temperature value is transmitted to the background; meanwhile, a side image of the steel coil is captured by a camera, the upper computer judges from the side image whether the roundness of the steel coil meets the standard, and the judgment result is transmitted to the background, comprising the following steps:
step 601, obtaining an outer circle edge point and an inner circle edge point of a steel coil in a side image;
step 602, obtaining an ideal outer circle by fitting the outer circle edge points, and obtaining an ideal inner circle by fitting the inner circle edge points;
step 603, obtaining the circle center of the ideal outer circle and the circle center of the ideal inner circle; if the two circle centers coincide, defining the coincident center as the calculated circle center and entering step 604, and returning to step 601 if they do not coincide;
step 604, calculating the distance from each outer circle edge point to the calculated circle center, obtaining a maximum distance value Rmax from the distance, calculating the distance from each inner circle edge point to the calculated circle center, obtaining a minimum distance value Rmin from the distance, and calculating to obtain a difference value delta, wherein delta=Rmax-Rmin, if the difference value delta is smaller than a preset threshold value, the roundness of the current steel coil is considered to reach the standard, otherwise, the roundness of the current steel coil is considered to not reach the standard.
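Step 604 can be sketched directly from its definition; the edge points, circle center, and threshold below are invented test values (the real threshold is plant-specific):

```python
import numpy as np

def roundness_reaches_standard(outer_pts, inner_pts, center, threshold):
    """Step 604 as a sketch: Rmax is the largest distance from the
    outer-edge points to the calculated circle center, Rmin the smallest
    distance from the inner-edge points; the coil passes when
    delta = Rmax - Rmin is smaller than the preset threshold."""
    c = np.asarray(center, dtype=float)
    r_outer = np.linalg.norm(np.asarray(outer_pts, dtype=float) - c, axis=1)
    r_inner = np.linalg.norm(np.asarray(inner_pts, dtype=float) - c, axis=1)
    delta = float(r_outer.max() - r_inner.min())
    return delta < threshold

outer = [(10, 0), (0, 10), (-10, 0), (0, -10)]   # ideal outer-circle edge points
inner = [(5, 0), (0, 5), (-5, 0), (0, -5)]        # ideal inner-circle edge points
ok = roundness_reaches_standard(outer, inner, center=(0, 0), threshold=6.0)
# delta = 10 - 5 = 5 < 6, so the roundness reaches the standard
```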
Claims (5)
1. A method for identifying information of steel coil, comprising the steps of:
step 1, after a positioning sensor detects that a truck for transporting steel coils is parked to a designated position, feeding back coordinate information to an upper computer, and controlling a camera to move according to the obtained coordinate information by the upper computer, so that a main optical axis of the camera is perpendicular to the current truck, and the height of the camera is consistent with the center of the steel coils carried on the truck;
step 2, acquiring an image of the steel coil by using a camera;
step 3, judging whether the steel coil in the obtained image is complete, if so, keeping the current image, entering step 4, if not, giving up the current image, and returning to step 3;
step 4, the upper computer adopts a minimum bounding rectangle method to determine the position coordinates of the coil numbers in the image obtained in the previous step, and then the image state characteristics of the steel coil at the position coordinates are converted into character values by using a digital character classification mode based on a convolutional neural network, wherein the character values are the coil numbers of the current steel coil;
step 5, comparing the coil number of the current steel coil with the coil numbers stored in the MES database; if matching succeeds, taking the coil number of the current steel coil as the final identification result, transmitting the final identification result to the background, which manages the inventory according to it, and entering step 6; if matching fails, returning to step 2;
step 6, a crane lifts the steel coil on the truck onto a moving track, and the steel coil is transported along the moving track to a target position and stored; during transportation, using the start-stop intervals of the moving track, the temperature of the steel coil is detected by a temperature detection module while the moving track is temporarily stopped, and the detected temperature value is transmitted to the background; meanwhile, a side image of the steel coil is captured by a camera, the upper computer judges from the side image whether the roundness of the steel coil meets the standard, and the judgment result is transmitted to the background, comprising the following steps:
step 601, obtaining an outer circle edge point and an inner circle edge point of a steel coil in a side image;
step 602, obtaining an ideal outer circle by fitting the outer circle edge points, and obtaining an ideal inner circle by fitting the inner circle edge points;
step 603, obtaining the circle center of the ideal outer circle and the circle center of the ideal inner circle; if the two circle centers coincide, defining the coincident center as the calculated circle center and entering step 604, and returning to step 601 if they do not coincide;
step 604, calculating the distance from each outer circle edge point to the calculated circle center, obtaining a maximum distance value Rmax from the distance, calculating the distance from each inner circle edge point to the calculated circle center, obtaining a minimum distance value Rmin from the distance, and calculating to obtain a difference value delta, wherein delta=Rmax-Rmin, if the difference value delta is smaller than a preset threshold value, the roundness of the current steel coil is considered to reach the standard, otherwise, the roundness of the current steel coil is considered to not reach the standard.
2. The method for identifying information of steel coil as set forth in claim 1, wherein in step 1, the camera is calibrated to remove distortion of the obtained image caused by the camera lens, so as to ensure that the steel coil is not affected by the distortion.
3. The method for identifying information of steel coil as set forth in claim 1, wherein in step 4, for more accurately identifying the number of steel coil, the extracted image state features of steel coil include brightness features, chromaticity features and texture features, wherein:
the brightness characteristic acquisition method comprises the following steps:
let I(i, j) be the pixels of the segmented character region; then the brightness feature B = (ΣI(i, j))/Count, where Count is the number of pixels of the character region;
the method for acquiring the chromaticity characteristics comprises the following steps:
the third-order moments of the colors of the components in RGB space are taken to effectively reflect character information; the third-order moment under the i-th color channel component of the character region is s_i = (Σ_j (j - μ_i)^3 p_{i,j})^{1/3}, where p_{i,j} is the probability of occurrence of a pixel of gray level j in the i-th color channel component and μ_i = Σ_j j · p_{i,j} is the mean of that component;
The texture feature acquisition method comprises the following steps:
describing static texture complexity by a gray-level difference statistical method: let (x, y) and (x+Δx, y+Δy) be two pixel points in the character region, and let the gray-level difference between them be Δg(x, y) = g(x, y) - g(x+Δx, y+Δy), where g(x, y) is the gray level of pixel (x, y) and g(x+Δx, y+Δy) is the gray level of pixel (x+Δx, y+Δy); let all possible values of the gray-level difference span m levels; pixel (x, y) is moved over the given character region, the number of times Δg(x, y) takes each value is accumulated, and a histogram of Δg(x, y) is made, from which the probability Δp(i) of each difference value i is obtained; the angular second moment ASM = Σ_i [Δp(i)]^2 of the difference histogram is extracted to reflect the uniformity of the gray distribution and hence the coarseness of the texture.
4. The method for identifying information of steel coil as set forth in claim 1, wherein in step 4, digit identification based on the network architecture improved from LeNet-5 converts the image state characteristics of the steel coil at the position coordinates into character values, and the network architecture based on the LeNet-5 improvement comprises: an input layer for inputting a digital image; a first convolution layer, which performs a convolution operation on the digital image through a convolution window to extract the intrinsic features of the digital image input by the input layer; a first sampling layer, which performs a down-sampling operation on the feature images of the first convolution layer by a max-pooling method to obtain feature images; a second convolution layer, whose convolution filters are connected to the feature images of the first sampling layer; a second sampling layer, obtained by down-sampling the feature images of the second convolution layer; and a third convolution layer, obtained by convolving the feature images of the second sampling layer, the third convolution layer and the second sampling layer being connected in a full connection mode, that is, each convolution filter of the third convolution layer convolves the feature images of the second sampling layer; after these steps, the image is reduced to single-pixel feature images to be classified; the third convolution layer is connected with the final output layer in a full connection mode, the output result is a one-dimensional vector, and the position of the maximum component in the vector is the final classification result output by the network model.
5. The method for identifying steel coil information as recited in claim 4, wherein during training of said LeNet-5-improvement-based network architecture, sparse cross entropy is used as the loss function and the network is trained by back-propagation with an adaptive gradient descent algorithm, the cost function for a single sample being as follows:
J(W, b; x, y) = (1/2) ||h_{W,b}(x) - y||^2    (1)

equation (1) is a variance cost function, where W represents the weight parameters, b represents the intercept terms, (x, y) represents a single sample, and h_{W,b}(x) represents a complex nonlinear hypothesis model;
for a data set containing m samples, the overall cost function is defined as:

J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ||h_{W,b}(x^{(i)}) - y^{(i)}||^2 + (λ/2) Σ_{l=1}^{n_l-1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (W_{ji}^{(l)})^2    (2)

in equation (2), the former term is the mean square error term of the cost function J(W, b), and the latter term is a weight-decay term used to reduce the weight magnitudes and prevent overfitting; in equation (2), (x^{(i)}, y^{(i)}) represents the i-th sample, n_l represents the total number of layers of the neural network, l indexes a layer, λ is the weight-decay coefficient, W_{ji}^{(l)} represents the connection weight between the j-th node of layer l and the i-th node of layer l+1, and s_l represents the number of nodes of layer l;
in each iteration of the network training, the gradient descent method is used to minimize the cost function and fine-tune the parameters W and b:

W_{ij}^{(l)} := W_{ij}^{(l)} - α ∂J(W, b)/∂W_{ij}^{(l)}    (3)

b_i^{(l)} := b_i^{(l)} - α ∂J(W, b)/∂b_i^{(l)}    (4)

in equations (3) and (4), α is the learning rate and b_i^{(l)} represents the intercept term of the i-th node of layer l+1;
for each node i of the output layer n_l, the node residual is solved using the following formula:

δ_i^{(n_l)} = -(y_i - a_i^{(n_l)}) · f'(z_i^{(n_l)})    (5)

in equation (5), δ_i^{(n_l)} represents the residual of node i of output layer n_l, a_i^{(n_l)} represents the activation value of node i of layer n_l, y_i represents the desired output of node i, z_i^{(n_l)} represents the weighted input of node i of layer n_l, and f'(·) represents the derivative of the activation function;
the residual calculation formula of node i of a hidden layer l is as follows:

δ_i^{(l)} = (Σ_{j=1}^{s_{l+1}} W_{ji}^{(l)} δ_j^{(l+1)}) f'(z_i^{(l)})    (6)

in equation (6), δ_i^{(l)} represents the residual of node i of layer l, s_{l+1} represents the number of nodes of layer l+1, δ_j^{(l+1)} represents the residual of node j of layer l+1, W_{ji}^{(l)} represents the connection weight between node i of layer l and node j of layer l+1, and f'(z_i^{(l)}) represents the derivative of the activation function;
in vector form, the residual formulas of the layers are expressed as follows:

δ^{(l)} = ((W^{(l)})^T δ^{(l+1)}) ⊙ f'(z^{(l)})    (7)

the partial derivative calculation formulas are expressed as:

∂J(W, b; x, y)/∂W_{ij}^{(l)} = a_j^{(l)} δ_i^{(l+1)}    (8)

∂J(W, b; x, y)/∂b_i^{(l)} = δ_i^{(l+1)}    (9)

in equations (7), (8) and (9), a_j^{(l)} represents the activation value of node j of layer l;
the partial derivatives of the whole-sample cost function can be solved accordingly:

∂J(W, b)/∂W_{ij}^{(l)} = [(1/m) Σ_{i=1}^{m} ∂J(W, b; x^{(i)}, y^{(i)})/∂W_{ij}^{(l)}] + λ W_{ij}^{(l)}    (10)

∂J(W, b)/∂b_i^{(l)} = (1/m) Σ_{i=1}^{m} ∂J(W, b; x^{(i)}, y^{(i)})/∂b_i^{(l)}    (11)

in equations (10) and (11), λ is the weight-decay coefficient of equation (2);
all parameters W_{ji}^{(l)} and b_i^{(l)} are initialized to values close to zero, where W_{ji}^{(l)} represents the connection weight between the j-th node of layer l and the i-th node of layer l+1 and b_i^{(l)} represents the intercept term of the i-th node of layer l+1; iterating the updates yields the parameters W and b that minimize the cost function, the weight-decay term keeping the weight-induced error small during training.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910559952.XA CN110245663B (en) | 2019-06-26 | 2019-06-26 | Method for identifying steel coil information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110245663A CN110245663A (en) | 2019-09-17 |
CN110245663B true CN110245663B (en) | 2024-02-02 |
Family
ID=67889512
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910559952.XA Active CN110245663B (en) | 2019-06-26 | 2019-06-26 | Method for identifying steel coil information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110245663B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110598958B (en) * | 2019-10-10 | 2023-09-08 | 武汉科技大学 | Ladle hierarchical management analysis method and system |
CN110763692B (en) * | 2019-10-29 | 2022-04-12 | 复旦大学 | Belted steel burr detecting system |
CN113280730B (en) * | 2020-02-19 | 2022-08-16 | 宝钢日铁汽车板有限公司 | System and method for efficiently detecting strip head of steel coil |
CN111443666B (en) * | 2020-03-25 | 2022-08-09 | 唐山钢铁集团有限责任公司 | Intelligent tracking method for steel coil quality judgment parameters based on database model |
CN111898716B (en) * | 2020-07-31 | 2023-05-23 | 广东昆仑信息科技有限公司 | Method and system for automatically matching and tracking iron frame number and ladle number based on RFID (radio frequency identification) identification technology |
CN111968103B (en) * | 2020-08-27 | 2023-05-09 | 中冶赛迪信息技术(重庆)有限公司 | Steel coil interval detection method, system, medium and electronic terminal |
CN113269759A (en) * | 2021-05-28 | 2021-08-17 | 中冶赛迪重庆信息技术有限公司 | Steel coil information detection method, system, medium and terminal based on image recognition |
CN113701652B (en) * | 2021-09-23 | 2024-06-07 | 安徽工业大学 | Intelligent high-precision detection and defect diagnosis system for inner diameter of steel coil |
CN113920116B (en) * | 2021-12-13 | 2022-03-15 | 武汉市什仔伟业勇进印务有限公司 | Intelligent control method and system for color box facial tissue attaching process based on artificial intelligence |
CN114486913A (en) * | 2022-01-20 | 2022-05-13 | 宝钢湛江钢铁有限公司 | Method for detecting geometric characteristics of edge of steel coil |
CN114581911B (en) * | 2022-03-07 | 2023-04-07 | 柳州钢铁股份有限公司 | Steel coil label identification method and system |
EP4350539A1 (en) * | 2022-10-04 | 2024-04-10 | Primetals Technologies Germany GmbH | Method and system for automatic image-based recognition of identification information on an object |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4796209A (en) * | 1986-06-26 | 1989-01-03 | Allegheny Ludlum Corporation | Random inventory system |
JPH0724528A (en) * | 1993-07-12 | 1995-01-27 | Nippon Steel Corp | Steel belt coil with longitudinal position mark |
KR101482466B1 (en) * | 2013-12-24 | 2015-01-13 | 주식회사 포스코 | Strip winding apparatus for hot rolling line and method of the same |
CN204680056U (en) * | 2015-05-22 | 2015-09-30 | 宝鸡石油钢管有限责任公司 | A kind of coil of strip information identification and positioning system |
KR20170074306A (en) * | 2015-12-21 | 2017-06-30 | 주식회사 포스코 | Strip winding apparatus |
CN206340020U (en) * | 2016-11-22 | 2017-07-18 | 柳州钢铁股份有限公司 | Coil of strip information automatic recognition system |
CN109344825A (en) * | 2018-09-14 | 2019-02-15 | 广州麦仑信息科技有限公司 | A kind of licence plate recognition method based on convolutional neural networks |
CN109447908A (en) * | 2018-09-25 | 2019-03-08 | 上海大学 | A kind of coil of strip recognition positioning method based on stereoscopic vision |
CN109635797A (en) * | 2018-12-01 | 2019-04-16 | 北京首钢自动化信息技术有限公司 | Coil of strip sequence precise positioning method based on multichip carrier identification technology |
Non-Patent Citations (1)
Title |
---|
Steel coil detection technology based on stereo vision; 郑庆元; 周思跃; 陈金波; 林万誉; 计量与测试技术 (Metrology & Measurement Technique), No. 05; full text *
Also Published As
Publication number | Publication date |
---|---|
CN110245663A (en) | 2019-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110245663B (en) | Method for identifying steel coil information | |
CN111292305B (en) | Improved YOLO-V3 metal processing surface defect detection method | |
CN110314854B (en) | Workpiece detecting and sorting device and method based on visual robot | |
CN106548182B (en) | Pavement crack detection method and device based on deep learning and main cause analysis | |
CN106886216B (en) | Robot automatic tracking method and system based on RGBD face detection | |
CN103593670B (en) | A kind of copper plate/strip detection method of surface flaw based on online limit of sequence learning machine | |
CN109559324B (en) | Target contour detection method in linear array image | |
CN102708691B (en) | False license plate identification method based on matching between license plate and automobile type | |
CN113962274B (en) | Abnormity identification method and device, electronic equipment and storage medium | |
CN110942450A (en) | Multi-production-line real-time defect detection method based on deep learning | |
CN112528979B (en) | Transformer substation inspection robot obstacle distinguishing method and system | |
CN109598200B (en) | Intelligent image identification system and method for molten iron tank number | |
CN112465706A (en) | Automatic gate container residual inspection method | |
Zhao et al. | Toward intelligent manufacturing: label characters marking and recognition method for steel products with machine vision | |
CN113012228B (en) | Workpiece positioning system and workpiece positioning method based on deep learning | |
CN113469195A (en) | Target identification method based on self-adaptive color fast point feature histogram | |
CN111539951B (en) | Visual detection method for outline size of ceramic grinding wheel head | |
CN117115249A (en) | Container lock hole automatic identification and positioning system and method | |
CN115620121A (en) | Photoelectric target high-precision detection method based on digital twinning | |
CN116309270A (en) | Binocular image-based transmission line typical defect identification method | |
CN111951334B (en) | Identification and positioning method and lifting method for stacked billets based on binocular vision technology | |
CN115330751A (en) | Bolt detection and positioning method based on YOLOv5 and Realsense | |
CN114943738A (en) | Sensor packaging curing adhesive defect identification method based on visual identification | |
CN210038833U (en) | Device for identifying information of steel coil | |
CN109993035B (en) | Human body detection method and device based on embedded system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||