CN110766075A - Tire area image comparison method and device, computer equipment and storage medium - Google Patents

Tire area image comparison method and device, computer equipment and storage medium

Info

Publication number
CN110766075A
CN110766075A (application CN201911011220.3A)
Authority
CN
China
Prior art keywords
image
tire
pattern feature
tire area
feature vector
Prior art date
Legal status
Pending
Application number
CN201911011220.3A
Other languages
Chinese (zh)
Inventor
周康明
罗余洋
Current Assignee
Kos Technology Shanghai Co Ltd
Shanghai Eye Control Technology Co Ltd
Original Assignee
Kos Technology Shanghai Co Ltd
Shanghai Eye Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Kos Technology Shanghai Co Ltd, Shanghai Eye Control Technology Co Ltd filed Critical Kos Technology Shanghai Co Ltd
Priority to CN201911011220.3A
Publication of CN110766075A
Status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of computer technology, and in particular to a tire area image comparison method and device, computer equipment and a storage medium. The method comprises the following steps: receiving a vehicle image sent by a terminal, and segmenting the vehicle image to obtain tire area images; inputting the plurality of tire area images into an image classification model to obtain the pattern feature vector corresponding to the pattern in each tire area image; combining the pattern feature vectors in pairs to obtain pattern feature vector pairs; cyclically traversing each pattern feature vector pair and calculating the matching degree between the vectors of each pair; and when the matching degree for every pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of the tire area images is successful. The method improves the efficiency of judging the consistency of tire area images.

Description

Tire area image comparison method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for comparing tire zone images, a computer device, and a storage medium.
Background
In the process of vehicle safety technology inspection, the tires of a vehicle need to be inspected: the consistency of the different tires on the vehicle is judged in order to check whether the vehicle has been modified. The accuracy of this judgment plays an important role in the efficiency of the whole vehicle inspection.
In the prior art, the consistency of vehicle tires is judged by comparing tire area images with the human eye: the tire area images are first captured, and the human eye then compares different tire area images for consistency. However, existing tires come in many types, the patterns on different types of tires differ, and surface pattern texture may even be lost through wear, so human-eye comparison is inefficient. Although some image processing methods exist for judging tire consistency, such as image gray histogram analysis, Fourier transform analysis, edge analysis, difference detection and wavelet transform, these conventional image processing and recognition methods have low accuracy when detecting fine detail such as tire patterns.
Disclosure of Invention
In view of the above, it is necessary to provide a tire area image comparison method, a tire area image comparison apparatus, a computer device and a storage medium that can improve the efficiency of tire area image comparison.
A method of tire area image comparison, the method comprising:
receiving a vehicle image sent by a terminal, and segmenting the vehicle image to obtain a tire area image;
inputting a plurality of tire area images into an image classification model to obtain pattern feature vectors corresponding to patterns in each tire area image;
combining every two pattern feature vectors to obtain a pattern feature vector pair;
circularly traversing each pattern feature vector pair, and calculating the matching degree between each pattern feature vector pair;
and when the matching degree between each pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of each tire area image is successful.
In one embodiment, the segmenting the vehicle image into the tire region image includes:
inputting the vehicle image into an image segmentation model, and identifying the category of pixels in the vehicle image according to pre-trained characteristic parameters through the image segmentation model to obtain a category identification image;
and carrying out bitwise and operation on the category identification image and the vehicle image to obtain a tire area image.
In one embodiment, the method for generating the image segmentation model includes:
acquiring a standard category identification image and a prediction category identification image;
traversing each pixel in the prediction category identification image and the standard category identification image by using an attention loss function, and calculating a loss value between the prediction category identification image and the standard category identification image according to the attention loss function;
when the loss value is smaller than a preset value, acquiring a corresponding characteristic parameter;
and obtaining an image segmentation model according to the characteristic parameters.
In one embodiment, the traversing pixels in the prediction class identification image and the standard class identification image by using the attention loss function, and calculating a loss value therebetween according to the attention loss function includes:
traversing the tire prediction loss value corresponding to the pixel identified as the tire area type in the prediction type identification image and the standard type identification image;
adjusting the tire prediction loss value by using network parameters, so that the training process focuses more on the pixel points with prediction differences, and obtaining a tire adjustment loss value;
traversing the background prediction loss value corresponding to the pixel which is identified as the background area type in the prediction type identification image and the standard type identification image;
and obtaining a loss value corresponding to the attention loss function according to the tire adjustment loss value and the background prediction loss value.
In one embodiment, the performing a bitwise AND operation on the category identification image and the vehicle image to obtain the tire area image includes:
acquiring position information of pixels in the tire area image;
extracting the position information forming a circumscribed rectangle as boundary position information;
and taking the boundary position information as the boundary of the tire area image, and obtaining the tire area image according to the boundary.
In one embodiment, the calculating the matching degree between each pattern feature vector pair includes:
and calculating cosine distances between the pattern feature vectors in the pattern feature vector pairs, and determining the matching degree according to numerical values corresponding to the cosine distances.
A method of tire area image comparison, the method comprising:
receiving a vehicle image sent by a terminal, and segmenting the vehicle image to obtain a tire area image;
inputting the tire area image into an image classification model to obtain pattern feature vectors corresponding to patterns in the tire area image;
selecting a corresponding comparison image for the tire area image;
obtaining a comparison pattern feature vector corresponding to the comparison image, and calculating the matching degree between the pattern feature vector and the comparison pattern feature vector;
and when the matching degree is smaller than a preset threshold value, judging that the tire area image is successfully compared with the comparison image.
A tire area image comparison apparatus, the apparatus comprising:
the area image acquisition module is used for receiving a vehicle image sent by a terminal and segmenting the vehicle image to obtain a tire area image;
the vector extraction module is used for inputting the tire area images into an image classification model to obtain pattern feature vectors corresponding to patterns in the tire area images;
the vector pair obtaining module is used for combining every two of the pattern feature vectors to obtain a pattern feature vector pair;
the matching degree calculation module is used for circularly traversing each pattern feature vector pair and calculating the matching degree between each pattern feature vector pair;
and the judging module is used for judging that the comparison of the images of the tire areas is successful when the matching degree between the pattern feature vector pairs is smaller than a preset threshold value.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the above method when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
The tire area image comparison method, the tire area image comparison device, the computer equipment and the storage medium receive the vehicle image sent by the terminal, and divide the vehicle image to obtain the tire area image; the tire area images are input into the image classification model to obtain the pattern feature vectors corresponding to the patterns in the tire area images, so that the automatic extraction of the pattern feature vectors corresponding to the patterns in the tire area images by using the trained classification model is realized, and the extraction efficiency of the pattern feature vectors is improved. Combining the pattern feature vectors in pairs to obtain pattern feature vector pairs; circularly traversing each pattern feature vector pair, and calculating the matching degree between each pattern feature vector pair; and when the matching degree between each pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of each tire area image is successful. According to the pattern feature vector, the automatic comparison of the tire area images is realized, and the efficiency of judging the tire area image consistency is improved.
Drawings
FIG. 1 is a diagram illustrating an exemplary application of a tire area image comparison method;
FIG. 2 is a schematic flow chart diagram illustrating a method for comparing tire area images in one embodiment;
FIG. 3 is a schematic flow chart diagram illustrating a tire area image comparison method in accordance with another embodiment;
FIG. 4 is a schematic view of a tire area image acquisition method according to one embodiment;
FIG. 5 is a diagram illustrating a network structure of an image classification model according to an embodiment;
FIG. 6 is a flowchart illustrating a method for calculating attention loss function according to an embodiment;
FIG. 7 is a schematic diagram of a tire region image comparison module in one embodiment;
FIG. 8 is a diagram illustrating a tire area image comparison module according to one embodiment;
FIG. 9 is a schematic flow chart diagram illustrating a tire area image comparison method in accordance with another embodiment;
FIG. 10 is a block diagram of a tire area image comparison device according to the embodiment shown in FIG. 2;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The tire area image comparison method provided by the application can be applied to the application environment shown in fig. 1. Wherein a user terminal 102 communicates with a server 104 over a network. The server 104 receives the vehicle image sent by the user terminal 102, and the server 104 divides the vehicle image to obtain a tire area image; inputting the tire area images into an image classification model to obtain pattern feature vectors corresponding to patterns in the tire area images; combining the pattern feature vectors in pairs to obtain pattern feature vector pairs; circularly traversing each pattern feature vector pair, and calculating the matching degree between each pattern feature vector pair; and when the matching degree between each pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of each tire area image is successful. Further, a message that the comparison is successful may be pushed to the user terminal 102.
The user terminal 102 may be, but is not limited to, any of various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster composed of a plurality of servers. When the server 104 is an independent server, a plurality of databases may be deployed in the server 104, each of which may store a plurality of types of tire area images; when the server 104 is a server cluster composed of a plurality of servers, the database deployed in each server may store a plurality of types of tire area images.
In one embodiment, as shown in fig. 2, a flow chart of a tire area image comparison method is provided, which is illustrated by applying the method to the server 104 in fig. 1, and the method includes the following steps:
and step S210, receiving the vehicle image transmitted by the terminal, and dividing the vehicle image to obtain a tire area image.
The vehicle image may be an image captured by the terminal user and may include a tire area and a background area, where the tire area is the target area and the background area, that is, the non-tire area, is the non-target area. The server may segment the vehicle image with an image segmentation model to obtain the tire area image.
Step S220, inputting the tire region images into the image classification model to obtain pattern feature vectors corresponding to the patterns in the tire region images.
The tire area image is an image formed by tire area elements in the vehicle image, and the tire area image includes a tire pattern or the like representing a tire category. The tire can be uniquely identified by the pattern feature vector corresponding to the pattern in the tire area image. The server compares the pattern characteristic vectors corresponding to different tire area images, calculates the matching degree between the two, and judges whether the patterns on the tire area images and the comparison images are consistent or not according to the matching degree value.
The image classification model can be a machine learning model trained in advance: the model has already learned the classification parameters for classifying images and recognizes image categories according to these parameters. Specifically, the server inputs the acquired tire region image into the pre-trained image classification model, which extracts the feature vector of the image according to the pre-learned classification parameters and classifies the image according to that feature vector. Further, the pattern features in a tire region image can represent the category of the tire; by extracting the pattern feature vector corresponding to the pattern in each tire region image, the image classification model realizes category identification of each tire region image from its pattern feature vector.
The image classification model can be a pre-trained deep learning model, and the training process for the image classification model can include: and taking the tire area image and the pattern category corresponding to the pattern in the tire area image as training samples, inputting a deep learning model, and learning the relation between the tire area image and the pattern category by using the deep learning model to obtain an image classification model.
For example, the deep learning model may be a CNN network model, specifically a VGG16 model, and a loss function is used to drive training of the VGG16 model; specifically, the function is as shown in formula (1).
$$L(S_j) = -\log(S_j), \qquad S_j = \frac{e^{a_j}}{\sum_{i=1}^{M} e^{a_i}} \tag{1}$$
In formula (1), M is the total number of tire pattern categories, $S_j$ is the predicted probability of the j-th category, and $a_i$ denotes the network output for the i-th category. Specifically, the server inputs the tire area image into the trained image classification model to obtain the pattern category corresponding to the pattern on the tire.
Further, the server sequentially passes all the tire area images through the trained VGG16 model, and extracts the pattern feature vector of the penultimate layer in each tire area image. The server can judge the matching degree between the pattern feature vectors according to the distance by calculating the distance between the pattern feature vectors.
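For illustration only, the following minimal sketch shows how such a penultimate-layer pattern feature vector could be extracted from a VGG16 model; it assumes a PyTorch/torchvision implementation and weights already fine-tuned on tire pattern categories, neither of which is specified by this application:

    import torch
    from torchvision import models

    vgg = models.vgg16()  # weights assumed fine-tuned on tire pattern categories
    vgg.eval()

    def extract_pattern_feature(image_batch: torch.Tensor) -> torch.Tensor:
        """Return the activation of the second-to-last classifier layer
        as the pattern feature vector (shape: N x 4096)."""
        with torch.no_grad():
            x = vgg.features(image_batch)   # convolutional backbone
            x = vgg.avgpool(x)
            x = torch.flatten(x, 1)
            for layer in list(vgg.classifier.children())[:-1]:
                x = layer(x)                # stop before the final class-score layer
        return x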
And step S230, combining the pattern feature vectors in pairs to obtain pattern feature vector pairs.
When the vehicle is provided with a plurality of tires, the server acquires a plurality of tire area images and obtains tire pattern feature vectors corresponding to the tire area images, and the server combines the pattern feature vectors in pairs to obtain pattern feature vector pairs.
And step S240, circularly traversing each pattern feature vector pair, and calculating the matching degree between each pattern feature vector pair.
The server cyclically traverses each pattern feature vector pair in turn, calculates the distance between the pattern feature vectors in each pair, and obtains the matching degree from the distance. For example, if the number of tire area images received by the server is 4, the server obtains the 4 corresponding pattern feature vectors, combines them in pairs to obtain 6 pattern feature vector pairs, and cyclically traverses the 6 pairs to obtain the matching degree corresponding to each pair, and thus the matching degree between the tire pattern area images. In this way the tire area images corresponding to the different tires on the vehicle are compared, which is used to judge the consistency among the tires of the vehicle.
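A minimal sketch of this pairwise traversal follows; it assumes the matching degree is the cosine distance between feature vectors (as in the embodiment of fig. 3) and the 0.3 threshold used in a later embodiment, and the function names are illustrative rather than part of this application:

    import itertools
    import numpy as np

    def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def tires_consistent(pattern_features: list, threshold: float = 0.3) -> bool:
        """Cycle through every pattern feature vector pair; 4 tires yield
        C(4, 2) = 6 pairs. All pairs must match for the comparison to succeed."""
        for fa, fb in itertools.combinations(pattern_features, 2):
            if cosine_distance(fa, fb) >= threshold:
                return False  # at least one pair exceeds the preset threshold
        return True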
And step S250, when the matching degree between each pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of each tire area image is successful.
The similarity between different patterns can be judged according to the value of the matching degree. And the server judges whether the patterns on the tires are consistent according to the calculated matching degree value, and further judges whether the tires are consistent. And when the matching degree is smaller than a preset threshold value, judging that the patterns corresponding to different tire area images are completely consistent, and judging that the tire is qualified, otherwise, judging that the tire is unqualified.
It should be noted that tires of the same type carry the same pattern type. When none of the tires is worn, or the degree of wear is small, the matching degree between the pattern feature vectors corresponding to the tires is smaller than the preset threshold, and the server determines that the tire comparison is successful. If tires of the same pattern type are worn, the degree of wear is large, or the degrees of wear differ, the matching degree between the corresponding pattern feature vectors may be larger than the preset threshold, and the server determines that the tire comparison is unsuccessful; the main consideration here is that tires with different degrees of wear pose a potential safety hazard in actual operation.
In this embodiment, the server judges pattern consistency by calculating the matching degree between the pattern feature vectors of the tire area images corresponding to different tires, and thereby judges tire consistency. Consistency is judged not by identifying the category of each tire but by acquiring the pattern feature vector corresponding to the pattern in each tire area image; compared with category judgment, the feature vector characterizes the tire more accurately, which further improves the accuracy of the consistency judgment. The tires of the vehicle are combined in pairs, and consistency is judged by comparing the matching degree of the pattern feature vectors of different tires. Automatic extraction of the pattern feature vectors is achieved through the image classification model, so consistency between tires is judged quickly and accurately.
In one embodiment, segmenting the vehicle image into tire region images includes: inputting the vehicle image into an image segmentation model, and identifying the category of pixels in the vehicle image according to pre-trained characteristic parameters through the image segmentation model to obtain a category identification image; and carrying out bitwise and operation on the category identification image and the vehicle image to obtain a tire area image.
Specifically, the server receives the vehicle image sent by the terminal, inputs the vehicle image into the image segmentation model, and segments the tire area in the vehicle image through the image segmentation model. The image segmentation model has learned in advance the correspondence between the pixels in a vehicle image and the pixel classes. The vehicle image is input into the pre-trained image segmentation model, which identifies the class of each pixel in the vehicle image according to the pre-trained characteristic parameters to obtain a category identification image.
The category identification image stores a category corresponding to each pixel in the vehicle image, wherein the category of the pixel may be a tire area, or the category of the pixel may be a background area. For example, the category identification image is an image composed of flag bits, the identification bit corresponding to the pixel of the tire area is 1, and the identification bit corresponding to the pixel of the background area is 0.
The pixel values in the category identification image identify the categories of the different areas in the vehicle image. The pixels of the category identification image and the pixels of the current vehicle image are combined in turn by a bitwise AND operation, and the background area in the vehicle image is removed to obtain the tire area image. Specifically, the server reads the identification bits in the category identification image, extracts the positions whose identification bits are tire identifications, extracts the pixels at the corresponding positions in the vehicle image as tire area pixels, and generates the tire area image from those pixels.
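For illustration, a sketch of this bitwise AND step using OpenCV follows; the flag-bit convention (1 for tire pixels, 0 for background) matches the example above, while the function name is hypothetical:

    import cv2
    import numpy as np

    def extract_tire_region(vehicle_image: np.ndarray, category_mask: np.ndarray) -> np.ndarray:
        """Keep only the pixels whose identification bit marks the tire area;
        background pixels are zeroed out."""
        mask_u8 = (category_mask > 0).astype(np.uint8) * 255
        return cv2.bitwise_and(vehicle_image, vehicle_image, mask=mask_u8)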
As shown in fig. 3, a schematic flow chart of a tire area image comparison method in an embodiment is provided, which includes:
in step S310, an input image is acquired. The input image can be a vehicle image corresponding to a vehicle to be detected, the server inputs the acquired vehicle image into an image segmentation model, and the image segmentation model is used for identifying the category of pixels in the vehicle image to obtain a category identification image with pixel precision.
In step S320, a tire area is acquired. The category to which each pixel belongs is identified in the category identification image, and the server can obtain the tire area image according to the category identification image.
In step S330, it is determined whether the number of tires satisfying the requirement is greater than one. The server determines the number of tire area images acquired from the vehicle image; if the number is not greater than one, the tire area image comparison procedure ends, and if it is greater than one, the process proceeds to step S340.
Step S340, sequentially acquiring tire area images. And the server sequentially divides each tire area image from the vehicle image according to the category identification image.
Step S350, tire region features are sequentially extracted. And the server sequentially inputs the divided tire area images into the image classification model, and sequentially extracts the pattern characteristic vectors corresponding to the tire area images by using the image classification model.
And step S360, comparing the features in pairs to obtain the cosine distances. Specifically, the server combines the pattern features in pairs, calculates the corresponding cosine distance for each combination, and compares the cosine distances with the set threshold.
In step S370, it is determined whether at least one group of cosine distances is greater than a set threshold. And when the server judges that at least one group of cosine distances are larger than the set threshold value, judging that the tire patterns are inconsistent, and otherwise, judging that the patterns in the tire area image are consistent.
Referring to fig. 4, a method of acquiring an image of a tire area is provided. Including a vehicle image 410, a category identification image 420, and a tire area image 430 obtained by bitwise and operations based on the vehicle image and the category identification image.
In this embodiment, the image segmentation model performs class identification on pixels in the vehicle image to generate a class identification image, performs bitwise and operation processing on the class identification image and the vehicle image, extracts pixels at corresponding positions in the vehicle image as tire region pixels only when an identification bit in the class identification image is 1, and further segments the tire region image from the vehicle image to realize accurate segmentation of the target image.
In one embodiment, a method for generating an image segmentation model includes: acquiring a standard category identification image and a prediction category identification image; traversing each pixel in the prediction category identification image and the standard category identification image by using an attention loss function, and calculating a loss value between the prediction category identification image and the standard category identification image according to the attention loss function; when the loss value is smaller than a preset value, acquiring a corresponding characteristic parameter; and obtaining an image segmentation model according to the characteristic parameters.
The attention loss function is an objective function for optimizing the image segmentation model, and is used for evaluating the difference degree between the predicted value and the actual value of the image segmentation model. The process of training or optimizing the image segmentation model is the process of minimizing the loss function, the smaller the loss function is, the closer the predicted value of the image segmentation model is to the true value, and the better the accuracy of the model is. The attention loss function can be a square loss function, a logarithmic loss function, a cross entropy loss function, and other loss functions.
As shown in fig. 5, a network structure diagram of the image segmentation model is provided. The image segmentation model consists of an encoding network module, a decoding network module and an attention loss module. The encoding network module consists of five submodules, each comprising four parts: convolution, batch normalization, a linear rectification activation function and maximum pooling. In one embodiment, the feature map length, width and channel number of the five submodules are (224, 224, 64), (112, 112, 128), (56, 56, 128), (28, 28, 256) and (14, 14, 256), respectively. The decoding network module is likewise composed of five submodules, each comprising four parts: convolution, batch normalization, a linear rectification activation function and upsampling; the feature map length, width and channel number of these submodules are (14, 14, 256), (28, 28, 256), (56, 56, 128), (112, 112, 128) and (224, 224, 64), respectively. Finally, training is driven through the attention loss module. For example, the image segmentation model may be based on a semantic segmentation model in deep learning, such as the SegNet model.
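For illustration, the submodule structure described above might be rendered as follows in PyTorch; the 3x3 kernel size, padding and bilinear upsampling are assumptions, since this application does not specify them:

    import torch.nn as nn

    def encoder_block(in_ch: int, out_ch: int) -> nn.Sequential:
        # convolution, batch normalization, linear rectification, max pooling
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )

    def decoder_block(in_ch: int, out_ch: int) -> nn.Sequential:
        # convolution, batch normalization, linear rectification, upsampling
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        )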
In the process of training the training samples to generate the image segmentation model, the server obtains a plurality of intermediate segmentation models, where different intermediate segmentation models correspond to different segmentation characteristic parameters. In order to select the optimal image segmentation model from the intermediate segmentation models, the server obtains a verification set comprising verification vehicle images and verification category identification images. The verification vehicle images are input into the intermediate segmentation models to obtain the category identification images to be verified output by each intermediate segmentation model, and similarity comparison is performed between these images and the verification category identification images to obtain the classification accuracy value corresponding to each intermediate segmentation model. The classification accuracy can be calculated through the attention loss function in the image segmentation model, and when the server judges that the loss value is smaller than a preset value, the corresponding intermediate segmentation model is extracted as the image segmentation model.
In this embodiment, the loss value corresponding to each intermediate segmentation model is calculated by the attention loss function, and further, the optimal image segmentation model can be selected from the intermediate segmentation models corresponding to the plurality of characteristic parameters, and the optimal image segmentation model is used, so that the efficiency of segmenting the vehicle image by the model is improved.
The server divides the vehicle image into a tire area image and a background area image using the image segmentation model. In order to evaluate the segmentation accuracy of the image segmentation model, the method further comprises calculating the loss values corresponding to the different segmentation areas using the attention loss function, where the calculation formula of the loss value is as shown in formula (2).
$$L(S_j) = \begin{cases} -\alpha (1 - S_j)^{\gamma} \log(S_j), & j = 1 \ \text{(tire region)} \\ -\log(S_j), & j = 0 \ \text{(background region)} \end{cases} \tag{2}$$

$$S_j = \frac{e^{a_j}}{\sum_{i=1}^{T} e^{a_i}} \tag{3}$$
Formula (2) is the calculation formula of the attention loss function, where $S_j$ is the predicted probability of the j-th class, and α and γ are fixed coefficients, for example α = 4 and γ = 1.
Specifically, when a pixel is classified as the tire region, j = 1 and the corresponding loss function is $L(S_j) = -\alpha(1 - S_j)^{\gamma}\log(S_j)$; when the pixel class is the background region, j = 0 and the corresponding loss function is $L(S_j) = -\log(S_j)$.
In formula (3), T represents the total number of categories and may be set to 2, and $a_i$ denotes the network output for the i-th category.
As shown in fig. 6, a schematic diagram of a calculation method corresponding to the attention loss function is provided. In one embodiment, traversing pixels in the prediction class identification image and the standard class identification image using an attention loss function, calculating a loss value therebetween from the attention loss function, comprising:
step S610, traversing the tire prediction loss values corresponding to the pixels identified as the tire area category in the prediction category identification image and the standard category identification image.
Specifically, the server obtains the standard class identification image and the prediction class identification image obtained using the image segmentation model, and calculates the loss value between them according to the attention loss function, recorded as $L(S_j)$.
Step S620, adjusting the tire prediction loss value by using the network parameters, so that the training process focuses more on the pixel points with prediction differences, and obtaining the tire adjustment loss value.
Specifically, the server traverses each pixel in the prediction class identification image and the standard class identification image, and when the pixel class is the tire prediction area, calculates the loss value according to the formula $L(S_j) = -\alpha(1 - S_j)^{\gamma}\log(S_j)$. In this formula, the modulating factor $-\alpha(1 - S_j)^{\gamma}$ makes the training process pay more attention to the pixel points with prediction differences, yielding the tire adjustment loss value.
More specifically, the factor γ > 0 reduces the loss contributed by easily classified pixels, making the training process focus more on the difficult, wrongly classified pixels of the target region. For example, for an easy sample whose predicted probability is 0.95, $(1 - 0.95)^{\gamma}$ is small and the loss value becomes small; conversely, the loss value corresponding to a predicted probability of 0.3 is relatively large. When γ = 0 the loss degenerates into the cross entropy loss function, and as γ increases the influence of the modulating factor also increases.
In addition, a balance factor α is added to balance the inherent imbalance between the target region and the non-target region, where α takes the value 4: the target region occupies fewer pixels than the non-target region and the non-target region is easier to separate, so α is added to balance the importance of the two regions.
Step S630, traversing the background prediction loss value corresponding to the pixel marked as the background area category in the prediction category marked image and the standard category marked image;
Specifically, referring to formula (2), as the server traverses each pixel in the prediction class identification image and the standard class identification image, when the pixel class corresponds to the background region the loss value is calculated according to the formula $L(S_j) = -\log(S_j)$.
In step S640, a loss value corresponding to the attention loss function is obtained according to the tire adjustment loss value and the background predicted loss value.
The server traverses each pixel in the prediction category identification image and the standard category identification image, and selects the corresponding loss formula according to the category of the pixel to obtain the loss value. Specifically, the loss value may be obtained by taking a weighted average of the tire adjustment loss value and the background prediction loss value; in other embodiments, a variance calculation or the like may also be used, which is not limited herein.
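A sketch of the overall loss computation of formula (2) is shown below; it assumes per-pixel probabilities of the labelled class are available, takes the background term as the plain cross-entropy $-\log(S_j)$, and combines the per-pixel losses by a simple mean, one of the combinations contemplated above:

    import torch

    def attention_loss(S: torch.Tensor, labels: torch.Tensor,
                       alpha: float = 4.0, gamma: float = 1.0) -> torch.Tensor:
        """Per-pixel attention loss of formula (2): a focal-style term for
        tire pixels (j = 1) and plain cross-entropy for background pixels (j = 0).
        S holds the predicted probability of each pixel's labelled class, in (0, 1)."""
        tire_loss = -alpha * (1.0 - S) ** gamma * torch.log(S)
        background_loss = -torch.log(S)
        per_pixel = torch.where(labels == 1, tire_loss, background_loss)
        return per_pixel.mean()  # a simple mean; weighting schemes are also possible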
In this embodiment, the tire prediction loss value is adjusted by using the network parameters, so that the efficiency and accuracy of the image segmentation model for performing the class identification on the pixels in the vehicle image are improved, the adjustment coefficient is increased, the attention of the model to the target area is also improved, the image segmentation model focuses more on the pixel class identification of the target area, and the training efficiency and the training accuracy of the model are improved.
In one embodiment, performing the bitwise AND operation on the category identification image and the current vehicle image to obtain the tire area image includes: acquiring position information of pixels in the tire area image; extracting the position information forming the circumscribed rectangle as boundary position information; and taking the boundary position information as the boundary of the tire area image and obtaining the tire area image according to the boundary.
In order to input the obtained tire area image into the image classification model and correctly extract the pattern feature vector, the method further comprises, after performing the bitwise AND operation on the category identification image and the vehicle image to obtain the tire area image, preprocessing the tire area image. The preprocessing may include processing the size of the tire area image, for example regularizing a tire area image with irregular boundaries to obtain one with regular boundaries.
As shown in fig. 4, the obtained tire area image 430 has irregular boundaries, and this irregular image needs to be regularized to obtain a regular tire area image. The regularization may include extracting the maximum circumscribed rectangle of the tire area image, since the maximum circumscribed rectangle contains the most pattern features of the tire area image. In other embodiments the tire area image may be extracted in other ways, as long as the extracted image contains enough of the pattern for the pattern feature vector to be extracted.
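The circumscribed-rectangle regularization can be sketched as follows; the function assumes the category identification mask is available alongside the tire area image, and the names are illustrative:

    import numpy as np

    def crop_to_circumscribed_rect(tire_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Extract the boundary position information of the tire pixels and
        crop the irregular tire area image to its circumscribed rectangle."""
        ys, xs = np.where(mask > 0)  # position information of the tire pixels
        top, bottom = ys.min(), ys.max()
        left, right = xs.min(), xs.max()
        return tire_image[top:bottom + 1, left:right + 1]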
As shown in fig. 7, a schematic diagram of a tire area image comparison module in one embodiment is provided, including: an original image input module 710, a tire region acquisition module 720, and a tire pattern comparison module 730.
Fig. 8 provides a schematic diagram of the tire region image comparison module; refer to fig. 7 and fig. 8 together. The original image input module 710 is configured to receive a vehicle image. The tire region obtaining module 720 includes a tire segmentation unit and a tire region obtaining unit, where the tire segmentation unit relates to an image segmentation model with strong attention to the tire region; the model is an improvement on the SegNet network structure, and its output is pixel-precision tire region coordinates, that is, a category identification image. The image segmentation model is obtained as follows: the server obtains original vehicle images of tire patterns from vehicle inspection, marks the tire pattern areas in the images with polygons, and drives model training with the attention loss function at a learning rate of 0.001 to obtain the corresponding image segmentation model.
The tire area acquisition unit is configured so that the server processes the vehicle image according to the obtained image segmentation model to acquire the corresponding tire area image. Specifically, a bitwise AND operation is performed on the category identification image output by the image segmentation model and the vehicle image to obtain a tire area image with the background filtered out.
The tire pattern comparison module 730 includes a tire feature extraction unit and a tire feature comparison unit. Specifically, the server uses the image segmentation model to extract the tire area from the vehicle image, and the tire feature extraction unit uses the image classification model to extract the pattern feature vector corresponding to the pattern in the tire, so as to complete pattern similarity comparison between different tires.
The tire feature extraction unit relates to an image classification model that can be used to extract the pattern feature vectors of tires. The model uses the VGG16 architecture, and the value of the last fully connected layer of the model is extracted as the pattern feature vector. The model is obtained as follows: tire region images are obtained using the tire-region strong-attention image segmentation model, the obtained tire region images are classified and stored according to pattern type, and model training is driven with the loss function at a learning rate of 0.001 to obtain the image classification model corresponding to pattern feature extraction.
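As an illustration, fine-tuning a VGG16 classifier on the stored pattern categories at the stated learning rate of 0.001 might look like the sketch below; the optimizer choice and the number of pattern classes are assumptions, not taken from this application:

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_PATTERN_CLASSES = 50  # assumed number of stored pattern categories

    model = models.vgg16()
    model.classifier[-1] = nn.Linear(4096, NUM_PATTERN_CLASSES)  # replace the class-score layer

    criterion = nn.CrossEntropyLoss()                  # softmax loss, as in formula (1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

    def train_step(images: torch.Tensor, pattern_labels: torch.Tensor) -> float:
        optimizer.zero_grad()
        loss = criterion(model(images), pattern_labels)
        loss.backward()
        optimizer.step()
        return loss.item()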
And processing the tire pattern area image according to the obtained image classification model to obtain the pattern feature vector corresponding to the corresponding tire pattern.
The tire pattern feature comparison unit compares the extracted pattern feature vectors using cosine distances: the tire pattern feature vectors are combined in pairs and the cosine distances are calculated in turn. When at least one pair has a distance value greater than the threshold, the tire patterns are considered inconsistent; otherwise the patterns are consistent. In one embodiment, the threshold value is 0.3.
In one embodiment, calculating a degree of match between each pattern feature vector pair includes: and calculating cosine distances between the pattern feature vectors in the pattern feature vector pairs, and determining the matching degree according to numerical values corresponding to the cosine distances.
Specifically, the server combines the obtained pattern feature vectors in pairs and then judges similarity using the cosine distance function; when the judged value is smaller than the threshold, the pair of tire patterns is considered consistent, otherwise inconsistent. For example, with a threshold of 0.3, when the judged value is less than 0.3 the pair of tire patterns is considered consistent, otherwise not. The cosine distance formula is shown in formula (4):
$$d(x_1, x_2) = 1 - \frac{x_1 \cdot x_2}{\lVert x_1 \rVert \, \lVert x_2 \rVert} \tag{4}$$

where $x_1$ and $x_2$ denote the pattern feature vectors of a pair of tire patterns.
In one embodiment, further comprising: acquiring a newly added target image and a newly added target image label; inputting the newly added target image and the newly added target image label into a classification machine learning model, learning a newly added characteristic relation between the newly added target image and the newly added target image label through the classification machine learning model, and updating the image classification model according to the newly added characteristic relation; and acquiring a current target image, inputting the current target image into the updated image classification model, and then continuously executing the step of judging whether the comparison between the current target image and the comparison image is successful.
In this embodiment, the obtained image classification model can be adapted to any existing vehicle, and when an unknown pattern type is encountered, only corresponding pattern data needs to be added to the image classification model for training, and the image classification model is updated, so that the method is easy to maintain and has high practical value.
As shown in fig. 9, there is provided a tire area image comparison method, including:
step S910, the vehicle image sent by the terminal is received, and the tire area image is obtained by dividing the vehicle image.
Specifically, the server may segment the vehicle image into a tire region image using the image segmentation model.
Step S920, inputting the tire region image into the image classification model to obtain a pattern feature vector corresponding to the pattern in the tire region image.
The server inputs the tire area image into a trained VGG16 model, and extracts the pattern feature vector of the next to last layer in the tire area image.
In step S930, a corresponding comparison image is selected for the tire region image.
In order to determine whether a tire is qualified, at least one comparison tire is selected and used as a standard tire. The server compares the tire area image corresponding to the tire with the comparison tire image corresponding to the comparison tire, and determines whether the current tire is qualified according to the comparison result.
The comparison image may be a standard comparison image stored in advance, for example, a tire image of an original tire stored in original configuration information of the vehicle. The selection mode of the comparison image can be suitable for the condition that the total number of tires of a vehicle is one, or the current tire and the original tire need to be compared.
Specifically, the server obtains the number of tires corresponding to a current vehicle, when the number of tires is multiple, the server extracts multiple tire area images in the vehicle image, then pattern feature vectors corresponding to patterns in the multiple tire area images are respectively obtained by using an image classification model, different pattern feature vectors are combined in pairs to obtain pattern feature vector pairs, each pattern feature vector pair is matched to obtain multiple matching values, and when each matching value is smaller than a preset threshold value, each tire is successfully matched.
In other words, the server acquires a plurality of tire area images, extracts any one of the tire area images as a comparison image, matches the pattern feature vector corresponding to the current tire area image with the comparison pattern feature vector corresponding to the comparison image to obtain a plurality of comparison matching values, and determines that the tire comparison is unqualified when any one of the comparison matching values is greater than a preset threshold value.
Step S940, obtaining a comparison pattern feature vector corresponding to the comparison image, and calculating a matching degree between the pattern feature vector and the comparison pattern feature vector.
Specifically, the server inputs the acquired comparison image into the classification model to obtain a comparison pattern feature vector through the classification model, and then calculates the cosine distance between the pattern feature vector and the comparison pattern feature vector.
And step S950, when the matching degree is smaller than a preset threshold value, judging that the comparison between the tire area image and the comparison image is successful.
The similarity between different patterns can be judged according to the value of the matching degree. And the server judges whether the patterns on the tires are consistent according to the calculated matching degree value, and further judges whether the tires are consistent. And when the matching degree is smaller than a preset threshold value, judging that the pattern types of the tire area image and the comparison image are consistent, and judging that the tire is qualified, otherwise, judging that the tire is unqualified.
In one embodiment, as shown in fig. 10, there is provided a tire area image comparing apparatus, including:
the area image acquiring module 1010 is configured to receive a vehicle image sent by a terminal, and segment the vehicle image to obtain a tire area image.
The vector extraction module 1020 is configured to input the tire region images into an image classification model, so as to obtain a pattern feature vector corresponding to a pattern in each tire region image.
And a vector pair obtaining module 1030, configured to combine every two of the pattern feature vectors to obtain a pattern feature vector pair.
And a matching degree calculation module 1040, configured to cycle through each pattern feature vector pair, and calculate a matching degree between each pattern feature vector pair.
The determining module 1050 is configured to determine that the comparison of the tire region images is successful when the matching degree between each pattern feature vector pair is smaller than a preset threshold.
In one embodiment, the area image acquisition module 1010 includes:
and the identification image acquisition unit is used for inputting the vehicle image into an image segmentation model so as to identify the category of the pixels in the vehicle image according to the pre-trained characteristic parameters through the image segmentation model to obtain a category identification image.
And the area image acquisition unit is used for performing a bitwise AND operation on the category identification image and the vehicle image to obtain a tire area image.
In one embodiment, the apparatus comprises:
and the prediction identification image acquisition module is used for acquiring the standard category identification image and the prediction category identification image.
And the loss value calculation module is used for traversing each pixel in the prediction category identification image and the standard category identification image by using the attention loss function and calculating the loss value between the prediction category identification image and the standard category identification image according to the attention loss function.
And the parameter acquisition module is used for acquiring corresponding characteristic parameters when the loss value is smaller than a preset value.
And the model acquisition module is used for acquiring an image segmentation model according to the characteristic parameters.
In one embodiment the loss value calculation module comprises:
and the tire loss value calculating unit is used for traversing the tire prediction loss values corresponding to the pixels marked as the tire area types in the prediction type identification image and the standard type identification image.
And the adjustment calculation unit is used for adjusting the tire prediction loss value by using the network parameters, so that the training process focuses more on the pixel points with prediction differences, and obtaining the tire adjustment loss value.
And the background loss value calculating unit is used for traversing the background prediction loss values corresponding to the pixels marked as the background region categories in the prediction category identification image and the standard category identification image.
And the loss value calculation unit is used for obtaining a loss value corresponding to the attention loss function according to the tire adjustment loss value and the background prediction loss value.
In one embodiment, the area image acquiring unit includes:
and the image calculation subunit is used for carrying out bitwise operation on the category identification image and the current vehicle image to obtain a tire area image.
And the position information acquisition subunit is used for acquiring the position information of the pixels in the tire area image.
And a boundary position information extraction subunit operable to extract the position information constituting the circumscribed rectangle as boundary position information.
And the image acquisition subunit is used for taking the boundary position information as the boundary of the tire area image and obtaining the tire area image according to the boundary.
In one embodiment, the matching degree calculating module 1040 includes:
And the distance calculation subunit is used for calculating cosine distances between the pattern feature vectors in the pattern feature vector pairs and determining the matching degree according to numerical values corresponding to the cosine distances.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data corresponding to the vehicle images. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a tire area image comparison method.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program: receiving a vehicle image sent by a terminal, and segmenting the vehicle image to obtain a tire area image; inputting a plurality of tire area images into an image classification model to obtain pattern feature vectors corresponding to patterns in each tire area image; combining every two pattern feature vectors to obtain a pattern feature vector pair; circularly traversing each pattern feature vector pair, and calculating the matching degree between each pattern feature vector pair; and when the matching degree between each pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of each tire area image is successful.
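For illustration only, the following is a minimal Python sketch of the comparison flow just described, assuming NumPy feature vectors. The helpers `segment_tire_regions` and `extract_pattern_vector` are hypothetical stand-ins for the image segmentation and image classification models, and the threshold value is an assumption rather than a value given in this disclosure.

```python
from itertools import combinations

import numpy as np

def compare_tires(vehicle_image, segment_tire_regions, extract_pattern_vector,
                  threshold=0.3):
    """Return True when every pattern feature vector pair matches."""
    tire_images = segment_tire_regions(vehicle_image)            # segmentation
    vectors = [extract_pattern_vector(img) for img in tire_images]
    # combine the pattern feature vectors two by two and traverse each pair
    for vec_a, vec_b in combinations(vectors, 2):
        cosine = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
        matching_degree = 1.0 - cosine     # cosine distance as the matching degree
        if matching_degree >= threshold:   # one failed pair fails the comparison
            return False
    return True                            # every pair was below the threshold
```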
In one embodiment, when executing the computer program, the processor further implements the step of segmenting the vehicle image to obtain the tire area image as follows: inputting the vehicle image into an image segmentation model, and identifying the category of the pixels in the vehicle image according to pre-trained characteristic parameters through the image segmentation model to obtain a category identification image; and carrying out a bitwise AND operation on the category identification image and the vehicle image to obtain the tire area image.
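A minimal sketch of the bitwise AND step, assuming the category identification image has been binarized into a uint8 mask in which tire pixels are 255 and background pixels are 0; that binary encoding is an assumption, not something stated in this disclosure.

```python
import cv2

def tire_area_image(vehicle_image, category_mask):
    # keep only the pixels the segmentation model labeled as tire;
    # everything outside the mask is zeroed out
    return cv2.bitwise_and(vehicle_image, vehicle_image, mask=category_mask)
```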
In one embodiment, when executing the computer program, the processor further implements the following steps for generating the image segmentation model: acquiring a standard category identification image and a prediction category identification image; traversing each pixel in the prediction category identification image and the standard category identification image by using an attention loss function, and calculating a loss value between the prediction category identification image and the standard category identification image according to the attention loss function; when the loss value is smaller than a preset value, acquiring the corresponding characteristic parameters; and obtaining the image segmentation model according to the characteristic parameters.
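The training procedure could look roughly like the following PyTorch-style sketch; the optimizer, learning rate, epoch budget, and preset loss value are all assumptions, and `attention_loss` is the function sketched under the next step.

```python
import torch

def fit_segmentation_model(model, loader, attention_loss, preset_value=0.05):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):                   # assumed epoch budget
        for image, standard_mask in loader:
            prediction = model(image)      # prediction category identification image
            loss = attention_loss(prediction, standard_mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if loss.item() < preset_value:     # loss value below the preset value:
            break                          # keep the current feature parameters
    return model.state_dict()
```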
In one embodiment, when executing the computer program, the processor further implements the step of traversing the pixels in the prediction category identification image and the standard category identification image by using the attention loss function, and calculating the loss value according to the attention loss function, as follows: traversing the pixels identified as the tire area category in the prediction category identification image and the standard category identification image to obtain the corresponding tire prediction loss values; adjusting the tire prediction loss values by using the network parameters, so that training focuses more on the hard-to-classify pixel points, to obtain the tire adjustment loss value; traversing the pixels identified as the background area category in the prediction category identification image and the standard category identification image to obtain the corresponding background prediction loss values; and obtaining the loss value corresponding to the attention loss function according to the tire adjustment loss value and the background prediction loss values.
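Reading the "adjustment" of the tire prediction loss as a focal-style reweighting, consistent with the Focal Loss paper cited in this publication, the attention loss might be sketched as follows; the exponent `gamma` is an assumed network parameter, not a value given here.

```python
import torch
import torch.nn.functional as F

def attention_loss(logits, target, gamma=2.0):
    # logits: (N, 2, H, W) class scores; target: (N, H, W) long tensor,
    # 1 for tire pixels and 0 for background pixels
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, target, reduction="none")    # per-pixel prediction loss
    tire = target == 1
    p_tire = log_p[:, 1].exp()                          # predicted tire probability
    adjustment = (1.0 - p_tire[tire]) ** gamma          # focus on hard pixels
    tire_loss = (adjustment * ce[tire]).sum()           # tire adjustment loss value
    background_loss = ce[~tire].sum()                   # background prediction loss value
    return (tire_loss + background_loss) / target.numel()
```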
In one embodiment, when executing the computer program, the processor further implements the step of carrying out the bitwise AND operation on the category identification image and the current vehicle image to obtain the tire area image as follows: acquiring the position information of the pixels in the tire area image; extracting the position information forming the circumscribed rectangle as the boundary position information; and taking the boundary position information as the boundary of the tire area image, and obtaining the tire area image according to the boundary.
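A sketch of the circumscribed-rectangle cropping, again assuming a binary tire mask with at least one non-zero pixel; `cv2.boundingRect` returns exactly the rectangle that circumscribes all non-zero pixel positions.

```python
import cv2

def crop_tire_area(tire_area_image, category_mask):
    points = cv2.findNonZero(category_mask)    # position information of tire pixels
    x, y, w, h = cv2.boundingRect(points)      # boundary position information
    return tire_area_image[y:y + h, x:x + w]   # crop to the rectangle boundary
```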
In one embodiment, when executing the computer program, the processor further implements the step of calculating the matching degree between each pattern feature vector pair as follows: calculating the cosine distances between the pattern feature vectors in each pattern feature vector pair, and determining the matching degree according to the numerical values corresponding to the cosine distances.
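The cosine-distance step in isolation might look like this; treating the matching degree as the cosine distance itself is an assumption, consistent with the rule that a smaller value means a closer match. Two identical pattern feature vectors then yield a matching degree of 0.0, which is below any positive threshold.

```python
import numpy as np

def matching_degree(vec_a, vec_b):
    # cosine distance = 1 - cosine similarity; 0.0 for identical directions
    cosine = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
    return 1.0 - cosine
```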
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the following steps: receiving a vehicle image sent by a terminal, and segmenting the vehicle image to obtain a tire area image; inputting a plurality of tire area images into an image classification model to obtain pattern feature vectors corresponding to patterns in each tire area image; combining every two pattern feature vectors to obtain a pattern feature vector pair; circularly traversing each pattern feature vector pair, and calculating the matching degree between each pattern feature vector pair; and when the matching degree between each pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of each tire area image is successful.
In one embodiment, the computer program, when executed by the processor, further implements the step of segmenting the vehicle image to obtain the tire area image as follows: inputting the vehicle image into an image segmentation model, and identifying the category of the pixels in the vehicle image according to pre-trained characteristic parameters through the image segmentation model to obtain a category identification image; and carrying out a bitwise AND operation on the category identification image and the vehicle image to obtain the tire area image.
In one embodiment, the computer program, when executed by the processor, further implements the following steps for generating the image segmentation model: acquiring a standard category identification image and a prediction category identification image; traversing each pixel in the prediction category identification image and the standard category identification image by using an attention loss function, and calculating a loss value between the prediction category identification image and the standard category identification image according to the attention loss function; when the loss value is smaller than a preset value, acquiring the corresponding characteristic parameters; and obtaining the image segmentation model according to the characteristic parameters.
In one embodiment, the computer program, when executed by the processor, further implements the step of traversing the pixels in the prediction category identification image and the standard category identification image by using the attention loss function, and calculating the loss value according to the attention loss function, as follows: traversing the pixels identified as the tire area category in the prediction category identification image and the standard category identification image to obtain the corresponding tire prediction loss values; adjusting the tire prediction loss values by using the network parameters, so that training focuses more on the hard-to-classify pixel points, to obtain the tire adjustment loss value; traversing the pixels identified as the background area category in the prediction category identification image and the standard category identification image to obtain the corresponding background prediction loss values; and obtaining the loss value corresponding to the attention loss function according to the tire adjustment loss value and the background prediction loss values.
In one embodiment, the computer program, when executed by the processor, further implements the step of carrying out the bitwise AND operation on the category identification image and the current vehicle image to obtain the tire area image as follows: acquiring the position information of the pixels in the tire area image; extracting the position information forming the circumscribed rectangle as the boundary position information; and taking the boundary position information as the boundary of the tire area image, and obtaining the tire area image according to the boundary.
In one embodiment, the computer program, when executed by the processor, further implements the step of calculating the matching degree between each pattern feature vector pair as follows: calculating the cosine distances between the pattern feature vectors in each pattern feature vector pair, and determining the matching degree according to the numerical values corresponding to the cosine distances.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of tire area image comparison, the method comprising:
receiving a vehicle image sent by a terminal, and segmenting the vehicle image to obtain a tire area image;
inputting a plurality of tire area images into an image classification model to obtain pattern feature vectors corresponding to patterns in each tire area image;
combining every two pattern feature vectors to obtain a pattern feature vector pair;
circularly traversing each pattern feature vector pair, and calculating the matching degree between each pattern feature vector pair;
and when the matching degree between each pattern feature vector pair is smaller than a preset threshold value, judging that the comparison of each tire area image is successful.
2. The method of claim 1, wherein the segmenting the vehicle image to obtain a tire area image comprises:
inputting the vehicle image into an image segmentation model, and identifying the category of pixels in the vehicle image according to pre-trained characteristic parameters through the image segmentation model to obtain a category identification image;
and carrying out a bitwise AND operation on the category identification image and the vehicle image to obtain a tire area image.
3. The method according to claim 2, wherein the method for generating the image segmentation model comprises:
acquiring a standard category identification image and a prediction category identification image;
traversing each pixel in the prediction category identification image and the standard category identification image by using an attention loss function, and calculating a loss value between the prediction category identification image and the standard category identification image according to the attention loss function;
when the loss value is smaller than a preset value, acquiring a corresponding characteristic parameter;
and obtaining an image segmentation model according to the characteristic parameters.
4. The method of claim 3, wherein the traversing each pixel in the prediction category identification image and the standard category identification image by using the attention loss function, and calculating the loss value according to the attention loss function comprises:
traversing the pixels identified as the tire area category in the prediction category identification image and the standard category identification image to obtain the corresponding tire prediction loss values;
adjusting the tire prediction loss values by using the network parameters, so that training focuses more on the hard-to-classify pixel points, to obtain the tire adjustment loss value;
traversing the pixels identified as the background area category in the prediction category identification image and the standard category identification image to obtain the corresponding background prediction loss values;
and obtaining the loss value corresponding to the attention loss function according to the tire adjustment loss value and the background prediction loss values.
5. The method of claim 2, wherein the carrying out the bitwise AND operation on the category identification image and the current vehicle image to obtain the tire area image comprises:
acquiring position information of pixels in the tire area image;
extracting the position information forming the circumscribed rectangle as boundary position information;
and taking the boundary position information as the boundary of the tire area image, and obtaining the tire area image according to the boundary.
6. The method of claim 1, wherein the calculating the matching degree between each pattern feature vector pair comprises:
and calculating cosine distances between the pattern feature vectors in the pattern feature vector pairs, and determining the matching degree according to numerical values corresponding to the cosine distances.
7. A method of tire area image comparison, the method comprising:
receiving a vehicle image sent by a terminal, and segmenting the vehicle image to obtain a tire area image;
inputting the tire area image into an image classification model to obtain pattern feature vectors corresponding to patterns in the tire area image;
selecting a corresponding comparison image for the tire area image;
obtaining a comparison pattern feature vector corresponding to the comparison image, and calculating the matching degree between the pattern feature vector and the comparison pattern feature vector;
and when the matching degree is smaller than a preset threshold value, judging that the tire area image is successfully compared with the comparison image.
8. A tire area image comparison apparatus, the apparatus comprising:
the area image acquisition module is used for receiving a vehicle image sent by a terminal and segmenting the vehicle image to obtain a tire area image;
the vector extraction module is used for inputting the tire area images into an image classification model to obtain pattern feature vectors corresponding to patterns in the tire area images;
the vector pair obtaining module is used for combining every two of the pattern feature vectors to obtain a pattern feature vector pair;
the matching degree calculation module is used for circularly traversing each pattern feature vector pair and calculating the matching degree between each pattern feature vector pair;
and the judging module is used for judging that the comparison of the images of the tire areas is successful when the matching degree between the pattern feature vector pairs is smaller than a preset threshold value.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911011220.3A 2019-10-23 2019-10-23 Tire area image comparison method and device, computer equipment and storage medium Pending CN110766075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911011220.3A CN110766075A (en) 2019-10-23 2019-10-23 Tire area image comparison method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911011220.3A CN110766075A (en) 2019-10-23 2019-10-23 Tire area image comparison method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110766075A true CN110766075A (en) 2020-02-07

Family

ID=69333050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911011220.3A Pending CN110766075A (en) 2019-10-23 2019-10-23 Tire area image comparison method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110766075A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140232852A1 (en) * 2011-07-11 2014-08-21 Guenter Nobis Optical device and method for inspecting tires
CN103955496A (en) * 2014-04-18 2014-07-30 大连恒锐科技股份有限公司 Fast field tire trace pattern retrieval algorithm
CN106295918A (en) * 2015-05-11 2017-01-04 上海驷惠软件科技开发有限公司 A kind of vehicle in use safe example inspection management method and system
CN107851314A (en) * 2015-07-27 2018-03-27 米其林集团总公司 For the method for the optimization for analyzing surface of tyre uniformity
CN106780512A (en) * 2016-11-30 2017-05-31 厦门美图之家科技有限公司 The method of segmentation figure picture, using and computing device
TW201915831A (en) * 2017-09-29 2019-04-16 香港商阿里巴巴集團服務有限公司 System and method for entity recognition
CN108918536A (en) * 2018-07-13 2018-11-30 广东工业大学 Tire-mold face character defect inspection method, device, equipment and storage medium
CN110136102A (en) * 2019-04-17 2019-08-16 杭州数据点金科技有限公司 A kind of tire X-ray defect detection method compared based on standard drawing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
TSUNG-YI LIN ET AL.: "Focal Loss for Dense Object Detection", 《ARXIV:1708.02002V2》 *
YIRAN SUN: "《https://www.zhihu.com/question/298010057?sort=created》", 30 January 2019 *
官云兰 等: "《MATLAB遥感数字图像处理实践教程》", 31 January 2019, 同济大学出版社 *
王海燕 等: "《大足石刻佛教造像脸部虚拟修复》", 30 April 2019, 重庆大学出版社 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496162A (en) * 2020-04-01 2021-10-12 顺丰科技有限公司 Parking specification identification method and device, computer equipment and storage medium
CN113866167A (en) * 2021-09-13 2021-12-31 北京逸驰科技有限公司 Tire detection result generation method, computer equipment and storage medium
CN114216546A (en) * 2021-12-14 2022-03-22 江苏太平洋通信科技有限公司 Freight source overload identification management system and method
CN116385749A (en) * 2023-05-30 2023-07-04 成都锐菲网络科技有限公司 Longitudinal pattern comparison method for vehicle tyre
CN116385749B (en) * 2023-05-30 2023-08-11 成都锐菲网络科技有限公司 Longitudinal pattern comparison method for vehicle tyre

Similar Documents

Publication Publication Date Title
WO2020221298A1 (en) Text detection model training method and apparatus, text region determination method and apparatus, and text content determination method and apparatus
CN110766075A (en) Tire area image comparison method and device, computer equipment and storage medium
CN110738125B (en) Method, device and storage medium for selecting detection frame by Mask R-CNN
CN110427807B (en) Time sequence event action detection method
US9275307B2 (en) Method and system for automatic selection of one or more image processing algorithm
CN110807491A (en) License plate image definition model training method, definition detection method and device
CN111160275B (en) Pedestrian re-recognition model training method, device, computer equipment and storage medium
CN110991389B (en) Matching method for judging appearance of target pedestrian in non-overlapping camera view angles
CN110969166A (en) Small target identification method and system in inspection scene
EP2657884A2 (en) Identifying multimedia objects based on multimedia fingerprint
CN111008643B (en) Picture classification method and device based on semi-supervised learning and computer equipment
CN111368758A (en) Face ambiguity detection method and device, computer equipment and storage medium
CN113111968B (en) Image recognition model training method, device, electronic equipment and readable storage medium
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN110956615B (en) Image quality evaluation model training method and device, electronic equipment and storage medium
CN112101114B (en) Video target detection method, device, equipment and storage medium
CN114119460A (en) Semiconductor image defect identification method, semiconductor image defect identification device, computer equipment and storage medium
CN113269706B (en) Laser radar image quality evaluation method, device, equipment and storage medium
CN111144425B (en) Method and device for detecting shot screen picture, electronic equipment and storage medium
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN116069969A (en) Image retrieval method, device and storage medium
CN110209865B (en) Object identification and matching method based on deep learning
CN114359787A (en) Target attribute identification method and device, computer equipment and storage medium
CN111753723B (en) Fingerprint identification method and device based on density calibration
CN110147824B (en) Automatic image classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200207)